The Transformer model introduced in Stable Diffusion 3. Its novelty lies in the MMDiT transformer block.
( sample_size: int = 128 patch_size: int = 2 in_channels: int = 16 num_layers: int = 18 attention_head_dim: int = 64 num_attention_heads: int = 18 joint_attention_dim: int = 4096 caption_projection_dim: int = 1152 pooled_projection_dim: int = 2048 out_channels: int = 16 pos_embed_max_size: int = 96 dual_attention_layers: typing.Tuple[int, ...] = () qk_norm: typing.Optional[str] = None )
Parameters

- sample_size (int, defaults to 128) — The width/height of the latents. This is fixed during training since it is used to learn a number of position embeddings.
- patch_size (int, defaults to 2) — Patch size to turn the input data into small patches.
- in_channels (int, defaults to 16) — The number of latent channels in the input.
- num_layers (int, defaults to 18) — The number of transformer blocks to use.
- attention_head_dim (int, defaults to 64) — The number of channels in each attention head.
- num_attention_heads (int, defaults to 18) — The number of heads to use for multi-head attention.
- joint_attention_dim (int, defaults to 4096) — The embedding dimension to use for joint text-image attention.
- caption_projection_dim (int, defaults to 1152) — The embedding dimension of the caption embeddings.
- pooled_projection_dim (int, defaults to 2048) — The embedding dimension of the pooled text projections.
- out_channels (int, defaults to 16) — The number of latent channels in the output.
- pos_embed_max_size (int, defaults to 96) — The maximum latent height/width supported by the positional embeddings.
- dual_attention_layers (Tuple[int, ...], defaults to ()) — The indices of the transformer blocks that use dual-stream attention.
- qk_norm (str, optional, defaults to None) — The normalization to use for the query and key projections in the attention layers. If None, no normalization is used.
enable_forward_chunking

( chunk_size: typing.Optional[int] = None dim: int = 0 )
Parameters

- chunk_size (int, optional) — The chunk size of the feed-forward layers. If not specified, the feed-forward layer is run individually over each tensor of dim=dim.
- dim (int, optional, defaults to 0) — The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).

Enables feed-forward chunking: the feed-forward layers are run in smaller chunks to reduce peak memory usage.
forward

( hidden_states: Tensor encoder_hidden_states: Tensor = None pooled_projections: Tensor = None timestep: LongTensor = None block_controlnet_hidden_states: typing.List = None joint_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None return_dict: bool = True skip_layers: typing.Optional[typing.List[int]] = None )
Parameters

- hidden_states (torch.Tensor of shape (batch_size, channel, height, width)) — Input hidden_states.
- encoder_hidden_states (torch.Tensor of shape (batch_size, sequence_len, embed_dims)) — Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- pooled_projections (torch.Tensor of shape (batch_size, projection_dim)) — Embeddings projected from the embeddings of the input conditions.
- timestep (torch.LongTensor) — Used to indicate the denoising step.
- block_controlnet_hidden_states (list of torch.Tensor) — A list of tensors that, if specified, are added to the residuals of the transformer blocks.
- joint_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.
- skip_layers (list of int, optional) — A list of layer indices to skip during the forward pass.

The SD3Transformer2DModel forward method.
fuse_qkv_projections

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused into one. For cross-attention modules, the key and value projection matrices are fused.
> This API is 🧪 experimental.
unfuse_qkv_projections

Disables the fused QKV projection if enabled.
> This API is 🧪 experimental.