Transformer2D
Translator: 片刻小哥哥
Project URL: https://huggingface.apachecn.org/docs/diffusers/api/models/transformer2d
Original URL: https://huggingface.co/docs/diffusers/api/models/transformer2d
A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs.
When the input is continuous:

- Project the input and reshape it to (batch_size, sequence_length, feature_dimension).
- Apply the Transformer blocks in the standard way.
- Reshape to image.
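For orientation, here is a minimal sketch of the continuous path. All sizes and hyperparameters below are illustrative choices, not values from the original docs.

```python
import torch
from diffusers import Transformer2DModel

# Continuous mode is selected by passing `in_channels`
# (and neither `num_vector_embeds` nor `patch_size`).
model = Transformer2DModel(
    num_attention_heads=4,
    attention_head_dim=8,
    in_channels=16,       # channels of the input feature map
    num_layers=1,
    norm_num_groups=8,    # must evenly divide in_channels
)

sample = torch.randn(1, 16, 32, 32)  # (batch_size, channel, height, width)
with torch.no_grad():
    out = model(sample).sample       # reshaped back to an image-like tensor
print(out.shape)                     # torch.Size([1, 16, 32, 32])
```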
When the input is discrete:

It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don't contain a prediction for the masked pixel because the unnoised image cannot be masked.

- Convert input (classes of latent pixels) to embeddings and apply positional embeddings.
- Apply the Transformer blocks in the standard way.
- Predict classes of unnoised image.
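And a corresponding sketch of the discrete path, assuming a small VQ-style setup; `num_vector_embeds`, `sample_size`, and the timestep range are made-up illustrative values.

```python
import torch
from diffusers import Transformer2DModel

# Discrete mode is selected by passing `num_vector_embeds` and `sample_size`;
# the input is a grid of latent-pixel class indices rather than embeddings.
model = Transformer2DModel(
    num_attention_heads=4,
    attention_head_dim=8,
    num_layers=1,
    num_vector_embeds=10,      # includes the class reserved for the masked latent pixel
    sample_size=8,             # latent image width; fixes the number of position embeddings
    norm_type="ada_norm",
    num_embeds_ada_norm=100,   # number of diffusion steps the AdaLayerNorm embeddings cover
)

latent_classes = torch.randint(0, 10, (1, 8 * 8))  # (batch_size, num_latent_pixels)
timestep = torch.tensor(5)                         # scalar denoising step for AdaLayerNorm
with torch.no_grad():
    log_probs = model(latent_classes, timestep=timestep).sample
print(log_probs.shape)  # (batch, num_vector_embeds - 1, num_latent_pixels) -> [1, 9, 64]
```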
Transformer2DModel

class diffusers.Transformer2DModel

[source](https://github.com/huggingface/diffusers/blob/v0.23.0/src/diffusers/models/transformer_2d.py#L45)

( num_attention_heads: int = 16, attention_head_dim: int = 88, in_channels: typing.Optional[int] = None, out_channels: typing.Optional[int] = None, num_layers: int = 1, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: typing.Optional[int] = None, attention_bias: bool = False, sample_size: typing.Optional[int] = None, num_vector_embeds: typing.Optional[int] = None, patch_size: typing.Optional[int] = None, activation_fn: str = 'geglu', num_embeds_ada_norm: typing.Optional[int] = None, use_linear_projection: bool = False, only_cross_attention: bool = False, double_self_attention: bool = False, upcast_attention: bool = False, norm_type: str = 'layer_norm', norm_elementwise_affine: bool = True, norm_eps: float = 1e-05, attention_type: str = 'default', caption_channels: int = None )
Parameters

- num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
- attention_head_dim (int, optional, defaults to 88) — The number of channels in each head.
- in_channels (int, optional) — The number of channels in the input and output (specify if the input is continuous).
- num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use.
- dropout (float, optional, defaults to 0.0) — The dropout probability to use.
- cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use.
- sample_size (int, optional) — The width of the latent images (specify if the input is discrete). This is fixed during training since it is used to learn a number of position embeddings.
- num_vector_embeds (int, optional) — The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). Includes the class for the masked latent pixel.
- activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward.
- num_embeds_ada_norm (int, optional) — The number of diffusion steps used during training. Pass if at least one of the norm_layers is AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for up to but not more steps than num_embeds_ada_norm.
- attention_bias (bool, optional) — Configure if the TransformerBlocks attention should contain a bias parameter.
A 2D Transformer model for image-like data.
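In practice this class is usually not constructed by hand but loaded as the denoiser of a transformer-based pipeline such as DiT or PixArt-alpha. A hedged sketch, assuming the checkpoint stores the model under a `transformer` subfolder:

```python
from diffusers import Transformer2DModel

# Assumed repository layout: the DiT checkpoint keeps its denoiser under "transformer".
transformer = Transformer2DModel.from_pretrained(
    "facebook/DiT-XL-2-256", subfolder="transformer"
)
print(transformer.config.num_layers, transformer.config.patch_size)
```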
forward

[source](https://github.com/huggingface/diffusers/blob/v0.23.0/src/diffusers/models/transformer_2d.py#L240)

( hidden_states: Tensor, encoder_hidden_states: typing.Optional[torch.Tensor] = None, timestep: typing.Optional[torch.LongTensor] = None, added_cond_kwargs: typing.Dict[str, torch.Tensor] = None, class_labels: typing.Optional[torch.LongTensor] = None, cross_attention_kwargs: typing.Dict[str, typing.Any] = None, attention_mask: typing.Optional[torch.Tensor] = None, encoder_attention_mask: typing.Optional[torch.Tensor] = None, return_dict: bool = True )
Parameters

- hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — Input hidden_states.
- encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for the cross attention layer. If not given, cross-attention defaults to self-attention.
- timestep (torch.LongTensor, optional) — Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
- class_labels (torch.LongTensor of shape (batch size, num classes), optional) — Used to indicate class label conditioning. Optional class labels to be applied as an embedding in AdaLayerNormZero.
- cross_attention_kwargs (Dict[str, Any], optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- attention_mask (torch.Tensor, optional) — An attention mask of shape (batch, key_tokens) applied to encoder_hidden_states. If 1 the mask is kept, otherwise if 0 it is discarded. The mask is converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens.
- encoder_attention_mask (torch.Tensor, optional) — Cross-attention mask applied to encoder_hidden_states. Two formats are supported:
  - Mask (batch, sequence_length): True = keep, False = discard.
  - Bias (batch, 1, sequence_length): 0 = keep, -10000 = discard.
  If ndim == 2, it is interpreted as a mask and converted into a bias consistent with the format above. This bias is added to the cross-attention scores.
- return_dict (bool, optional, defaults to True) — Whether or not to return a Transformer2DModelOutput instead of a plain tuple.
The Transformer2DModel forward method.
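As a hedged usage sketch of the forward call with cross-attention conditioning (all sizes below are made up for illustration):

```python
import torch
from diffusers import Transformer2DModel

model = Transformer2DModel(
    num_attention_heads=4,
    attention_head_dim=8,
    in_channels=16,
    cross_attention_dim=64,  # must match the last dim of encoder_hidden_states
    norm_num_groups=8,
)

hidden_states = torch.randn(2, 16, 32, 32)      # continuous input: (batch, channel, height, width)
encoder_hidden_states = torch.randn(2, 77, 64)  # conditional embeddings: (batch, sequence_len, embed_dims)

# With return_dict=False a plain tuple is returned instead of Transformer2DModelOutput.
out = model(hidden_states, encoder_hidden_states=encoder_hidden_states, return_dict=False)[0]
print(out.shape)  # torch.Size([2, 16, 32, 32])
```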
Transformer2DModelOutput

class diffusers.models.transformer_2d.Transformer2DModelOutput

[source](https://github.com/huggingface/diffusers/blob/v0.23.0/src/diffusers/models/transformer_2d.py#L32)

( sample: FloatTensor )
Parameters

- sample (torch.FloatTensor of shape (batch_size, num_channels, height, width), or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.