source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#modeloutput
|
.md
|
utils.ModelOutput
Base class for all model outputs, implemented as a dataclass. Has a `__getitem__` that allows indexing by integer or
slice (like a tuple) or by string (like a dictionary), skipping any `None` attributes. Otherwise it behaves like a
regular Python dictionary.
<Tip warning={true}>
You can't unpack a `ModelOutput` directly. Use the [`~utils.ModelOutput.to_tuple`] method to convert it to a tuple
first.
</Tip>
- to_tuple
|
458_2_0
|
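A minimal sketch of the indexing and `to_tuple` behaviour described in the chunk above. It builds a `BaseModelOutput` (one of the `ModelOutput` dataclasses) by hand, so no checkpoint is needed; the tensor shape is an arbitrary assumption.

```python
import torch
from transformers.modeling_outputs import BaseModelOutput

# hidden_states and attentions are left as None on purpose.
out = BaseModelOutput(last_hidden_state=torch.randn(1, 4, 8))

# Attribute, string and integer access all reach the same tensor.
assert out.last_hidden_state is out["last_hidden_state"] is out[0]

# None attributes are skipped by the dict- and tuple-style views.
print(list(out.keys()))     # ['last_hidden_state']
print(len(out.to_tuple()))  # 1

# As the tip notes, unpack via to_tuple() rather than unpacking the output itself.
(last_hidden_state,) = out.to_tuple()
```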
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutput
|
.md
|
modeling_outputs.BaseModelOutput
Base class for model's outputs, with potential hidden states and attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_3_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_3_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_3_2
|
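A hedged sketch of reading the fields documented above from a real forward pass; `distilbert-base-uncased` is an assumed checkpoint whose bare model returns a `BaseModelOutput`.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Model outputs are dataclasses.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(len(outputs.hidden_states))       # embedding output + one entry per layer
print(outputs.attentions[0].shape)      # (batch_size, num_heads, seq_len, seq_len)
```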
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpooling
|
.md
|
modeling_outputs.BaseModelOutputWithPooling
Base class for model's outputs, with potential hidden states and attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_4_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpooling
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_4_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpooling
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_4_2
|
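The pooling variant additionally carries a `pooler_output` of shape `(batch_size, hidden_size)`. A hedged sketch; `bert-base-uncased` is an assumed checkpoint (BERT in fact returns the `...WithPoolingAndCrossAttentions` variant, which exposes the same field).

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Pooled outputs summarise the whole sequence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(outputs.pooler_output.shape)      # (batch_size, hidden_size)
```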
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithcrossattentions
|
.md
|
modeling_outputs.BaseModelOutputWithCrossAttentions
Base class for model's outputs, with potential hidden states and attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_5_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithcrossattentions
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_5_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithcrossattentions
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_5_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpoolingandcrossattentions
|
.md
|
modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
Base class for model's outputs, with potential hidden states and attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_6_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpoolingandcrossattentions
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_6_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpoolingandcrossattentions
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_6_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpast
|
.md
|
modeling_outputs.BaseModelOutputWithPast
Base class for model's outputs, with potential hidden states and attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_7_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpast
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_7_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpast
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_7_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpastandcrossattentions
|
.md
|
modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
Base class for model's outputs, with potential hidden states and attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_8_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpastandcrossattentions
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_8_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#basemodeloutputwithpastandcrossattentions
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_8_2
|
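A hedged sketch of the `past_key_values` cache carried by the `WithPast...` variants; `gpt2` is an assumed checkpoint whose bare model returns `BaseModelOutputWithPastAndCrossAttentions`.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

inputs = tokenizer("Caches speed up sequential decoding.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, use_cache=True)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(len(outputs.past_key_values))     # one cache entry per layer
```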
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
modeling_outputs.Seq2SeqModelOutput
Base class for model encoder outputs that also contains pre-computed hidden states that can speed up sequential decoding.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the decoder of the model.
If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
hidden_size)` is output.
|
458_9_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1,
hidden_size)` is output.
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
|
458_9_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
|
458_9_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_9_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_9_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
458_9_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
458_9_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_9_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_9_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqmodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
|
458_9_9
|
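A hedged sketch of the encoder/decoder fields listed above; `t5-small` is an assumed encoder-decoder checkpoint, and the German target is only there to give the decoder some input.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModel.from_pretrained("t5-small")

enc = tokenizer("translate English to German: Hello there", return_tensors="pt")
dec = tokenizer("Hallo", return_tensors="pt")

with torch.no_grad():
    outputs = model(
        input_ids=enc.input_ids,
        decoder_input_ids=dec.input_ids,
        output_hidden_states=True,
        output_attentions=True,
    )

print(outputs.last_hidden_state.shape)          # last decoder layer
print(outputs.encoder_last_hidden_state.shape)  # last encoder layer
print(len(outputs.past_key_values))             # one entry per decoder layer
print(outputs.cross_attentions[0].shape)        # (batch, heads, target_len, source_len)
```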
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutput
|
.md
|
modeling_outputs.CausalLMOutput
Base class for causal language model (or autoregressive) outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss (for next-token prediction).
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
|
458_10_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutput
|
.md
|
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_10_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_10_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_10_3
|
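A hedged sketch of the `loss`/`logits` fields; `gpt2` is an assumed checkpoint (it actually returns the `WithCrossAttentions` variant, but the fields shown here are common to all `CausalLMOutput*` classes). Passing the input ids as `labels` yields the next-token prediction loss.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Causal language models predict the next token.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs.input_ids)

print(outputs.loss)          # scalar language-modeling loss
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```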
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithcrossattentions
|
.md
|
modeling_outputs.CausalLMOutputWithCrossAttentions
Base class for causal language model (or autoregressive) outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss (for next-token prediction).
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
|
458_11_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithcrossattentions
|
.md
|
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_11_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithcrossattentions
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_11_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithcrossattentions
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_11_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithpast
|
.md
|
modeling_outputs.CausalLMOutputWithPast
Base class for causal language model (or autoregressive) outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss (for next-token prediction).
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
|
458_12_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithpast
|
.md
|
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_12_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithpast
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_12_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#causallmoutputwithpast
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_12_3
|
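A hedged sketch of reusing `past_key_values` between forward passes; `facebook/opt-125m` is an assumption here (its forward is expected to return `CausalLMOutputWithPast`), and any causal LM with a cache works the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("The key/value cache stores", return_tensors="pt")
with torch.no_grad():
    first = model(**inputs, use_cache=True)
    next_token = first.logits[:, -1:].argmax(dim=-1)
    # Feed only the new token plus the cached keys/values from the first call.
    second = model(input_ids=next_token, past_key_values=first.past_key_values, use_cache=True)

print(second.logits.shape)  # (batch_size, 1, vocab_size): only the new position is scored
```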
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#maskedlmoutput
|
.md
|
modeling_outputs.MaskedLMOutput
Base class for masked language model outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Masked language modeling (MLM) loss.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
|
458_13_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#maskedlmoutput
|
.md
|
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_13_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#maskedlmoutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_13_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#maskedlmoutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_13_3
|
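A hedged sketch of reading MLM predictions out of the `logits`; `distilbert-base-uncased` and the example sentence are assumptions.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Model outputs are [MASK] classes.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely vocabulary token at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = outputs.logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```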
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
modeling_outputs.Seq2SeqLMOutput
Base class for sequence-to-sequence language model outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Language modeling loss.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
|
458_14_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
|
458_14_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
|
458_14_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_14_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_14_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
458_14_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
458_14_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_14_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_14_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqlmoutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
|
458_14_9
|
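A hedged sketch; `t5-small` is an assumed checkpoint. Passing `labels` lets the model build the decoder inputs itself and return the sequence-to-sequence language-modeling loss.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")
labels = tokenizer("Wie geht es dir?", return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model(**inputs, labels=labels)

print(outputs.loss)                             # language-modeling loss
print(outputs.logits.shape)                     # (batch_size, target_len, vocab_size)
print(outputs.encoder_last_hidden_state.shape)  # (batch_size, source_len, hidden_size)
```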
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#nextsentencepredictoroutput
|
.md
|
modeling_outputs.NextSentencePredictorOutput
Base class for outputs of models predicting if two sentences are consecutive or not.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `next_sentence_label` is provided):
Next sequence prediction (classification) loss.
logits (`torch.FloatTensor` of shape `(batch_size, 2)`):
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
|
458_15_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#nextsentencepredictoroutput
|
.md
|
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_15_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#nextsentencepredictoroutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_15_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#nextsentencepredictoroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_15_3
|
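A hedged sketch; `bert-base-uncased` is an assumed checkpoint (its pretraining included the next-sentence-prediction head). Of the two logits, index 0 scores "sentence B follows sentence A" and index 1 scores a random continuation.

```python
import torch
from transformers import AutoTokenizer, BertForNextSentencePrediction

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

inputs = tokenizer("The sky is blue today.", "It rarely rains in the desert.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)            # (batch_size, 2)
print(outputs.logits.softmax(dim=-1))  # probabilities for "is next" vs "is random"
```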
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#sequenceclassifieroutput
|
.md
|
modeling_outputs.SequenceClassifierOutput
Base class for outputs of sentence classification models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Classification (or regression if config.num_labels==1) loss.
logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`):
Classification (or regression if config.num_labels==1) scores (before SoftMax).
|
458_16_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#sequenceclassifieroutput
|
.md
|
Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_16_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#sequenceclassifieroutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_16_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#sequenceclassifieroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_16_3
|
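A hedged sketch of `loss` and `logits` for sequence classification; `distilbert-base-uncased-finetuned-sst-2-english` is an assumed sentiment checkpoint and the label is arbitrary.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("These output classes are convenient.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=torch.tensor([1]))

print(outputs.loss)          # cross-entropy loss against the provided label
print(outputs.logits.shape)  # (batch_size, num_labels)
print(model.config.id2label[outputs.logits.argmax(dim=-1).item()])
```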
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
modeling_outputs.Seq2SeqSequenceClassifierOutput
Base class for outputs of sequence-to-sequence sentence classification models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `label` is provided):
Classification (or regression if config.num_labels==1) loss.
logits (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`):
Classification (or regression if config.num_labels==1) scores (before SoftMax).
|
458_17_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
|
458_17_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_17_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_17_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
458_17_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
458_17_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_17_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_17_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqsequenceclassifieroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
|
458_17_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#multiplechoicemodeloutput
|
.md
|
modeling_outputs.MultipleChoiceModelOutput
Base class for outputs of multiple choice models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Classification loss.
logits (`torch.FloatTensor` of shape `(batch_size, num_choices)`):
Classification scores (before SoftMax). *num_choices* is the second dimension of the input tensors (see *input_ids* above).
|
458_18_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#multiplechoicemodeloutput
|
.md
|
*num_choices* is the second dimension of the input tensors. (see *input_ids* above).
Classification scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_18_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#multiplechoicemodeloutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_18_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#multiplechoicemodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_18_3
|
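A hedged sketch of the `(batch_size, num_choices, ...)` input layout described above; `bert-base-uncased` is an assumed base checkpoint, so the multiple-choice head is randomly initialised and the scores are only illustrative of the shapes.

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")

prompt = "The cat sat on the"
choices = ["mat.", "equation."]

# Encode one (prompt, choice) pair per choice, then add the batch dimension so the
# tensors are shaped (batch_size, num_choices, sequence_length).
enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)           # (batch_size, num_choices)
print(outputs.logits.argmax(dim=-1))  # index of the highest-scoring choice
```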
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#tokenclassifieroutput
|
.md
|
modeling_outputs.TokenClassifierOutput
Base class for outputs of token classification models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Classification loss.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`):
Classification scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
458_19_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#tokenclassifieroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
458_19_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#tokenclassifieroutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_19_2
|
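A hedged sketch of per-token predictions; `dslim/bert-base-NER` is an assumed fine-tuned NER checkpoint.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "dslim/bert-base-NER"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits: (batch_size, sequence_length, num_labels) -> one label id per token.
label_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[label_id.item()])
```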
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#questionansweringmodeloutput
|
.md
|
modeling_outputs.QuestionAnsweringModelOutput
Base class for outputs of question answering models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Span-start scores (before SoftMax).
end_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Span-end scores (before SoftMax).
|
458_20_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#questionansweringmodeloutput
|
.md
|
end_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Span-end scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_20_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#questionansweringmodeloutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_20_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#questionansweringmodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
|
458_20_3
|
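A hedged sketch of turning `start_logits` and `end_logits` into an answer span; `distilbert-base-cased-distilled-squad` is an assumed SQuAD checkpoint and the question/context pair is made up.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What do output classes wrap?"
context = "Output classes wrap the tensors returned by a model's forward pass."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end positions and decode the tokens in between.
start = outputs.start_logits.argmax().item()
end = outputs.end_logits.argmax().item()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```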
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
Base class for outputs of sequence-to-sequence question answering models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Span-start scores (before SoftMax).
end_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
|
458_21_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
Span-start scores (before SoftMax).
end_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Span-end scores (before SoftMax).
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
|
458_21_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
|
458_21_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_21_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_21_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
458_21_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
458_21_6
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_21_7
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_21_8
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqquestionansweringmodeloutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
|
458_21_9
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqspectrogramoutput
|
.md
|
modeling_outputs.Seq2SeqSpectrogramOutput
Base class for sequence-to-sequence spectrogram outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Spectrogram generation loss.
spectrogram (`torch.FloatTensor` of shape `(batch_size, sequence_length, num_bins)`):
The predicted spectrogram.
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
|
458_22_0
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqspectrogramoutput
|
.md
|
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
`(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
|
458_22_1
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqspectrogramoutput
|
.md
|
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_22_2
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqspectrogramoutput
|
.md
|
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
|
458_22_3
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqspectrogramoutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
458_22_4
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqspectrogramoutput
|
.md
|
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
sequence_length)`.
Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
458_22_5
|
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/main_classes/output.md
|
https://huggingface.co/docs/transformers/en/main_classes/output/#seq2seqspectrogramoutput
|
.md
|
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
458_22_6
|
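A minimal sketch that builds the dataclass by hand (no checkpoint required) just to show its fields; in practice it is returned by text-to-speech style sequence-to-sequence models. The shape `(1, 200, 80)` is an arbitrary assumption for `(batch_size, sequence_length, num_bins)`.

```python
import torch
from transformers.modeling_outputs import Seq2SeqSpectrogramOutput

out = Seq2SeqSpectrogramOutput(spectrogram=torch.randn(1, 200, 80))

print(out.spectrogram.shape)  # (batch_size, sequence_length, num_bins)
print(list(out.keys()))       # fields left as None (loss, past_key_values, ...) are skipped
```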