# /Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/stopping_criteria.py
class MaxLengthCriteria(StoppingCriteria):
"""
This class can be used to stop generation whenever the number of generated tokens exceeds `max_length`. Keep
in mind that for decoder-only transformers, this count includes the initial prompt tokens.
Args:
max_length (`int`):
The maximum length that the output sequence can have in number of tokens.
max_position_embeddings (`int`, *optional*):
The maximum model length, as defined by the model's `config.max_position_embeddings` attribute.
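Examples:
A minimal illustrative sketch of the call contract: the criterion maps `(input_ids, scores)` to a
per-sample boolean tensor (`scores` is unused here):
```python
>>> import torch
>>> criteria = MaxLengthCriteria(max_length=5)
>>> input_ids = torch.ones((2, 5), dtype=torch.long)  # a batch of 2 sequences already at length 5
>>> criteria(input_ids, scores=None)
tensor([True, True])
```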
"""
def __init__(self, max_length: int, max_position_embeddings: Optional[int] = None):
self.max_length = max_length
self.max_position_embeddings = max_position_embeddings
@add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING)
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
cur_len = input_ids.shape[-1]
is_done = cur_len >= self.max_length
if self.max_position_embeddings is not None and not is_done and cur_len >= self.max_position_embeddings:
logger.warning_once(
"This is a friendly reminder - the current text generation call will exceed the model's predefined "
f"maximum length ({self.max_position_embeddings}). Depending on the model, you may observe "
"exceptions, performance degradation, or nothing at all."
)
return torch.full((input_ids.shape[0],), is_done, device=input_ids.device, dtype=torch.bool)
class MaxTimeCriteria(StoppingCriteria):
"""
This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the
time starts being counted when you initialize this criterion. You can override this by passing an
`initial_timestamp`.
Args:
max_time (`float`):
The maximum allowed time in seconds for the generation.
initial_timestamp (`float`, *optional*, defaults to `time.time()`):
The timestamp from which the elapsed generation time is measured.
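Examples:
A minimal illustrative sketch; immediately after construction essentially no time has elapsed, so the
criterion does not fire yet:
```python
>>> import torch
>>> criteria = MaxTimeCriteria(max_time=2.0)
>>> input_ids = torch.ones((1, 3), dtype=torch.long)
>>> bool(criteria(input_ids, scores=None)[0])
False
```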
"""
def __init__(self, max_time: float, initial_timestamp: Optional[float] = None):
self.max_time = max_time
self.initial_timestamp = time.time() if initial_timestamp is None else initial_timestamp
@add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING)
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
is_done = time.time() - self.initial_timestamp > self.max_time
return torch.full((input_ids.shape[0],), is_done, device=input_ids.device, dtype=torch.bool)
class StopStringCriteria(StoppingCriteria):
"""
This class can be used to stop generation whenever specific string sequences are generated. It preprocesses
the strings together with the tokenizer vocab to find positions where tokens can validly complete the stop strings.
Generation is stopped as soon as a token is generated that completes any of the stop strings.
We want to catch any instance in which the stop string would be present in the decoded output, which means
we must also catch cases with "overhangs" off one or both ends. To make this more concrete, for the stop string
"stop", any of the following token sequences would trigger the match:
- ["st", "op"]
- ["stop"]
- ["st", "opera"]
- ["sto", "pper"]
- ["las", "topper"]
- ["s", "to", "pped"]
Note that a match will only be triggered if the stop string is at the end of the generated sequence. In other
words, these sequences will not trigger a match:
- ["stop", "at"]
- ["st", "op", "at"]
- ["st", "opera", "tion"]
The reason these are not a match is that the stop string does not overlap with the final token. If you can remove
one or more tokens from the end of the sequence without destroying the stop string, then this criterion will not
match that stop string. This is by design; because this check is run after each token is generated, we can't miss a
valid stop string if one is generated, but we don't want to halt generation just because the stop string exists
somewhere in the past input_ids.
How is the match actually performed, though? We do it in quite a confusing way, because we want the entire match
process to be compilable with Torch or XLA, which means we cannot use standard string methods. However, it is possible,
with some work, to do string matching with pure tensor operations. We'll begin by describing the algorithm we use
with standard string operations, and then at the end we'll explain how this is converted to pure tensor operations.
The key to the algorithm is an observation: Because the stop string must overlap with the end of the token sequence, we can start at
the end of the sequence and work backwards. Specifically, we check that there is an overlap between the start of
the final token and the end of the stop_string, or to put it another way, stop_string[-i:] == token[:i] for
some i > 0. If you look at the positive examples above, you'll see the last token in all of them fulfills this
property:
- ["st", "op"] (overlap is "op", overlap length == 2)
- ["stop"] (overlap is "stop", overlap length == 4)
- ["st", "opera"] (overlap is "op", overlap length == 2)
- ["sto", "pper"] (overlap is "p", overlap length == 1)
- ["las", "topper"] (overlap is "top", overlap length == 3)
- ["s", "to", "pped"] (overlap is "p", overlap length == 1)
It's impossible to construct a matching sequence that does not have this property (feel free to verify this
yourself). However, although this overlap between the start of the final token and the end of the stop string is
necessary for a match, it is not sufficient. We also need to check that the rest of the token sequence is
consistent with the stop string.
How do we do that? Let's use ["s", "to", "pped"] as an example. We know that the final token, "pped", has an
overlap of 1 with the stop string, "stop". We then go back to the previous token, "to". Since we have already
matched 1 character from the stop string, the remainder to check is "sto". We check that the next token "to"
matches the end of the remainder, which it does. We have now matched 3 characters from the stop string, and the
remainder to match is "s". We go back to the previous token again, which is also "s". This is a match, and so
we have matched the entire stop string.
How does it work when the tokens run off the start of the stop string, though? Let's consider the example of
["las", "topper"]. The final token, "topper", has an overlap of 3 with the stop string, "stop". Therefore,
the remaining stop string to match is "s". We go back to the previous token, "las". Because the remainder to
match is just "s", with length 1, we consider only the final 1 character from the token, which is "s". This
matches the stop string, and so the entire string is matched.
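To make this concrete, here is a rough pure-Python sketch of the backwards check just described (illustrative
only; as explained below, the class itself never performs string comparisons at generation time):
```python
def ends_with_stop_string(tokens, stop_string):
    # `tokens` is the list of decoded token strings, e.g. ["s", "to", "pped"]
    if not tokens:
        return False
    last = tokens[-1]
    # Try every overlap between the start of the final token and the end of the stop string
    for i in range(1, min(len(last), len(stop_string)) + 1):
        if stop_string[-i:] != last[:i]:
            continue
        remainder = stop_string[:-i]
        ok = True
        for token in reversed(tokens[:-1]):
            if not remainder:
                break
            if len(token) >= len(remainder):
                # The token may run off the start of the stop string; only its final characters matter
                if token.endswith(remainder):
                    remainder = ""
                else:
                    ok = False
                    break
            elif remainder.endswith(token):
                remainder = remainder[: -len(token)]
            else:
                ok = False
                break
        if ok and not remainder:
            return True
    return False
```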
How do we compute these matches with tensor operations, though? Simply: we efficiently precompute the necessary
information for all tokens! For every token, we compute:
- Its overlap with the end of the stop string, if any
- The positions inside the stop string where the token matches, including matches that run off the start.
- The total length of the token
For example, for the token "pped", we would compute an end overlap of 1, no internal matching positions,
and a length of 4. For the token "to", we would compute no end overlap, a single internal matching position
of 1 (counting from the end), and a length of 2. For the token "s", we would compute no end overlap,
a single internal matching position of 3 (again counting from the end) and a length of 1.
As long as we have this information, we can execute the algorithm above without any string comparison
operations. We simply perform the following steps:
- Check if the final token has an end-overlap with the stop string
- Continue backwards, keeping track of how much of the stop string we've matched so far
- At each point, check if the next token has the current position as one of its valid positions
- Continue until either a match fails, or we completely match the whole stop string
Again, consider ["s", "to", "pped"] as an example. "pped" has an end overlap of 1, so we can begin a match.
We have matched 1 character so far, so we check that the next token, "to", has 1 as a valid position (again,
counting from the end). It does, so we add the length of "to" to our position tracker. We have now matched
3 characters, so we check that the next token "s" has 3 as a valid position. It does, so we add its length
to the position tracker. The position tracker is now 4, which is the length of the stop string. We have matched the
entire stop string.
In the second case, ["las", "topper"], "topper" has an end overlap of 3, so we can begin a match. We have
matched 3 characters so far, so we check that the next token "las" has 3 as a valid position. It does, because we
allow tokens to match positions that run off the start of the stop string. We add its length to the position
tracker. The position tracker is now 6, which is greater than the length of the stop string! Don't panic, though -
this also counts as a match of the stop string. We have matched the entire stop string.
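As a non-vectorized sketch (illustrative only), the same check can be written with the precomputed per-token
values, here assumed to be plain dicts keyed by token id (`end_overlaps`, `valid_positions`, `lengths`), with
`stop_len` the length of the stop string:
```python
def matches_with_precomputed(token_ids, end_overlaps, valid_positions, lengths, stop_len):
    if not token_ids:
        return False
    # Start a match for every possible end-overlap of the final token
    for overlap in end_overlaps.get(token_ids[-1], []):
        pos = overlap  # number of stop string characters matched so far
        ok = True
        for tok in reversed(token_ids[:-1]):
            if pos >= stop_len:
                break
            if pos not in valid_positions.get(tok, []):
                ok = False
                break
            pos += lengths[tok]
        if ok and pos >= stop_len:
            return True
    return False
```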
Args:
tokenizer (`PreTrainedTokenizer`):
The model's associated tokenizer (necessary to extract vocab and tokenize the termination sequences)
stop_strings (`Union[str, List[str]]`):
A list of strings that should end generation. If a string is passed, it will be treated like a
list with a single element.
Examples:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
>>> model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
>>> inputs = tokenizer("The biggest states in the USA by land area:", return_tensors="pt")
>>> gen_out = model.generate(**inputs)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
The biggest states in the USA by land area:
- Alaska
- Texas
- California
>>> # Passing one or more stop strings will halt generation after those strings are emitted
>>> # Note that generating with stop strings requires you to pass the tokenizer too
>>> gen_out = model.generate(**inputs, stop_strings=["Texas"], tokenizer=tokenizer)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
The biggest states in the USA by land area:
- Alaska
- Texas
```
"""
def __init__(self, tokenizer: PreTrainedTokenizerBase, stop_strings: Union[str, List[str]]):
if isinstance(stop_strings, str):
stop_strings = [stop_strings]
self.stop_strings: Tuple[str, ...] = tuple(stop_strings)
vocab = tokenizer.get_vocab()
token_list, token_indices = tuple(vocab.keys()), tuple(vocab.values())
self.embedding_vec, self.max_valid_positions, self.max_valid_end_lens = self.clean_and_embed_tokens_with_cache(
token_list, token_indices, self.stop_strings, tokenizer
)
self.maximum_token_len = max([len(stop_string) for stop_string in self.stop_strings])
self.num_stop_strings = len(self.stop_strings)
self.target_lens = torch.tensor([len(stop_string) for stop_string in stop_strings], dtype=torch.int32)
def clean_and_embed_tokens_with_cache(self, token_list, token_indices, stop_strings, tokenizer):
# We don't use the tokenizer in the cache key, because I don't trust it to have well-behaved equality
if (token_list, token_indices, stop_strings) in STOP_STRING_EMBEDDING_CACHE:
embedding_vec, max_valid_positions, max_valid_end_lens = STOP_STRING_EMBEDDING_CACHE[
(token_list, token_indices, self.stop_strings)
]
STOP_STRING_EMBEDDING_CACHE.move_to_end((token_list, token_indices, stop_strings))
else:
clean_token_list, clean_token_indices = self.clean_tokenizer_vocab(tokenizer)
embedding_vec, max_valid_positions, max_valid_end_lens = self._stop_string_create_embedding_vec(
clean_token_list, clean_token_indices, stop_strings
)
STOP_STRING_EMBEDDING_CACHE[(token_list, token_indices, stop_strings)] = (
embedding_vec,
max_valid_positions,
max_valid_end_lens,
)
if len(STOP_STRING_EMBEDDING_CACHE) > 8:
STOP_STRING_EMBEDDING_CACHE.popitem(last=False) # Pop from the start, the least recently used item
return embedding_vec, max_valid_positions, max_valid_end_lens
@staticmethod
def clean_tokenizer_vocab(tokenizer, static_prefix="abcdef"):
"""
This method turns a tokenizer vocab into a "clean" vocab where each token represents the actual string
it will yield, without any special prefixes like "##" or "Ġ". This is trickier than it looks - the method
tokenizer.convert_tokens_to_string() does not always return the correct string because of issues with prefix
space addition/removal. To work around this, we add a static prefix to the start of the token, then remove
it (and any prefix that may have been introduced with it) after calling convert_tokens_to_string().
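For example (illustrative, assuming a byte-level BPE vocabulary like GPT-2's where "Ġ" marks a leading
space): the raw vocab entry "Ġworld" would be cleaned to " world", the string it actually contributes to
decoded text.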
"""
vocab = tokenizer.get_vocab()
clean_token_list = []
clean_token_indices = []
sentence_base = tokenizer(static_prefix, add_special_tokens=False)["input_ids"]
tokens_base = [tokenizer._convert_id_to_token(tok) for tok in sentence_base]
for token, token_idx in vocab.items():
token_string = tokenizer.convert_tokens_to_string(tokens_base + [token])
token_string = token_string[token_string.index(static_prefix) + len(static_prefix) :]
clean_token_list.append(token_string)
clean_token_indices.append(token_idx)
return tuple(clean_token_list), tuple(clean_token_indices)
@staticmethod
def _stop_string_get_matching_positions(
token_list, token_indices, stop_strings
) -> Tuple[Dict[str, Dict[str, List[int]]], Dict[str, Dict[str, List[int]]]]:
"""This function preprocesses stop strings and the tokenizer vocabulary to determine where tokens can
validly appear in the stop strings. For each token, it computes a list of positions in the stop string where the
token appears, as well as a list of the possible "end overlaps" for that token - that is, the number of characters
from the end of the stop string that overlap with the start of the token, which can have more than one value.
The reason for computing these may seem a bit cryptic - please see the docstring for StopStringCriteria for a full
explanation of what these values are for!"""
token_valid_positions = {}
token_end_overlaps = {}
for stop_string in stop_strings:
reversed_stop_string = stop_string[::-1]
token_valid_positions[stop_string] = {}
token_end_overlaps[stop_string] = {}
for token, tok_idx in zip(token_list, token_indices):
reversed_token = token[::-1]
matching_positions = []
possible_end_lengths = []
for i in range(1 - len(token), len(stop_string)):
if i < 0:
tok = reversed_token[-i:]
i = 0
else:
tok = reversed_token
stop = reversed_stop_string[i : i + len(tok)]
if tok.startswith(stop):
if i == 0:
possible_end_lengths.append(min(len(tok), len(stop)))
else:
matching_positions.append(i)
if matching_positions:
token_valid_positions[stop_string][tok_idx] = matching_positions
if possible_end_lengths:
token_end_overlaps[stop_string][tok_idx] = possible_end_lengths
return token_valid_positions, token_end_overlaps
@staticmethod
def _stop_string_create_embedding_vec(token_list, token_indices, stop_strings) -> Dict[str, torch.tensor]:
"""This function precomputes everything needed for the run-time checks in StopStringCriteria, and packs
them into an embedding tensor that can be accessed with pure tensor operations. For the specifics of the values
that are precomputed and what they are used for, please refer to the StopStringCriteria docstring!"""
token_valid_positions, token_end_overlaps = StopStringCriteria._stop_string_get_matching_positions(
token_list, token_indices, stop_strings
)
all_valid_positions = [len(val) for positions in token_valid_positions.values() for val in positions.values()]
# In some cases, tokens may have no valid internal positions (such as single-character stop strings), so
# we need a fallback to handle this case
max_valid_positions = max(all_valid_positions) if all_valid_positions else 1
# There should always be at least one valid end_len, however, so no fallback needed here
valid_end_lens = [len(val) for positions in token_end_overlaps.values() for val in positions.values()]
if not valid_end_lens:
raise ValueError(
"Stop string preprocessing was unable to identify tokens matching one or more of the "
"supplied stop string(s). This is most often caused by the stop "
"strings containing unusual characters that are not in the tokenizer vocabulary."
)
max_valid_end_lens = max(valid_end_lens)
vec_size = len(stop_strings) * (max_valid_positions + max_valid_end_lens) + 1
gather_vec = np.full((len(token_list), vec_size), dtype=np.int32, fill_value=-1)
for i, stop_string in enumerate(stop_strings):
positions = token_valid_positions[stop_string]
end_lens = token_end_overlaps[stop_string]
# Since this is lots of very small assignments of lists, we build it with numpy rather
# than torch for speed + simplicity, then convert to torch at the end
for token_idx, valid_positions in positions.items():
gather_vec[token_idx, max_valid_positions * i : max_valid_positions * i + len(valid_positions)] = (
valid_positions
)
for token_idx, possible_end_lens in end_lens.items():
gather_vec[
token_idx,
max_valid_positions * len(stop_strings) + max_valid_end_lens * i : max_valid_positions
* len(stop_strings)
+ max_valid_end_lens * i
+ len(possible_end_lens),
] = possible_end_lens
for token, token_idx in zip(token_list, token_indices):
gather_vec[token_idx, -1] = len(token)
gather_vec = torch.tensor(gather_vec, dtype=torch.int32)
return gather_vec, max_valid_positions, max_valid_end_lens
@add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING)
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.Tensor:
self.embedding_vec = self.embedding_vec.to(input_ids.device)
self.target_lens = self.target_lens.to(input_ids.device)
# The maximum length we need to consider is 1 token per character. Note that input_ids can also be
# *shorter* than the global max, and the code below should be ready for that
input_ids = input_ids[:, -self.maximum_token_len :]
# Flip input_ids because we're only matching strings at the end of the generated sequence
flipped_ids = torch.flip(input_ids, (1,))
# Size of the vector of positions a single token can match
max_valid_positions = self.max_valid_positions
# The embedding vec contains the valid positions, end_lengths and total lengths for each token
embedded = F.embedding(flipped_ids, self.embedding_vec)
# Now we split the embedding vector. valid_positions is the positions in the stop string the token can fit
valid_positions = embedded[:, 1:, : max_valid_positions * self.num_stop_strings].unflatten(
-1, (self.num_stop_strings, -1)
)
# end_lengths is the number of characters from the string, counting from the end, that the token
# contains. It can have multiple values if the same token can overlap different end lengths
end_lengths = embedded[:, :1, max_valid_positions * self.num_stop_strings : -1].unflatten(
-1, (self.num_stop_strings, -1)
)
# Lengths is the total length of each token. Unlike the others, it always has a single value
lengths = embedded[:, 1:, None, -1:] # Insert a dummy dimension for stop_strings even though lengths are const
# Concatenate lengths onto each possible end_lengths value
lengths = lengths.expand((-1, -1, end_lengths.shape[-2], end_lengths.shape[-1]))
lengths_with_ends = torch.cat([end_lengths, lengths], dim=1)
# cumsum() to get the number of matched characters in the stop string after each token
cumsum = lengths_with_ends.cumsum(dim=1) # B x maximum_token_len x num_stop_strings x max_valid_end_lens
# The calculation above assumes that all tokens are in valid positions. Now we mask the ones that are not.
# First, tokens match the start of the string if they have a positive value in the end_lengths vector
initial_match = end_lengths > 0
# Tokens continue the string if the cumsum() so far is one of the valid positions for that token
# Note that we're actually tracking one cumsum() for each possible end_length
later_match = torch.any(cumsum[:, :-1, :, None] == valid_positions[:, :, :, :, None], axis=-2)
# The match vector is a boolean vector that indicates which positions have valid tokens
match = torch.cat([initial_match, later_match], dim=1)
# Once a single position does not match, all positions following that position are masked
mask = (~match).cumsum(dim=1, dtype=torch.int32)
mask = mask == 0
# The string is matched if we reached a cumsum equal to or greater than the length of the string
# before hitting the mask
string_matches = torch.amax(cumsum * mask, dim=(1, -1)) >= self.target_lens[None, :]
# We return a per-sample vector that is True if any stop string is matched for that sample
return torch.any(string_matches, dim=-1)
class EosTokenCriteria(StoppingCriteria):
"""
This class can be used to stop generation whenever the "end-of-sequence" token is generated.
By default, it uses the `model.generation_config.eos_token_id`.
Args:
eos_token_id (`Union[int, List[int], torch.Tensor]`):
The id(s) of the *end-of-sequence* token.
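Examples:
A minimal illustrative sketch: only sequences whose last token is an end-of-sequence id are flagged as done:
```python
>>> import torch
>>> criteria = EosTokenCriteria(eos_token_id=2)
>>> input_ids = torch.tensor([[5, 6, 2], [5, 6, 7]])
>>> criteria(input_ids, scores=None)
tensor([ True, False])
```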
"""
def __init__(self, eos_token_id: Union[int, List[int], torch.Tensor]):
if not isinstance(eos_token_id, torch.Tensor):
if isinstance(eos_token_id, int):
eos_token_id = [eos_token_id]
eos_token_id = torch.tensor(eos_token_id)
self.eos_token_id = eos_token_id
@add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING)
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
self.eos_token_id = self.eos_token_id.to(input_ids.device)
is_done = isin_mps_friendly(input_ids[:, -1], self.eos_token_id)
return is_done
class ConfidenceCriteria(StoppingCriteria):
"""
This class can be used to stop generation whenever the assistant model's confidence in its prediction for the current token is lower than the threshold
`model.generation_config.assistant_confidence_threshold`, even if the number of speculative tokens (defined by `num_assistant_tokens`) has not been reached yet.
Args:
assistant_confidence_threshold (`float`):
The value of the threshold.
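Examples:
A minimal illustrative sketch of the call contract (shapes chosen for illustration): `scores` is the tuple of
per-step assistant logits and `input_ids` ends with the token the assistant just sampled:
```python
>>> import torch
>>> criteria = ConfidenceCriteria(assistant_confidence_threshold=0.5)
>>> scores = (torch.tensor([[4.0, 0.0]]),)  # logits over a vocabulary of size 2
>>> input_ids = torch.tensor([[0]])  # the sampled token is id 0, whose softmax probability is ~0.98
>>> criteria(input_ids, scores)
False
```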
"""
def __init__(self, assistant_confidence_threshold):
self.assistant_confidence_threshold = assistant_confidence_threshold
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
probs = scores[-1].softmax(-1)
p = probs[0, input_ids[0, -1]].item()
if p < self.assistant_confidence_threshold:
return True
return False
class StoppingCriteriaList(list):
@add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING)
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
is_done = torch.full((input_ids.shape[0],), False, device=input_ids.device, dtype=torch.bool)
for criteria in self:
is_done = is_done | criteria(input_ids, scores, **kwargs)
return is_done
@property
def max_length(self) -> Optional[int]:
for stopping_criterium in self:
if isinstance(stopping_criterium, MaxLengthCriteria):
return stopping_criterium.max_length
return None
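# A minimal usage sketch (illustrative; `model` and `inputs` are placeholders): the criteria above are
# usually bundled into a `StoppingCriteriaList` and either called directly inside a custom decoding loop
# or passed to `generate()` through its `stopping_criteria` argument, e.g.
#
#     criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=64), MaxTimeCriteria(max_time=5.0)])
#     outputs = model.generate(**inputs, stopping_criteria=criteria)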
# /Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
class CandidateGenerator:
"""Abstract base class for all candidate generators that can be applied during assisted generation."""
def get_candidates(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
"""
Fetches the candidates to be tried for the current input.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
Return:
`torch.LongTensor` of shape `(batch_size, candidate_length)` containing the candidate sequences to be
assessed by the model and, optionally, a `torch.FloatTensor` of shape `(batch_size, candidate_length,
vocabulary_size)` containing the logits associated to each candidate.
"""
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can call `get_candidates`."
)
def update_candidate_strategy(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, num_matches: int):
"""
Updates the candidate generation strategy based on the outcomes.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
scores (`torch.FloatTensor` of shape `(batch_size, candidate_length, config.vocab_size)`):
Prediction scores of a language modeling head. These can be logits for each vocabulary token when not using
beam search or log softmax for each vocabulary token when using beam search.
num_matches (`int`):
The number of matches between the candidate sequences and the model predictions.
"""
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can call "
"`update_candidate_strategy`."
)
class AssistedCandidateGenerator(CandidateGenerator):
"""
`CandidateGenerator` class to be used for assisted generation and speculative decoding. This class generates
candidates through the use of a smaller model. Read the following blog post for more information:
https://huggingface.co/blog/assisted-generation
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
assistant_model (`PreTrainedModel`):
The model to be used for generating candidates. This model should be smaller than the main model.
generation_config (`~generation.GenerationConfig`, *optional*):
The generation configuration to be used as base parametrization for the generation call.
logits_processor (`LogitsProcessorList`):
An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
used to modify the prediction scores of the language modeling head applied at each generation step.
model_kwargs (`Dict`):
The keyword arguments that will be passed to the main model, and are used as base inputs for the assistant
model as well.
inputs_tensor (`torch.Tensor`, *optional*):
The model input tensor. In encoder-decoder models, this is the encoder input.
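Example (a high-level sketch; users normally trigger this class indirectly by passing `assistant_model` to
`generate`, and the checkpoint names below are placeholders):
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("big-checkpoint")
>>> model = AutoModelForCausalLM.from_pretrained("big-checkpoint")
>>> assistant = AutoModelForCausalLM.from_pretrained("small-checkpoint")
>>> inputs = tokenizer("Assisted generation lets a small model draft tokens", return_tensors="pt")
>>> outputs = model.generate(**inputs, assistant_model=assistant)
```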
"""
def __init__(
self,
input_ids: torch.LongTensor,
assistant_model: "PreTrainedModel",
generation_config: "GenerationConfig",
model_kwargs: Dict,
inputs_tensor: Optional[torch.Tensor] = None,
logits_processor: "LogitsProcessorList" = None,
):
# Make sure all data is on the same device as the assistant model
device = assistant_model.device
input_ids = input_ids.to(device)
if inputs_tensor is not None:
inputs_tensor = inputs_tensor.to(device)
# Prepare the assistant and the starting number of candidate tokens
self.assistant_model = assistant_model
self.num_assistant_tokens = assistant_model.generation_config.num_assistant_tokens
self.assistant_confidence_threshold = assistant_model.generation_config.assistant_confidence_threshold
# Set eos in assistant same as in target model
self.assistant_model.generation_config.eos_token_id = generation_config.eos_token_id
# Prepare the kwargs for the assistant model
assistant_kwargs = {}
for key, value in model_kwargs.items(): # deepcopy crashes if we attempt to copy encoder outputs with grads
if key not in ("encoder_outputs", "assistant_encoder_outputs", "past_key_values"):
assistant_kwargs[key] = (
value.detach().to(device) if isinstance(value, torch.Tensor) else copy.deepcopy(value)
)
# Remove potential default "num_logits_to_keep" key
if "num_logits_to_keep" in assistant_kwargs.keys() and not assistant_model._supports_num_logits_to_keep():
del assistant_kwargs["num_logits_to_keep"]
if "assistant_encoder_outputs" in model_kwargs:
assistant_kwargs["encoder_outputs"] = model_kwargs["assistant_encoder_outputs"]
elif assistant_model.config.is_encoder_decoder:
inputs_tensor, model_input_name, assistant_kwargs = assistant_model._prepare_model_inputs(
inputs_tensor, assistant_model.generation_config.bos_token_id, assistant_kwargs
)
assistant_kwargs = assistant_model._prepare_encoder_decoder_kwargs_for_generation(
inputs_tensor, assistant_kwargs, model_input_name, assistant_model.generation_config
)
elif "encoder_outputs" in model_kwargs:
assistant_kwargs["encoder_outputs"] = model_kwargs["encoder_outputs"]
self.assistant_kwargs = assistant_kwargs
# Prepare assistant model's keys of inputs
if assistant_model.config.is_encoder_decoder:
# both are encoder-decoder
self.input_ids_key = "decoder_input_ids"
elif "encoder_outputs" in assistant_kwargs:
# special case for encoder-decoder with decoder-only assistant (like DistilWhisper)
self.input_ids_key = "input_ids"
self.assistant_kwargs["attention_mask"] = self.assistant_kwargs.get(
"decoder_attention_mask",
torch.ones((input_ids.shape[0], 1), device=input_ids.device, dtype=torch.long),
)
else:
# both are decoder-only
self.input_ids_key = "input_ids"
# Prepare generation-related options.
self.logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList()
self.generation_config = copy.deepcopy(generation_config)
self.generation_config.return_dict_in_generate = True
self.generation_config.output_scores = True
self.generation_config.assistant_confidence_threshold = self.assistant_confidence_threshold
# this flag allows us to set the confidence stopping criterion for assistant model generation.
self.generation_config.is_assistant = True
# avoid unnecessary warnings that min_length is larger than max_new_tokens
# remove the `MinLengthLogitsProcessor` if it exists (NOTE: no need to check for `MinNewTokensLogitsProcessor`)
self.main_model_min_length = self.generation_config.min_length
self.generation_config.min_length = 0
self.generation_config.min_new_tokens = None
for processor in self.logits_processor:
if isinstance(processor, MinLengthLogitsProcessor):
raise ValueError(
"Passing `MinLengthLogitsProcessor` when using `assisted_generation is disabled. "
"Please pass in `min_length` into `.generate()` instead"
)
# We need to roll back the cache in assisted generation, only DynamicCache is supported
self.generation_config.cache_implementation = None
if (
is_sklearn_available()
and self.assistant_model.generation_config.assistant_confidence_threshold
and type(self) is AssistedCandidateGenerator
):
self.probs = []
self.matches = []
def get_candidates(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
"""
Fetches the candidates to be tried for the current input.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
Return:
`torch.LongTensor` of shape `(batch_size, candidate_length)` containing the candidate sequences to be
assessed by the model and a `torch.FloatTensor` of shape `(batch_size, candidate_length,
vocabulary_size)` containing the logits associated to each candidate.
"""
input_ids = input_ids.to(self.assistant_model.device)
# Calculate new tokens to generate
min_new_tokens, max_new_tokens = self._calculate_new_tokens(input_ids)
if max_new_tokens == 0:
return input_ids, None
# Update past key values and masks
self._update_past_and_masks(input_ids)
# Generate candidates
generation_args = self._prepare_generation_args(input_ids, min_new_tokens, max_new_tokens)
candidate_ids, candidate_logits = self._generate_candidates(generation_args)
return candidate_ids, candidate_logits
def update_candidate_strategy(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, num_matches: int):
"""
Updates the candidate generation strategy based on the outcomes.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
scores (`torch.FloatTensor` of shape `(batch_size, candidate_length, config.vocab_size)`):
Prediction scores of a language modeling head. These can be logits for each vocabulary token when not using
beam search or log softmax for each vocabulary token when using beam search.
num_matches (`int`):
The number of matches between the candidate sequences and the model predictions.
"""
# Adjust the max number of assistant tokens to use in the next iteration. This is a simple heuristic,
# probably can be improved -- we want to balance the benefits of getting assistant tokens correct with the
# cost of forecasting incorrect assistant tokens.
if self.assistant_model.generation_config.num_assistant_tokens_schedule in {
"heuristic",
"heuristic_transient",
}:
# len(scores[0])-1 is the number of candidates according to the target tokenizer.
if num_matches == len(scores[0]) - 1:
self.num_assistant_tokens += 2.0
else:
self.num_assistant_tokens = max(1.0, self.num_assistant_tokens - 1.0)
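# For example (illustrative): with num_assistant_tokens == 5.0, a fully accepted draft bumps it to 7.0,
# while any rejection drops it to 4.0 (and never below 1.0).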
# The assistant's confidence threshold is adjusted throughout the speculative iterations to reduce the number
# of unnecessary draft and target forward passes. The costs are estimated based on the ROC curve, which
# considers the probability of the draft token and its match with the target. A cost of 25% is assigned to
# false positives and 75% to false negatives.
# This adaptation is not compatible with UAG, as it relies on the number of matched tokens based on the draft
# vocabulary, which is unavailable in UAG.
if (
is_sklearn_available()
and self.assistant_model.generation_config.assistant_confidence_threshold
and type(self) is AssistedCandidateGenerator
):
# update self.matches
self.matches.extend([1] * num_matches)
if len(self.probs) > len(self.matches):
self.matches.append(0)
# update self.probs
excess_length = len(self.probs) - len(self.matches)
if excess_length > 0:
del self.probs[-excess_length:]
if (
len(self.probs) > 5 and {0, 1}.issubset(self.matches)
): # require at least 5 samples to calculate the ROC curve and at least one positive and one negative sample
fpr, tpr, thresholds = roc_curve(self.matches, self.probs)
fnr = 1 - tpr
# Calculate the cost for each threshold
costs = fpr + 3 * fnr
# Find the threshold that minimizes the cost
optimal_threshold_index = np.argmin(costs)
best_threshold = thresholds[optimal_threshold_index]
self.assistant_model.generation_config.assistant_confidence_threshold = best_threshold
def _calculate_new_tokens(self, input_ids: torch.LongTensor) -> Tuple[int, int]:
"""Calculate the minimum and maximum number of new tokens to generate."""
new_cur_len = input_ids.shape[-1]
max_new_tokens = min(int(self.num_assistant_tokens), self.generation_config.max_length - new_cur_len - 1)
min_new_tokens = max(min(max_new_tokens, self.main_model_min_length - new_cur_len), 0)
return min_new_tokens, max_new_tokens
def _update_past_and_masks(self, input_ids: torch.LongTensor, remove_from_pkv: int = 0) -> bool:
"""Update past key values and attention masks for subsequent generation rounds."""
has_past_key_values = self.assistant_kwargs.get("past_key_values", None) is not None
if has_past_key_values:
new_cache_size = input_ids.shape[-1] - 1 - remove_from_pkv
self.assistant_kwargs["past_key_values"] = _crop_past_key_values(
self.assistant_model, self.assistant_kwargs["past_key_values"], new_cache_size - 1
)
self.assistant_kwargs = _prepare_attention_mask(
self.assistant_kwargs, input_ids.shape[-1], self.assistant_model.config.is_encoder_decoder
)
self.assistant_kwargs = _prepare_token_type_ids(self.assistant_kwargs, input_ids.shape[-1])
return has_past_key_values
def _prepare_generation_args(self, input_ids: torch.LongTensor, min_new_tokens: int, max_new_tokens: int) -> Dict:
"""Prepare arguments for the generation call."""
return {
self.input_ids_key: input_ids,
"min_new_tokens": min_new_tokens,
"max_new_tokens": max_new_tokens,
"generation_config": self.generation_config,
"logits_processor": self.logits_processor,
}
def _generate_candidates(self, generation_args: Dict) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
"""Generate candidate sequences using the assistant model."""
assistant_output = self.assistant_model.generate(**generation_args, **self.assistant_kwargs)
self.assistant_kwargs["past_key_values"] = assistant_output.past_key_values
if (
is_sklearn_available()
and self.assistant_model.generation_config.assistant_confidence_threshold
and type(self) is AssistedCandidateGenerator
):
scores_tensor = torch.cat(assistant_output.scores, dim=0)
scores_softmax = torch.softmax(scores_tensor, dim=-1)
ids = assistant_output.sequences[-1, -len(assistant_output.scores) :]
p = scores_softmax[range(len(ids)), ids]
self.probs.extend(p.tolist())
candidate_logits = torch.stack(assistant_output.scores, dim=1)
candidate_ids = assistant_output.sequences
return candidate_ids, candidate_logits
class AssistedCandidateGeneratorDifferentTokenizers(AssistedCandidateGenerator):
"""
`CandidateGenerator` class to be used for Universal Assisted Generation (UAG): assisted generation with different tokenizers
for the assistant and main models. This class generates candidates through the use of a smaller
model.
The main model input tokens are re-encoded into assistant model tokens, then candidate tokens are generated in the assistant encoding, which are
in turn re-encoded into main model candidate tokens. Validation then proceeds as explained above.
The re-encoding steps involve decoding token ids into text and then encoding the text using a different tokenizer.
Since re-encoding the tokens may result in tokenization discrepancies, UAG finds the longest common subsequence between the source and target encodings,
to ensure the new tokens include the correct prompt suffix.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
assistant_model (`PreTrainedModel`):
The model to be used for generating candidates. This model should be smaller than the main model.
target_tokenizer (`PreTrainedTokenizerBase`):
The tokenizer used for the target model.
assistant_tokenizer (`PreTrainedTokenizerBase`):
The tokenizer used for the assistant model.
generation_config (`~generation.GenerationConfig`, *optional*):
The generation configuration to be used as base parametrization for the generation call.
logits_processor (`LogitsProcessorList`):
An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
used to modify the prediction scores of the language modeling head applied at each generation step.
model_kwargs (`Dict`):
The keyword arguments that will be passed to the main model, and are used as base inputs for the assistant
model as well.
inputs_tensor (`torch.Tensor`, *optional*):
The model input tensor. In encoder-decoder models, this is the encoder input.
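Example (a high-level sketch; this class is used internally when `generate` receives an assistant model
with a different tokenizer, in which case both tokenizers must be passed; the objects below are
placeholders):
```python
>>> outputs = model.generate(
...     **inputs,
...     assistant_model=assistant,
...     tokenizer=target_tokenizer,
...     assistant_tokenizer=assistant_tokenizer,
... )
```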
"""
def __init__(
self,
input_ids: torch.LongTensor,
assistant_model: "PreTrainedModel",
target_tokenizer: "PreTrainedTokenizerBase",
assistant_tokenizer: "PreTrainedTokenizerBase",
generation_config: "GenerationConfig",
model_kwargs: Dict,
inputs_tensor: Optional[torch.Tensor] = None,
logits_processor: "LogitsProcessorList" = None,
):
super().__init__(input_ids, assistant_model, generation_config, model_kwargs, inputs_tensor, logits_processor)
self.target_tokenizer = target_tokenizer
self.assistant_tokenizer = assistant_tokenizer
self.prev_target_ids_len: Optional[int] = None
self.prev_assistant_ids = None
self.target_lookbehind = assistant_model.generation_config.target_lookbehind
self.assistant_lookbehind = assistant_model.generation_config.assistant_lookbehind
@staticmethod
def _get_longest_diag_dict(input_matrix, nonzero_idx):
"""
Calculates, for each non-zero starting index, the length of the diagonal run of non-zero elements in a given matrix.
Args:
input_matrix (torch.Tensor): The input matrix.
nonzero_idx (torch.Tensor): The indices of the non-zero elements in the matrix.
Returns:
dict: A dictionary where the keys are the indices of the non-zero elements and the values are the lengths of the longest diagonal sequences starting from those indices.
"""
visited = set()
diags = {}
for idx in nonzero_idx:
start_idx = torch.clone(idx)
tuple_start_idx = tuple(start_idx.tolist())
if tuple_start_idx in visited:
continue
visited.add(tuple_start_idx)
cur_diag_len = 1
start_idx += 1
while start_idx[0] < input_matrix.shape[0] and start_idx[1] < input_matrix.shape[1]:
tuple_start_idx = tuple(start_idx.tolist())
visited.add(tuple_start_idx)
if input_matrix[start_idx[0], start_idx[1]] == 1:
cur_diag_len += 1
start_idx += 1
else:
break
diags[idx] = cur_diag_len
return diags
@staticmethod
def _get_longest_diag_index(input_matrix):
"""
Returns the start index and length of the longest diagonal in the given input.
Args:
input_matrix (numpy.ndarray): The input matrix.
Returns:
tuple: A tuple containing the start index and length of the longest diagonal.
"""
diags = AssistedCandidateGeneratorDifferentTokenizers._get_longest_diag_dict(
input_matrix, input_matrix.nonzero()
)
diags_values = list(diags.values())
diags_keys = list(diags.keys())
best_diag = np.argmax(diags_values)
diag_start_index = diags_keys[best_diag]
diag_start_length = diags_values[best_diag]
return diag_start_index, diag_start_length
@staticmethod
def _get_tokens_diag(prompt, prompt_plus_new_tokens):
"""
Input:
prompt: 2D array of shape (batch_size, prompt_length), represents the original prompt tokens
prompt_plus_new_tokens: 2D array of shape (batch_size, prompt_length), represents the suffix of the original prompt, with additional new tokens.
Output:
discrepancy_length: int, represents the number of tokens that need to be replaced from prompt
new_tokens_only: 2D array of shape (batch_size, new_token_length), represents the new tokens that are not in prompt
discrepancy_only: 2D array of shape (batch_size, discrepancy_length), represents the tokens in prompt_plus_new_tokens that overlap the end of prompt but differ from it
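For example (illustrative): with prompt = [[10, 11, 12]] and prompt_plus_new_tokens = [[11, 12, 13, 14]],
the longest matching diagonal is the shared span [11, 12], so discrepancy_length is 0, new_tokens_only is
[[13, 14]] and discrepancy_only is empty.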
"""
compare_mat = prompt_plus_new_tokens.T == prompt
if not torch.is_tensor(compare_mat):
compare_mat = torch.tensor(compare_mat)
compare_mat_int = compare_mat.to(int)
if not compare_mat_int.any().item():
# empty intersection between prompt and prompt_plus_new_tokens
return None, None, None
longest_location, longest_diag_length = AssistedCandidateGeneratorDifferentTokenizers._get_longest_diag_index(
compare_mat_int
)
new_token_start_index = longest_location[0] + longest_diag_length
discrepancy_with_old = longest_location[1] + longest_diag_length
discrepancy_length = (prompt.shape[1] - discrepancy_with_old).item()
new_tokens_only = prompt_plus_new_tokens[:, new_token_start_index + discrepancy_length :]
discrepancy_only = prompt_plus_new_tokens[
:, new_token_start_index : new_token_start_index + discrepancy_length
]
return discrepancy_length, new_tokens_only, discrepancy_only
def convert_source_tokens_to_target_tokens(
self,
input_ids,
source_tokenizer,
destination_tokenizer,
):
"""
Convert token IDs from one tokenizer to another.
Args:
input_ids: The input token IDs.
source_tokenizer: The source tokenizer.
destination_tokenizer: The destination tokenizer.
Returns:
The converted token IDs.
"""
text = source_tokenizer.batch_decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
dest_ids = destination_tokenizer(text, add_special_tokens=True, return_tensors="pt")["input_ids"]
return dest_ids.to(input_ids.device)
def get_candidates(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
"""
Fetches the candidates to be tried for the current input.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
Return:
`torch.LongTensor` of shape `(batch_size, candidate_length)` containing the candidate sequences to be
assessed by the model and a `torch.FloatTensor` of shape `(batch_size, candidate_length,
vocabulary_size)` containing the logits associated to each candidate.
"""
max_new_tokens = int(self.num_assistant_tokens)
if max_new_tokens == 0:
return input_ids, None
input_ids = input_ids.to(self.assistant_model.device)
remove_from_pkv = 0
assistant_input_ids, remove_from_pkv = self._prepare_assistant_input_ids(input_ids)
self.prev_assistant_ids = assistant_input_ids
min_new_tokens = max(min(max_new_tokens, self.main_model_min_length - assistant_input_ids.shape[-1]), 0)
self._update_past_and_masks(assistant_input_ids, remove_from_pkv)
generation_args = self._prepare_generation_args(assistant_input_ids, min_new_tokens, max_new_tokens)
self.assistant_kwargs.pop("attention_mask", None)
assistant_output = self.assistant_model.generate(**generation_args, **self.assistant_kwargs)
new_target_ids = self._process_assistant_outputs(input_ids, assistant_output.sequences, assistant_input_ids)
# Update state
self.prev_target_ids_len = input_ids.shape[1]
self.assistant_kwargs["past_key_values"] = assistant_output.past_key_values
self.prev_assistant_ids = assistant_output.sequences
if self.prev_target_ids_len >= new_target_ids.shape[1]:
return input_ids, None
return new_target_ids, None
def _prepare_assistant_input_ids(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, int]:
"""Converts target input IDs to assistant input IDs, handling discrepancies."""
convert_kwargs = {
"source_tokenizer": self.target_tokenizer,
"destination_tokenizer": self.assistant_tokenizer,
}
remove_from_pkv = 0
if self.prev_assistant_ids is not None and self.prev_target_ids_len > self.target_lookbehind:
# input_ids contains all target prompt input ids and some new target input ids
start_index_in_target_window = self.prev_target_ids_len - self.target_lookbehind
new_assistant_ids = self.convert_source_tokens_to_target_tokens(
input_ids[:, start_index_in_target_window:], **convert_kwargs
)
prompt_use_length = new_assistant_ids.shape[1]
prompt_use = self.prev_assistant_ids[:, -prompt_use_length:]
discrepancy_length, new_tokens_only, discrepancy_only = self._get_tokens_diag(
prompt_use, new_assistant_ids
)
assistant_input_ids = self.prev_assistant_ids
if new_tokens_only is not None:
if discrepancy_length > 0 and discrepancy_only.shape[1] > 0:
if discrepancy_length == discrepancy_only.shape[1]:
assistant_input_ids[:, -discrepancy_length:] = discrepancy_only
elif discrepancy_length > discrepancy_only.shape[1]:
discrepancy_length_diff = discrepancy_length - discrepancy_only.shape[1]
assistant_input_ids = assistant_input_ids[:, :-discrepancy_length_diff]
assistant_input_ids[:, -discrepancy_only.shape[1] :] = discrepancy_only
remove_from_pkv = discrepancy_length
if new_tokens_only.shape[1] > 0:
assistant_input_ids = torch.cat([assistant_input_ids, new_tokens_only], dim=-1)
else:
# edge case: in case of no intersection between prompt and new_assistant_ids
assistant_input_ids = torch.cat([assistant_input_ids, new_assistant_ids], dim=-1)
else:
assistant_input_ids = self.convert_source_tokens_to_target_tokens(input_ids, **convert_kwargs)
self.prev_target_ids_len = input_ids.shape[1]
return assistant_input_ids, remove_from_pkv
def _process_assistant_outputs(
self, input_ids: torch.LongTensor, assistant_sequences: torch.LongTensor, assistant_input_ids: torch.LongTensor
) -> torch.LongTensor:
"""Processes assistant outputs to obtain target input IDs."""
num_prev_assistant = self.prev_assistant_ids.shape[1]
start_assistant_look_index = num_prev_assistant - self.assistant_lookbehind
new_target_ids_from_window = self.convert_source_tokens_to_target_tokens(
assistant_sequences[:, start_assistant_look_index:],
source_tokenizer=self.assistant_tokenizer,
destination_tokenizer=self.target_tokenizer,
)
target_prompt_use_length = new_target_ids_from_window.shape[1]
target_prompt_use = input_ids[:, -target_prompt_use_length:]
_, target_new_tokens_only, _ = self._get_tokens_diag(target_prompt_use, new_target_ids_from_window)
new_target_ids = input_ids
if target_new_tokens_only is not None:
if target_new_tokens_only.shape[1] > 0:
new_target_ids = torch.cat([new_target_ids, target_new_tokens_only], dim=-1)
else:
# edge case: in case of no intersection between prompt and new_target_ids
new_target_ids = torch.cat([new_target_ids, new_target_ids_from_window], dim=-1)
if hasattr(self.generation_config, "max_length"):
new_target_ids = new_target_ids[:, : self.generation_config.max_length]
return new_target_ids
class PromptLookupCandidateGenerator(CandidateGenerator):
"""
`CandidateGenerator` class to be used for prompt lookup generation. This class generates candidates by looking up
likely continuations in the provided prompt (input_ids) itself.
    See the following repository for more information: https://github.com/apoorvumang/prompt-lookup-decoding
    Args:
        max_matching_ngram_size (`int`):
            The maximum ngram size to be considered for matching in the prompt.
        num_output_tokens (`int`):
            The number of tokens to be output as candidate tokens.
        max_length (`int`):
            The maximum total number of tokens that can be generated. For decoder-only models this includes the prompt length.
            Defaults to 20, the default max length in the generation config.
        eos_token_id (`torch.Tensor`, *optional*):
            The id(s) of the EOS token(s). Candidate sequences are truncated at the first EOS found.
"""
| 10,723 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
def __init__(
self,
eos_token_id: torch.Tensor = None,
num_output_tokens: int = 10,
max_matching_ngram_size: int = None,
max_length: int = 20,
):
self.num_output_tokens = num_output_tokens
self.max_matching_ngram_size = max_matching_ngram_size if max_matching_ngram_size else 2
self.max_length = max_length
self.eos_token_id = eos_token_id
if self.max_matching_ngram_size <= 0 or self.num_output_tokens <= 0:
raise ValueError("Invalid max_matching_ngram_size or num_output_tokens")
def get_candidates(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
"""
Fetches the candidates to be tried for the current input.
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
| 10,723 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
Return:
`torch.LongTensor` of shape `(num_candidates, candidate_length)`: The candidate sequences to be tried.
"""
input_length = input_ids.size(1)
# Don't generate more than `max_length - 1` candidates since the target model generates one extra token.
if self.max_length == input_length + 1:
return input_ids, None
chosen_ids = None
match_found = False
for ngram_size in range(min(self.max_matching_ngram_size, input_length - 1), 0, -1):
# Create sliding windows of size ngram_size
windows = input_ids.unfold(dimension=1, size=ngram_size, step=1)
# Convert ngram to a tensor for comparison
ngram_tensor = input_ids[0, -ngram_size:]
# Find where the windows match the ngram
matches = (windows == ngram_tensor).all(dim=2)
# Get the indices of matches
match_indices = matches.nonzero(as_tuple=True)[1]
| 10,723 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
# Iterate through match indices to find a valid continuation
for idx in match_indices:
start_idx = idx + ngram_size
end_idx = start_idx + self.num_output_tokens
end_idx = min(end_idx, input_length, self.max_length)
if start_idx < end_idx:
chosen_ids = input_ids[0, start_idx:end_idx]
match_found = True
| 10,723 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
# remove remaining candidate ids if an "eos" token is found, otherwise the target model may
# accept eos and the rest as valid, thus not stopping generation after "eos"
# NOTE: below code is written based on the fact that assisted decoding supports only bs=1
mask = isin_mps_friendly(chosen_ids, self.eos_token_id)
match_indices_eos = torch.nonzero(mask)
if match_indices_eos.numel() > 0:
first_eos_index = match_indices_eos[0].item()
chosen_ids = chosen_ids[:first_eos_index]
break
if match_found:
break
if chosen_ids is None or len(chosen_ids) == 0:
# In case we didn't find a match return the input sequence unchanged, reverts back to autoregressive decoding
return input_ids, None
| 10,723 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
        # Now we need to extend input_ids with chosen_ids
chosen_ids = chosen_ids.unsqueeze(0)
candidate_input_ids = torch.cat((input_ids, chosen_ids), dim=1)
# assisted_generation expects logits as well, but we don't have those here, so returning None
return candidate_input_ids, None
def update_candidate_strategy(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, num_matches: int):
"""
Updates the candidate generation strategy based on the outcomes.
| 10,723 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
scores (`torch.FloatTensor` of shape `(batch_size, candidate_length, config.vocab_size)`):
                Prediction scores of a language modeling head. These can be logits for each vocabulary token when not
                using beam search or log softmax for each vocabulary token when using beam search.
num_matches (`int`):
The number of matches between the candidate sequences and the model predictions.
"""
# Currently does nothing
return
| 10,723 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
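A compact, self-contained sketch of the ngram matching performed in `get_candidates` above, with toy values (this is an illustration, not the library code; `prompt_lookup_sketch` is a hypothetical helper). In practice this generator is typically enabled by passing `prompt_lookup_num_tokens` to `generate`.
```python
import torch


def prompt_lookup_sketch(input_ids: torch.LongTensor, ngram_size: int = 2, num_output_tokens: int = 3):
    # Look for the prompt's final ngram earlier in the prompt and propose the
    # tokens that followed it as draft candidates (batch size 1 assumed).
    seq = input_ids[0]
    ngram = seq[-ngram_size:]
    windows = seq.unfold(0, ngram_size, 1)                     # all sliding ngrams
    match_indices = (windows == ngram).all(dim=1).nonzero().flatten()
    for idx in match_indices.tolist():
        start = idx + ngram_size
        end = min(start + num_output_tokens, seq.shape[0])
        if start < end:
            return seq[start:end]                              # draft continuation
    return None


ids = torch.tensor([[1, 2, 3, 4, 5, 1, 2]])
print(prompt_lookup_sketch(ids))  # tensor([3, 4, 5]) -- the tokens that followed the earlier "1 2"
```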
class EarlyExitCandidateGenerator(AssistedCandidateGenerator):
"""
`CandidateGenerator` class to be used for assisted generation and speculative decoding. This class generates
candidates through the use of **the model itself**, exiting early. Can only be used with models that support early
exit, e.g., `facebook/layerskip-llama3.2-1B`.
| 10,724 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
assistant_model (`PreTrainedModel`):
The original model. This model must support early exit (i.e. is trained to compute logits in earlier
layers).
generation_config (`~generation.GenerationConfig`, *optional*):
The generation configuration to be used as base parametrization for the generation call.
logits_processor (`LogitsProcessorList`):
An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`]
used to modify the prediction scores of the language modeling head applied at each generation step.
model_kwargs (`Dict`):
The keyword arguments that will be passed to the main model, and are used as base inputs for the assistant
model as well.
| 10,724 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
inputs_tensor (`torch.Tensor`, *optional*):
The model input tensor. In encoder-decoder models, this is the encoder input.
"""
| 10,724 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
def __init__(
self,
input_ids: torch.LongTensor,
assistant_model: "PreTrainedModel",
generation_config: "GenerationConfig",
model_kwargs: Dict,
inputs_tensor: Optional[torch.Tensor] = None,
logits_processor: "LogitsProcessorList" = None,
):
super().__init__(
input_ids=input_ids,
assistant_model=assistant_model,
generation_config=generation_config,
model_kwargs=model_kwargs,
inputs_tensor=inputs_tensor,
logits_processor=logits_processor,
)
# We have to move early exit out of the generation config, otherwise the assistant will also call `generate`
# with early exit
self.assistant_early_exit = self.generation_config.assistant_early_exit
self.generation_config.assistant_early_exit = None
| 10,724 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
def get_candidates(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
# Temporarily sets the number of hidden layers to the early exit value
base_model = getattr(self.assistant_model, self.assistant_model.base_model_prefix)
original_num_hidden_layers = base_model.config.num_hidden_layers
base_model.config.num_hidden_layers = self.assistant_early_exit
candidate_ids, candidate_logits = super().get_candidates(input_ids)
base_model.config.num_hidden_layers = original_num_hidden_layers
return candidate_ids, candidate_logits
| 10,724 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/candidate_generator.py
|
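For context, early-exit assisted generation is usually triggered through `generate` rather than by instantiating this class directly. A hedged usage sketch, assuming a checkpoint trained for early exit such as the `facebook/layerskip-llama3.2-1B` mentioned above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/layerskip-llama3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Alice and Bob", return_tensors="pt")
# Draft tokens from the logits of layer 4, then verify them with the full model.
outputs = model.generate(**inputs, assistant_early_exit=4, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```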
@dataclass
class WatermarkDetectorOutput:
"""
Outputs of a watermark detector.
| 10,725 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
Args:
num_tokens_scored (np.array of shape (batch_size)):
Array containing the number of tokens scored for each element in the batch.
num_green_tokens (np.array of shape (batch_size)):
Array containing the number of green tokens for each element in the batch.
green_fraction (np.array of shape (batch_size)):
Array containing the fraction of green tokens for each element in the batch.
z_score (np.array of shape (batch_size)):
            Array containing the z-score for each element in the batch. The z-score here shows
            how many standard deviations the green token count in the input text lies away
            from the count expected for unwatermarked text.
p_value (np.array of shape (batch_size)):
Array containing the p-value for each batch obtained from z-scores.
        prediction (np.array of shape (batch_size), *optional*):
| 10,725 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
Array containing boolean predictions whether a text is machine-generated for each element in the batch.
        confidence (np.array of shape (batch_size), *optional*):
Array containing confidence scores of a text being machine-generated for each element in the batch.
"""
| 10,725 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
num_tokens_scored: np.array = None
num_green_tokens: np.array = None
green_fraction: np.array = None
z_score: np.array = None
p_value: np.array = None
prediction: Optional[np.array] = None
confidence: Optional[np.array] = None
| 10,725 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
class WatermarkDetector:
r"""
    Detector for watermarked generated text. The detector needs to be given the exact same settings that were used
    during text generation to replicate the watermark greenlist generation and thus detect the watermark. This includes
    the correct device that was used during text generation, the correct watermarking arguments, and the correct tokenizer vocab size.
The code was based on the [original repo](https://github.com/jwkirchenbauer/lm-watermarking/tree/main).
See [the paper](https://arxiv.org/abs/2306.04634) for more information.
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
Args:
model_config (`PretrainedConfig`):
The model config that will be used to get model specific arguments used when generating.
device (`str`):
The device which was used during watermarked text generation.
watermarking_config (Union[`WatermarkingConfig`, `Dict`]):
The exact same watermarking config and arguments used when generating text.
ignore_repeated_ngrams (`bool`, *optional*, defaults to `False`):
Whether to count every unique ngram only once or not.
max_cache_size (`int`, *optional*, defaults to 128):
The max size to be used for LRU caching of seeding/sampling algorithms called for every token.
Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkDetector, WatermarkingConfig
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
>>> model_id = "openai-community/gpt2"
>>> model = AutoModelForCausalLM.from_pretrained(model_id)
>>> tok = AutoTokenizer.from_pretrained(model_id)
>>> tok.pad_token_id = tok.eos_token_id
>>> tok.padding_side = "left"
>>> inputs = tok(["This is the beginning of a long story", "Alice and Bob are"], padding=True, return_tensors="pt")
>>> input_len = inputs["input_ids"].shape[-1]
>>> # first generate text with watermark and without
>>> watermarking_config = WatermarkingConfig(bias=2.5, seeding_scheme="selfhash")
>>> out_watermarked = model.generate(**inputs, watermarking_config=watermarking_config, do_sample=False, max_length=20)
>>> out = model.generate(**inputs, do_sample=False, max_length=20)
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
>>> # now we can instantiate the detector and check the generated text
>>> detector = WatermarkDetector(model_config=model.config, device="cpu", watermarking_config=watermarking_config)
>>> detection_out_watermarked = detector(out_watermarked, return_dict=True)
>>> detection_out = detector(out, return_dict=True)
>>> detection_out_watermarked.prediction
array([ True, True])
>>> detection_out.prediction
array([False, False])
```
"""
def __init__(
self,
model_config: PretrainedConfig,
device: str,
watermarking_config: Union[WatermarkingConfig, Dict],
ignore_repeated_ngrams: bool = False,
max_cache_size: int = 128,
):
if isinstance(watermarking_config, WatermarkingConfig):
watermarking_config = watermarking_config.to_dict()
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
self.bos_token_id = (
model_config.bos_token_id if not model_config.is_encoder_decoder else model_config.decoder_start_token_id
)
self.greenlist_ratio = watermarking_config["greenlist_ratio"]
self.ignore_repeated_ngrams = ignore_repeated_ngrams
self.processor = WatermarkLogitsProcessor(
vocab_size=model_config.vocab_size, device=device, **watermarking_config
)
# Expensive re-seeding and sampling is cached.
self._get_ngram_score_cached = lru_cache(maxsize=max_cache_size)(self._get_ngram_score)
def _get_ngram_score(self, prefix: torch.LongTensor, target: int):
greenlist_ids = self.processor._get_greenlist_ids(prefix)
return target in greenlist_ids
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
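Wrapping `_get_ngram_score` with `lru_cache` inside `__init__`, rather than decorating the method, keeps the LRU cache local to each detector instance. A minimal illustration of the same pattern (a hypothetical class, not part of the library):
```python
from functools import lru_cache


class CachedScorer:
    def __init__(self, max_cache_size: int = 128):
        # Each instance gets its own bounded cache; decorating the method
        # directly would share a single cache across every instance.
        self._score_cached = lru_cache(maxsize=max_cache_size)(self._score)

    def _score(self, x: int) -> int:
        return x * x  # stand-in for an expensive seeding/sampling call
```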
def _score_ngrams_in_passage(self, input_ids: torch.LongTensor):
batch_size, seq_length = input_ids.shape
selfhash = int(self.processor.seeding_scheme == "selfhash")
n = self.processor.context_width + 1 - selfhash
indices = torch.arange(n).unsqueeze(0) + torch.arange(seq_length - n + 1).unsqueeze(1)
ngram_tensors = input_ids[:, indices]
num_tokens_scored_batch = np.zeros(batch_size)
green_token_count_batch = np.zeros(batch_size)
for batch_idx in range(ngram_tensors.shape[0]):
frequencies_table = collections.Counter(ngram_tensors[batch_idx])
ngram_to_watermark_lookup = {}
for ngram_example in frequencies_table.keys():
prefix = ngram_example if selfhash else ngram_example[:-1]
target = ngram_example[-1]
ngram_to_watermark_lookup[ngram_example] = self._get_ngram_score_cached(prefix, target)
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
if self.ignore_repeated_ngrams:
                # counts a green/red hit once per unique ngram.
                # the number of tokens scored becomes the number of unique ngrams.
num_tokens_scored_batch[batch_idx] = len(frequencies_table.keys())
green_token_count_batch[batch_idx] = sum(ngram_to_watermark_lookup.values())
else:
num_tokens_scored_batch[batch_idx] = sum(frequencies_table.values())
green_token_count_batch[batch_idx] = sum(
freq * outcome
for freq, outcome in zip(frequencies_table.values(), ngram_to_watermark_lookup.values())
)
return num_tokens_scored_batch, green_token_count_batch
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
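The `indices` tensor built in `_score_ngrams_in_passage` enumerates every sliding window of length `n` over the sequence. A quick illustration with `n = 3` and `seq_length = 6`:
```python
import torch

seq_length, n = 6, 3
indices = torch.arange(n).unsqueeze(0) + torch.arange(seq_length - n + 1).unsqueeze(1)
print(indices)
# tensor([[0, 1, 2],
#         [1, 2, 3],
#         [2, 3, 4],
#         [3, 4, 5]])
```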
def _compute_z_score(self, green_token_count: np.array, total_num_tokens: np.array) -> np.array:
expected_count = self.greenlist_ratio
numer = green_token_count - expected_count * total_num_tokens
denom = np.sqrt(total_num_tokens * expected_count * (1 - expected_count))
z = numer / denom
return z
def _compute_pval(self, x, loc=0, scale=1):
z = (x - loc) / scale
return 1 - (0.5 * (1 + np.sign(z) * (1 - np.exp(-2 * z**2 / np.pi))))
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
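A worked example of the z-score and p-value computed above, assuming `greenlist_ratio = 0.25` and 100 scored tokens (illustrative values only):
```python
import numpy as np

greenlist_ratio = 0.25
green_token_count = np.array([60.0])   # observed green tokens
total_num_tokens = np.array([100.0])   # tokens scored

# Same formula as `_compute_z_score`: deviation from the count expected
# when no watermark is present (greenlist hits are random).
z = (green_token_count - greenlist_ratio * total_num_tokens) / np.sqrt(
    total_num_tokens * greenlist_ratio * (1 - greenlist_ratio)
)
# `_compute_pval` approximates the normal survival function 1 - Phi(z).
p = 1 - (0.5 * (1 + np.sign(z) * (1 - np.exp(-2 * z**2 / np.pi))))
print(z, p)  # z ~ 8.08, p ~ 0 -> well above a typical threshold of 3, flagged as watermarked
```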
def __call__(
self,
input_ids: torch.LongTensor,
z_threshold: float = 3.0,
return_dict: bool = False,
) -> Union[WatermarkDetectorOutput, np.array]:
"""
Args:
input_ids (`torch.LongTensor`):
                The watermarked generated text. It is advised to remove the prompt, which can affect detection.
            z_threshold (`float`, *optional*, defaults to `3.0`):
                Changing this threshold changes the sensitivity of the detector: a higher z_threshold makes it less
                sensitive, a lower one more sensitive.
return_dict (`bool`, *optional*, defaults to `False`):
                Whether to return a `~generation.WatermarkDetectorOutput` or not. If not, it will return boolean predictions.
Return:
[`~generation.WatermarkDetectorOutput`] or `np.array`: A [`~generation.WatermarkDetectorOutput`]
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|
if `return_dict=True` otherwise a `np.array`.
| 10,726 |
/Users/nielsrogge/Documents/python_projecten/transformers/src/transformers/generation/watermarking.py
|