---
library_name: transformers
tags: [custom_generate]
---
## Description
Test repo to experiment with calling `generate` from the hub. It is a simplified implementation of greedy decoding.
⚠️ This recipe has impossible requirements and is meant to crash. If you try to run it, you should see something like:
```
ValueError: Missing requirements for joaogante/test_generate_from_hub_bad_requirements:
foo (installed: None)
bar==0.0.0 (installed: None)
torch>=99.0 (installed: 2.6.0+cu126)
```
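A minimal sketch of how the crash could be reproduced, assuming a `transformers` version that supports the `custom_generate` argument on `generate`:
```py
# Sketch: attempt to run this Hub-hosted decoding recipe.
# The call is expected to fail the requirements check and raise the
# ValueError shown above before any decoding happens.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
# Raises: ValueError: Missing requirements for joaogante/test_generate_from_hub_bad_requirements
outputs = model.generate(
    **inputs,
    custom_generate="joaogante/test_generate_from_hub_bad_requirements",
    trust_remote_code=True,
)
```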
## Base model
`Qwen/Qwen2.5-0.5B-Instruct`
## Model compatibility
Most models. More specifically, any `transformers` LLM/VLM trained for causal language modeling.
## Additional Arguments
`left_padding` (`int`, *optional*): number of padding tokens to add before the provided input
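If the requirements were satisfiable, extra keyword arguments such as `left_padding` would be forwarded to the custom decoding loop. A hypothetical call, reusing `model` and `inputs` from the sketch above:
```py
# Hypothetical: `left_padding` is forwarded to the custom greedy decoding loop.
# With this repo it never runs, since the requirements check fails first.
outputs = model.generate(
    **inputs,
    custom_generate="joaogante/test_generate_from_hub_bad_requirements",
    trust_remote_code=True,
    left_padding=5,  # add 5 padding tokens before the provided input
)
```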
## Output Type changes
(none)