add text-generation pipeline example with autocast (#25)
- add text-generation pipeline example with autocast (81df2f9645575ac85fce8943783c9ffb1c674ab0)
Co-authored-by: Vitaliy Chiley <[email protected]>
README.md
CHANGED
````diff
@@ -119,6 +119,22 @@ from transformers import AutoTokenizer
 tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
 ```
 
+The model can then be used, for example, within a text-generation pipeline.
+Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
+
+```python
+from transformers import pipeline
+
+pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
+
+with torch.autocast('cuda', dtype=torch.bfloat16):
+    print(
+        pipe('Here is a recipe for vegan banana bread:\n',
+             max_new_tokens=100,
+             do_sample=True,
+             use_cache=True))
+```
+
 ## Model Description
 
 The architecture is a modification of a standard decoder-only transformer.
````
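For anyone lifting the new snippet out of the README: it assumes `torch` has already been imported and that `model` and `tokenizer` were defined earlier in the document. A minimal sketch of that setup, assuming a generic `AutoModelForCausalLM` checkpoint; the repository id below is a placeholder, since only the tokenizer repo appears in the diff context:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the model this README actually describes.
model = AutoModelForCausalLM.from_pretrained("<model-repo>", torch_dtype=torch.bfloat16)

# Tokenizer repo taken from the diff context above.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```

Note that `torch.autocast('cuda', dtype=torch.bfloat16)` generally requires an Ampere-or-newer GPU; on older hardware, `torch.float16` is the usual fallback dtype.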