Update README.md
README.md
CHANGED
@@ -37,6 +37,13 @@ This model was pretrained only in a self-supervised way, without any supervised
### How to use

+**Finetuning:** \
+We have now added a finetuning example notebook along with a video! \
+Notebook: https://huggingface.co/Finnish-NLP/Ahma-3B/blob/main/Finetune_Ahma_3B_example.ipynb \
+Video: https://www.youtube.com/watch?v=6mbgn9XzpS4
+
+
+**Inference:** \
If you want to use this model for instruction-following, you need to use the same prompt format we used in the second stage of pretraining (basically the same format that Meta used in their Llama2 models). **Note: do not use "LlamaTokenizer" from the transformers library; always use AutoTokenizer instead, or use the plain sentencepiece tokenizer.** Here is an example using the instruction-following prompt format, with some generation arguments you can modify for your use:

```python
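# A minimal sketch, not the README's actual snippet: the diff is cut off right
# where the Python example begins, so the code below is a hypothetical
# reconstruction. It follows the note above by loading the tokenizer with
# AutoTokenizer (not LlamaTokenizer) and wrapping the instruction in the
# Llama2-style [INST] prompt format. The prompt text and generation arguments
# are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Finnish-NLP/Ahma-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Llama2-style instruction prompt: the user message goes between
# [INST] and [/INST].
instruction = "Mikä on Suomen pääkaupunki?"  # placeholder Finnish instruction
prompt = f"[INST] {instruction} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt")
# Example generation arguments; tune these for your use case.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```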