Update README.md
README.md CHANGED
@@ -62,7 +62,7 @@ print(tokenizer.decode(outputs[0]))
 
 ## Training Data
 
-The model is fine-tuned on the **
+The model is fine-tuned on the **BramVanroy/dolly-15k-dutch** dataset, specifically using the training split (`train_sft`). This dataset is not SoTA; however, the goal is to demonstrate the model's capabilities, and each example fits within 1024 tokens.
 
 ### Fine-tuning with Learning Rate Optimization
 
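The added README line claims every example fits within a 1024-token budget. A minimal sketch of how such a length filter might look (a whitespace split stands in for the model's real tokenizer, and `fits_budget` is a hypothetical helper, not code from this repository):

```python
# Sketch: keep only examples that fit the 1024-token budget mentioned
# in the README. A whitespace split approximates token counting; for
# real use, swap in the model's tokenizer (e.g. via AutoTokenizer).
MAX_TOKENS = 1024

def fits_budget(text: str, max_tokens: int = MAX_TOKENS) -> bool:
    """Return True if the text's (approximate) token count fits the budget."""
    return len(text.split()) <= max_tokens

examples = [
    "korte Nederlandse instructie",  # short example, kept
    "woord " * 2000,                 # 2000 tokens, dropped
]
kept = [t for t in examples if fits_budget(t)]
print(len(kept))  # 1
```

With a Hugging Face-style setup, the `train_sft` split named in the diff would typically be loaded with `load_dataset("BramVanroy/dolly-15k-dutch", split="train_sft")` before applying a filter like this.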