Update README.md
README.md
CSTinyLlama-1.2B is a Czech language model continuously pretrained on 168b training tokens, starting from the English [TinyLLama-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) model. The model was pretrained on the ~67b-token [Large Czech Collection](https://huggingface.co/datasets/BUT-FIT/but_lcc) with a Czech tokenizer obtained using our vocabulary swap method.

Training was done on the [Karolina](https://www.it4i.cz/en) cluster.
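Because the model uses a custom Czech tokenizer, both the tokenizer and the weights should be loaded from the hub. The snippet below is a minimal loading sketch, assuming the standard `transformers` causal-LM interface; the repo id is taken from the roster below, and the prompt and generation settings are illustrative only.

```python
# Minimal usage sketch (assumption: the model follows the standard
# Hugging Face transformers causal-LM interface).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "BUT-FIT/CSTinyLlama-1.2B"  # repo id as listed in the roster below

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "Brno je "  # example Czech prompt, chosen only for illustration
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```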
# <span style="color:blue">BUT Model Roster</span>
- [BUT-FIT/CSTinyLlama-1.2B](https://huggingface.co/BUT-FIT/CSTinyLlama-1.2B)
- [BUT-FIT/Czech-GPT-2-XL-133k](https://huggingface.co/BUT-FIT/Czech-GPT-2-XL-133k)
- [BUT-FIT/csmpt7b](https://huggingface.co/BUT-FIT/csmpt7b)
# Loss
Below we
- (i) demonstrate the convergence speed of the released model (`TINYLLAMA1.2B_cztokenizer64k_align1.7k_tllama1.1B_C2048_lr1e-04_150k`, at step 160k).