This is a **RoBERTa-base** model trained from scratch in Spanish.
The training dataset is mc4 (1), subsampled to a total of about 50 million examples. Sampling is biased towards average perplexity values (using a Gaussian function), so that documents with very high perplexity (poor quality) or very low perplexity (short, repetitive texts) are discarded more often.
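
The weighting idea can be illustrated with a short sketch. This is a minimal illustration, not the project's actual preprocessing code: centering the Gaussian on the mean perplexity with the standard deviation as its width is an assumption, and all function names here are hypothetical.

```python
import numpy as np

def gaussian_weight(perplexity, mu, sigma):
    # Weight peaks at the average perplexity mu and decays for
    # documents with very high (poor quality) or very low
    # (short, repetitive) perplexity values.
    return np.exp(-((perplexity - mu) ** 2) / (2 * sigma ** 2))

def subsample(documents, perplexities, n_samples, rng=None):
    # Draw documents with probability proportional to the Gaussian
    # weight of their perplexity. mu and sigma are estimated from
    # the data here, which is an assumption for illustration only.
    rng = rng or np.random.default_rng()
    ppl = np.asarray(perplexities, dtype=float)
    weights = gaussian_weight(ppl, mu=ppl.mean(), sigma=ppl.std())
    probs = weights / weights.sum()
    idx = rng.choice(len(documents), size=n_samples, replace=False, p=probs)
    return [documents[i] for i in idx]
```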
This model starts from the checkpoint trained with sequence length 128 (2) and is further trained for 25,000 steps with sequence length 512.
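
As a quick usage sketch, the model can be loaded for masked-language-model inference with the standard `transformers` API; the `model_id` below is a placeholder and should be replaced with this checkpoint's actual repository name.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Placeholder id; substitute the repository name of this checkpoint.
model_id = "bertin-project/bertin-base-gaussian-exp-512seqlen"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# RoBERTa models use "<mask>" as the mask token.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("La capital de España es <mask>."))
```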
(1) https://huggingface.co/datasets/bertin-project/mc4-es-sampled
(2) https://huggingface.co/bertin-project/bertin-base-gaussian