Evaluation experiment #17

by ldwang - opened

“We utilize the MiniCPM-1.2B model architecture with the MiniCPM3-4B tokenizer. Each experiment involves training on 100B tokens”

In the experimental setup section of the paper, is this validation experiment based on continued training of an existing model, or on training from scratch? @BigDong
Thanks.

OpenBMB org

Each experiment is trained from scratch on 100B tokens.
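
For reference, a minimal sketch of what "from scratch" means for this setup, using Hugging Face Transformers: build the model from the MiniCPM-1.2B configuration (randomly initialized weights, no pretrained checkpoint) while taking the tokenizer from MiniCPM3-4B. The repo ids and the vocab-size adjustment below are assumptions for illustration, not the authors' actual training code.

```python
# Sketch only: assumes the repo ids "openbmb/MiniCPM3-4B" (tokenizer) and
# "openbmb/MiniCPM-1B-sft-bf16" (1.2B architecture/config); the real experiments
# may use a different training stack and config source.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Tokenizer from MiniCPM3-4B, as described in the quoted setup.
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM3-4B", trust_remote_code=True)

# Take only the architecture config of the 1.2B model; do NOT load its weights.
config = AutoConfig.from_pretrained("openbmb/MiniCPM-1B-sft-bf16", trust_remote_code=True)
config.vocab_size = len(tokenizer)  # align embeddings with the MiniCPM3-4B vocabulary

# from_config() gives a randomly initialized model, i.e. training from scratch,
# as opposed to from_pretrained(), which would be continued training.
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")
```

The key distinction is `from_config()` (fresh random initialization) versus `from_pretrained()` (continued training of an existing checkpoint); the answer above confirms the former.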

BigDong changed discussion status to closed
