Scaling Laws for Robust Comparison of Open Foundation Language-Vision Models and Datasets [arXiv] [NeurIPS 2025]
We provide pre-trained models across various scales (model scales: S/32 - H/14; samples seen: 1.28M - 3.07B; datasets: Re-LAION 1.4B, DataComp 1.4B, DFN-1.4B), including all intermediate checkpoints saved during training, as used in the paper Scaling Laws for Robust Comparison of Open Foundation Language-Vision Models and Datasets [arXiv], [NeurIPS 2025].
Please refer to the official GitHub repository for more information on how to reproduce the results and how to download and use the models.
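As a quick orientation, below is a minimal sketch of loading one of the released checkpoints for zero-shot inference, assuming the checkpoints follow the standard open_clip format; the architecture name ("ViT-B-32"), the checkpoint path, and the image file are placeholders, and the official repository remains the authoritative reference for exact model names and download instructions.

```python
# Minimal sketch: zero-shot inference with a downloaded checkpoint via open_clip.
# Assumes open_clip-compatible checkpoints; names and paths below are placeholders.
import torch
from PIL import Image
import open_clip

# Pick the architecture matching the checkpoint scale (S/32 ... H/14).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32",
    pretrained="path/to/checkpoint.pt",  # placeholder: local path to a downloaded checkpoint
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# Placeholder inputs: one image and a small set of candidate captions.
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize embeddings, then compute image-text similarity probabilities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Zero-shot label probabilities:", probs)
```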