We performed an ablation where we combined the DCLM-fastText filter and the Cosmopedia-Edu-fastText filter using an OR rule. In particular, we retain documents for which at least one filter votes high-quality. The OR rule achieved performance similar to the AND rule (wherein documents are retained only if both classifiers vote high-quality) and better performance than the individual fastText classifiers, while retaining a substantially larger number of tokens.
<img src="fasttext_ablation_35b_single_seed_s42.png" alt="fasttext_ablation_35b_single_seed_s42.png" style="width:1000px;"/>
**Figure 15:** Ablation experiment comparing a combination of fastText filters against the FineWeb.V1.1 baseline.
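The OR-rule retention logic can be sketched as follows. This is a minimal illustration, not the report's implementation: the threshold value, score field names, and example documents are all assumptions, and in practice the scores would come from the two fastText classifiers.

```python
# Sketch of OR-rule vs AND-rule document retention, combining two
# quality-classifier scores. Threshold and field names are illustrative.
THRESHOLD = 0.5  # hypothetical high-quality cutoff for both classifiers

def retain_or(dclm_score: float, cosmo_score: float) -> bool:
    """Keep a document if at least one filter votes high-quality."""
    return dclm_score >= THRESHOLD or cosmo_score >= THRESHOLD

def retain_and(dclm_score: float, cosmo_score: float) -> bool:
    """Keep a document only if both filters vote high-quality."""
    return dclm_score >= THRESHOLD and cosmo_score >= THRESHOLD

# Toy corpus with made-up classifier scores.
docs = [
    {"text": "doc A", "dclm": 0.9, "cosmo": 0.2},
    {"text": "doc B", "dclm": 0.1, "cosmo": 0.8},
    {"text": "doc C", "dclm": 0.7, "cosmo": 0.9},
    {"text": "doc D", "dclm": 0.1, "cosmo": 0.3},
]

kept_or = [d["text"] for d in docs if retain_or(d["dclm"], d["cosmo"])]
kept_and = [d["text"] for d in docs if retain_and(d["dclm"], d["cosmo"])]
# Since AND acceptance implies OR acceptance, the OR rule always retains
# a superset of the AND rule's documents, hence the larger token count.
```

In the toy corpus, the OR rule keeps docs A, B, and C while the AND rule keeps only doc C, mirroring the token-retention gap described above.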
In Figure 16, we show the progression of accuracy on the High Signal Tasks over training for a 1.4 billion parameter model trained on 35 billion tokens. For both datasets compared, accuracy increases over training, and the dataset with the Readability Score quality filter is consistently higher, ending at 53.20 versus 51.94 for the baseline.
<img src="Rscore.png" alt="Rscore.png" style="width:1000px;"/>
**Figure 16:** Ablation experiment comparing the Readability Score filter against the FineWeb.V1.1 baseline at the 1.4 billion parameter model size for 35 billion tokens.
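As an illustration of how a readability-based quality filter can operate: the report does not specify which readability metric is used, so the sketch below assumes the classic Flesch reading ease formula, a crude vowel-group syllable heuristic, and a hypothetical retention cutoff.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Syllables are approximated by counting vowel groups per word.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

def passes_readability_filter(text: str, cutoff: float = 30.0) -> bool:
    # Hypothetical cutoff: drop documents that score as extremely hard to read.
    return flesch_reading_ease(text) >= cutoff
```

Short, simple sentences score high on this scale, while long, jargon-heavy ones score low; an actual pipeline would tune the metric and cutoff against downstream task accuracy, as in the ablation above.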