bhatta1 committed on Commit 16098a7 · verified · 1 Parent(s): d7f76b5

Update README.md

Files changed (1): README.md (+85 −1)

@@ -130,7 +130,91 @@ This gain further increases at 7 Billion model size, models trained on GneissWeb

**Figure 13:** Average evaluation score on High-Signal tasks versus the number of tokens at 7 Billion model size for 350 Billion tokens. The model trained on GneissWeb consistently outperforms the one trained on FineWeb.V1.1 throughout the training.

**GneissWeb Recipe Details**

In this section, we describe the key ingredients of the GneissWeb recipe that provide significant gains, explaining each component (processing step) along with the evaluation results of its individual ablation experiment.

&nbsp;&nbsp;**Exact Substring Deduplication**

Removing duplicates from training data has been shown to reduce memorization and improve model performance (Lee et al., 2022). FineWeb applied per-snapshot fuzzy deduplication, removing near-duplicate documents with the MinHash algorithm. FineWeb also applied a repetition filter, an intra-document deduplication step that removes documents with many repeated lines and paragraphs. However, duplicates still remain at the sequence level, both within and across documents. Such repeated substrings bypass FineWeb's document-level deduplication steps for several reasons: they may not represent a significant enough portion of a document, or a single document may include repeated sections from various documents.

We apply exact substring deduplication to remove any substring of a predetermined length that repeats verbatim more than once, adapting the suffix-array-based implementation from Lee et al. (2022). Exact substring deduplication can be tuned through two hyper-parameters: the length threshold (the minimum length of repeated text sequences) and the frequency threshold. We use a length threshold of 50, consistent with the implementation in Lee et al. (2022).

We make several modifications to the exact substring deduplication implementation from Lee et al. (2022) so that it runs at scale, and we adapt it to remove exact substring duplicates in a sharded manner. In particular, we shard each snapshot of FineWeb-V1.1.0 into sets of roughly equal size and apply exact substring deduplication on each shard independently. Also, rather than removing all copies of a duplicate substring, we retain the first occurrence of each duplicate substring and remove any subsequent matches exceeding 50 consecutive tokens.
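
This retention rule is easy to state in code. Below is a minimal sketch, assuming the duplicate spans have already been located by a suffix-array pass as in Lee et al. (2022); the function and example are illustrative, not the actual GneissWeb implementation.

```python
# Illustrative only: given [start, end) offsets of duplicate substrings in a
# document (first occurrences already excluded), cut the later matches and
# keep everything else. Locating the duplicates via suffix arrays is omitted.

def remove_duplicate_spans(text: str, duplicate_ranges: list[tuple[int, int]]) -> str:
    """Drop the given [start, end) spans from `text`, keeping the rest."""
    kept, cursor = [], 0
    for start, end in sorted(duplicate_ranges):
        if start > cursor:
            kept.append(text[cursor:start])
        cursor = max(cursor, end)
    kept.append(text[cursor:])
    return "".join(kept)

# The second copy of a repeated sequence is removed; the first is retained.
doc = "repeated passage here. repeated passage here. unique tail."
dup = "repeated passage here."
second = doc.index(dup, 1)
print(remove_duplicate_spans(doc, [(second, second + len(dup))]))
```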

In Figure 14, we show the progression of accuracy with training on High-Signal tasks for a 1.4 Billion parameter model trained on 350 Billion tokens. For both datasets, accuracy increases over the course of training, and the accuracy of the dataset with exact substring deduplication is consistently higher, ending at 57.39 versus 55.99 for the baseline.

**Figure 14:** Ablation experiment comparing Exact Substring Deduplication against the FineWeb.V1.1 baseline at 1.4 Billion model size for 350 Billion tokens.

&nbsp;&nbsp;**Custom Data Quality classifiers (fastText)**

The fastText family of binary classifiers has been shown to perform well in identifying high-quality pre-training documents. Specifically, DCLM trained a fastText classifier on a mix of instruction-formatted data (OpenHermes-2.5) and high-scoring posts from ELI5, and demonstrated its effectiveness for quality filtering, surpassing compute-heavy methods such as AskLLM (prompting an LLM to ask whether a document is helpful). After annotating a subset of documents using the DCLM-fastText classifier, we observed that it favors well-structured, well-formatted documents (e.g., those including bullet points), but tends to miss high-quality informational documents without substantial formatting.

In addition to DCLM-fastText, we trained a custom fastText classifier on a mix of high-quality synthetic data and data annotated by an LLM for high educational value. Specifically, we used 400k documents, equally split between positive (i.e., high-quality) and negative (i.e., low-quality) classes. We obtained the 200k positive documents as follows:

- 190k synthetic documents randomly sampled from the Cosmopedia dataset, an open synthetic dataset consisting of textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.

- 10k documents with high educational value, selected as follows: we annotated 600k random documents from FineWeb.V1.1, asking Mixtral-8x22B-Instruct to score each document between 1 and 5 for its educational quality (with 5 being the highest), using a prompt similar to the one used by FineWeb-Edu. We then selected 10k random documents with scores >= 4.

As the negative documents, we selected 200k documents with scores <= 2 out of the 600k Mixtral-annotated documents.
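
As a rough sketch of how such a classifier can be trained with the supervised fastText package, assuming the 400k labeled documents have been written to a text file in fastText's one-example-per-line format; the file names, label strings, and hyper-parameters below are illustrative, not the exact GneissWeb settings:

```python
# Sketch: train a binary quality classifier with the fastText supervised API.
# Input format, one document per line, label first:
#   __label__hq <document text ...>
#   __label__lq <document text ...>
import fasttext

model = fasttext.train_supervised(
    input="quality_train.txt",  # 400k docs, equally split between the classes
    lr=0.1,                     # hyper-parameters here are placeholders
    epoch=5,
    wordNgrams=2,
    dim=100,
)
model.save_model("cosmopedia_edu_fasttext.bin")

# Predict on a new document (fastText expects single-line input).
labels, probs = model.predict("An introductory textbook chapter on photosynthesis.")
print(labels[0], float(probs[0]))
```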

We performed an ablation combining the DCLM-fastText filter and the Cosmopedia-Edu-fastText filter using an OR rule: a document is retained if at least one filter votes it high-quality. The OR rule achieved performance similar to the AND rule (wherein documents are retained only if both classifiers vote high-quality) and better performance than either fastText classifier alone, while retaining a substantially larger number of tokens.
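
A minimal sketch of the OR rule, assuming both classifiers are available as fastText model files; the file names, label string, and 0.5 confidence threshold are assumptions for illustration:

```python
# Sketch of the OR rule: retain a document if at least one classifier votes
# it high-quality. Model files and label names are placeholders.
import fasttext

dclm_model = fasttext.load_model("dclm_fasttext.bin")
custom_model = fasttext.load_model("cosmopedia_edu_fasttext.bin")

def votes_high_quality(model, text: str, threshold: float = 0.5) -> bool:
    # fastText predict requires single-line input.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__hq" and probs[0] >= threshold

def keep_document(text: str) -> bool:
    # OR rule: one positive vote is enough (the AND rule would require both).
    return votes_high_quality(dclm_model, text) or votes_high_quality(custom_model, text)
```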

**Figure 15:** Ablation experiment comparing a combination of fastText filters against the FineWeb.V1.1 baseline.

In Figure 15, we plot the average evaluation score on High-Signal tasks versus the number of training tokens for a 1.4 Billion parameter model. Filtering with the combination of fastText classifiers outperforms the FineWeb.V1.1 baseline throughout the training.

&nbsp;&nbsp;**Readability scores**

Readability scores are formulas based on text statistics (such as sentence length, average number of words, and number of syllables) designed to assess how easily a text can be read and understood (Duffy, 1985). We apply readability scores as a novel quality metric to help identify and filter hard-to-read, low-quality documents.

A large number of readability score formulas have been developed to assess text difficulty. We experimented with several of them and selected the McAlpine-EFLAW readability score. The McAlpine-EFLAW score of a document is the number of words plus the number of mini-words (words of <= 3 characters), divided by the number of sentences. A lower score means the document is easier to understand for a reader with English as a foreign language. Ablation experiments comparing McAlpine-EFLAW against other readability scores confirmed that it yields the best results.
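
As a worked illustration of the formula; the naive word and sentence splitting below is a simplification, not the exact implementation:

```python
# McAlpine-EFLAW = (words + mini-words) / sentences, where mini-words have
# <= 3 characters. Lower scores indicate easier text.
import re

def mcalpine_eflaw(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)
    mini_words = [w for w in words if len(w) <= 3]
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return (len(words) + len(mini_words)) / sentences

# 12 words, 10 of them mini-words, 2 sentences -> (12 + 10) / 2 = 11.0
print(mcalpine_eflaw("The cat sat on the mat. It was a very happy cat."))
```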

We analyzed the readability score distributions of documents grouped by category. Specifically, we considered the documents from the following three snapshots of FineWeb-V1.1.0: CC-MAIN-2024-10, CC-MAIN-2023-40, and CC-MAIN-2023-14, and computed the top-level category for each document using WatsonNLP hierarchical text categorization, which is based on the Interactive Advertising Bureau (IAB) Tech Lab content taxonomy. We observe that the readability score distributions in certain categories, such as science, education, technology, and medical health, differ from the overall distribution across all categories. This variation can be attributed to the fact that many documents in these categories demand a higher level of education to understand and hence carry higher readability scores, raising the category's average.

Based on this observation, selecting a single threshold from the overall data distribution and applying it to all documents risks losing high-quality documents. Guided by the readability score distributions in different categories, we leverage the category information of documents and develop a category-aware readability score quality filter as part of our ensemble quality filter. In general, we use a more lenient threshold for these specific categories so that documents with potential educational value are not filtered out solely because of their high readability scores; this results in better performance than filtering without category information.
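
A sketch of what such a category-aware cutoff can look like; the threshold values are placeholders, not the tuned GneissWeb settings:

```python
# Category-aware readability filter: key categories get a more lenient
# (higher) McAlpine-EFLAW cutoff so that harder-to-read educational
# documents survive. Thresholds below are illustrative placeholders.
KEY_CATEGORIES = {"science", "education", "technology & computing", "medical health"}

def passes_readability_filter(score: float, category: str,
                              default_cutoff: float = 30.0,
                              lenient_cutoff: float = 40.0) -> bool:
    cutoff = lenient_cutoff if category in KEY_CATEGORIES else default_cutoff
    return score <= cutoff  # lower McAlpine-EFLAW means easier to read
```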

In Figure 16, we show the progression of accuracy with training on High-Signal tasks for a 1.4 Billion parameter model trained on 35 Billion tokens. For both datasets, accuracy increases over the course of training, and the accuracy of the dataset with the readability score quality filter is consistently higher, ending at 53.20 versus 51.94 for the baseline.

**Figure 16:** Ablation experiment comparing Readability Score Filter against the FineWeb.V1.1 baseline at 1.4 Billion model size for 35 Billion tokens.

&nbsp;&nbsp;**Extreme tokenized documents removal**

After manually inspecting the fastText model-quality annotations and readability scores of a large number of low-quality documents, we found that several abnormal documents were mislabeled by these annotators. We observed a peculiar pattern after tokenizing these documents: while most of them had similar lengths, they produced significantly different token counts. To quantify this effect, we propose novel annotations that leverage information from the "pre-tokenization" stage (document character length, document size) and the "post-tokenization" stage (token counts) to identify potentially low-quality documents. We refer to documents with an extremely high or low number of tokens per character (or tokens per byte) as extreme-tokenized documents (see Figure 17 for a schematic).
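
A sketch of computing these pre-/post-tokenization annotations; the tokenizer choice is an assumption for illustration, as any tokenizer would do:

```python
# Annotate a document with TokensPerChar and TokensPerByte. The tokenizer is
# illustrative; the actual GneissWeb tokenizer is not specified here.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenization_stats(text: str) -> dict[str, float]:
    n_tokens = len(tokenizer.encode(text))
    return {
        "tokens_per_char": n_tokens / max(1, len(text)),
        "tokens_per_byte": n_tokens / max(1, len(text.encode("utf-8"))),
    }

print(tokenization_stats("A short, perfectly ordinary English sentence."))
```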

**Figure 17:** A schematic outlining the steps for removing extreme-tokenized documents.

We analyzed the distributions of TokensPerChar and TokensPerByte for documents grouped by category, using the same three FineWeb-V1.1.0 snapshots as above (CC-MAIN-2024-10, CC-MAIN-2023-40, and CC-MAIN-2023-14) with top-level categories computed via WatsonNLP hierarchical text categorization. The distributions are generally bell-shaped for each category, but their means and variances differ by category. Furthermore, low-quality documents typically fall into the two extremes of the distribution. We therefore characterize the extreme-tokenized documents of a given category as those falling into the two extremes of the TokensPerChar (or TokensPerByte) distribution for that category.

Guided by the distributions of TokensPerChar and TokensPerByte in different categories, we leverage the category information of documents and develop a category-aware extreme-tokenized quality filter as part of our ensemble quality filter. At a high level, we use stricter thresholds on TokensPerChar/TokensPerByte for documents outside the key categories and more lenient thresholds for documents within them.
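
A sketch of one way to realize this, cutting the two tails of each category's TokensPerChar distribution; the percentile cutoffs are assumptions, since the text above only specifies "the two extremes":

```python
# Per-category tail cutoffs on TokensPerChar. The 2nd/98th percentiles are
# placeholders; key categories could be given wider (more lenient) bounds.
import numpy as np

def category_bounds(tpc_by_category: dict[str, list[float]],
                    lo_pct: float = 2.0,
                    hi_pct: float = 98.0) -> dict[str, tuple[float, float]]:
    return {
        cat: (float(np.percentile(vals, lo_pct)), float(np.percentile(vals, hi_pct)))
        for cat, vals in tpc_by_category.items()
    }

def is_extreme_tokenized(tpc: float, category: str,
                         bounds: dict[str, tuple[float, float]]) -> bool:
    lo, hi = bounds[category]
    return tpc < lo or tpc > hi
```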

In Figure 18, we show the progression of accuracy with training on High-Signal tasks for a 1.4 Billion parameter model trained on 35 Billion tokens. For both datasets, accuracy increases over the course of training, and the dataset with the extreme-tokenized quality filter ends higher at 52.78 versus 51.94 for the baseline.

**Figure 18:** Ablation experiment comparing Extreme-Tokenized Filter against the FineWeb.V1.1 baseline at 1.4 Billion model size for 35 Billion tokens.

&nbsp;&nbsp;**Document Categorization Classifiers**

As mentioned in previous sections, the quality score distributions of documents in certain categories, which tend to contain documents of high educational level, differ from the overall distribution across all categories. In particular, the following IAB categories supported by WatsonNLP categorization have significantly different distributions than the overall distribution: science, education, technology & computing, and medical health. Thus, for each of these key categories, we annotate whether each document falls into the category.

To perform category classification on the 96 snapshots in FineWeb-V1.1.0 at scale, we train four binary fastText category classifiers, one for each of the four key categories. Specifically, we generated labeled data using WatsonNLP hierarchical categorization and used the supervised fastText package to train the classifiers on the following documents:

- Positive documents: 400k documents randomly sampled from the documents labeled with that specific category with a confidence score of 0.95 or above.

- Negative documents: 400k documents randomly sampled from the documents labeled with any category other than the four key categories with a confidence score of 0.95 or above.

Each classifier takes a document as input and produces a label indicating whether the document belongs to the category, along with a confidence score in [0, 1]. We use the trained document category classifiers to annotate all snapshots of FineWeb-V1.1.0 and leverage these category annotations in our category-aware readability score and extreme-tokenized quality filters, which results in better performance compared to filtering without category information.
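
Tying the pieces together, a sketch of how the category annotation can gate the two category-aware filters, reusing the illustrative helpers sketched earlier; the record field names are assumptions:

```python
# Sketch: apply both category-aware filters to an annotated document record.
# `passes_readability_filter` and `is_extreme_tokenized` refer to the earlier
# illustrative sketches; the dictionary keys are assumed annotation names.
def passes_category_aware_filters(doc: dict, bounds: dict[str, tuple[float, float]]) -> bool:
    category = doc["category"]  # from the fastText category classifiers
    return (passes_readability_filter(doc["mcalpine_eflaw"], category)
            and not is_extreme_tokenized(doc["tokens_per_char"], category, bounds))
```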

**Combining GneissWeb Components into a Winning Recipe**