bhatta1 committed · commit a59104c · verified · 1 parent: dd028fc

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -11,7 +11,7 @@ library_name:
  ## What is it?
  - Recipe for producing a state-of-the-art LLM pre-training dataset having `10+ Trillion` tokens, derived from [FineWeb V1.1.0](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
  - Evaluation results showing more than `2%` avg improvement (with multiple random seeds) over FineWeb V1.1.0 tokens on common benchmarks for a `7B` parameter ablation model
- - [Data Prep Kit](https://github.com/IBM/data-prep-kit) [Notebook](https://github.com/IBM/data-prep-kit/blob/dev/examples/notebooks/GneissWeb/GneissWeb.ipynb) for reproducing the annotations and filters on top of FineWeb and [Notebook](https://github.com/ian-cho/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python.ipynb) for applying a bloom filter on FineWeb to quickly reproduce an approximate version of GneissWeb (without annotations or filters)
+ - [Data Prep Kit](https://github.com/IBM/data-prep-kit) [Notebook](https://github.com/data-prep-kit/data-prep-kit/blob/dev/recipes/GneissWeb/GneissWeb.ipynb) for reproducing the annotations and filters on top of FineWeb and [Notebook](https://github.com/ian-cho/data-prep-kit/blob/dev/transforms/universal/bloom/bloom_python.ipynb) for applying a bloom filter on FineWeb to quickly reproduce an approximate version of GneissWeb (without annotations or filters)
  - Details in the [Blog](https://research.ibm.com/blog/gneissweb-for-granite-training) and [Paper](https://huggingface.co/datasets/ibm-granite/GneissWeb/blob/main/GneissWebPaper_Feb21_2025.pdf)
  - Ablation models with `7B` parameters pre-trained on `350B` tokens of [GneissWeb](https://huggingface.co/ibm-granite/GneissWeb.7B_ablation_model_on_350B_GneissWeb.seed1), [FineWeb](https://huggingface.co/ibm-granite/GneissWeb.7B_ablation_model_on_350B_FineWeb.seed1) V1.1.0 and [FineWeb.Edu](https://huggingface.co/ibm-granite/GneissWeb.7B_ablation_model_on_350B_FineWeb.Edu.seed1). For each dataset, three ablation models trained on subsets of 350 billion tokens with different random seeds are being released
  - Gneiss, pronounced "nice" (naɪs), is a durable igneous rock, just like IBM’s open-source [Granite](https://huggingface.co/ibm-granite) models trained from it
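For readers wondering what the Bloom-filter shortcut in the changed line amounts to, here is a minimal, self-contained Python sketch of the idea: build a Bloom filter over the document IDs retained in GneissWeb, then keep only the FineWeb records whose IDs (probably) match. This is not the Data Prep Kit `bloom` transform itself; the filter parameters, the `id` field name, and the example IDs are hypothetical placeholders.

```python
import hashlib


class BloomFilter:
    """Small, dependency-free Bloom filter used only for illustration."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive num_hashes bit positions from one SHA-256 digest (4 bytes each).
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        # May return false positives, never false negatives.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))


# Build the filter from the document IDs kept in GneissWeb (hypothetical IDs).
bloom = BloomFilter()
for doc_id in ["<urn:uuid:doc-1>", "<urn:uuid:doc-2>"]:
    bloom.add(doc_id)

# Keep only FineWeb records whose IDs are (probably) in the filter.
fineweb_records = [
    {"id": "<urn:uuid:doc-1>", "text": "kept"},
    {"id": "<urn:uuid:doc-9>", "text": "dropped"},
]
approx_gneissweb = [rec for rec in fineweb_records if rec["id"] in bloom]
print([rec["id"] for rec in approx_gneissweb])
```

Because a Bloom filter admits a small rate of false positives but no false negatives, this membership test is fast and memory-cheap at billions-of-documents scale, which is why the result is an approximate rather than exact reconstruction of GneissWeb; the linked notebooks are the authoritative reproduction path.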