---
license: odc-by
task_categories:
- text-classification
- token-classification
- question-answering
- text-generation
- text2text-generation
size_categories:
- 100K<n<1M
---
# Essential Web v1.0 - 1M Token Sample
Approximately 1,000,000 tokens sampled from Essential Web v1.0.
## Dataset Info
- **Target**: 1,000,000 tokens
- **Actual**: ~1,099,800 tokens (estimated)
- **Source**: [EssentialAI/essential-web-v1.0](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0)
## Schema
This sample preserves ALL columns from the original dataset, including:
- `id`: Document ID
- `text`: Text content
- `metadata`: URL and source information
- `quality_signals`: RedPajama quality metrics
- `eai_taxonomy`: Essential AI taxonomy labels
- `pid`: Partition ID
- And all other original columns
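As a quick check that the full schema survived the sampling, you can list the columns after loading (a minimal sketch; the repository name matches the usage example below):
```python
from datasets import load_dataset

# Load the sample and confirm all original columns are present
dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M", split="train")
print(dataset.column_names)  # expect id, text, metadata, quality_signals, eai_taxonomy, pid, ...
print(dataset.features)      # full feature types for each column
```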
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M")

# Access the data with all columns
example = dataset['train'][0]
print(example['text'][:200] + "...")

# Access quality signals
print(example['quality_signals'])

# Access taxonomy
print(example['eai_taxonomy'])
```
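The sample can also be subset with the usual `datasets` operations. The snippet below filters on document length only, since the exact keys inside `quality_signals` and `eai_taxonomy` depend on the original dataset; the 1,000-character cutoff is an arbitrary illustrative threshold, not a recommendation:
```python
from datasets import load_dataset

dataset = load_dataset("sumuks/essential-web-v1.0-sample-1M", split="train")

# Keep only reasonably long documents (illustrative cutoff)
long_docs = dataset.filter(lambda ex: len(ex["text"]) > 1000)
print(f"{len(long_docs)} of {len(dataset)} documents exceed 1,000 characters")
```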
## File Structure
The dataset is split across multiple parquet files in the `data/` directory:
- `data/part-00000.parquet`
- `data/part-00001.parquet`
- etc.
The Hugging Face `datasets` library automatically loads all parts as a single dataset.
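If you prefer to work with the parquet shards directly (for example with pandas), each part can be downloaded and read on its own. This is a sketch assuming the `data/part-00000.parquet` naming shown above:
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download a single shard from the Hub and read it with pandas
path = hf_hub_download(
    repo_id="sumuks/essential-web-v1.0-sample-1M",
    filename="data/part-00000.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.columns.tolist())
print(len(df), "rows in this shard")
```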
## Sampling Method
- Random sampling across snapshots
- Preserves all original columns and metadata
- Token estimation: ~600 tokens per row
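The ~600 tokens-per-row heuristic implies roughly how many rows were drawn to hit the 1M-token target. A back-of-the-envelope check (the per-row figure comes from the card above, not from an exact tokenizer count):
```python
# Rough row-count estimate implied by the sampling parameters above
target_tokens = 1_000_000
est_tokens_per_row = 600

est_rows = target_tokens / est_tokens_per_row
print(f"Approximately {est_rows:.0f} rows sampled")  # ~1,667

# The actual sample landed slightly over target (~1,099,800 tokens),
# which is expected when rows are kept whole rather than truncated.
```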