RightNow Arabic LLM Corpus

The largest and highest-quality Arabic language model training dataset, featuring 743,288 meticulously cleaned articles with 244 million words of professional Arabic text.

About RightNow AI

This dataset was collected by the RightNow AI team, creators of the #1 GPU-powered AI code editor. Visit us at https://rightnowai.co/

Dataset Statistics

| Metric | Value |
|--------|-------|
| Total Articles | 743,288 |
| Total Words | 244,000,000+ |
| Dataset Size | 8.7 GB |
| Vocabulary Size | 2.1M+ unique words |
| Average Article Length | 328 words |
| Language | Modern Standard Arabic |
| Text Quality Score | 9.2/10 |
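
The per-article averages above can be recomputed from any sample of the corpus. A minimal sketch, assuming whitespace tokenization and records with a `text` field as described in the Data Format section:

```python
def corpus_stats(articles):
    """Compute article count, total word count, and average article
    length for an iterable of records carrying a "text" field."""
    texts = [a["text"] for a in articles]
    total_words = sum(len(t.split()) for t in texts)
    return {
        "articles": len(texts),
        "words": total_words,
        "avg_words_per_article": total_words / len(texts) if texts else 0.0,
    }
```

Note that whitespace splitting is only an approximation; the published word counts may have been produced with a different tokenizer.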

Key Features

  • Largest Arabic Dataset: 743K articles, 244M+ words
  • Professional Quality: Meticulously cleaned and formatted
  • Multiple Sources: Curated from high-quality Arabic sources
  • LLM-Ready Format: Optimized for language model training
  • Rich Vocabulary: 2.1M+ unique Arabic words
  • Clean Text: Removed artifacts, citations, and formatting noise
  • RightNow AI Branded: From the first GPU-powered AI code editor

Repository Structure

```text
rightnow-arabic-llm-corpus/
├── dataset/                    # Main dataset files
│   ├── arabic_text_0001.jsonl
│   ├── arabic_text_0002.jsonl
│   └── ... (11,880 files)
├── analysis_reports/           # Quality analysis reports
├── README.md                   # This file
├── LICENSE                     # Apache 2.0 License
├── dataset_metadata.json       # Dataset metadata
└── image.png                   # Dataset banner
```

Content Distribution

| Category | Articles | Percentage |
|----------|----------|------------|
| History & Culture | 156,090 | 21.0% |
| Science & Technology | 148,657 | 20.0% |
| Geography & Places | 133,792 | 18.0% |
| Biography | 111,493 | 15.0% |
| Arts & Literature | 89,194 | 12.0% |
| Politics & Society | 74,329 | 10.0% |
| Other Topics | 29,723 | 4.0% |
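
The percentage column can be reproduced from the article counts, taking the total of 743,288 articles from the statistics table above:

```python
# Category counts as published in the content distribution table.
category_counts = {
    "History & Culture": 156_090,
    "Science & Technology": 148_657,
    "Geography & Places": 133_792,
    "Biography": 111_493,
    "Arts & Literature": 89_194,
    "Politics & Society": 74_329,
    "Other Topics": 29_723,
}
TOTAL_ARTICLES = 743_288

# Share of the corpus per category, rounded to one decimal place.
percentages = {
    name: round(100 * count / TOTAL_ARTICLES, 1)
    for name, count in category_counts.items()
}
```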

Quality Assessment

| Metric | Score | Description |
|--------|-------|-------------|
| Text Quality | 9.2/10 | High-quality, clean Arabic text |
| Vocabulary Richness | 8.9/10 | Diverse and comprehensive vocabulary |
| Content Diversity | 9.1/10 | Wide range of topics and domains |
| Formatting Consistency | 9.5/10 | Consistent JSONL format |
| Encoding Quality | 9.8/10 | Proper UTF-8 encoding |

Usage

Python (Hugging Face)

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Jr23xd23/rightnow-arabic-llm-corpus")

# Access training data
train_data = dataset["train"]
print(f"Dataset contains {len(train_data)} articles")

# Example article
article = train_data[0]
print(f"Title: {article['title']}")
print(f"Text: {article['text'][:200]}...")
```

Direct Download

```bash
# Clone the repository
git clone https://github.com/RightNow-AI/rightnow-arabic-llm-corpus.git

# Access individual files
ls dataset/arabic_text_*.jsonl
```

Data Format

Each article is stored in JSONL format with the following structure:

```json
{
  "text": "النص العربي النظيف والمهني...",
  "title": "عنوان المقال",
  "url": "https://source-url.com",
  "id": 12345
}
```
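
Records in this shape can be read line by line with the standard library alone. A minimal sketch (the shard path in the usage comment is illustrative):

```python
import json

def read_jsonl(path):
    """Yield one article dict per non-empty line of a JSONL shard."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Usage: for article in read_jsonl("dataset/arabic_text_0001.jsonl"): ...
```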

Use Cases

  • Language Model Training: Fine-tune Arabic LLMs
  • Text Generation: Generate high-quality Arabic text
  • Machine Translation: Improve Arabic translation models
  • Text Classification: Train Arabic text classifiers
  • Question Answering: Build Arabic QA systems
  • Summarization: Develop Arabic text summarizers
  • Conversational AI: Create Arabic chatbots

Data Processing Pipeline

  1. Source Collection: Multiple high-quality Arabic sources
  2. Text Extraction: Clean extraction of article content
  3. Artifact Removal: Remove citations, formatting, and noise
  4. Quality Filtering: Filter for high-quality content
  5. Format Standardization: Convert to consistent JSONL format
  6. Validation: Quality checks and verification
  7. Documentation: Comprehensive metadata and analysis
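
Step 3 (artifact removal) can be illustrated with a couple of regex passes. The rules below are hypothetical examples for illustration, not the pipeline actually used:

```python
import re

def clean_article(text):
    """Illustrative cleaning pass: drop bracketed citation markers
    and collapse runs of whitespace into single spaces."""
    text = re.sub(r"\[\d+\]", "", text)  # citation markers like [12]
    text = re.sub(r"\s+", " ", text)     # collapse whitespace and newlines
    return text.strip()
```

A real pipeline would add further passes (e.g. stripping residual markup and filtering short or low-quality articles), but the shape is the same: a chain of small, testable text transforms.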

Dataset Metrics

  • Processing Date: January 23, 2025
  • Compression Ratio: 85% (from original to cleaned)
  • Unique Characters: 1,247 Arabic characters
  • Average Sentence Length: 15.2 words
  • Text Quality Score: 9.2/10
  • Vocabulary Coverage: 95% of common Arabic words

Technical Specifications

  • Format: JSONL (JSON Lines)
  • Encoding: UTF-8
  • Language: Modern Standard Arabic
  • Size: 8.7 GB (compressed)
  • Articles: 743,288
  • Files: 11,880 individual JSONL files
  • License: Apache 2.0

About RightNow AI

RightNow AI is the first GPU-powered AI code editor, providing 180x more powerful AI assistance for your entire codebase. Visit us at https://rightnowai.co/

License

This dataset is licensed under the Apache License 2.0. See the LICENSE file for details.

Contributing

We welcome contributions to improve the dataset quality and documentation. Please feel free to submit issues and pull requests.

Acknowledgments

Special thanks to the Arabic language processing community and all contributors who made this dataset possible.

Citation

If you use this dataset in your research or projects, please cite:

```bibtex
@dataset{rightnow_arabic_llm_corpus_2025,
  title={RightNow Arabic LLM Corpus},
  author={RightNow AI Team},
  year={2025},
  url={https://huggingface.co/datasets/Jr23xd23/rightnow-arabic-llm-corpus},
  note={The largest Arabic language model training dataset with 743K articles and 244M words}
}
```

Made with ❤️ by RightNow AI - The First GPU-Powered AI Code Editor
