---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: url
      dtype: string
    - name: title
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 810734099
      num_examples: 160651
  download_size: 265394551
  dataset_size: 810734099
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - text-classification
  - question-answering
  - summarization
  - text2text-generation
language:
  - ta
pretty_name: Tamil Wikipedia
size_categories:
  - 100K<n<1M
---
|
|
|
|
# 🧠 Tamil Wikipedia Dataset (`tawiki`) |
|
|
|
[**Hemanth-thunder/tawiki** on Hugging Face](https://huggingface.co/datasets/Hemanth-thunder/tawiki)
|
|
|
The **Tamil Wikipedia (`tawiki`)** dataset is a cleaned and preprocessed dump of the Tamil language Wikipedia, curated for use in Natural Language Processing (NLP) tasks such as language modeling, summarization, machine translation, and question answering. |
|
|
|
--- |
|
|
|
## 📚 Dataset Overview |
|
|
|
* **Language**: Tamil (`ta`) |
|
* **Source**: [Wikipedia](https://ta.wikipedia.org) |
|
* **Size**: 160,651 articles (~811 MB of text; ~265 MB download)

* **License**: Apache 2.0 for the dataset packaging; the article text remains under CC BY-SA 3.0 (see the License section below)
|
* **Maintainer**: [Hemanth-thunder](https://huggingface.co/Hemanth-thunder) |
|
|
|
|
|
--- |
|
|
|
## ✨ Dataset Features |
|
|
|
| Feature | Description                       |
| ------- | --------------------------------- |
| `id`    | Unique identifier for the article |
| `url`   | URL of the source Wikipedia page  |
| `title` | Title of the Wikipedia page       |
| `text`  | Full cleaned text of the article  |
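
The schema can be checked directly after loading; a quick sketch (field names match the card metadata above):

```python
from datasets import load_dataset

dataset = load_dataset("Hemanth-thunder/tawiki")

# Each column is a plain string feature: id, url, title, text.
print(dataset["train"].features)
```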
|
|
|
--- |
|
|
|
## 🧹 Preprocessing Steps |
|
|
|
* Extracted from a Wikipedia XML dump.
* Removed MediaWiki markup, references, templates, and tables (a sketch follows this list).
* Dropped stub and extremely short articles.
* Normalized Unicode and stored the text as UTF-8.
* Optional sentence segmentation (where applied).
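
The exact scripts are not included here, but a minimal sketch of the cleaning step might look like the following. `mwparserfromhell` is one common choice for stripping wiki markup; the stub threshold below is an illustrative assumption, not the value actually used:

```python
import unicodedata

import mwparserfromhell  # pip install mwparserfromhell

MIN_CHARS = 200  # hypothetical stub cutoff; the real threshold may differ


def clean_article(raw_wikitext: str):
    """Strip MediaWiki markup, normalize Unicode, and drop stubs."""
    # strip_code() drops templates, references, tables, and inline markup.
    text = mwparserfromhell.parse(raw_wikitext).strip_code()
    text = unicodedata.normalize("NFC", text).strip()
    return text if len(text) >= MIN_CHARS else None
```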
|
|
|
--- |
|
|
|
## 🛠️ Usage |
|
|
|
You can load this dataset using the Hugging Face `datasets` library: |
|
|
|
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Hemanth-thunder/tawiki")

# Access a sample
print(dataset["train"][0])
```
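
Since the full download is roughly 265 MB, streaming mode can be convenient for quick inspection (assuming a `datasets` version with streaming support):

```python
from datasets import load_dataset

# Stream records instead of downloading the full archive up front.
streamed = load_dataset("Hemanth-thunder/tawiki", split="train", streaming=True)

for article in streamed.take(3):
    print(article["title"])
```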
|
|
|
--- |
|
|
|
## 📌 Example Entry |
|
|
|
```json
{
  "id": "12345",
  "url": "https://ta.wikipedia.org/wiki/தமிழ்_இலக்கியம்",
  "title": "தமிழ் இலக்கியம்",
  "text": "தமிழ் இலக்கியம் என்பது தமிழ் மொழியில் எழுதப்பட்ட ..."
}
```
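
To pull a record like this out of the loaded dataset, a simple filter on the title works (illustrative only; assumes the article exists in the dump):

```python
# Find the article shown above by its exact title.
matches = dataset["train"].filter(lambda row: row["title"] == "தமிழ் இலக்கியம்")
if len(matches) > 0:
    print(matches[0]["text"][:200])
```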
|
|
|
--- |
|
|
|
## 📈 Potential Use Cases |
|
|
|
* ✅ Language Modeling (e.g., training Tamil LLMs) |
|
* ✅ Named Entity Recognition (NER) |
|
* ✅ Summarization |
|
* ✅ Translation (e.g., Tamil ↔ English) |
|
* ✅ Question Answering and Knowledge Graph extraction |
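
For the language-modeling use case, a common first step is training a subword tokenizer on the corpus. Below is a minimal sketch using the `tokenizers` library; the vocabulary size, special tokens, and output filename are arbitrary illustrations:

```python
from datasets import load_dataset
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

dataset = load_dataset("Hemanth-thunder/tawiki", split="train")

# Byte-level BPE handles Tamil script without a hand-built alphabet.
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(vocab_size=32_000, special_tokens=["<pad>", "<eos>"])

def batched_text(batch_size=1_000):
    # Yield article text in batches to keep memory use modest.
    for start in range(0, len(dataset), batch_size):
        yield dataset[start : start + batch_size]["text"]

tokenizer.train_from_iterator(batched_text(), trainer=trainer)
tokenizer.save("tawiki-bpe-32k.json")  # filename is an arbitrary choice
```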
|
|
|
--- |
|
|
|
## 🚧 Limitations |
|
|
|
* May still contain minor residual formatting. |
|
* Biased toward topics commonly represented in Wikipedia. |
|
* Not suitable for real-time or critical decision-making without further validation. |
|
|
|
--- |
|
|
|
## 📄 License |
|
|
|
The text content is derived from [Wikipedia](https://ta.wikipedia.org) and remains under the [Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/). The dataset packaging and card metadata are released under Apache 2.0.
|
|
|
--- |
|
|
|
## 👨‍💻 Citation
|
|
|
If you use this dataset, please cite it as: |
|
|
|
```bibtex
@misc{tawiki2024,
  title        = {Tamil Wikipedia Dataset},
  author       = {Hemanth-thunder},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/Hemanth-thunder/tawiki}}
}
```
|
|
|
--- |
|
|
|
## 📬 Contact |
|
|
|
If you have questions, suggestions, or issues, feel free to reach out via [Hugging Face](https://huggingface.co/Hemanth-thunder) or email. |
|
|
|
|
|