---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 810734099
    num_examples: 160651
  download_size: 265394551
  dataset_size: 810734099
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- text-classification
- question-answering
- summarization
- text2text-generation
language:
- ta
pretty_name: Tamil Wikipedia
size_categories:
- 100K<n<1M
---
# 🧠 Tamil Wikipedia Dataset (`tawiki`)
[Dataset on the Hugging Face Hub](https://huggingface.co/datasets/Hemanth-thunder/tawiki)
The **Tamil Wikipedia (`tawiki`)** dataset is a cleaned and preprocessed dump of the Tamil language Wikipedia, curated for use in Natural Language Processing (NLP) tasks such as language modeling, summarization, machine translation, and question answering.
---
## 📚 Dataset Overview
* **Language**: Tamil (`ta`)
* **Source**: [Wikipedia](https://ta.wikipedia.org)
* **Size**: 160,651 articles in a single `train` split (~811 MB of raw text; ~265 MB download)
* **License**: Apache 2.0 (repository metadata); the Wikipedia-derived text itself is under CC BY-SA 3.0 (see the License section below)
* **Maintainer**: [Hemanth-thunder](https://huggingface.co/Hemanth-thunder)
---
## ✨ Dataset Features
| Feature | Description                       |
| ------- | --------------------------------- |
| `id`    | Unique identifier for the article |
| `url`   | URL of the source Wikipedia page  |
| `title` | Title of the Wikipedia page       |
| `text`  | Full cleaned text of the article  |
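The same schema can be checked programmatically once the dataset is loaded (loading is covered in more detail under Usage below):

```python
from datasets import load_dataset

ds = load_dataset("Hemanth-thunder/tawiki", split="train")
print(ds.features)  # column names and dtypes: id, url, title, text (all strings)
print(ds.num_rows)  # number of articles in the train split
```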
---
## 🧹 Preprocessing Steps
* Extracted using a Wikipedia XML dump.
* Removed MediaWiki markup, references, templates, and tables.
* Removed stub or extremely short articles.
* Normalized Unicode and enforced consistent UTF-8 encoding.
* Sentence segmentation is optional and may not have been applied.
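The exact preprocessing code is not published with the dataset; the sketch below is only a minimal illustration of a cleaning pass along these lines. The regexes and the minimum-length threshold are assumptions, not the actual pipeline used here.

```python
import re
import unicodedata
from typing import Optional

def clean_article(text: str, min_chars: int = 200) -> Optional[str]:
    """Illustrative cleaning pass (assumed, not the original pipeline):
    normalize Unicode, strip residual wiki markup, drop stub-length articles."""
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                     # leftover templates
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)  # [[page|label]] -> label
    text = re.sub(r"<[^>]+>", "", text)                            # stray HTML tags
    text = re.sub(r"[ \t]+", " ", text).strip()                    # collapse whitespace
    return text if len(text) >= min_chars else None                # drop stubs
```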
---
## 🛠️ Usage
You can load this dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Hemanth-thunder/tawiki")
# Access a sample
print(dataset["train"][0])
```
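For quick experiments you can also stream the split instead of downloading the full archive first; `streaming=True` returns an `IterableDataset`:

```python
from datasets import load_dataset

# Stream records lazily instead of fetching the full ~265 MB archive up front
streamed = load_dataset("Hemanth-thunder/tawiki", split="train", streaming=True)

for example in streamed.take(3):
    print(example["title"], "-", len(example["text"]), "characters")
```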
---
## 📌 Example Entry
```json
{
  "id": "12345",
  "url": "https://ta.wikipedia.org/wiki/தமிழ்_இலக்கியம்",
  "title": "தமிழ் இலக்கியம்",
  "text": "தமிழ் இலக்கியம் என்பது தமிழ் மொழியில் எழுதப்பட்ட ..."
}
```
---
## 📈 Potential Use Cases
* ✅ Language Modeling (e.g., training Tamil LLMs)
* ✅ Named Entity Recognition (NER)
* ✅ Summarization
* ✅ Translation (e.g., Tamil ↔ English)
* ✅ Question Answering and Knowledge Graph extraction
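As one concrete example of the language-modeling use case, the `text` column can feed tokenizer training. The sketch below uses `train_new_from_iterator` from `transformers`; the `gpt2` starting tokenizer and the 32k vocabulary size are arbitrary illustrative choices, not recommendations tied to this dataset.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("Hemanth-thunder/tawiki", split="train")

def text_batches(batch_size: int = 1000):
    """Yield batches of raw article text for tokenizer training."""
    for i in range(0, len(ds), batch_size):
        yield ds[i : i + batch_size]["text"]

# Reuse the training pipeline of an existing fast tokenizer (gpt2 is an arbitrary base)
base = AutoTokenizer.from_pretrained("gpt2")
tamil_tokenizer = base.train_new_from_iterator(text_batches(), vocab_size=32_000)
tamil_tokenizer.save_pretrained("tamil-wiki-tokenizer")
```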
---
## 🚧 Limitations
* May still contain minor residual formatting.
* Biased toward topics commonly represented in Wikipedia.
* Not suitable for real-time or critical decision-making without further validation.
---
## 📄 License
The dataset is derived from [Wikipedia](https://ta.wikipedia.org) and is distributed under the [Creative Commons Attribution-Share-Alike 3.0 Unported License (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/).
---
## 👨‍💻 Citation
If you use this dataset, please cite it as:
```bibtex
@misc{tawiki2024,
  title        = {Tamil Wikipedia Dataset (tawiki)},
  author       = {Hemanth-thunder},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/Hemanth-thunder/tawiki}}
}
```
---
## 📬 Contact
If you have questions, suggestions, or issues, feel free to reach out via [Hugging Face](https://huggingface.co/Hemanth-thunder) or email.
---