Commit 7bc424b by Hemanth-thunder (verified) · Parent: 4109b99

Update README.md

Files changed (1): README.md (+120 −1)
pretty_name: Tamil Wikipedia
size_categories:
- 100K<n<1M
---
# 🧠 Tamil Wikipedia Dataset (`tawiki`)

[![Hugging Face](https://img.shields.io/badge/dataset-huggingface-blue)](https://huggingface.co/datasets/Hemanth-thunder/tawiki)

The **Tamil Wikipedia (`tawiki`)** dataset is a cleaned and preprocessed dump of the Tamil-language Wikipedia, curated for use in Natural Language Processing (NLP) tasks such as language modeling, summarization, machine translation, and question answering.

---

## 📚 Dataset Overview

* **Language**: Tamil (`ta`)
* **Source**: [Wikipedia](https://ta.wikipedia.org)
* **Size**: Varies by configuration (tokenized or raw text)
* **License**: CC BY-SA 3.0 (inherited from Wikipedia; see the License section below)
* **Maintainer**: [Hemanth-thunder](https://huggingface.co/Hemanth-thunder)

---

## ✨ Dataset Features

| Feature | Description                       |
| ------- | --------------------------------- |
| `id`    | Unique identifier for the article |
| `title` | Title of the Wikipedia page       |
| `text`  | Full cleaned text of the article  |

---

## 🧹 Preprocessing Steps

* Extracted from a Wikipedia XML dump.
* Removed MediaWiki markup, references, templates, and tables.
* Removed stubs and extremely short articles.
* Normalized Unicode (UTF-8 encoded).
* Optional: sentence segmentation (if applied).

---
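The exact cleaning code is not published with this card; a minimal sketch of the markup-stripping and normalization steps above could look like the following (the regexes, helper name, and length threshold are illustrative assumptions, not the actual pipeline):

```python
import re
import unicodedata

def clean_article(text, min_chars=200):
    """Illustrative cleanup: strip common MediaWiki markup, normalize Unicode.

    Returns None for articles that end up too short (stub filtering).
    Patterns and threshold are assumptions, not the dataset's real pipeline.
    """
    # Drop templates like {{cite web ...}} and tables like {| ... |}
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)
    text = re.sub(r"\{\|.*?\|\}", "", text, flags=re.DOTALL)
    # Drop <ref>...</ref> references and any remaining HTML-like tags
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)
    text = re.sub(r"<[^>]+>", "", text)
    # Unwrap internal links: [[target|label]] -> label, [[target]] -> target
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", text)
    # Normalize Unicode so visually identical Tamil strings compare equal
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"[ \t]+", " ", text).strip()
    return text if len(text) >= min_chars else None

raw = "தமிழ் [[இலக்கியம்|இலக்கியம்]] பழமையானது.<ref>src</ref> {{stub}}"
print(clean_article(raw, min_chars=10))  # → தமிழ் இலக்கியம் பழமையானது.
```

Real MediaWiki markup nests templates and tables, so a production extractor (e.g. a dedicated dump parser) is preferable to one-shot regexes; this sketch only illustrates the categories of markup the preprocessing removes.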

## 🛠️ Usage

You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Hemanth-thunder/tawiki")

# Access a sample record
print(dataset["train"][0])
```

---

## 📌 Example Entry

```json
{
  "id": "12345",
  "title": "தமிழ் இலக்கியம்",
  "text": "தமிழ் இலக்கியம் என்பது தமிழ் மொழியில் எழுதப்பட்ட ..."
}
```

---
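Since each record is plain JSON-compatible, quick sanity checks need nothing beyond the standard library. A small sketch using a record shaped like the example entry above (the field values are illustrative):

```python
import json

# A record shaped like the example entry above (values are illustrative)
record_json = '''
{
  "id": "12345",
  "title": "தமிழ் இலக்கியம்",
  "text": "தமிழ் இலக்கியம் என்பது தமிழ் மொழியில் எழுதப்பட்ட ..."
}
'''

record = json.loads(record_json)

# Basic per-record stats: character and whitespace-token counts
n_chars = len(record["text"])
n_tokens = len(record["text"].split())
print(record["title"], n_chars, n_tokens)
```

Note that `len` counts Unicode code points, so Tamil characters composed of a base letter plus a vowel sign count as two; grapheme-level counts need a dedicated segmentation library.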

## 📈 Potential Use Cases

* ✅ Language modeling (e.g., training Tamil LLMs)
* ✅ Named Entity Recognition (NER)
* ✅ Summarization
* ✅ Machine translation (e.g., Tamil ↔ English)
* ✅ Question answering and knowledge-graph extraction

---

## 🚧 Limitations

* May still contain minor residual wiki formatting.
* Topic coverage is biased toward subjects well represented on Wikipedia.
* Not suitable for real-time or safety-critical decision-making without further validation.

---
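To gauge how much residual formatting survives in practice, a quick standard-library scan for leftover MediaWiki syntax can help (the marker list is an assumption and deliberately non-exhaustive; extend it for your use case):

```python
import re

# Common leftovers of MediaWiki syntax; an illustrative, non-exhaustive list
MARKUP_MARKERS = re.compile(r"\{\{|\}\}|\[\[|\]\]|\{\||<ref")

def residual_markup_ratio(texts):
    """Fraction of texts that still contain wiki-markup markers."""
    flagged = sum(1 for t in texts if MARKUP_MARKERS.search(t))
    return flagged / len(texts) if texts else 0.0

sample = [
    "தமிழ் இலக்கியம் பழமையானது.",        # clean
    "வரலாறு {{sfn|Author|2001}} காண்க.",  # residual template
]
print(residual_markup_ratio(sample))  # → 0.5
```

Running such a check over a sample of the corpus gives a rough upper bound on how much post-filtering your downstream task may need.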

## 📄 License

The dataset is derived from [Wikipedia](https://ta.wikipedia.org) and is distributed under the [Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA 3.0)](https://creativecommons.org/licenses/by-sa/3.0/).

---

## 👨‍💻 Citation

If you use this dataset, please cite it as:

```bibtex
@misc{tawiki2024,
  author       = {Hemanth-thunder},
  title        = {Tamil Wikipedia Dataset},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/Hemanth-thunder/tawiki}}
}
```

---

## 📬 Contact

If you have questions, suggestions, or issues, feel free to reach out via [Hugging Face](https://huggingface.co/Hemanth-thunder) or email.