---
datasets:
- name: sanskrit-word-segmentation
  license: other
  language:
    - sa
  task_categories:
    - token-classification
    - structured-prediction
  tags:
    - sanskrit
    - word-segmentation
    - morphological-analysis
    - low-resource
    - graph-based
    - lemma
    - cng
  pretty_name: Sanskrit Word Segmentation & Morphological Candidates
  size_categories:
    - 100K<n<1M
  source_datasets:
    - original
  annotations_creators:
    - expert-annotated
  multilinguality: monolingual
  language_creators:
    - expert
  paperswithcode_id: graph-based-word-segmentation-sanskrit
  citation: |
    @inproceedings{krishna-etal-2017-dataset,
      title={A Dataset for {S}anskrit Word Segmentation},
      author={Krishna, Amrith and Satuluri, Pavan Kumar and Goyal, Pawan},
      booktitle={Proceedings of the Joint {SIGHUM} Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature},
      year={2017},
      pages={105--114},
      doi={10.18653/v1/W17-2214}
    }
---


# Sanskrit Word Segmentation and Morphological Candidate Dataset

This dataset provides Sanskrit sentences annotated with gold-standard word segmentations, lemmas, and morphological tags based on the paper [A Dataset for Sanskrit Word Segmentation](https://aclanthology.org/W17-2214.pdf). It also includes graph-based candidate morphological analyses derived from a structured Sanskrit parser. The dataset is useful for tasks such as:

- Word Segmentation  
- Lemmatization  
- Morphological Analysis  
- Graph-based Disambiguation  
- Low-resource NLP  

---

## 📂 Folder Structure

```
.
├── train.jsonl        # Final merged dataset in JSON Lines format
├── DCS_pick.zip       # Zipped DCS pickle files (.p) with gold segmentation and lemmas
├── Graphml.zip        # Zipped GraphML files (.graphml) with candidate morphological structures
├── convert.py         # Python script to convert .p and .graphml files into train.jsonl
├── graphFiles         # Text file listing valid .graphml IDs to be processed
├── README.md
└── Sample/            # Sample files and visualization tools
    ├── DCS_999.p
    ├── sample_999.graphml
    ├── graph_reader.py    # GraphML visualizer using NetworkX + matplotlib
    ├── pickleReader.py    # Script to read and inspect a DCS .p file
    └── graph_output.png   # Example output image from visualizing a graph
```
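The two archives must be unpacked before the data can be used. A minimal extraction sketch using Python's standard `zipfile` module (the archive names come from the listing above; the target directory names are assumptions):

```python
import zipfile
from pathlib import Path

def extract_archive(archive_path: str, target_dir: str) -> list[str]:
    """Extract a zip archive into target_dir and return the extracted member names."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(target)
        return zf.namelist()

# Unpack both archives from the repo root (target directory names are assumptions):
# extract_archive("DCS_pick.zip", "DCS_pick")
# extract_archive("Graphml.zip", "Graphml")
```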

---

## 🧠 Data Format: `train.jsonl`

Each line is a JSON object with the following fields:

```json
{
  "id": "100012",
  "sentence": "tatra sarvatra vaktavyaṁ manyante śāstrakovidāḥ",
  "gold_segments": ["tatra", "sarvatra", "vaktavya", "man", "śāstra", "kovida"],
  "lemmas": [["tatra"], ["sarvatra"], ["vaktavya"], ["man"], ["śāstra", "kovida"]],
  "morph_tags": [["2"], ["2"], ["71"], ["-19"], ["3", "39"]],
  "candidates": [
    {
      "id": "1",
      "word": "tatra",
      "lemma": "tatra",
      "morph": "adv.",
      "cng": "2",
      "chunk_no": "1",
      "position": 0,
      "length": 5,
      "pre_verb": ""
    },
    ...
  ]
}
```

* `gold_segments`: Gold-standard segmented words (chunks)
* `lemmas`: Root forms for each chunk; a single chunk may carry multiple lemmas
* `morph_tags`: Morphological CNG tags, aligned with `lemmas` per chunk
* `candidates`: All phonetically possible analyses from the graph representation
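Since every candidate carries a `chunk_no`, the flat `candidates` list can be regrouped into per-chunk buckets of competing analyses. A small sketch using only the fields documented above; the record is abbreviated to one chunk, and the second candidate is invented to illustrate competition:

```python
from collections import defaultdict

def group_candidates_by_chunk(entry: dict) -> dict[str, list[dict]]:
    """Group the flat candidate list into per-chunk buckets of competing analyses."""
    chunks = defaultdict(list)
    for cand in entry["candidates"]:
        chunks[cand["chunk_no"]].append(cand)
    return dict(chunks)

# Abbreviated record in the format shown above:
entry = {
    "id": "100012",
    "candidates": [
        {"id": "1", "word": "tatra", "lemma": "tatra", "morph": "adv.",
         "cng": "2", "chunk_no": "1", "position": 0, "length": 5, "pre_verb": ""},
        # Hypothetical competing analysis, for illustration only:
        {"id": "2", "word": "tatra", "lemma": "tad", "morph": "pron.",
         "cng": "-42", "chunk_no": "1", "position": 0, "length": 5, "pre_verb": ""},
    ],
}

by_chunk = group_candidates_by_chunk(entry)
print(len(by_chunk["1"]))  # → 2 competing analyses for chunk 1
```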

---

## How to Use

### Read Dataset in Python

```python
import json

with open("train.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        print(entry["id"], entry["sentence"])
```
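Beyond iterating, a common sanity check is whether the gold analysis is recoverable from the candidate space (oracle coverage). A sketch over in-memory entries shaped like the record above, using only the documented fields:

```python
def gold_lemmas_covered(entry: dict) -> bool:
    """True if every gold lemma appears among the candidate lemmas."""
    candidate_lemmas = {c["lemma"] for c in entry["candidates"]}
    gold_lemmas = {lemma for group in entry["lemmas"] for lemma in group}
    return gold_lemmas <= candidate_lemmas

# Toy entry with the fields this check needs:
entry = {
    "lemmas": [["tatra"], ["sarvatra"]],
    "candidates": [
        {"lemma": "tatra"},
        {"lemma": "sarvatra"},
        {"lemma": "tad"},  # extra, un-chosen analysis
    ],
}
print(gold_lemmas_covered(entry))  # → True
```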

### Visualize a GraphML File

The `graph_reader.py` script in `Sample/` renders a GraphML file using NetworkX and matplotlib:

```bash
python graph_reader.py
```
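If NetworkX is not available, GraphML is plain XML and can be inspected with the standard library alone. A minimal sketch on a synthetic two-node graph; the `word` attribute key here is illustrative, not necessarily the key used in the dataset's GraphML files:

```python
import xml.etree.ElementTree as ET

# Tiny synthetic GraphML document for demonstration:
GRAPHML = """<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <key id="word" for="node" attr.name="word" attr.type="string"/>
  <graph id="G" edgedefault="undirected">
    <node id="n0"><data key="word">tatra</data></node>
    <node id="n1"><data key="word">sarvatra</data></node>
    <edge source="n0" target="n1"/>
  </graph>
</graphml>"""

NS = {"g": "http://graphml.graphdrawing.org/xmlns"}

def node_words(graphml_text: str) -> dict[str, str]:
    """Map each node id to the value of its 'word' data key."""
    root = ET.fromstring(graphml_text)
    words = {}
    for node in root.findall(".//g:node", NS):
        data = node.find("g:data[@key='word']", NS)
        if data is not None:
            words[node.get("id")] = data.text
    return words

print(node_words(GRAPHML))  # → {'n0': 'tatra', 'n1': 'sarvatra'}
```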

For additional setup details or troubleshooting, please refer to the [original codebase on Zenodo](https://zenodo.org/records/803508).


## Citation
```bibtex
@inproceedings{krishna-etal-2017-dataset,
    title = "A Dataset for {S}anskrit Word Segmentation",
    author = "Krishna, Amrith  and
      Satuluri, Pavan Kumar  and
      Goyal, Pawan",
    editor = "Alex, Beatrice  and
      Degaetano-Ortlieb, Stefania  and
      Feldman, Anna  and
      Kazantseva, Anna  and
      Reiter, Nils  and
      Szpakowicz, Stan",
    booktitle = "Proceedings of the Joint {SIGHUM} Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
    month = aug,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W17-2214/",
    doi = "10.18653/v1/W17-2214",
    pages = "105--114",
    abstract = "The last decade saw a surge in digitisation efforts for ancient manuscripts in Sanskrit. Due to various linguistic peculiarities inherent to the language, even the preliminary tasks such as word segmentation are non-trivial in Sanskrit. Elegant models for Word Segmentation in Sanskrit are indispensable for further syntactic and semantic processing of the manuscripts. Current works in word segmentation for Sanskrit, though commendable in their novelty, often have variations in their objective and evaluation criteria. In this work, we set the record straight. We formally define the objectives and the requirements for the word segmentation task. In order to encourage research in the field and to alleviate the time and effort required in pre-processing, we release a dataset of 115,000 sentences for word segmentation. For each sentence in the dataset we include the input character sequence, ground truth segmentation, and additionally lexical and morphological information about all the phonetically possible segments for the given sentence. In this work, we also discuss the linguistic considerations made while generating the candidate space of the possible segments."
}
```