---
license: other
language:
  - sa
task_categories:
  - token-classification
tags:
  - sanskrit
  - word-segmentation
  - morphological-analysis
  - low-resource
  - graph-based
  - lemma
  - cng
  - structured-prediction
pretty_name: Sanskrit Word Segmentation & Morphological Candidates
size_categories:
  - 100K<n<1M
source_datasets:
  - original
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
multilinguality:
  - monolingual
paperswithcode_id: graph-based-word-segmentation-sanskrit
citation: |
  @inproceedings{krishna-etal-2017-dataset,
    title = "A Dataset for {S}anskrit Word Segmentation",
    author = "Krishna, Amrith and Satuluri, Pavan Kumar and Goyal, Pawan",
    booktitle = "Proceedings of the Joint {SIGHUM} Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W17-2214/",
    doi = "10.18653/v1/W17-2214",
    pages = "105--114"
  }
---

# Sanskrit Word Segmentation and Morphological Candidate Dataset

This dataset provides Sanskrit sentences annotated with gold-standard word segmentations, lemmas, and morphological tags, following the paper *A Dataset for Sanskrit Word Segmentation* (Krishna et al., 2017). It also includes graph-based candidate morphological analyses derived from a structured Sanskrit parser. The dataset is useful for tasks such as:

- Word Segmentation
- Lemmatization
- Morphological Analysis
- Graph-based Disambiguation
- Low-resource NLP

## 📂 Folder Structure


```
.
├── train.jsonl    # Final merged dataset in JSON Lines format
├── DCS_pick.zip   # Zipped DCS pickle files (.p) with gold segmentation and lemmas
├── Graphml.zip    # Zipped GraphML files (.graphml) with candidate morphological structures
├── convert.py     # Python script to convert the .p and .graphml files into train.jsonl
├── graphFiles     # Text file listing valid .graphml IDs to be processed
├── README.md
└── Sample/        # Sample files and visual tools
    ├── DCS_999.p
    ├── sample_999.graphml
    ├── graph_reader.py   # GraphML visualizer using NetworkX + matplotlib
    ├── pickleReader.py   # Script to read and inspect a DCS .p file
    └── graph_output.png  # Example output image from visualizing a graph
```

## 🧠 Data Format: train.jsonl

Each line is a JSON object with the following fields:

```json
{
  "id": "100012",
  "sentence": "tatra sarvatra vaktavyaṁ manyante śāstrakovidāḥ",
  "gold_segments": ["tatra", "sarvatra", "vaktavya", "man", "śāstra", "kovida"],
  "lemmas": [["tatra"], ["sarvatra"], ["vaktavya"], ["man"], ["śāstra", "kovida"]],
  "morph_tags": [["2"], ["2"], ["71"], ["-19"], ["3", "39"]],
  "candidates": [
    {
      "id": "1",
      "word": "tatra",
      "lemma": "tatra",
      "morph": "adv.",
      "cng": "2",
      "chunk_no": "1",
      "position": 0,
      "length": 5,
      "pre_verb": ""
    },
    ...
  ]
}
```
- `gold_segments`: the gold segmented words (one entry per segment)
- `lemmas`: root forms for each chunk (a single chunk can map to several lemmas)
- `morph_tags`: morphological (CNG) tags, per chunk
- `candidates`: all phonetically possible analyses from the graph representation
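Because `candidates` is a flat list, the competing analyses for one surface chunk can be brought together by grouping on `chunk_no`. A minimal sketch over a trimmed copy of the example record above (the second candidate, `tatrā`, is an invented stand-in for a competing analysis and does not come from the dataset):

```python
import json
from collections import defaultdict

# Trimmed copy of the example record above; the second candidate ("tatrā")
# is a made-up stand-in for a competing analysis of the same chunk.
record = json.loads("""{
  "id": "100012",
  "gold_segments": ["tatra", "sarvatra", "vaktavya", "man", "śāstra", "kovida"],
  "candidates": [
    {"id": "1", "word": "tatra", "lemma": "tatra", "cng": "2",
     "chunk_no": "1", "position": 0, "length": 5},
    {"id": "2", "word": "tatrā", "lemma": "tatrā", "cng": "2",
     "chunk_no": "1", "position": 0, "length": 5}
  ]
}""")

# Group the flat candidate list by the chunk each analysis belongs to.
by_chunk = defaultdict(list)
for cand in record["candidates"]:
    by_chunk[cand["chunk_no"]].append(cand)

for chunk_no, cands in sorted(by_chunk.items()):
    print(f"chunk {chunk_no}:", [c["lemma"] for c in cands])
```

Disambiguation then amounts to picking, per chunk, the candidate whose lemma and CNG tag match the gold annotation.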

## How to Use

### Read the Dataset in Python

```python
import json

with open("train.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        print(entry["id"], entry["sentence"])
```
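For repeated lookups it can help to index the file by sentence `id` once instead of rescanning it. A small stdlib-only helper sketch (`load_by_id` is our own name, not part of the dataset's tooling):

```python
import json

def load_by_id(path):
    """Read a JSONL file and index its records by their "id" field."""
    index = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # tolerate blank lines between records
                continue
            entry = json.loads(line)
            index[entry["id"]] = entry
    return index
```

For example, `load_by_id("train.jsonl")["100012"]` would return the full record for that sentence.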

### Visualize a GraphML File (Example)

```shell
python graph_reader.py
```
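If NetworkX and matplotlib are not available, GraphML is plain XML, so a candidate graph can also be inspected with the standard library alone. A sketch over a minimal inline snippet (the `word` attribute key and the node contents here are illustrative, not copied from the dataset's files):

```python
import xml.etree.ElementTree as ET

# Minimal GraphML snippet standing in for a real sample_999.graphml;
# the "word" attribute key is illustrative, not taken from the dataset.
GRAPHML = """<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <key id="d0" for="node" attr.name="word" attr.type="string"/>
  <graph edgedefault="directed">
    <node id="n0"><data key="d0">tatra</data></node>
    <node id="n1"><data key="d0">sarvatra</data></node>
    <edge source="n0" target="n1"/>
  </graph>
</graphml>"""

NS = {"g": "http://graphml.graphdrawing.org/xmlns"}
root = ET.fromstring(GRAPHML)
nodes = root.findall(".//g:node", NS)
edges = root.findall(".//g:edge", NS)

print(f"{len(nodes)} nodes, {len(edges)} edges")
for node in nodes:
    data = node.find("g:data", NS)  # first attribute payload of the node
    print(node.get("id"), "->", data.text if data is not None else None)
```

To inspect a real file, replace `ET.fromstring(GRAPHML)` with `ET.parse("Sample/sample_999.graphml").getroot()` and adjust the attribute keys to whatever the file declares.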

For additional setup details or troubleshooting, please refer to the original codebase on Zenodo.

## Citing

```bibtex
@inproceedings{krishna-etal-2017-dataset,
    title = "A Dataset for {S}anskrit Word Segmentation",
    author = "Krishna, Amrith  and
      Satuluri, Pavan Kumar  and
      Goyal, Pawan",
    editor = "Alex, Beatrice  and
      Degaetano-Ortlieb, Stefania  and
      Feldman, Anna  and
      Kazantseva, Anna  and
      Reiter, Nils  and
      Szpakowicz, Stan",
    booktitle = "Proceedings of the Joint {SIGHUM} Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
    month = aug,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W17-2214/",
    doi = "10.18653/v1/W17-2214",
    pages = "105--114",
    abstract = "The last decade saw a surge in digitisation efforts for ancient manuscripts in Sanskrit. Due to various linguistic peculiarities inherent to the language, even the preliminary tasks such as word segmentation are non-trivial in Sanskrit. Elegant models for Word Segmentation in Sanskrit are indispensable for further syntactic and semantic processing of the manuscripts. Current works in word segmentation for Sanskrit, though commendable in their novelty, often have variations in their objective and evaluation criteria. In this work, we set the record straight. We formally define the objectives and the requirements for the word segmentation task. In order to encourage research in the field and to alleviate the time and effort required in pre-processing, we release a dataset of 115,000 sentences for word segmentation. For each sentence in the dataset we include the input character sequence, ground truth segmentation, and additionally lexical and morphological information about all the phonetically possible segments for the given sentence. In this work, we also discuss the linguistic considerations made while generating the candidate space of the possible segments."
}
```