---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: audio
      dtype: audio
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 692440565.782
      num_examples: 6497
    - name: validation
      num_bytes: 86238362
      num_examples: 812
    - name: test
      num_bytes: 87842088
      num_examples: 813
  download_size: 848224655
  dataset_size: 866521015.782
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# ATC ASR Dataset

ATC ASR Dataset is a high-quality, fine-tuning-ready speech recognition dataset constructed from two real-world Air Traffic Control (ATC) corpora: the UWB ATC Corpus and the ATCO2 1-Hour Test Subset.

The dataset consists of cleanly segmented audio + transcript pairs at the utterance level, specifically curated for Automatic Speech Recognition (ASR) training and fine-tuning in the ATC domain.

## Contents

This dataset includes:

- Audio files (`.wav`, 16 kHz mono) of individual ATC utterances
- Transcripts (`.txt`) aligned with each audio file
- Training, validation, and test splits
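
To get a feel for the record structure, the snippet below is a minimal sketch using the `datasets` library; the field names (`id`, `audio`, `text`) come from the dataset metadata above.

```python
from datasets import load_dataset

# Load only the training split and inspect one record.
train = load_dataset("jacktol/ATC-ASR-Dataset", split="train")

sample = train[0]
print(sample["id"])                      # utterance identifier
print(sample["text"])                    # uppercased transcript
print(sample["audio"]["sampling_rate"])  # 16000, i.e. 16 kHz mono per this card
print(sample["audio"]["array"].shape)    # decoded waveform as a NumPy array
```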

## Use Cases

This dataset is ideal for:

- Training ASR models specialized in aviation communication
- Benchmarking domain-adapted speech recognition systems
- Studying accented and noisy English in operational ATC environments

## Source Datasets

This dataset combines data from:

- UWB ATC Corpus: ATC speech and corresponding transcripts recorded over Czech airspace, featuring heavily accented English.

- ATCO2 1-Hour Test Subset: A publicly released evaluation slice from the larger ATCO2 corpus, featuring diverse ATC environments, speaker accents, and acoustic conditions.

## Cleaning & Preprocessing

The raw corpora were normalized and cleaned using custom Python scripts. Key steps included:

- Segmenting long audio files into utterance-level clips using timestamps
- Uppercasing all transcripts for uniformity
- Converting digits to words (e.g., 3 5 0 → THREE FIVE ZERO)
- Expanding letters to phonetic alphabet equivalents (e.g., N → NOVEMBER), as illustrated in the sketch after this list
- Removing non-English, unintelligible, or corrupted segments
- Normalizing diacritics and fixing broken Unicode characters
- Manual filtering of misaligned or low-quality samples
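
The actual cleaning scripts live in the companion repository linked under Reproducibility; the snippet below is only an illustrative sketch of the digit and letter expansion described above (the `DIGITS` and `NATO` tables and the helper functions are hypothetical names, not the toolkit's real code).

```python
# Illustrative transcript normalization sketch, not the toolkit's actual implementation.
DIGITS = {
    "0": "ZERO", "1": "ONE", "2": "TWO", "3": "THREE", "4": "FOUR",
    "5": "FIVE", "6": "SIX", "7": "SEVEN", "8": "EIGHT", "9": "NINE",
}
NATO = {
    "A": "ALPHA", "B": "BRAVO", "C": "CHARLIE", "D": "DELTA", "E": "ECHO",
    "F": "FOXTROT", "G": "GOLF", "H": "HOTEL", "I": "INDIA", "J": "JULIETT",
    "K": "KILO", "L": "LIMA", "M": "MIKE", "N": "NOVEMBER", "O": "OSCAR",
    "P": "PAPA", "Q": "QUEBEC", "R": "ROMEO", "S": "SIERRA", "T": "TANGO",
    "U": "UNIFORM", "V": "VICTOR", "W": "WHISKEY", "X": "XRAY",
    "Y": "YANKEE", "Z": "ZULU",
}

def normalize_token(token: str) -> str:
    """Uppercase a token and expand lone digits/letters to their spoken forms."""
    token = token.upper()
    if token in DIGITS:
        return DIGITS[token]
    if token in NATO:  # single letters are read as NATO phonetic alphabet words
        return NATO[token]
    return token

def normalize_transcript(line: str) -> str:
    return " ".join(normalize_token(t) for t in line.split())

print(normalize_transcript("climb fl 3 5 0"))  # CLIMB FL THREE FIVE ZERO
print(normalize_transcript("taxiway n"))       # TAXIWAY NOVEMBER
```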

## Usage

1. Install dependencies: Use Hugging Face's `datasets` library (`pip install datasets`) to load the dataset:

   ```python
   from datasets import load_dataset

   dataset = load_dataset("jacktol/ATC-ASR-Dataset")
   ```

2. Training: The dataset is ready for speech recognition tasks such as fine-tuning Whisper models. It includes training, validation, and test splits so models can be evaluated on Word Error Rate (WER), as sketched below.
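
A minimal evaluation sketch follows, assuming the `transformers` and `jiwer` packages (`pip install transformers jiwer`) and an off-the-shelf Whisper checkpoint (`openai/whisper-small`, chosen here purely as an example); it is not part of the dataset's own tooling.

```python
# Quick WER check on a small slice of the test split (illustrative only).
from datasets import load_dataset
from transformers import pipeline
import jiwer

test_set = load_dataset("jacktol/ATC-ASR-Dataset", split="test")
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

references, hypotheses = [], []
for sample in test_set.select(range(10)):  # small slice for a quick check
    audio = sample["audio"]  # dict with "array" and "sampling_rate"
    result = asr({"array": audio["array"], "sampling_rate": audio["sampling_rate"]})
    references.append(sample["text"])
    # Transcripts in this dataset are uppercased; a real evaluation should also
    # strip punctuation and apply the same digit/letter normalization.
    hypotheses.append(result["text"].upper())

print("WER:", jiwer.wer(references, hypotheses))
```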

## Reproducibility

All preprocessing scripts and data creation pipelines are publicly available in the companion GitHub repository:

ATC ASR Dataset Preparation Toolkit (GitHub)

This includes:

- Scripts to process raw UWB, ATCO2, and ATCC datasets
- Tools for combining, splitting, and augmenting data
- Upload scripts for Hugging Face dataset integration


## Citation

If you use this dataset, please cite the original UWB and ATCO2 corpora where appropriate. For data processing methodology and code, reference the ATC ASR Dataset Preparation Toolkit.

Mentioning or linking to this Hugging Face dataset page helps support transparency and future development of open ATC ASR resources.