---
license: cc-by-4.0
configs:
  - config_name: manifest
    data_files:
      - split: 22khz
        path: 22khz/manifest_22khz.json
      - split: 44khz
        path: 44khz/manifest_44khz.json
  - config_name: chapters
    data_files:
      - split: 22khz
        path: 22khz/chapters_22khz.json
      - split: 44khz
        path: 44khz/chapters_44khz.json
---

# HiFiTTS-2: A Large-Scale High Bandwidth Speech Dataset

## Dataset Description

This repository contains the metadata for HiFiTTS-2, a large-scale speech dataset derived from LibriVox audiobooks. For more details, please refer to our paper.

The dataset contains metadata for approximately 36.7k hours of audio from 5k speakers that can be downloaded from LibriVox at a 48 kHz sampling rate.

The metadata contains estimated bandwidth, which can be used to infer the original sampling rate at which the audio was recorded. The base dataset is filtered to bandwidths appropriate for training speech models at 22 kHz. We also provide a precomputed 31.7k-hour subset appropriate for 44 kHz training. Users can modify the download script to use any sampling rate and bandwidth threshold more appropriate for their work.
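
A custom cutoff can be applied to the chapter metadata before downloading. The sketch below is illustrative only: it assumes the `bandwidth` field is in Hz, uses an arbitrary 13 kHz cutoff, and uses inline sample records in place of the real chapters file.

```python
import json


def filter_chapters(chapters, min_bandwidth_hz):
    """Keep only chapters whose estimated bandwidth meets the cutoff."""
    return [ch for ch in chapters if ch["bandwidth"] >= min_bandwidth_hz]


# Inline sample records standing in for entries from the chapters manifest,
# e.g. (if the file is stored as JSON lines):
#   chapters = [json.loads(line) for line in open("chapters_22khz.json")]
chapters = [
    {"url": "https://example.org/chapter_a.mp3", "bandwidth": 15800, "duration": 903.1},
    {"url": "https://example.org/chapter_b.mp3", "bandwidth": 9500, "duration": 1250.0},
]
kept = filter_chapters(chapters, min_bandwidth_hz=13000)
print(len(kept))  # the low-bandwidth chapter is dropped
```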

LibriVox audiobooks are not redistributed on Hugging Face. All audio in the dataset can be downloaded from LibriVox by following the instructions below.

## Frequently Asked Questions

- Downloading the 22 kHz version requires approximately 2.8 TB of disk space; the 44 kHz version requires approximately 4.0 TB.
- During download you may see warning messages from `libmpg123`, such as `[src/libmpg123/id3.c:INT123_id3_to_utf8():394] warning: Weird tag size 119 for encoding 1 - I will probably trim too early or something but I think the MP3 is broken.` These warnings can be safely ignored.
- By default, the script downloads audio files into the workspace directory under `{workspace_dir}/audio_22khz`. The download ignores HTTP errors and stores information about any failed downloads in `{workspace_dir}/errors_22khz.json`. A new manifest is created at `{workspace_dir}/manifest_filtered_22khz.json` with utterances from failed audiobooks removed. You can override this default behavior by modifying the `config.yaml` file in your local SDP repository.
- If you want to retry the download for failed audiobooks, rerun the script with the output `errors_22khz.json` file:

  ```bash
  python /home/NeMo-speech-data-processor/main.py \
      --config-path="/home/NeMo-speech-data-processor/dataset_configs/english/hifitts2" \
      --config-name="config_22khz.yaml" \
      workspace_dir="/home/hifitts2" \
      chapter_filename="/home/hifitts2/errors_22khz.json" \
      max_workers=8
  ```

## Dataset Format

The dataset contains an utterance-level manifest with these fields:

- `audio_filepath`: Relative path where the utterance is stored
- `speaker`: LibriVox speaker ID
- `set`: Dataset partition, one of "train", "test_seen", "dev_seen", "test_unseen", or "dev_unseen"
- `duration`: Duration of the utterance
- `bandwidth`: Estimated bandwidth of the audiobook chapter containing this utterance
- `speaker_count`: Number of speakers detected in this utterance
- `wer`: ASR word error rate of `normalized_text`
- `cer`: ASR character error rate of `normalized_text`
- `text_source`: Data source the text was taken from, either "book" or "mls"
- `text`: Original data source transcription
- `normalized_text`: Transcription output by the text processing pipeline
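
The quality fields above make it straightforward to sub-select utterances. A minimal sketch using pandas follows; the inline records and thresholds are illustrative only, and in practice the manifest file would be loaded instead.

```python
import pandas as pd

# Illustrative records following the utterance manifest schema. In practice:
#   df = pd.read_json("manifest_22khz.json", lines=True)  # if stored as JSON lines
records = [
    {"audio_filepath": "spk1/ch1/utt_0.flac", "speaker": "spk1", "set": "train",
     "duration": 4.2, "bandwidth": 15800, "speaker_count": 1, "wer": 0.03,
     "cer": 0.01, "text_source": "book", "text": "Hello there.",
     "normalized_text": "hello there"},
    {"audio_filepath": "spk1/ch1/utt_1.flac", "speaker": "spk1", "set": "train",
     "duration": 3.0, "bandwidth": 15800, "speaker_count": 2, "wer": 0.40,
     "cer": 0.25, "text_source": "mls", "text": "Noisy crosstalk.",
     "normalized_text": "noisy crosstalk"},
]
df = pd.DataFrame(records)

# Keep single-speaker utterances whose ASR error rates suggest a clean transcript.
clean = df[(df["speaker_count"] == 1) & (df["wer"] <= 0.10) & (df["cer"] <= 0.05)]
print(len(clean))
```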

The dataset also contains a chapter-level manifest with these fields:

- `url`: Download URL for the LibriVox audiobook chapter
- `chapter_filepath`: Relative path where the audiobook chapter is stored
- `duration`: Duration of the chapter
- `bandwidth`: Bandwidth estimated using the first 30 seconds of the chapter
- `utterances`: List of utterance metadata with the following fields:
  - `utterances.audio_filepath`: Relative path where the utterance is stored
  - `utterances.offset`: Offset of the utterance within the chapter
  - `utterances.duration`: Duration of the utterance
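
The `utterances.offset` and `utterances.duration` fields allow individual utterances to be cut out of a decoded chapter waveform. A minimal sketch is shown below; real use would first decode the downloaded chapter audio with a library such as soundfile, and the synthetic array here merely stands in for it.

```python
import numpy as np


def extract_utterance(chapter_audio, sample_rate, offset_sec, duration_sec):
    """Slice one utterance out of a decoded chapter waveform using the
    offset/duration fields from the chapter manifest."""
    start = int(round(offset_sec * sample_rate))
    end = start + int(round(duration_sec * sample_rate))
    return chapter_audio[start:end]


# Synthetic 10-second "chapter" at 22.05 kHz standing in for decoded audio.
sr = 22050
chapter = np.zeros(10 * sr, dtype=np.float32)
utt = extract_utterance(chapter, sr, offset_sec=2.5, duration_sec=4.0)
print(len(utt) / sr)  # 4.0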

Bandwidth is estimated from the first 30 seconds of each audiobook chapter using the approach from `estimate_bandwidth.py` in the Speech Data Processor (SDP) Toolkit. The bandwidth $f_{\text{max}}$ is found by using the mean power spectrum to locate the highest frequency whose level is at least -50 dB relative to the spectral peak:

$$f_{\text{max}} = \max\left\{ f \in [0, f_{\text{Nyquist}}] \;\middle|\; 10 \log_{10}\!\left(\frac{P(f)}{P_{\text{peak}}}\right) \geq -50\ \text{dB} \right\}$$

where $P(f)$ is the power spectral density and $P_{\text{peak}}$ its maximum value.
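
This estimate can be sketched as a simplified re-implementation (not the exact SDP code; the FFT size, hop length, and Hann window are assumptions):

```python
import numpy as np


def estimate_bandwidth(audio, sample_rate, threshold_db=-50.0, n_fft=2048, hop=512):
    """Return the highest frequency whose mean power is within threshold_db
    of the spectral peak (simplified sketch of SDP's estimate_bandwidth.py)."""
    window = np.hanning(n_fft)
    frames = [
        np.abs(np.fft.rfft(audio[i:i + n_fft] * window)) ** 2
        for i in range(0, len(audio) - n_fft + 1, hop)
    ]
    power = np.mean(frames, axis=0)                       # mean power spectrum
    level_db = 10.0 * np.log10(power / power.max() + 1e-12)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    return freqs[np.nonzero(level_db >= threshold_db)[0][-1]]


# A pure 8 kHz tone sampled at 44.1 kHz should give f_max close to 8 kHz.
sr = 44100
t = np.arange(sr) / sr
f_max = estimate_bandwidth(np.sin(2 * np.pi * 8000.0 * t), sr)
```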

## Download Instructions

1. Download the manifest and chapters JSON files corresponding to your desired sampling rate from this Hugging Face repository. Copy these into a workspace directory (in this example, `/home/hifitts2`).
2. Install NeMo-speech-data-processor (SDP) following the installation instructions at https://github.com/NVIDIA/NeMo-speech-data-processor.
3. Run the SDP script to download the dataset to local disk:

```bash
python /home/NeMo-speech-data-processor/main.py \
    --config-path="/home/NeMo-speech-data-processor/dataset_configs/english/hifitts2" \
    --config-name="config_22khz.yaml" \
    workspace_dir="/home/hifitts2" \
    max_workers=8
```

`max_workers` is the number of threads to use for downloading the data. To download the 44 kHz dataset, specify `config_44khz.yaml`.

Please see the FAQs above for further help with the download, or raise an issue on the community tab.

## Dataset Owner(s)

NVIDIA Corporation

## Dataset Creation Date

June 2025

## License/Terms of Use

GOVERNING TERMS: This dataset is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0) available at https://creativecommons.org/licenses/by/4.0/legalcode.

### Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.

## Citation

If you find this dataset useful, please cite:

```bibtex
@inproceedings{rlangman2025hifitts2,
  title={HiFiTTS-2: A Large-Scale High Bandwidth Speech Dataset},
  author={Ryan Langman and Xuesong Yang and Paarth Neekhara and Shehzeen Hussain and Edresson Casanova and Evelina Bakhturina and Jason Li},
  booktitle={Interspeech},
  year={2025},
}
```