---
language:
  - en
pretty_name: librispeech_asr_test_vad
tags:
  - speech
license: cc-by-4.0
task_categories:
  - text-classification
---

# Voice Activity Detection (VAD) Test Dataset

This dataset is based on the `test.clean` and `test.other` splits from the `librispeech_asr` corpus. It includes two binary labels:

- `speech`: indicates the presence of speech (`0` or `1`), computed using a dynamic threshold method with background noise estimation and smoothing.

- `confidence`: a post-processing flag to optionally correct transient dropouts in speech. It is set to 1 by default, but switches to 0 for up to ~0.1 seconds (3 chunks of audio) following a transition from speech to silence; a sketch of this logic is shown after the list. Approximately 7% of the speech labels in this dataset are confidence 0; the remaining 93% are confidence 1 and can be used directly for VAD testing.
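
A minimal sketch of how such a confidence flag could be reproduced from the `speech` labels (illustrative only; the exact labelling script is not part of this card, and the 3-chunk window simply follows the ~0.1 second figure above):

```python
import numpy as np

def confidence_from_speech(speech, dropout_chunks=3):
    """Illustrative only: flag up to `dropout_chunks` chunks after each
    speech -> silence transition as low confidence (0); 1 elsewhere."""
    speech = np.asarray(speech)
    confidence = np.ones_like(speech)
    for i in range(1, len(speech)):
        if speech[i - 1] == 1 and speech[i] == 0:
            confidence[i : i + dropout_chunks] = 0
    return confidence
```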

The dataset has minimal background noise, making it suitable for mixing with external noise samples to test VAD robustness.
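
Since the recordings are close to noise-free, a simple way to test robustness is to add a noise recording at a chosen signal-to-noise ratio. A minimal sketch, assuming `noise_audio` is a NumPy array sampled at the same 16000 Hz rate (the noise source and SNR values are up to the user):

```python
import numpy as np

def mix_at_snr(speech_audio, noise_audio, snr_db):
    """Mix a noise signal into clean speech at a target SNR (in dB)."""
    # Loop or trim the noise so it matches the speech length.
    if len(noise_audio) < len(speech_audio):
        repeats = int(np.ceil(len(speech_audio) / len(noise_audio)))
        noise_audio = np.tile(noise_audio, repeats)
    noise_audio = noise_audio[: len(speech_audio)]

    # Scale the noise so that 10 * log10(speech_power / noise_power) == snr_db.
    speech_power = np.mean(speech_audio**2)
    noise_power = np.mean(noise_audio**2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech_audio + scale * noise_audio
```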

## Example data

A plot of an example showing the audio samples and the `speech` feature.

[Figure: example from test.other]

The example below shows brief dropouts in the `speech` feature during short natural pauses in the talker's speech. Since some VAD models may react more slowly, the `confidence` feature offers a way to optionally ignore these transient dropouts when evaluating performance.

[Figure: example from test.other]

## Example usage of dataset

The VAD model under test must support processing chunks of 512 audio samples at 16000 Hz, generating one prediction per chunk to compare against each `speech` label.

```python
import datasets
import numpy as np
from sklearn.metrics import roc_auc_score

dataset = datasets.load_dataset("guynich/librispeech_asr_test_vad")

audio = dataset["test.clean"][0]["audio"]["array"]
speech = dataset["test.clean"][0]["speech"]

# Compute voice activity probabilities with the VAD model under test
# (`vad_model` is a placeholder; it should return one probability per
# 512-sample chunk of audio)
speech_probs = vad_model(audio)

# Score the predictions against the speech labels for this example
roc_auc = roc_auc_score(speech, speech_probs)
```

In practice you would run the AUC computation across the entire test split.
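
A minimal sketch of that full-split loop, assuming as above that `vad_model` returns one probability per 512-sample chunk:

```python
# Aggregate labels and predictions over the whole split, then score once.
all_speech = []
all_probs = []

for example in dataset["test.clean"]:
    audio = example["audio"]["array"]
    all_speech.extend(example["speech"])
    all_probs.extend(vad_model(audio))

roc_auc = roc_auc_score(all_speech, all_probs)
```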

## Ignore transient dropouts

The `confidence` values can be used to filter the data. Removing chunks with zero confidence excludes 6.8% of the dataset and increases the computed precision. This compensates for the slower-moving voice activity decisions encountered in real-world applications.

```python
confidence = dataset["test.clean"][0]["confidence"]

speech_array = np.array(speech)
speech_probs_array = np.array(speech_probs)
confidence_array = np.array(confidence)

# Score only the chunks flagged with confidence 1
roc_auc_confidence = roc_auc_score(
    speech_array[confidence_array == 1],
    speech_probs_array[confidence_array == 1],
)
```

## Model evaluation example

Example AUC plots computed for the Silero VAD model on the test.clean split.

[Figure: example from test.clean with Silero-VAD]

Precision values increase when the data is sliced by `confidence` values. Low-confidence speech labels are flagged rather than removed, allowing users either to exclude them (as shown here) or to handle them with other methods.

[Figure: example from test.clean with Silero-VAD]
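
As a sketch of the slicing described above, precision can be compared with and without the low-confidence chunks, reusing the arrays from the earlier example (the 0.5 decision threshold is an arbitrary choice for illustration):

```python
from sklearn.metrics import precision_score

# Binarize the model probabilities at an arbitrary 0.5 threshold
speech_pred = (speech_probs_array >= 0.5).astype(int)

precision_all = precision_score(speech_array, speech_pred)
precision_confident = precision_score(
    speech_array[confidence_array == 1],
    speech_pred[confidence_array == 1],
)
```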

## License Information

This derivative dataset retains the same license as the source dataset librispeech_asr.

CC BY 4.0

## Citation Information

Labels contributed by Guy Nicholson were added to the following dataset.

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
  pages={5206--5210},
  year={2015},
  organization={IEEE}
}
```