---
language:
  - en
pretty_name: librispeech_asr_test_vad
tags:
  - speech
license: cc-by-4.0
task_categories:
  - text-classification
---

# librispeech_asr_test_vad

A dataset for testing voice activity detection (VAD).

This dataset uses the test splits (`test.clean`, `test.other`) extracted from the librispeech_asr dataset.

There are two additional features:

  1. Binary classification of speech activity, called `speech`. These binary values [0, 1] were computed from the speech audio samples using a dynamic threshold method with background noise estimation and smoothing.

  2. Binary classification of confidence, called `confidence`. These binary values [0, 1] are computed as follows. The default confidence is 1. After a `speech` transition from 1 to 0, the confidence is set to 0 for up to three consecutive zero-valued `speech` frames (approximately 0.1 second); see the sketch after this list. This can be used to correct for temporary blips in the `speech` feature and unknown decay in the method under test.
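
A minimal sketch of this rule, assuming one `speech`/`confidence` value per 512-sample chunk; the function name `derive_confidence` and the `max_blip` parameter are illustrative and not part of the dataset tooling:

```python
import numpy as np

def derive_confidence(speech: np.ndarray, max_blip: int = 3) -> np.ndarray:
    """Illustrative reconstruction of the confidence rule described above.

    Confidence defaults to 1; after a 1 -> 0 transition in `speech` it is held
    at 0 for up to `max_blip` consecutive zero-valued frames (about 0.1 s at
    512 samples per frame and 16000 Hz).
    """
    confidence = np.ones_like(speech)
    zeros_since_transition = None  # None means "not inside a post-transition gap"
    for i in range(1, len(speech)):
        if speech[i - 1] == 1 and speech[i] == 0:
            zeros_since_transition = 0  # a 1 -> 0 transition starts a gap
        if speech[i] == 0 and zeros_since_transition is not None:
            if zeros_since_transition < max_blip:
                confidence[i] = 0
                zeros_since_transition += 1
            else:
                zeros_since_transition = None  # gap longer than the blip window
        elif speech[i] == 1:
            zeros_since_transition = None  # speech resumed
    return confidence

# Example: a short blip is marked low confidence, a longer pause is not.
speech = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1])
print(derive_confidence(speech))
# -> [1 1 0 0 0 1 1 0 0 0 1 1 1]
```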

This test dataset has little background noise, so it can be mixed with noise samples to assess voice activity detection robustness, as sketched below.
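
For example, a noise recording at the same 16000 Hz sample rate could be added at a chosen signal-to-noise ratio. This is a generic sketch, not a utility shipped with the dataset; `mix_at_snr` and its arguments are illustrative names:

```python
import numpy as np

def mix_at_snr(speech_audio: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix clean audio with a noise sample at a target SNR in dB.

    The noise is assumed to be at the same sample rate; it is tiled or
    truncated to match the clip length.
    """
    if len(noise) < len(speech_audio):
        noise = np.tile(noise, int(np.ceil(len(speech_audio) / len(noise))))
    noise = noise[: len(speech_audio)]

    speech_power = np.mean(speech_audio**2)
    noise_power = np.mean(noise**2)
    # Scale the noise so that speech_power / (scaled noise power) matches the target SNR.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech_audio + scale * noise

# Usage (hypothetical variables): noisy_audio = mix_at_snr(audio, noise, snr_db=10.0)
```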

## Example data

A plot of an example showing the audio samples and the `speech` feature.

*(Figure: example from `test.other`.)*

The following example demonstrates short zero blips in the `speech` feature for valid short pauses in the talker's speech. However, a VAD model under test may have a slower reaction time. The `confidence` feature provides an optional means of reducing the impact of these short zero blips when computing metrics for a method under test.

*(Figure: example from `test.other`.)*

## Example usage of dataset

The model under test must support processing a chunk size of 512 audio samples at 16000 Hz, generating one prediction per `speech` value.

```python
import datasets
import numpy as np
from sklearn.metrics import roc_auc_score

dataset = datasets.load_dataset("guynich/librispeech_asr_test_vad")

audio = dataset["test.clean"][0]["audio"]["array"]
speech = dataset["test.clean"][0]["speech"]

# Compute probabilities from the model under test (block size 512).
speech_probs = model_under_test(audio)

# Add test code here such as AUC metrics.
# In practice you would run this across the entire test split.
roc_auc = roc_auc_score(speech, speech_probs)
```
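
In the snippet above, `model_under_test` is a placeholder for the VAD being evaluated. As one possibility, a minimal sketch built on Silero-VAD, assuming the torch.hub loading interface shown in the silero-vad examples (one probability per 512-sample chunk at 16000 Hz), could look like this:

```python
import numpy as np
import torch

# Assumption: torch.hub entry point from the snakers4/silero-vad repository.
silero_model, _ = torch.hub.load("snakers4/silero-vad", "silero_vad")

def model_under_test(audio, chunk_size=512, sample_rate=16000):
    """Return one speech probability per chunk of `chunk_size` samples."""
    silero_model.reset_states()  # clear streaming state between independent clips
    probs = []
    for start in range(0, len(audio), chunk_size):
        chunk = audio[start : start + chunk_size]
        if len(chunk) < chunk_size:
            # Zero-pad the final partial chunk; depending on how the labels were
            # framed you may need to trim the final prediction instead.
            chunk = np.pad(chunk, (0, chunk_size - len(chunk)))
        chunk_tensor = torch.from_numpy(chunk.astype(np.float32))
        probs.append(silero_model(chunk_tensor, sample_rate).item())
    return probs
```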

The `confidence` values can be used to slice the data. Keeping only the values where `confidence` equals 1 removes 6.8% of the `speech` features across the entire dataset, and removing these low-confidence values increases precision.

```python
confidence = np.array(dataset["test.clean"][0]["confidence"])

speech_array = np.array(speech)
speech_probs_array = np.array(speech_probs)

roc_auc_confidence = roc_auc_score(
    speech_array[confidence == 1],
    speech_probs_array[confidence == 1],
)
```

## Silero-VAD model testing

Example AUC plots computed for the Silero-VAD model with the `test.clean` split.

*(Figure: example from `test.clean` with Silero-VAD.)*

Precision increases when the data is sliced by the `confidence` values.

*(Figure: example from `test.clean` with Silero-VAD.)*
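
As an illustration, precision with and without the confidence slice can be compared along the following lines, reusing the arrays from the snippets above; the 0.5 decision threshold and the variable names here are arbitrary choices for this sketch:

```python
from sklearn.metrics import precision_score

# Binarize the model probabilities with an illustrative 0.5 threshold.
speech_preds = (speech_probs_array >= 0.5).astype(int)

precision_all = precision_score(speech_array, speech_preds)
precision_sliced = precision_score(
    speech_array[confidence == 1],
    speech_preds[confidence == 1],
)
```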

## License Information

This dataset retains the same license as the source dataset.

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

## Citation Information

This dataset is derived from the dataset described in the following citation:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
  pages={5206--5210},
  year={2015},
  organization={IEEE}
}
```