---
language:
- en
pretty_name: librispeech_asr_test_vad
tags:
- speech
license: cc-by-4.0
task_categories:
- text-classification
---
# Voice Activity Detection (VAD) Test Dataset
This dataset is based on the `test.clean` and `test.other` splits from the
[librispeech_asr](https://huggingface.co/datasets/openslr/librispeech_asr)
corpus. It includes two binary labels:
- **speech**: Indicates presence of speech ([0, 1]), computed using a dynamic threshold method with background noise estimation and smoothing.
- **confidence**: A post-processing flag for optionally correcting transient dropouts in speech. It is 1 by default, and switches to 0 for up to ~0.1 seconds (3 chunks of audio) following a transition from speech to silence. Approximately 7% of the `speech` labels in this dataset have `confidence` 0; the remaining 93% have `confidence` 1 and can be used directly for VAD evaluation.
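The label balance can be checked directly from the `confidence` feature. A minimal sketch that loads one split and reports the flagged fraction (the percentages above refer to the dataset as a whole):

```python
import datasets
import numpy as np

dataset = datasets.load_dataset("guynich/librispeech_asr_test_vad")

# Pool the per-item confidence labels for one split and report the
# low-confidence fraction (roughly 7% across the whole dataset)
confidence = np.concatenate(dataset["test.clean"]["confidence"])
print(f"confidence == 0: {100.0 * np.mean(confidence == 0):.1f}%")
```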
The dataset has minimal background noise, making it suitable for mixing with
external noise samples to test VAD robustness.
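For instance, a clip can be degraded with an external noise recording at a chosen signal-to-noise ratio before running the VAD under test. The sketch below uses the standard SNR definition; the `noise` array is a hypothetical 16 kHz recording supplied by the user, not part of this dataset.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested SNR in dB, then add it."""
    noise = noise[: len(speech)]             # assume noise is at least as long as speech
    speech_power = np.mean(speech**2)
    noise_power = np.mean(noise**2) + 1e-12  # guard against an all-zero noise clip
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

# Example: degrade a clip to 10 dB SNR before running the VAD under test
# noisy_audio = mix_at_snr(audio, noise, snr_db=10.0)
```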
## Example data
The plot below shows an example item's audio samples together with its `speech` feature.
<img src="assets/test_other_item_02.png" alt="Example from test.other"/>
The example below shows brief dropouts in the `speech` feature during natural
short pauses in the talker's speech. Since some VAD models may react more
slowly, the `confidence` feature offers a way to optionally ignore these
transient dropouts when evaluating performance.
<img src="assets/test_clean_item_02.png" alt="Example from test.clean"/>
# Example usage of the dataset
The VAD model under test must support processing audio in chunks of 512
samples at 16000 Hz, producing one prediction per `speech` label.
```python
import datasets
import numpy as np
from sklearn.metrics import roc_auc_score

dataset = datasets.load_dataset("guynich/librispeech_asr_test_vad")

audio = dataset["test.clean"][0]["audio"]["array"]
speech = dataset["test.clean"][0]["speech"]

# Compute voice activity probabilities with the model under test
# (vad_model returns one probability per 512-sample chunk)
speech_probs = vad_model(audio)

# Compute AUC for this item (add further test code here as needed)
roc_auc = roc_auc_score(speech, speech_probs)
```
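The snippet above leaves `vad_model` undefined. A minimal sketch of such a wrapper is shown below, assuming a hypothetical `predict_chunk` callable that maps one 512-sample chunk to a speech probability (for example, a per-chunk call into the VAD under test):

```python
import numpy as np

CHUNK_SIZE = 512  # one `speech` label per 512-sample chunk at 16000 Hz

def vad_model(audio: np.ndarray) -> np.ndarray:
    """Split the clip into 512-sample chunks and score each one.

    `predict_chunk` is the hypothetical per-chunk predictor of the VAD under
    test; any trailing samples beyond the last full chunk are dropped here.
    """
    num_chunks = len(audio) // CHUNK_SIZE
    return np.array(
        [
            predict_chunk(audio[i * CHUNK_SIZE : (i + 1) * CHUNK_SIZE])
            for i in range(num_chunks)
        ]
    )
```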
In practice you would run the AUC computation across the entire test split.
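A sketch of that loop, reusing the `vad_model` callable from above and pooling labels and probabilities over every item in the split:

```python
from sklearn.metrics import roc_auc_score

all_speech = []
all_probs = []
for item in dataset["test.clean"]:
    probs = vad_model(item["audio"]["array"])
    labels = item["speech"]
    # Align per item in case a trailing partial chunk produced no prediction
    n = min(len(labels), len(probs))
    all_speech.extend(labels[:n])
    all_probs.extend(probs[:n])

roc_auc_split = roc_auc_score(all_speech, all_probs)
print(f"test.clean AUC: {roc_auc_split:.3f}")
```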
## Ignore transient dropouts
The `confidence` values can be used to filter the data. Removing the
zero-confidence labels excludes 6.8% of the dataset and increases the computed
precision. This compensates for the slower-moving voice activity decisions
encountered in real-world applications.
```python
confidence = dataset["test.clean"][0]["confidence"]

confidence_array = np.array(confidence)
speech_array = np.array(speech)
speech_probs_array = np.array(speech_probs)

roc_auc_confidence = roc_auc_score(
    speech_array[confidence_array == 1],
    speech_probs_array[confidence_array == 1],
)
```
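The same slicing applies to threshold-based metrics. The sketch below compares precision with and without the low-confidence labels; the 0.5 decision threshold is an arbitrary assumption for illustration:

```python
from sklearn.metrics import precision_score

# Hard decisions at an assumed 0.5 threshold
predictions = (speech_probs_array >= 0.5).astype(int)

precision_all = precision_score(speech_array, predictions)
precision_confident = precision_score(
    speech_array[confidence_array == 1],
    predictions[confidence_array == 1],
)
print(f"precision (all labels):      {precision_all:.3f}")
print(f"precision (confidence == 1): {precision_confident:.3f}")
```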
# Model evaluation example
Example AUC plots computed for the
[Silero VAD](https://github.com/snakers4/silero-vad?tab=readme-ov-file)
model on the `test.clean` split.
<img src="assets/roc_test_clean.png" alt="Example from test.clean with Silero-VAD"/>
Precision values increase when the data is sliced by `confidence` values.
These low-confidence `speech` labels are flagged rather than removed, allowing
users to either exclude them (as shown here) or handle them with other methods.
<img src="assets/roc_test_clean_exclude_low_confidence.png" alt="Example from test.clean with Silero-VAD"/>
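Plots of this kind can be reproduced with `sklearn.metrics.roc_curve` and matplotlib. A minimal sketch using the pooled `all_speech` labels and `all_probs` scores from the split-level loop above:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

fpr, tpr, _ = roc_curve(all_speech, all_probs)

plt.plot(fpr, tpr, label=f"AUC = {roc_auc_split:.3f}")
plt.plot([0, 1], [0, 1], linestyle="--", color="gray")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.title("VAD model under test on test.clean")
plt.legend()
plt.show()
```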
# License Information
This derivative dataset retains the same license as the source dataset
[librispeech_asr](https://huggingface.co/datasets/openslr/librispeech_asr).
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
# Citation Information
The labels contributed by Guy Nicholson were added to the following dataset:
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```