Upload README.md with huggingface_hub
README.md (CHANGED)

@@ -1,38 +1,109 @@
The previous front matter (removed):

---
dataset_info:
  features:
  # …
  - name: text
    dtype: string
  - name: speaker_id
    dtype: int64
  - name: chapter_id
    dtype: int64
  - name: id
    dtype: string
  - name: speech
    sequence: int64
  - name: confidence
    sequence: int64
  splits:
  - name: test.clean
    num_bytes: 633124402.5
    num_examples: 2620
  - name: test.other
    num_bytes: 625951259.625
    num_examples: 2939
  download_size: 1212479246
  dataset_size: 1259075662.125
configs:
- config_name: default
  data_files:
  - split: test.clean
    path: data/test.clean-*
  - split: test.other
    path: data/test.other-*
---
The new front matter and README:

---
language:
- en
pretty_name: librispeech_asr_test_vad
tags:
- speech
license: cc-by-4.0
task_categories:
- text-classification
---

# librispeech_asr_test_vad

A dataset for testing voice activity detection.

This dataset uses test splits [`test.clean`, `test.other`] extracted
from the
[`librispeech_asr` dataset](https://huggingface.co/datasets/openslr/librispeech_asr).

There are two additional features.

1. Binary classification of speech activity, called `speech`. These binary values [0, 1] were computed from speech audio samples using a dynamic threshold method with background noise estimation and smoothing.

2. Binary classification of confidence, called `confidence`. These binary values [0, 1] are computed as follows. The default confidence is 1. After a `speech` transition from 1 to 0, the confidence is set to 0 for up to a maximum of three consecutive 0s in `speech` (approximately 0.1 seconds). This can be used to correct for temporary blips in the `speech` feature and for unknown decay in the method under test.

The effective chunk size is 512 audio samples for each `speech` value (512 / 16000 Hz = 32 ms per chunk).
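The confidence rule described above can be sketched in a few lines. This is an illustrative reconstruction from the description, not the code used to build the dataset; `confidence_from_speech` and `max_blip` are names invented here.

```python
import numpy as np

def confidence_from_speech(speech, max_blip=3):
    """Default confidence is 1; after each 1 -> 0 transition in `speech`,
    up to `max_blip` consecutive 0s are marked low confidence (0)."""
    speech = np.asarray(speech)
    confidence = np.ones(len(speech), dtype=int)
    run = 0  # consecutive 0s seen since the last 1 -> 0 transition
    for i in range(1, len(speech)):
        if speech[i] == 0 and speech[i - 1] == 1:
            run = 1
        elif speech[i] == 0 and run > 0:
            run += 1
        else:
            run = 0
        if 0 < run <= max_blip:
            confidence[i] = 0
    return confidence.tolist()

# A two-frame blip and the first three frames of a longer pause are
# marked 0; later frames of the long pause return to confidence 1.
print(confidence_from_speech([1, 1, 0, 0, 1, 0, 0, 0, 0, 0]))
# -> [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
```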

# License Information

This dataset retains the same license as the source dataset.

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

# Example usage of dataset

```python
import datasets
import numpy as np
from sklearn.metrics import roc_auc_score

dataset = datasets.load_dataset("guynich/librispeech_asr_test_vad")

audio = dataset["test.clean"][0]["audio"]["array"]
speech = dataset["test.clean"][0]["speech"]

# Compute probabilities from the method under test (block size 512).
speech_probs = method_under_test(audio)

# Add test code here, such as AUC metrics.
# In practice you would run this across the entire test split.
roc_auc = roc_auc_score(speech, speech_probs)

# Data for plotting.
time_step = 512 / 16000
audio_x_ticks = np.linspace(0.0, len(audio) / 16000, len(audio))
speech_x_ticks = np.linspace(0.0, len(speech) * time_step, len(speech))

# Data for inspecting masked audio with plotting or playback.
speech_mask = np.repeat(speech, 512)
masked_audio = audio[:len(speech_mask)] * speech_mask
```
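The comment above about running across the entire test split could look like the following helper. `evaluate_split` is a name invented here for illustration; it assumes each row has the `audio` and `speech` columns shown in the snippet and that the method returns one probability per 512-sample chunk.

```python
from sklearn.metrics import roc_auc_score

def evaluate_split(examples, method_under_test):
    """Pool `speech` labels and per-chunk probabilities over a whole
    split, then compute a single AUC value."""
    labels, probs = [], []
    for example in examples:
        speech = example["speech"]
        speech_probs = method_under_test(example["audio"]["array"])
        # Guard against off-by-one differences in chunk counts.
        n = min(len(speech), len(speech_probs))
        labels.extend(speech[:n])
        probs.extend(speech_probs[:n])
    return roc_auc_score(labels, probs)
```

For example, `evaluate_split(dataset["test.clean"], method_under_test)` would score the whole `test.clean` split at once.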

The `confidence` values can be used to exclude low-confidence chunks before computing a metric. They are stored as integers, so convert them to a boolean mask before indexing:

```python
confidence = np.array(dataset["test.clean"][0]["confidence"], dtype=bool)

speech_array = np.array(speech)
speech_probs_array = np.array(speech_probs)

roc_auc_confidence = roc_auc_score(
    speech_array[confidence],
    speech_probs_array[confidence],
)
```

Example plots.

<img src="assets/test_other_item_02.png" alt="Example from test.other"/>

The following example demonstrates short zero blips in the `speech` feature for
valid short pauses in the talker's speech. However, a VAD method under test may
have a slower reaction time. The `confidence` feature provides an optional means
of reducing the impact of these short zero blips when computing metrics for a
method under test.

<img src="assets/test_clean_item_02.png" alt="Example from test.clean"/>

# VAD testing

The VAD method shall supply a voice activity prediction for audio chunks of
512 samples at a rate of 16000 Hz.

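As an interface sketch only, here is a trivial energy-based stand-in for `method_under_test`. It is not Silero-VAD and not the method used to create the dataset labels; it merely shows the assumed contract of one probability per 512-sample chunk.

```python
import numpy as np

def method_under_test(audio, chunk_size=512):
    """Toy stand-in VAD: one probability per 512-sample chunk, computed
    by squashing per-chunk RMS energy into the interval (0, 1)."""
    num_chunks = len(audio) // chunk_size
    frames = np.reshape(audio[: num_chunks * chunk_size], (num_chunks, chunk_size))
    rms = np.sqrt(np.mean(frames * frames, axis=1))
    return 1.0 - np.exp(-rms / (np.median(rms) + 1e-9))
```

One second of 16000 Hz audio yields 31 full 512-sample chunks, so 31 probabilities.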
Example AUC plots computed for the `test.clean` split and the Silero-VAD model.

<img src="assets/roc_test_clean.png" alt="Example from test.clean with Silero-VAD"/>

# Citation Information

This dataset is derived from the LibriSpeech corpus; please cite:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
  pages={5206--5210},
  year={2015},
  organization={IEEE}
}
```