Upload README.md with huggingface_hub
README.md (changed)
# librispeech_asr_test_vad

A dataset for testing voice activity detection (VAD).

This dataset uses test splits [`test.clean`, `test.other`] extracted
from the
[…]

There are two additional features.

[…]

2. Binary classification of confidence, called `confidence`. These binary values [0, 1] are computed as follows. The default confidence is 1. After a `speech` transition from 1 to 0, the confidence is set to 0 for up to a maximum of three 0s in `speech` (approximately 0.1 second). This can be used to correct for temporary blips in the `speech` feature and for unknown decay in the method under test.
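
The rule above can be sketched in numpy (the function name and `max_blip` parameter are illustrative; the dataset's actual generation code may differ):

```python
import numpy as np

def compute_confidence(speech, max_blip=3):
    """Sketch of the confidence rule: defaults to 1; after a 1 -> 0
    transition in `speech`, confidence is 0 for up to `max_blip`
    consecutive zero frames (3 frames of 512 samples at 16 kHz is
    roughly 0.1 second)."""
    confidence = np.ones(len(speech), dtype=int)
    for i in range(1, len(speech)):
        if speech[i] == 0 and speech[i - 1] == 1:
            # Mark up to max_blip zero frames following the transition.
            j = i
            while j < len(speech) and speech[j] == 0 and j - i < max_blip:
                confidence[j] = 0
                j += 1
    return confidence
```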

This test dataset has little background noise, which enables mixing it with
noise samples to assess voice activity detection robustness.
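
One way to do such mixing is to scale the noise to a target signal-to-noise ratio before adding it; a minimal sketch (the helper name and SNR convention are assumptions, not part of the dataset):

```python
import numpy as np

def mix_at_snr(speech_audio, noise, snr_db):
    """Add noise scaled so the mixture has the target SNR in dB (sketch)."""
    noise = np.resize(noise, speech_audio.shape)  # repeat/trim noise to length
    speech_power = np.mean(speech_audio ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech_audio + scale * noise
```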

## Example data

A plot of one example, showing the audio samples and the `speech` feature:

<img src="assets/test_other_item_02.png" alt="Example from test.other"/>

The following example demonstrates short zero blips in the `speech` feature for
valid short pauses in the talker's speech. However, a VAD model under test may
have a slower reaction time. The `confidence` feature provides an optional means
of reducing the impact of these short zero blips when computing metrics for a
method under test.

<img src="assets/test_clean_item_02.png" alt="Example from test.clean"/>

# Example usage of dataset

The model under test must support processing a chunk size of 512 audio samples
at 16000 Hz, generating a prediction for each `speech` feature value.
```python
import datasets
import numpy as np
from sklearn.metrics import roc_auc_score

dataset = datasets.load_dataset("guynich/librispeech_asr_test_vad")

audio = dataset["test.clean"][0]["audio"]["array"]
speech = dataset["test.clean"][0]["speech"]

# Compute probabilities from model under test (block size 512).
speech_probs = model_under_test(audio)

# Add test code here such as AUC metrics.
# In practice you would run this across the entire test split.
roc_auc = roc_auc_score(speech, speech_probs)
```
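
For inspecting the labels against the waveform, the frame-level `speech` values can be expanded back to sample rate; a sketch with synthetic stand-ins for the `audio` and `speech` fields above:

```python
import numpy as np

# Synthetic stand-ins for the dataset fields (1 s of audio, 512-sample frames).
audio = np.random.randn(16000)
speech = np.array([0] * 10 + [1] * 11 + [0] * 10)  # 31 frames

# Each `speech` value covers 512 audio samples at 16000 Hz.
time_step = 512 / 16000

# Time axes for plotting audio and labels on a shared scale.
audio_x_ticks = np.linspace(0.0, len(audio) / 16000, len(audio))
speech_x_ticks = np.linspace(0.0, len(speech) * time_step, len(speech))

# Mask the audio with the labels for plotting or playback of speech-only audio.
speech_mask = np.repeat(speech, 512)
masked_audio = audio[: len(speech_mask)] * speech_mask
```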

The confidence values can be used to slice the data. This removes 6.8% of the
`speech` features across the entire dataset, and removing these low-confidence
values increases precision.
```python
confidence = dataset["test.clean"][0]["confidence"]

speech_array = np.array(speech)
speech_probs_array = np.array(speech_probs)

roc_auc_confidence = roc_auc_score(
    speech_array[np.array(confidence) == 1],
    speech_probs_array[np.array(confidence) == 1],
)
```
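
For binary labels, ROC AUC equals the probability that a randomly chosen speech frame is scored above a randomly chosen non-speech frame (the Mann-Whitney statistic). A numpy-only sketch of the metric that `roc_auc_score` computes (not sklearn's implementation):

```python
import numpy as np

def rank_auc(labels, scores):
    """ROC AUC via average ranks (Mann-Whitney U); illustrative sketch."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for value in np.unique(scores):  # tied scores share their average rank
        tied = scores == value
        ranks[tied] = ranks[tied].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```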

# Silero-VAD model testing

Example AUC plots computed for the Silero-VAD model with the `test.clean` split.

<img src="assets/roc_test_clean.png" alt="Example from test.clean with Silero-VAD"/>

Precision values are increased when data is sliced by confidence values.

<img src="assets/roc_test_clean_exclude_low_confidence.png" alt="Example from test.clean with Silero-VAD"/>

# License Information

This dataset retains the same license as the source dataset.

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

# Citation Information