Update README.md
README.md CHANGED

- asr
- quantized
---

# WhisperKit
## Dataset: `earnings22-12hours`

| Model | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:---|:---|---:|---:|:---|
| [tiny.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny.en) | [19.02](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-tiny.en/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [tiny](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny) | [21.21](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-tiny/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |

`WhisperOpenAIAPI` sets the reference, and we assume that it uses the equivalent of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) in float16 precision, along with additional undisclosed optimizations from OpenAI. In all measurements, we care primarily about per-example no-regressions (quantified as `qoi` below), which is a stricter metric than the dataset-average [Word Error Rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). A 100% `qoi` preserves perfect backwards-compatibility on the test distribution and avoids "perceived regressions": the phenomenon where known per-example behavior changes after a code or model update and causes divergence in downstream code or breaks the user experience itself (even if dataset averages stay flat across updates). Pseudocode for `qoi`:

```python
qoi = []
for example in dataset:
    # No regression on this example: the optimized model is at least
    # as accurate as the reference model
    no_regression = wer(optimized_model(example)) <= wer(reference_model(example))
    qoi.append(no_regression)
qoi = (sum(qoi) / len(qoi)) * 100.
```
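To make the pseudocode concrete, here is a self-contained sketch with a toy word-level WER implementation; the transcripts and model outputs below are invented purely for illustration:

```python
def wer(hypothesis: str, reference: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    d = list(range(len(hyp) + 1))  # distances against the empty reference prefix
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # substitution (prev), deletion (d[j]), insertion (d[j-1])
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1] / max(len(ref), 1)

# Hypothetical per-example transcripts (not real benchmark data)
references          = ["the quick brown fox", "jumps over the lazy dog"]
reference_model_out = ["the quick brown fox", "jumps over the lazy dog"]
optimized_model_out = ["the quick brown fox", "jumps over a lazy dog"]

qoi = [
    wer(opt, ref) <= wer(base, ref)
    for opt, base, ref in zip(optimized_model_out, reference_model_out, references)
]
qoi = sum(qoi) / len(qoi) * 100.0
print(f"QoI: {qoi:.1f}%")  # 50.0%: the second example regressed
```

Note how a single regressed example halves `QoI` even though the dataset-average WER barely moves, which is exactly the sensitivity this metric is designed for.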

Note that the ordering of models with respect to `WER` does not necessarily match the ordering with respect to `QoI`. This is because the reference model is assigned a `QoI` of 100% by definition: any per-example regression by other implementations gets penalized, while per-example improvements are not rewarded. `QoI` (higher is better) matters when production behavior is established by the reference results and the goal is to not regress when switching to an optimized or compressed model. On the other hand, `WER` (lower is better) matters when there is no established production behavior and one is picking the best quality-versus-model-size trade-off.

We anticipate developers that use Whisper (or similar models) in production to have their own Quality Assurance test sets, and [whisperkittools](https://github.com/argmaxinc/whisperkittools) offers the tooling necessary to run the same measurements on such custom test sets; please see the [Model Evaluation on Custom Dataset](https://github.com/argmaxinc/whisperkittools) section for details.
### Why are there so many Whisper versions?

WhisperKit is an SDK for building speech-to-text features in apps across a wide range of Apple devices. We are working towards abstracting model versioning away from the developer so that WhisperKit "just works" by deploying the highest-quality model version that a particular device can execute. In the interim, we leave the choice to the developer by providing quality and size trade-offs.
### Datasets

- [librispeech](https://huggingface.co/datasets/argmaxinc/librispeech): ~5 hours of short English audio clips, tests short-form transcription quality
- [earnings22](https://huggingface.co/datasets/argmaxinc/earnings22): ~120 hours of English audio clips from earnings calls with various accents, tests long-form transcription quality

### Reproducing Results

Benchmark results on this page were automatically generated by [whisperkittools](https://github.com/argmaxinc/whisperkittools) using our cluster of Apple Silicon Macs as self-hosted runners on GitHub Actions. We periodically recompute these benchmarks as part of our CI pipeline. Due to [security concerns](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#hardening-for-self-hosted-runners), we are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to run identical [evaluation jobs](#evaluation) locally. For reference, our M2 Ultra devices complete a `librispeech` + `openai/whisper-large-v3` evaluation in under 1 hour regardless of the Whisper implementation. The oldest Apple Silicon Macs should take less than 1 day to complete the same evaluation.
### Glossary

- `_turbo`: Indicates the presence of additional optimizations (not compression) to unlock streaming transcription, as described in our [Blog Post](https://www.takeargmax.com/blog/whisperkit).
- `_*MB`: Indicates the presence of model compression. Instead of cluttering the file name with details like `_AudioEncoder-5.8bits_TextDecoder-6.1bits_QLoRA-rank=16`, we choose to summarize the compression spec as the resulting total file size, since this is what matters to developers in production.
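As a back-of-envelope sanity check on these size-based names, a file size can be converted to an average bit width per weight. The ~1.55B parameter count for `large-v2` below comes from OpenAI's published model description, not from this page, and on-disk size also includes non-weight data, so treat the result as approximate:

```python
# Rough average bits-per-weight implied by an on-disk model size.
# Assumes ~1.55e9 parameters for whisper-large-v2 (OpenAI's published
# figure; an assumption here, not stated on this page).
NUM_PARAMS = 1.55e9

def avg_bits_per_weight(file_size_mb: float) -> float:
    return file_size_mb * 1e6 * 8 / NUM_PARAMS

print(round(avg_bits_per_weight(3100), 1))  # 16.0 -> consistent with float16
print(round(avg_bits_per_weight(949), 1))   # 4.9  -> ~5-bit average compression
```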
# WhisperKit-0.7.0 VAD Chunking Strategy Evaluation Results

This is an evaluation study to verify that the [Voice Activity Detection (VAD) based chunk-and-batch strategy introduced in WhisperKit-0.7.0](https://github.com/argmaxinc/WhisperKit/releases/tag/v0.7.0) does not decrease transcription quality. To measure the impact of chunking, we picked a random 10% subset of the [earnings22](https://huggingface.co/datasets/argmaxinc/earnings22-12hours) dataset, which comprises corporate earnings call recordings in English with various accents. The long-form nature (>1 hr/clip) and the density of speech in these audio clips are intended to stress-test VAD accuracy. If VAD is inaccurate, WhisperKit will present speech segments to the Whisper model that start mid-speech, causing Whisper to hallucinate at increased rates.
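WhisperKit's shipped VAD is described in the linked release notes; purely as an illustration of the general chunk-and-batch idea (not WhisperKit's actual algorithm), a chunker can place each cut at the quietest frame in the back half of a maximum-length window, so that no chunk begins mid-speech:

```python
def chunk_boundaries(speech_probs, max_frames=300):
    """Pick cut points so every chunk is at most max_frames long,
    cutting at the frame with the lowest speech probability in the
    back half of each window. speech_probs holds one value per frame."""
    bounds, start = [], 0
    while len(speech_probs) - start > max_frames:
        window = speech_probs[start + max_frames // 2 : start + max_frames]
        # Quietest frame in the back half of the window -> cut there
        offset = min(range(len(window)), key=window.__getitem__)
        cut = start + max_frames // 2 + offset
        bounds.append(cut)
        start = cut
    return bounds

# Synthetic example: 10s of speech, a short silence, 10s more speech
# (0.1s frames); the cut lands inside the silent gap, not mid-speech.
probs = [0.9] * 100 + [0.05] * 5 + [0.9] * 100
print(chunk_boundaries(probs, max_frames=150))  # [100]
```

Cutting at low speech-probability frames is what keeps chunks from starting mid-speech; an inaccurate VAD would place cuts inside speech and trigger the hallucinations discussed above.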
## Dataset: `earnings22-12hours`
Long-Form Audio (>1hr/clip) - ~12 hours of earnings call recordings in English with various accents
### with VAD

| Model | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:---|:---|---:|---:|:---|
| [tiny.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny.en) | [19.02](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-tiny.en/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [tiny](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny) | [21.21](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-tiny/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
### without VAD

| Model | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:---|:---|---:|---:|:---|
| [large-v3_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3_turbo) | [11.95](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v3_turbo/earnings22-12hours) | 100 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [large-v2](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2) | [13.76](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v2/earnings22-12hours) | 15.4 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [large-v2_949MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2_949MB) | [12.32](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v2_949MB/earnings22-12hours) | 30.8 | 949 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [large-v2_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2_turbo) | [13.06](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v2_turbo/earnings22-12hours) | 15.4 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [large-v3](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3) | [12.09](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v3/earnings22-12hours) | 30.8 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [large-v3_turbo_954MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3_turbo_954MB) | [20.25](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v3_turbo_954MB/earnings22-12hours) | 0 | 954 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [distil-large-v3](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3) | [13.03](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3/earnings22-12hours) | 15.4 | 1510 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [distil-large-v3_594MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3_594MB) | [17.33](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3_594MB/earnings22-12hours) | 0 | 594 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [distil-large-v3_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3_turbo) | [13.19](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3_turbo/earnings22-12hours) | 15.4 | 1510 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [distil-large-v3_turbo_600MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3_turbo_600MB) | [16.38](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3_turbo_600MB/earnings22-12hours) | 0 | 600 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [small.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-small.en) | [15.39](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-small.en/earnings22-12hours) | 7.7 | 483 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [small](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-small) | [16.27](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-small/earnings22-12hours) | 7.7 | 483 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [base.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-base.en) | [19.62](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-base.en/earnings22-12hours) | 0 | 145 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [base](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-base) | [25.26](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-base/earnings22-12hours) | 0 | 145 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [tiny.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny.en) | [23.79](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-tiny.en/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
| [tiny](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny) | [31.48](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-tiny/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/c829f9a) |
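For the two models reported in both tables (`tiny.en` and `tiny`), the relative WER change from enabling VAD chunking follows directly from the numbers above. Note that the two tables reference different code commits (65cb888 vs c829f9a), so this is an indicative comparison rather than a strictly controlled one:

```python
# WER values (lower is better) copied from the two tables above
wer_with_vad    = {"tiny.en": 19.02, "tiny": 21.21}
wer_without_vad = {"tiny.en": 23.79, "tiny": 31.48}

rel_reduction = {
    m: (wer_without_vad[m] - wer_with_vad[m]) / wer_without_vad[m] * 100
    for m in wer_with_vad
}
for model, pct in rel_reduction.items():
    # Positive percentages mean VAD chunking improved (lowered) WER
    print(f"{model}: {pct:.0f}% relative WER reduction with VAD chunking")
```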