---
license: mit
quantized_by: Pomni
language:
  - en
base_model:
  - distil-whisper/distil-medium.en
pipeline_tag: automatic-speech-recognition
datasets:
  - mozilla-foundation/common_voice_13_0
  - facebook/voxpopuli
  - LIUM/tedlium
  - MLCommons/peoples_speech
  - speechcolab/gigaspeech
  - edinburghcstr/ami
tags:
  - whisper.cpp
  - ggml
  - whisper
  - audio
  - speech
  - voice
  - distil
---

# Distil-Medium.en quants
This is a repository of GGML quants for distil-medium.en (a Whisper-based transcription model), for use with whisper.cpp.
If you are looking for a program to run this model with, I would recommend EasyWhisper UI: it is user-friendly, has a GUI, and automates a lot of the hard stuff for you.

## List of Quants
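For reference, these GGML files can also be run directly with whisper.cpp's command-line tool. A minimal sketch, assuming whisper.cpp has been built in `./build` and the F16 quant was downloaded as `ggml-medium-32-2.en.bin` (adjust the paths to wherever your binaries and model actually live):

```shell
# Transcribe a 16 kHz WAV file with the downloaded quant.
# whisper-cli is the main transcription binary in recent whisper.cpp builds.
./build/bin/whisper-cli \
  -m ggml-medium-32-2.en.bin \
  -f samples/jfk.wav
```

whisper.cpp expects 16 kHz mono WAV input; other formats need converting first (e.g. with ffmpeg).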
Sorry, but I am still in the middle of creating quants right now. For now, you can download the official F32 and F16 quants below.
Clicking on a link will download the corresponding quant instantly.
| Link | Quant | Size | Notes | 
|---|---|---|---|
| GGML | F32 | 1.58 GB | Likely overkill. | 
| GGML | F16 | 794 MB | Performs better than Q8_0 for noisy audio or music. | 
The F32 quant was taken from `distil-whisper/distil-medium.en/ggml-medium-32-2.en.fp32.bin`, and the F16 quant was taken from `distil-whisper/distil-medium.en/ggml-medium-32-2.en.bin`.

## Questions you may have
### Why do the "K-quants" not work for me?

My guess is that your GPU might be too old to support them; I have gotten the same error on my GTX 1080. If you would like to run them regardless, you can try switching to CPU inference.
### Are the K-quants "S", "M", or "L"?

The quantizer I was using was not specific about this, so I do not know either.
### What program did you use to make these quants?

I used whisper.cpp v1.7.6 on Windows x64, with CUDA 12.4.0.
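For anyone who wants to reproduce this, a sketch of how such quants are typically produced with whisper.cpp's bundled `quantize` tool (the input/output filenames here are assumptions based on the files above):

```shell
# Quantize the official F32 model down to Q8_0.
# Usage: quantize <input.bin> <output.bin> <type>
./build/bin/quantize \
  ggml-medium-32-2.en.fp32.bin \
  ggml-medium-32-2.en-q8_0.bin \
  q8_0
```

The same command with a different type argument (e.g. `q5_0`, `q4_k`) produces the other quant levels.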
