---
license: mit
quantized_by: Pomni
language:
  - en
base_model:
  - distil-whisper/distil-medium.en
pipeline_tag: automatic-speech-recognition
datasets:
  - mozilla-foundation/common_voice_13_0
  - facebook/voxpopuli
  - LIUM/tedlium
  - MLCommons/peoples_speech
  - speechcolab/gigaspeech
  - edinburghcstr/ami
tags:
  - whisper.cpp
  - ggml
  - whisper
  - audio
  - speech
  - voice
  - distil
---

# Distil-Medium.en quants

This is a repository of GGML quants for distil-medium.en (a Whisper-based transcription model), for use with whisper.cpp.

If you are looking for a program to run this model with, I would recommend EasyWhisper UI: it is user-friendly, has a GUI, and automates much of the setup for you.
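If you would rather call whisper.cpp directly, the rough C++ sketch below shows the general shape of loading one of these GGML files and transcribing audio through the library's C API. The model filename and the audio-loading step are placeholders I am assuming for illustration, not guarantees about what this repository ships; use the name of the file you actually downloaded.

```cpp
// Rough sketch: load a GGML quant and transcribe with the whisper.cpp C API.
// The model filename below is a placeholder; use the name of the file you
// actually downloaded from this repository.
#include "whisper.h"

#include <cstdio>
#include <vector>

int main() {
    struct whisper_context_params cparams = whisper_context_default_params();
    struct whisper_context * ctx =
        whisper_init_from_file_with_params("ggml-distil-medium.en-q8_0.bin", cparams);
    if (ctx == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // whisper.cpp expects 16 kHz mono float PCM; fill this vector from your own
    // WAV/audio loader (not shown here).
    std::vector<float> pcm;

    whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    wparams.language = "en"; // distil-medium.en is English-only

    if (whisper_full(ctx, wparams, pcm.data(), (int) pcm.size()) != 0) {
        fprintf(stderr, "transcription failed\n");
        whisper_free(ctx);
        return 1;
    }

    // Print each recognized segment on its own line.
    const int n_segments = whisper_full_n_segments(ctx);
    for (int i = 0; i < n_segments; ++i) {
        printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }

    whisper_free(ctx);
    return 0;
}
```

This assumes you are compiling against `whisper.h` and linking the whisper.cpp library; any resampling to 16 kHz mono has to happen before calling `whisper_full`.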

## List of Quants

I am still in the process of creating quants. However, the most important ones, the ones I deem "useful in real-world usage" (i.e. able to provide acceptably accurate transcriptions), are already up.

Clicking on a link will download the corresponding quant instantly.

| Link | Quant | Size | Notes |
| --- | --- | --- | --- |
| GGML | F32 | 1.58 GB | Likely overkill. |
| GGML | F16 | 794 MB | Performs better than Q8_0 for noisy audio and music. |
| GGML | Q8_0 | 430 MB | Sweet spot; superficial quality loss at nearly double the speed. |
| GGML | Q6_K | 336 MB | |
| GGML | Q5_K | 284 MB | |
| GGML | Q5_1 | 308 MB | |
| GGML | Q5_0 | 284 MB | Last "good" quant; anything below loses quality rapidly. |
| GGML | Q4_K | 235 MB | Might not have lost too much quality, but I'm not sure. |
| GGML | Q4_1 | 491 MB | |
| GGML | Q4_0 | 444 MB | |
| GGML | Q3_K | 345 MB | |

The F32 quant was taken from `distil-whisper/distil-medium.en/ggml-medium-32-2.en.fp32.bin`, and the F16 quant was taken from `distil-whisper/distil-medium.en/ggml-medium-32-2.en.bin`.

## Questions you may have

### Why do the "K-quants" not work for me?

My guess is that your GPU is too old to support them; I have gotten the same error on my GTX 1080. If you would like to run them regardless, you can try switching to CPU inference (see the sketch below).
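If you are calling the library yourself, one way to force CPU inference is to clear `use_gpu` in the context parameters before loading the model; recent whisper.cpp command-line builds expose a similar no-GPU option. The sketch below is my assumption about how you would wire that up, and the filename is a placeholder for whichever K-quant you downloaded.

```cpp
// Sketch: fall back to CPU inference when the GPU cannot handle K-quants.
// Only the context parameters differ from the usual loading code; the
// filename is a placeholder.
#include "whisper.h"

int main() {
    struct whisper_context_params cparams = whisper_context_default_params();
    cparams.use_gpu = false; // run entirely on the CPU

    struct whisper_context * ctx =
        whisper_init_from_file_with_params("ggml-distil-medium.en-q5_k.bin", cparams);
    if (ctx == nullptr) {
        return 1;
    }

    // ... set up whisper_full_params and call whisper_full() as usual ...

    whisper_free(ctx);
    return 0;
}
```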

### Are the K-quants "S", "M", or "L"?

The quantizer I was using was not specific about this, so I do not know either.

### What program did you use to make these quants?

I used whisper.cpp v1.7.6 on Windows x64, leveraging CUDA 12.4.0.