---
quantized_by: gghfez
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-V3.1
license: mit
base_model_relation: quantized
tags:
- mla
- imatrix
- deepseek_v3.1
- conversational
- ik_llama.cpp
---

## `ik_llama.cpp` imatrix Quantizations of deepseek-ai/DeepSeek-V3.1

This quant **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork, which supports ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.!

*NOTE*: `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.

I made this for my own RAM+VRAM setup. For more ik_llama.cpp quants of this model, discussions, and perplexity measurements, see @ubergarm's [DeepSeek-V3.1 Collection](https://huggingface.co/ubergarm/DeepSeek-V3.1-GGUF).
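
As a starting point, here is a hedged sketch of launching this quant with ik_llama.cpp's `llama-server` on a mixed GPU+CPU box. The model path, thread count, and context size are placeholders for illustration; ubergarm's model cards document tuned commands for various hardware.

```bash
# Hypothetical launch command (adjust paths and --threads to your hardware).
# -mla 3 -fmoe enable MLA attention and fused MoE kernels in ik_llama.cpp;
# -ot exps=CPU keeps the routed experts in system RAM, while -ngl 99
# offloads attention, norms, and shared experts to the GPU.
./build/bin/llama-server \
    --model /models/DeepSeek-V3.1-IQ2_KS.gguf \
    --ctx-size 32768 \
    -ctk q8_0 \
    -mla 3 -fmoe \
    -amb 512 \
    -ngl 99 \
    -ot exps=CPU \
    --threads 16 \
    --host 127.0.0.1 --port 8080
```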

<details>
<summary>👈 Quant details</summary>

```bash
#!/usr/bin/env bash

custom="
# First 3 dense layers (0-2) (GPU)
# Using q8_0 for attn_k_b since imatrix might not have these tensors
blk\.[0-2]\.attn_k_b.*=q8_0
blk\.[0-2]\.attn_.*=iq5_ks
blk\.[0-2]\.ffn_down.*=iq5_ks
blk\.[0-2]\.ffn_(gate|up).*=iq4_ks
blk\.[0-2]\..*=iq5_ks

# All attention, norm weights, and bias tensors for MoE layers (3-60) (GPU)
# Using q8_0 for attn_k_b since imatrix might not have these tensors
blk\.[3-9]\.attn_k_b.*=q8_0
blk\.[1-5][0-9]\.attn_k_b.*=q8_0
blk\.60\.attn_k_b.*=q8_0

blk\.[3-9]\.attn_.*=iq5_ks
blk\.[1-5][0-9]\.attn_.*=iq5_ks
blk\.60\.attn_.*=iq5_ks

# Shared Expert (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=iq5_ks
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=iq5_ks
blk\.60\.ffn_down_shexp\.weight=iq5_ks

blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.60\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq3_ks
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq3_ks
blk\.60\.ffn_down_exps\.weight=iq3_ks

blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.60\.ffn_(gate|up)_exps\.weight=iq2_ks

# Token embedding and output tensors (GPU)
token_embd\.weight=iq5_k
# Changed to q8_0
output\.weight=q8_0
"

# Strip comment lines and join the remaining rules into a single
# comma-separated list for --custom-q
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
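
# For illustration: each rule in the collapsed string has the form
# <tensor-name-regex>=<quant-type>; you can peek at the first few with:
#   echo "$custom" | tr ',' '\n' | head -n 3
#   blk\.[0-2]\.attn_k_b.*=q8_0
#   blk\.[0-2]\.attn_.*=iq5_ks
#   blk\.[0-2]\.ffn_down.*=iq5_ks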

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /fast/DeepSeek-V3.1.imatrix \
    /fast/bf16/DeepSeek-V3-00001-of-00030.gguf \
    /fast2/quants/DeepSeek-V3.1-IQ2_KS.gguf \
    IQ2_KS
```
</details>
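
If you want to reproduce the kind of perplexity comparisons linked above, a minimal sketch using ik_llama.cpp's `llama-perplexity` follows. The corpus path and offload flags are assumptions for illustration; ubergarm's collection documents the exact methodology behind the published numbers.

```bash
# Hypothetical perplexity run over the standard wiki.test.raw text.
# Offload flags mirror the server example above; -f points at the corpus.
./build/bin/llama-perplexity \
    --model /models/DeepSeek-V3.1-IQ2_KS.gguf \
    -f wiki.test.raw \
    -mla 3 -fmoe \
    -ngl 99 \
    -ot exps=CPU \
    --threads 16
```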