IQ2_KS quant of DeepSeek-V3.1-Base that I made for my 192 GB DDR5 + 3090/4090 setup, done according to the following recipe:

IQ2_KS 181.239 GiB (2.317 BPW)
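
For reference, the stated bits-per-weight is consistent with the file size above and the model's roughly 672 billion total parameters. A quick sanity check (not part of the original card):

```bash
# 181.239 GiB * 2^30 bytes/GiB * 8 bits/byte / ~672e9 params ≈ 2.317 BPW
awk 'BEGIN { printf "%.3f BPW\n", 181.239 * 2^30 * 8 / 672e9 }'
```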

👈 Secret Recipe
```bash
#!/usr/bin/env bash

custom="
# First 3 dense layers (0-2) (GPU)
# Except blk.*.attn_k_b.weight, whose row size is not divisible by 256, so it only supports qN_0 types
blk\.[0-2]\.attn_k_b.*=q4_0
blk\.[0-2]\.attn_.*=iq4_ks
blk\.[0-2]\.ffn_down.*=iq4_ks
blk\.[0-2]\.ffn_(gate|up).*=iq4_ks
blk\.[0-2]\..*=iq4_ks

# All attention, norm weights, and bias tensors for MoE layers (3-60) (GPU)
# Except blk.*.attn_k_b.weight, whose row size is not divisible by 256, so it only supports qN_0 types
blk\.[3-9]\.attn_k_b.*=q4_0
blk\.[1-5][0-9]\.attn_k_b.*=q4_0
blk\.60\.attn_k_b.*=q4_0

blk\.[3-9]\.attn_.*=iq4_ks
blk\.[1-5][0-9]\.attn_.*=iq4_ks
blk\.60\.attn_.*=iq4_ks

# Shared Expert (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=iq4_ks
blk\.60\.ffn_down_shexp\.weight=iq4_ks

blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.60\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq2_k
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq2_k
blk\.60\.ffn_down_exps\.weight=iq2_k

blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.60\.ffn_(gate|up)_exps\.weight=iq2_ks

# Token embedding and output tensors (GPU)
token_embd\.weight=iq4_k
output\.weight=Q8_0
"
```
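
The custom string above is the kind of per-tensor override map that ik_llama.cpp's llama-quantize accepts via --custom-q. A hedged sketch of how the rest of such a script typically looks; the imatrix file, input/output paths, and thread count are placeholders, not values from the original recipe:

```bash
# Strip comment lines and collapse the map into a comma-separated regex=type list.
custom=$(
  echo "$custom" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

# Placeholder paths and thread count; adjust for your setup.
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /path/to/imatrix.dat \
    /path/to/DeepSeek-V3.1-Base-BF16.gguf \
    /path/to/DeepSeek-V3.1-Base-IQ2_KS.gguf \
    IQ2_KS \
    24
```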

ik_llama.cpp quantizations of DeepSeek-V3.1-Base

NOTE: These quants MUST be run using the llama.cpp fork ik_llama.cpp.
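
Not part of the original card: a hedged example of launching llama-server from ik_llama.cpp in a way that matches the GPU/CPU split annotated in the recipe above (attention, shared experts, and embeddings offloaded to GPU; routed experts kept on CPU). The model path, context size, thread count, and server settings are illustrative assumptions; check every flag against your build.

```bash
# Illustrative launch only; verify each flag against your ik_llama.cpp build.
./build/bin/llama-server \
    --model /path/to/DeepSeek-V3.1-Base-IQ2_KS.gguf \
    --ctx-size 32768 \
    -mla 3 -fa \
    -amb 512 \
    -fmoe \
    --n-gpu-layers 99 \
    --override-tensor exps=CPU \
    --threads 16 \
    --host 127.0.0.1 --port 8080
```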

Credits to @ubergarm for his DeepSeek quant recipes, on which these quants are based.

Credits to @ggfhez for his bf16 upload.
