# mlx-community/Qwen3.5-0.8B-OptiQ-4bit

Optimized for Apple Silicon with mlx-optiq — sensitivity-aware mixed-precision quantization, reusable at inference, fine-tuning, and serving time.

A 4-bit mixed-precision MLX quant of Qwen/Qwen3.5-0.8B. Per-layer bit-widths come from a KL-divergence sensitivity pass on the bundled optiq.jsonl five-domain calibration mix (prose · reasoning · code · agent · tool-call). Sensitive layers go to 8-bit; robust ones stay at 4-bit. The on-disk size is within ~5% of a stock uniform 4-bit MLX quant.

## Quantization details

| Property | Value |
|---|---|
| Predominant precision | 4-bit |
| Layers at 8-bit (sensitive) | 56 |
| Layers at 4-bit (robust) | 130 |
| Total quantized layers | 186 |
| Group size | 64 |
| Calibration mix | optiq.jsonl (32 samples × 5 domains) |
| Sensitivity reference | bf16 (auto-resolved; falls back to uniform 4-bit if bf16 doesn't fit) |
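
To sanity-check the 56/130 split above, you can tally the per-layer overrides from the shipped config. A minimal sketch, assuming the mixed-precision bit-widths are recorded as per-layer dicts under the `quantization` key in `config.json`, as mixed-precision MLX quants typically are:

```python
import json
from collections import Counter

# Tally per-layer bit-widths from the model's config.json (assumes the
# mixed-precision overrides live under the "quantization" key as
# per-layer {"group_size": ..., "bits": ...} dicts).
with open("config.json") as f:
    quant = json.load(f)["quantization"]

counts = Counter(v["bits"] for v in quant.values() if isinstance(v, dict))
print(counts)  # expected for this build: Counter({4: 130, 8: 56})
```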

We follow the same naming convention llama.cpp uses for Q4_K_M and similar mixed-precision quants: the "4-bit" label is for the predominant precision, not the weighted average. The mixed allocation is what lets this build beat stock uniform-4-bit at the same disk size — see the benchmark deltas below.
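
To make the allocation concrete, here is a hypothetical sketch of a KL-driven sensitivity pass (illustrative only, not mlx-optiq's actual implementation): quantize one layer at a time, measure the KL divergence of the resulting next-token distribution against the bf16 reference on the calibration mix, then promote the most sensitive layers to 8-bit until the budget is spent.

```python
import numpy as np

def mean_kl(ref_logits: np.ndarray, test_logits: np.ndarray) -> float:
    """Mean KL(ref || test) over positions, computed from raw logits."""
    ref = ref_logits - ref_logits.max(-1, keepdims=True)
    test = test_logits - test_logits.max(-1, keepdims=True)
    log_p = ref - np.log(np.exp(ref).sum(-1, keepdims=True))
    log_q = test - np.log(np.exp(test).sum(-1, keepdims=True))
    p = np.exp(log_p)
    return float((p * (log_p - log_q)).sum(-1).mean())

def allocate_bits(sensitivity: dict[str, float], n_8bit: int = 56) -> dict[str, int]:
    """Promote the n_8bit most KL-sensitive layers to 8-bit; the rest stay 4-bit.

    sensitivity[name] is assumed to hold mean_kl(bf16 logits, logits with only
    that layer quantized to 4-bit), averaged over the calibration samples.
    """
    ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
    promoted = set(ranked[:n_8bit])
    return {name: (8 if name in promoted else 4) for name in sensitivity}
```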

## Usage

Load it with mlx-lm and use it as usual:

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3.5-0.8B-OptiQ-4bit")
response = generate(
    model, tokenizer,
    prompt="Explain quantum computing in simple terms.",
    max_tokens=200,
)
print(response)
```

For the rest of the toolchain (mixed-precision KV-cache serving, sensitivity-aware LoRA fine-tuning, an OpenAI- and Anthropic-compatible inference server, hot-swappable mounted adapters, and sandboxed Python execution for agent workflows), install mlx-optiq:

```bash
pip install mlx-optiq
```

See the Qwen3.5 family guide on mlx-optiq.com for sampling defaults, training recipes, and family-specific caveats.

## Benchmarks

The five-metric suite that drives the Capability Score, with KL drift and disk size reported alongside:

| Metric | Score |
|---|---|
| MMLU (5-shot, 1000 samples) | 54.5% |
| GSM8K (1000 samples, 3-shot CoT) | 37.3% |
| IFEval (full set, strict) | 45.8% |
| IFEval (full set, loose) | 45.8% |
| BFCL-V3 simple (200 single-turn calls) | 43.0% |
| HumanEval (164 problems, pass@1) | 27.4% |
| Capability Score (mean of the 5 benchmarks above) | 41.6 |
| KL vs uniform-4-bit reference (mean / p95) | 0.0965 / 0.3445 |
| On-disk size | 0.6 GB |

The Capability Score is the simple unweighted mean of the five benchmarks — every metric gets one equal vote. Disk size is reported next to it as an honest second axis instead of being folded into the score. See the eval-framework writeup for the full methodology.
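
Concretely, the arithmetic is just the following (IFEval strict and loose are identical here, so either gives the same mean):

```python
# MMLU, GSM8K, IFEval (strict), BFCL-V3, HumanEval
scores = [54.5, 37.3, 45.8, 43.0, 27.4]
capability = sum(scores) / len(scores)
print(round(capability, 1))  # 41.6
```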

## License

Apache 2.0 (inherits from base model).
