This model was quantized to W4A16 (4-bit weights, 16-bit activations, group size 128) using llmcompressor. Serve it with:

vllm serve leon-se/gemma-3-27b-it-qat-W4A16-G128 --max-model-len 4096 --max-num-seqs 1
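Once the server is up, it exposes vLLM's OpenAI-compatible REST API. A minimal sketch of building a chat-completions request against it, using only the standard library (the URL assumes vLLM's default host and port; adjust to your deployment):

```python
import json
from urllib import request

def chat_request(prompt,
                 model="leon-se/gemma-3-27b-it-qat-W4A16-G128",
                 url="http://localhost:8000/v1/chat/completions"):
    # Build an OpenAI-style chat-completions request for the vLLM server.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("Hello!")
# urllib.request.urlopen(req) would send it once the server is running
```

Note that `--max-num-seqs 1` in the serve command above limits the server to one concurrent sequence, so parallel requests will queue.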
Downloads last month: 17,315
Model size: 6.64B params (Safetensors)
Tensor types: I64 · I32 · BF16
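The I64/I32 tensor types appear because 4-bit weights are packed several to a wider integer word, with higher-precision scales stored per group of 128 weights. The exact layout used by llmcompressor's compressed-tensors format may differ; the following is an illustrative sketch of the pack/unpack and group-wise quantization arithmetic only:

```python
def pack_int4(vals):
    """Pack eight unsigned 4-bit values (0..15) into one 32-bit word."""
    assert len(vals) == 8 and all(0 <= v < 16 for v in vals)
    word = 0
    for i, v in enumerate(vals):
        word |= v << (4 * i)
    return word

def unpack_int4(word):
    """Recover the eight 4-bit values from a packed 32-bit word."""
    return [(word >> (4 * i)) & 0xF for i in range(8)]

def quantize_group(weights, bits=4):
    """Symmetric quantization of one weight group (G128 uses groups of 128).

    Returns integer codes in [-8, 7] plus one shared scale per group.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

# Round-trip check: packing then unpacking is lossless for int4 codes.
vals = [3, 0, 15, 7, 1, 9, 2, 14]
assert unpack_int4(pack_int4(vals)) == vals
```

Packing eight 4-bit codes per 32-bit word is why the reported parameter count (6.64B) is far below the nominal 27B: each stored integer element covers several logical weights.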
