Gemma-3-27B-it-tr-reasoning · Q4_K_M (GGUF)

A GGUF Q4_K_M quantization of emre/gemma-3-27b-it-tr-reasoning40k-4bit for fast, low-memory local inference with llama.cpp and compatible backends.

No weights were changed beyond quantization; alignment, vocabulary and tokenizer remain intact.
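
Assuming the standard llama-cpp-python bindings, a minimal load-and-chat sketch might look like the following. The GGUF filename pattern and the generation settings are assumptions, not taken from the repository; adjust them to match the actual file.

```python
# Minimal inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The filename glob below is an assumption -- check the repo for the real name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Tiotanio/gemma-3-27b-it-tr-reasoning_Q4_K_M",
    filename="*Q4_K_M*.gguf",   # assumed filename pattern
    n_ctx=4096,                 # context window; raise it if you have the RAM
    n_gpu_layers=-1,            # offload all layers to GPU when one is available
)

# The model is tuned for Turkish reasoning, so a Turkish prompt is a natural test.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba! Kısaca kendini tanıt."}]
)
print(out["choices"][0]["message"]["content"])
```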

  • Developed by: emre

  • Finetuned from model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit

This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
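
For reference, an Unsloth + TRL supervised fine-tuning run of this kind typically follows the pattern below. This is a hedged sketch, not the author's actual training script: the dataset name, LoRA settings, and hyperparameters are placeholders.

```python
# Hedged sketch of an Unsloth + TRL SFT run; NOT the author's actual script.
# Dataset name, LoRA settings, and hyperparameters are placeholders.
from unsloth import FastLanguageModel  # import unsloth before trl
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-27b-it-unsloth-bnb-4bit",  # base model named above
    max_seq_length=4096,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("user/turkish-reasoning-40k", split="train")  # hypothetical dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",       # assumes a plain-text training column
        per_device_train_batch_size=2,
        max_steps=1000,
        output_dir="outputs",
    ),
)
trainer.train()
```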


  • Format: GGUF

  • Model size: 27B params

  • Architecture: gemma3

  • Quantization: 4-bit (Q4_K_M)

  • Repository: Tiotanio/gemma-3-27b-it-tr-reasoning_Q4_K_M