gemma-3-270m-it-GGUF

Gemma 3 270M and Gemma 3 270M-IT are lightweight, state-of-the-art open models from Google, built with the same research and technology as the Gemini family. The 270M variants are text-only (image input is supported only by the larger instruction-tuned Gemma 3 sizes) and deliver high-quality text output for tasks such as question answering, summarization, and code generation over a 32K context window.

The models are trained on a diverse, multilingual dataset covering more than 140 languages and spanning web text, code, and mathematics, with rigorous safety and quality filtering, and are designed for efficient deployment in resource-constrained environments. Both the pre-trained (270M) and instruction-tuned (270M-IT) variants are openly available, offering solid benchmark performance for their size, responsible AI development practices, and a broad range of academic, creative, and practical applications with an emphasis on ethical use, safety, and transparency.
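When driving the instruction-tuned GGUF file through a raw completion API (for example llama.cpp with a plain text prompt), the model expects Gemma's turn-based chat format. A minimal sketch of building such a prompt in Python — the helper name is illustrative, and note that llama.cpp's chat mode applies the template stored in the GGUF metadata automatically:

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma's chat turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# The model's reply is generated after the final "<start_of_turn>model" marker.
prompt = format_gemma_prompt("Summarize what a GGUF file is in one sentence.")
```

Generation should be stopped at the `<end_of_turn>` token to keep the model from starting a new turn.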

Model Files

Gemma-3-270M-It

| File Name | Quant Type | File Size |
| --- | --- | --- |
| gemma-3-270m-it-BF16.gguf | BF16 | 543 MB |
| gemma-3-270m-it-F16.gguf | F16 | 543 MB |
| gemma-3-270m-it-Q8_0.gguf | Q8_0 | 292 MB |

Gemma-3-270M

| File Name | Quant Type | File Size |
| --- | --- | --- |
| gemma-3-270m-F16.gguf | F16 | 543 MB |
| gemma-3-270m-Q8_0.gguf | Q8_0 | 292 MB |
| gemma-3-270m-it-BF16.gguf | BF16 | 543 MB |
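As a rough sanity check on the sizes above, file size divided by parameter count gives the effective bits per weight: Q8_0 stores each weight in 8 bits plus a per-block scale (nominally 8.5 bits/weight), with the small remainder coming from embeddings and file metadata. A back-of-the-envelope sketch in Python, using the sizes from the tables and the 268M parameter count, and counting 1 MB as 10^6 bytes:

```python
def bits_per_weight(file_size_mb: float, n_params: float) -> float:
    """Effective bits per weight of a GGUF file (1 MB = 10**6 bytes)."""
    return file_size_mb * 1e6 * 8 / n_params

N_PARAMS = 268e6  # 268M parameters reported for this model

q8_bpw = bits_per_weight(292, N_PARAMS)   # ~8.7 bits/weight for Q8_0
f16_bpw = bits_per_weight(543, N_PARAMS)  # ~16.2 bits/weight for F16/BF16
```

The same arithmetic is a quick way to estimate whether a given quant will fit in a RAM or VRAM budget before downloading it.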

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison graph](image.png)

GGUF Details

Model size: 268M params
Architecture: gemma3

Model tree for prithivMLmods/gemma-3-270m-it-GGUF
