gemma-3-270m-it-GGUF
Gemma 3 270M and Gemma 3 270M-IT are lightweight, state-of-the-art open models from Google, built with the same research and technology as the Gemini family. Larger Gemma 3 sizes are multimodal (text and image input); the 270M variants take text input and deliver high-quality text output for tasks such as question answering, summarization, and code generation over a 32K context window. The models are trained on a diverse, multilingual dataset (over 140 languages) spanning web text, code, math, and images, with rigorous safety and quality filtering, and are designed for efficient deployment in resource-constrained environments. Both the pre-trained (270M) and instruction-tuned (270M-IT) variants are openly available, offering solid benchmark performance, responsible AI development practices, and a broad range of academic, creative, and practical applications, with an emphasis on ethical use, safety, and transparency.
| Model Name | Hugging Face Repository URL |
|---|---|
| gemma-3-270m-it-GGUF | https://huggingface.co/prithivMLmods/gemma-3-270m-it-GGUF/tree/main/gemma-3-270m-it-GGUF |
| gemma-3-270m-GGUF | https://huggingface.co/prithivMLmods/gemma-3-270m-it-GGUF/tree/main/gemma-3-270m-GGUF |
Model Files
Gemma-3-270M-It
| File Name | Quant Type | File Size |
|---|---|---|
| gemma-3-270m-it-BF16.gguf | BF16 | 543 MB |
| gemma-3-270m-it-F16.gguf | F16 | 543 MB |
| gemma-3-270m-it-Q8_0.gguf | Q8_0 | 292 MB |
Gemma-3-270M
| File Name | Quant Type | File Size |
|---|---|---|
| gemma-3-270m-F16.gguf | F16 | 543 MB |
| gemma-3-270m-Q8_0.gguf | Q8_0 | 292 MB |
| gemma-3-270m-it-BF16.gguf | BF16 | 543 MB |
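The size differences between the quant types above follow from bits per weight: F16 and BF16 store 16 bits per parameter, while GGUF's Q8_0 stores 8-bit integer values plus a per-block scale, roughly 8.5 bits per weight. A back-of-envelope sketch (the listed files are slightly larger because of GGUF metadata and non-quantized tensors):

```python
def gguf_size_mb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weight payload in megabytes."""
    return n_params * bits_per_weight / 8 / 1e6

n = 270e6  # nominal 270M parameters

print(round(gguf_size_mb(n, 16.0)))  # F16/BF16 estimate: 540 (listed: 543 MB)
print(round(gguf_size_mb(n, 8.5)))   # Q8_0 estimate:     287 (listed: 292 MB)
```

The same arithmetic extends to other quant types (e.g. ~4.5 bits per weight for Q4_0) when estimating whether a file fits in available RAM or VRAM.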
Quants Usage
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):