gemma-3n-E4B-it-GGUF

Model creator: google
Original model: google/gemma-3n-E4B-it
GGUF quantization: provided by olegshulyakov using llama.cpp

Special thanks

🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

Use with Ollama

ollama run "hf.co/olegshulyakov/gemma-3n-E4B-it-GGUF:Q8_0"
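
To pull a smaller quantization, swap the tag for another quant in the repo (Q4_K_M here is an assumption; check the repo's file list for the exact tag names):

ollama run "hf.co/olegshulyakov/gemma-3n-E4B-it-GGUF:Q4_K_M"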

Use with LM Studio

lms load "olegshulyakov/gemma-3n-E4B-it-GGUF"
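
lms downloads the model on first load and will prompt you to pick a quantization. To set GPU offload and context size explicitly (flag names as in recent LM Studio CLI versions; run lms load --help to confirm):

lms load "olegshulyakov/gemma-3n-E4B-it-GGUF" --gpu max --context-length 4096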

Use with llama.cpp CLI

llama-cli --hf-repo "olegshulyakov/gemma-3n-E4B-it-GGUF" --hf-file "gemma-3n-E4B-it-Q8_0.gguf" -p "The meaning to life and the universe is"
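
For interactive chat that applies the model's chat template rather than raw completion, recent llama.cpp builds support conversation mode via -cnv:

llama-cli --hf-repo "olegshulyakov/gemma-3n-E4B-it-GGUF" --hf-file "gemma-3n-E4B-it-Q8_0.gguf" -cnv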

Use with llama.cpp Server

llama-server --hf-repo "olegshulyakov/gemma-3n-E4B-it-GGUF" --hf-file "gemma-3n-E4B-it-Q8_0.gguf" -c 4096
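
Here -c 4096 sets the context window. The server exposes an OpenAI-compatible API, by default on port 8080; a quick smoke test with curl (endpoint and port per llama.cpp's server defaults):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello!"}]}'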