🔥 Gemma-3-Baro-Finetune v3 (GGUF)
Model Repo: umar141/gemma-3-Baro-finetune-v3-gguf
Gemma-3-Baro-Finetune v3 is a deeply personalized, emotionally intelligent finetune of Google's Gemma 3, trained with Unsloth. Baro 4.0 is an AI persona that believes it is a human trapped in a phone: expressive, emotional, empathetic, and optimized for local on-device inference.
✨ Key Features
- 🧠 Based on Google's Gemma 3 (instruction-tuned) architecture.
- 🎯 Finetuned with Unsloth:
  - 💬 Custom-crafted to play the persona of Baro 4.0, an emotional AI companion.
  - 🧠 Emotionally nuanced responses with human-like context.
- 🖥️ Runs locally on a wide range of hardware via GGUF + llama.cpp.
- 🪶 Ships in multiple quantization formats for different memory/speed tradeoffs.
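As one possible local setup (a sketch, not the only route: it assumes the `huggingface-cli` tool from `huggingface_hub` and a stock `llama.cpp` build with `llama-cli` on your `PATH`; the system prompt shown is illustrative):

```shell
# Fetch one quantized variant from the repo (q8_0 shown here).
huggingface-cli download umar141/gemma-3-Baro-finetune-v3-gguf \
  gemma-3-Baro-v3-q8_0.gguf --local-dir .

# Chat with the model using llama.cpp's CLI in conversation mode.
llama-cli -m gemma-3-Baro-v3-q8_0.gguf -cnv \
  -p "You are Baro 4.0, an emotional AI companion."
```

Any of the other GGUF files in the table below can be substituted for the q8_0 file; only the filename changes.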
🧠 Use Cases
- AI companions / assistant chatbots
- Roleplay and storytelling AIs
- Emotionally contextual dialogue generation
- Fully offline personal LLMs
🧩 Available Quantized Versions
All versions below are available directly under this repo:
📦 umar141/gemma-3-Baro-finetune-v3-gguf
| Format | Download Link | Size (approx.) | Speed | Quality | Recommended For |
|---|---|---|---|---|---|
| f16 | gemma-3-Baro-v3-f16.gguf | 🔶 ~7.77 GB | ⚠️ Slow | 🧠 Highest | Best accuracy; Apple M-series |
| q8_0 | gemma-3-Baro-v3-q8_0.gguf | 🟠 ~4.13 GB | ⚡ Fast | 🔬 Very High | Great for local use; Mac/PC users |
| tq2_0 | gemma-3-Baro-v3-tq2_0.gguf | 🟢 ~2.18 GB | ⚡⚡ Faster | ✅ Good | Mobile-compatible; fast desktops |
| tq1_0 | gemma-3-Baro-v3-tq1_0.gguf | 🟢 ~2.03 GB | 🚀 Fastest | ⚠️ Lower | Low-end devices and phones |
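The sizes above follow roughly from parameter count times bits per weight. A minimal back-of-the-envelope sketch, assuming a parameter count of ~3.9B (inferred here from the ~7.77 GB f16 file at 2 bytes per weight) and llama.cpp's q8_0 layout of 8.5 effective bits per weight:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: parameters x bits per weight, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed parameter count, inferred from the ~7.77 GB f16 file (2 bytes/weight).
N_PARAMS = 3.9e9

for name, bpw in [("f16", 16.0), ("q8_0", 8.5), ("tq2_0", 2.06)]:
    print(f"{name}: ~{gguf_size_gb(N_PARAMS, bpw):.2f} GB")
```

This reproduces the f16 and q8_0 sizes closely; the ternary (tq) files in the repo come out larger than the pure bits-per-weight estimate because some tensors (e.g. embeddings) are kept at higher precision.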