🦊 klusai/tf2-12b — LoRA-tuned EN→RO Translator (Fables) — GGUF build
klusai/tf2-12b is a Gemma 3 12B (Instruct)-based EN→RO translator, fine-tuned with LoRA on 15,000 synthetic English→Romanian fable pairs (generated with GPT-o3). This repository distributes the merged weights in GGUF format for fast inference with llama.cpp-compatible runtimes.
Focus: faithful, fluent EN→RO translation of short moral fables, preserving tone, structure, and morals (with proper Romanian diacritics).
✨ Highlights
- Task: EN→RO translation for narrative prose (fables).
- Training: Parameter-efficient LoRA; adapters merged into base and exported to GGUF.
- Data: 15k synthetic pairs generated by GPT-o3.
- Format: GGUF (llama.cpp-compatible). Multiple quantizations may be provided (e.g., Q5_K_M, F16).
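As with any GGUF build, the model can be run with llama.cpp's CLI. A minimal sketch, assuming a Q5_K_M file named `tf2-12b-Q5_K_M.gguf` (the filename is an assumption; check this repo's file list for the actual names):

```shell
# Fetch one quantization from the Hub (filename is illustrative; verify in the repo)
huggingface-cli download klusai/tf2-12b tf2-12b-Q5_K_M.gguf --local-dir .

# Run an EN->RO translation with llama.cpp's CLI
./llama-cli -m tf2-12b-Q5_K_M.gguf \
  -p "Translate to Romanian with proper diacritics. Preserve meaning, tone, and moral. Avoid adding content.\n\nThe lion spared the mouse, and the mouse later freed the lion from the net." \
  -n 256
```

Any llama.cpp-compatible runtime (e.g., a GGUF-capable server or bindings) should work the same way; only the loading step differs.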
🧱 I/O Contract
- Input: English fable paragraph(s) + concise instruction to translate to Romanian.
- Output: Romanian translation that is faithful, fluent, and keeps the moral explicit.
Prompting tip: “Translate to Romanian with proper diacritics. Preserve meaning, tone, and moral. Avoid adding content.”
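The contract above can be captured in a small helper that wraps an English fable in the suggested instruction. This is an illustrative sketch (the `build_prompt` name and exact wording are not part of any trained prompt format):

```python
def build_prompt(english_fable: str) -> str:
    """Wrap an English fable in a concise EN->RO translation instruction,
    following the prompting tip from this model card."""
    instruction = (
        "Translate to Romanian with proper diacritics. "
        "Preserve meaning, tone, and moral. Avoid adding content."
    )
    return f"{instruction}\n\n{english_fable.strip()}\n"


if __name__ == "__main__":
    fable = (
        "The fox praised the crow's voice; the crow sang, "
        "and the cheese fell from its beak."
    )
    # The resulting string is what you pass as the prompt to your runtime.
    print(build_prompt(fable))
```

Keeping the instruction short and explicit about diacritics and the moral tends to match the I/O contract above; the fable text itself needs no extra markup.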
🧪 Training Summary
- Base model: Gemma 3 12B (Instruct)
- Method: LoRA fine-tuning; adapters merged for deployment
- Dataset: 15k EN→RO fable translations (GPT-o3 generated)
✅ Intended Use
- Education/NLP demos, EN→RO machine translation of narrative content, style-preserving localization.
⚠️ Limitations
- Tuned on fables; may underperform on legal/medical/colloquial domains.
- Synthetic data can carry stylistic biases or occasional literalism.
- Always add human review for production use.
🔑 Licensing
- This GGUF build: MIT
- Use must also comply with the Gemma base model license/terms.
📝 Changelog
- v1.0 (GGUF) — Initial public GGUF release (LoRA-merged). Multiple quantizations provided (e.g., Q5_K_M).