# Model Card for tr-gemma-3-270m-it
This model is a fine-tuned version of google/gemma-3-270m-it, adapted for Turkish instruction-following tasks. It was trained with TRL's `SFTTrainer` on the ucekmez/OpenOrca-tr dataset.
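For reference, a minimal sketch of how such a fine-tuning run can be set up with TRL's `SFTTrainer` is shown below. The dataset split and column names (`question`, `response`) as well as all hyperparameters are illustrative assumptions, not the exact settings used to train this model.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Turkish OpenOrca translation used for fine-tuning
dataset = load_dataset("ucekmez/OpenOrca-tr", split="train")

def to_messages(example):
    # Column names here are assumed, not taken from the dataset card
    return {
        "messages": [
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["response"]},
        ]
    }

dataset = dataset.map(to_messages, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",      # base model
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="tr-gemma-3-270m-it",
        per_device_train_batch_size=8,   # hypothetical hyperparameters
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
)
trainer.train()
```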
## Quick Start
```python
from transformers import pipeline

# Load the fine-tuned model; use device="cpu" if no GPU is available
generator = pipeline("text-generation", model="canbingol/tr-gemma-3-270m-it", device="cuda")

# "If you could travel to the past or the future only once, which would you choose and why?"
question = "Sadece bir kez geçmişe ya da geleceğe gidebilecek olsaydın, hangisini seçerdin ve neden?"

output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```