Model Card for tr-gemma-3-270m-it

This model is a fine-tuned version of google/gemma-3-270m-it, adapted for Turkish instruction-following tasks.
It was trained using TRL's SFTTrainer on the ucekmez/OpenOrca-tr dataset.
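The training setup described above can be sketched as follows. This is an illustrative reconstruction, not the actual training script: the hyperparameters (batch size, epochs, output directory) are assumptions, since the card does not publish them; only the base model, trainer class, and dataset come from the description above.

```python
# Hypothetical sketch of the fine-tuning described above:
# TRL's SFTTrainer applied to google/gemma-3-270m-it on ucekmez/OpenOrca-tr.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the Turkish OpenOrca translation used for fine-tuning.
dataset = load_dataset("ucekmez/OpenOrca-tr", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",   # base model from the card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="tr-gemma-3-270m-it",  # assumed
        per_device_train_batch_size=4,    # assumed
        num_train_epochs=1,               # assumed
        bf16=True,                        # consistent with the BF16 weights
    ),
)
trainer.train()
```

Note that `google/gemma-3-270m-it` is a gated checkpoint, so running this requires accepting the Gemma license and authenticating with `huggingface-cli login`.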

Quick Start

from transformers import pipeline

generator = pipeline("text-generation", model="canbingol/tr-gemma-3-270m-it", device="cuda")

# "If you could travel to the past or the future only once, which would you choose, and why?"
question = "Sadece bir kez geçmişe ya da geleceğe gidebilecek olsaydın, hangisini seçerdin ve neden?"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
Model size: 268M parameters
Tensor type: BF16
Format: Safetensors
