Model Card for gemma3-konkani

This model is a PEFT adapter fine-tuned from google/gemma-3-4b-it. It was trained using TRL.

Quick start

# Use a pipeline as a high-level helper.
# With peft installed, the pipeline resolves the adapter repo, loads the
# google/gemma-3-4b-it base model, and attaches the adapter automatically.
from transformers import pipeline

pipe = pipeline("text-generation", model="Reubencf/gemma3-konkani")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
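
The pipeline output can be post-processed to get just the assistant reply. A minimal sketch, assuming a recent transformers release where chat-style input is echoed back as a list of turns; the generation settings are illustrative:

out = pipe(messages, max_new_tokens=128, do_sample=False)

# For chat-style input the returned "generated_text" is the conversation
# with the model's turn appended; the last entry is the reply.
print(out[0]["generated_text"][-1]["content"])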

Using PEFT

from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the adapter weights on top of it.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base_model, "Reubencf/gemma3-konkani")
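
With the adapter attached, generation works as with any causal LM. A minimal sketch, assuming the base model's tokenizer and a text-only chat prompt (max_new_tokens is illustrative):

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")
messages = [{"role": "user", "content": "Who are you?"}]

# Build the prompt with the chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))

If a standalone checkpoint is preferred for serving, model.merge_and_unload() folds the adapter weights into the base model.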

Training procedure

This model was trained with SFT.
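
For orientation only, an SFT run with TRL and a PEFT adapter would look roughly like the sketch below; the dataset name and every hyperparameter are placeholders, since the card does not state them.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training corpus is not listed in this card.
dataset = load_dataset("example/konkani-instructions", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma3-konkani", per_device_train_batch_size=1),
    # Illustrative LoRA settings; the adapter's real configuration is in
    # this repo's adapter_config.json.
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()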

Framework versions

  • PEFT: 0.17.1
  • TRL: 0.21.0
  • Transformers: 4.55.0
  • PyTorch: 2.8.0+cu126
  • Datasets: 4.0.0
  • Tokenizers: 0.21.4

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}