---
base_model: google/gemma-3-4b-it
library_name: peft
model_name: gemma3-konkani
tags:
- base_model:adapter:google/gemma-3-4b-it
- lora
- sft
- transformers
- trl
licence: license
pipeline_tag: text-generation
---
# Model Card for gemma3-konkani

This model is a LoRA adapter fine-tuned from [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It was trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Reubencf/gemma3-konkani")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
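With chat-style input, the pipeline applies the model's chat template and returns the full conversation, including the newly generated assistant turn. A minimal sketch of reading back the reply (the `max_new_tokens` value is illustrative):

```python
out = pipe(messages, max_new_tokens=256)
# generated_text holds the whole conversation; the last message is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```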
## Using PEFT
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base_model, "Reubencf/gemma3-konkani")
```
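Once the adapter is attached, the model generates like any other causal LM. A minimal sketch, assuming the base model's tokenizer and illustrative generation settings:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")

# Build a prompt with the model's chat template.
messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate and decode only the newly produced tokens; max_new_tokens is illustrative.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If you want to fold the adapter weights into the base model for inference, `model = model.merge_and_unload()` does that in PEFT.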
## Training procedure
This model was trained with supervised fine-tuning (SFT).
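For reference, a minimal sketch of what a LoRA SFT run with TRL typically looks like. The dataset path, LoRA settings, and hyperparameters below are illustrative placeholders, not the configuration actually used for this model:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: chat-style records in a "messages" column.
dataset = load_dataset("json", data_files="konkani_sft.jsonl", split="train")

peft_config = LoraConfig(
    r=16,                      # illustrative LoRA rank
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma3-konkani", num_train_epochs=1),
    peft_config=peft_config,
)
trainer.train()
```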
### Framework versions
- PEFT: 0.17.1
- TRL: 0.21.0
- Transformers: 4.55.0
- PyTorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations

Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```