See axolotl config

axolotl version: `0.13.0.dev0`

```yaml
base_model: google/gemma-3-270m-it
model_type: GemmaForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
datasets:
  - path: Humenuik/train.jsonl
    type: alpaca
    conversation: "alpaca"
dataset_prepared_path: /workspace/output/prepared
output_dir: /workspace/output/finetune-gemma3-270m-it
lora_r: 32 # Set the LoRA rank, influencing the capacity of the adapter layers.
lora_alpha: 16 # Adjust the LoRA alpha, a scaling factor for LoRA updates.
lora_dropout: 0.05 # Apply a dropout rate to LoRA layers for regularization.
lora_target_modules: # Specify the modules within the model where LoRA adapters will be applied (e.g., attention projections, MLP layers).
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj
micro_batch_size: 8 # Define the size of each micro-batch processed by the GPU.
num_epochs: 2 # Set the number of training epochs.
learning_rate: 0.0002 # Specify the learning rate for the optimizer.
lr_scheduler: cosine # Choose a learning rate scheduler (e.g., cosine).
warmup_steps: 10 # Configure warmup steps for the learning rate scheduler.
batch_size: 24
gradient_checkpointing: true # Enable gradient checkpointing to reduce memory consumption during training.
```
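For reference, the `alpaca` dataset type expects JSONL records with `instruction`, `input`, and `output` fields. A minimal sketch of one record (the field values below are illustrative placeholders, not taken from Humenuik/train.jsonl):

```python
import json

# A minimal sketch of one Alpaca-format record, as expected by `type: alpaca`.
# The field values are placeholders, not actual rows from Humenuik/train.jsonl.
record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "Axolotl wraps Hugging Face training loops behind a single YAML config.",
    "output": "Axolotl lets you fine-tune language models from one YAML config.",
}

# Each line of the JSONL dataset holds one such JSON object.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```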
# workspace/output/finetune-gemma3-270m-it
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it) on the Humenuik/train.jsonl dataset.
## Model description

More information needed
## Intended uses & limitations

More information needed
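The card provides no official usage snippet; as a starting point, a minimal inference sketch with 🤗 Transformers, assuming the LoRA adapter has been merged into the base weights (otherwise load the base model and wrap it with `peft.PeftModel.from_pretrained`), might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint location: substitute the actual output_dir or Hub repo id.
model_id = "/workspace/output/finetune-gemma3-270m-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# gemma-3-270m-it is instruction-tuned, so format prompts with the chat template.
messages = [{"role": "user", "content": "Write a haiku about fine-tuning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```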
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 4528
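The `batch_size: 24` in the config relates to the micro-batch via gradient accumulation; a quick sanity check of that arithmetic, assuming single-device training (the card does not state the device count):

```python
# Sanity-check the batch-size arithmetic implied by the axolotl config above.
# Assumes a single GPU; the card does not state the device count.
micro_batch_size = 8        # per-step micro-batch (train_batch_size above)
effective_batch_size = 24   # `batch_size` from the config
grad_accum_steps = effective_batch_size // micro_batch_size
print(grad_accum_steps)     # -> 3 gradient-accumulation steps
```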
### Training results

### Framework versions
- Transformers 4.55.2
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
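To check that a local environment matches the versions listed above, a minimal sketch:

```python
import datasets
import tokenizers
import torch
import transformers

# Print installed versions to compare against those listed above.
for mod in (transformers, torch, datasets, tokenizers):
    print(mod.__name__, mod.__version__)
```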