---
library_name: transformers
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
---
# Luth-0.6B
Luth-0.6B is a French fine-tuned version of Qwen3-0.6B, trained on the Luth-SFT dataset. Fine-tuning substantially improves the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve in some areas.
## Model Details
Luth-0.6B was trained with full fine-tuning on the Luth-SFT dataset using Axolotl, and the resulting checkpoint was then merged with the base Qwen3-0.6B model. This process retained the model's English capabilities while improving its performance on nearly all benchmarks in both French and English.
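The model can be loaded with the standard `transformers` chat workflow. Below is a minimal usage sketch; the model id `kurakurai/Luth-0.6B` is assumed from this repository's URL, and the French prompt is illustrative.

```python
# Minimal usage sketch for Luth-0.6B with transformers.
# Assumes the model id kurakurai/Luth-0.6B (taken from the repo URL).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kurakurai/Luth-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Chat-style input; the model was instruction-tuned, so use the chat template.
messages = [
    {"role": "user", "content": "Explique brièvement la photosynthèse."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Sampling parameters (temperature, top-p) can be passed to `generate` as usual; the defaults above are only a starting point.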
## Benchmark Results
**French Evaluation:**

**English Evaluation:**
## Citation
```bibtex
@misc{luth2025kurakurai,
  title        = {Luth-0.6B},
  author       = {Kurakura AI Team},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-0.6B}},
  note         = {Qwen3-0.6B fine-tuned on French datasets}
}
```