---
license: mit
datasets:
- HuggingFaceH4/CodeAlpaca_20K
base_model:
- Qwen/Qwen3-0.6B
---
# 🧠 Qwen-0.6B – Code Generation Model

**Model Repo:** XformAI-india/qwen-0.6b-coder
**Base Model:** Qwen/Qwen3-0.6B
**Task:** Code generation and completion
**Trained by:** XformAI
**Date:** May 2025
## 🔍 What is this?
This is a fine-tuned version of Qwen3-0.6B optimized for code generation, code completion, and programming-logic reasoning.
It is designed to be lightweight, fast, and capable of handling common developer tasks across multiple programming languages.
## 💻 Use Cases
- AI-powered code assistants
- Auto-completion for IDEs
- Offline code generation
- Learning & training environments
- Natural language → code prompts
## 📚 Training Details

| Parameter | Value |
|---|---|
| Epochs | 3 |
| Batch Size | 16 |
| Optimizer | AdamW |
| Precision | bfloat16 |
| Context Window | 2048 tokens |
| Framework | 🤗 Transformers + LoRA (PEFT) |
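The table above says the model was fine-tuned with LoRA via 🤗 PEFT. A minimal sketch of what such an adapter configuration might look like is below; the rank, alpha, dropout, and target modules are illustrative assumptions, not published training values.

```python
# Hypothetical LoRA configuration for a causal-LM fine-tune with PEFT.
# All hyperparameter values here are assumptions for illustration only.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,              # LoRA rank (assumed)
    lora_alpha=32,     # scaling factor (assumed)
    lora_dropout=0.05, # dropout on adapter layers (assumed)
    # A common choice of attention projections for Qwen-style models (assumed):
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

With `peft.get_peft_model(base_model, lora_config)`, only the low-rank adapter weights are trained, which keeps memory usage small for a 0.6B-parameter base model.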
## 🚀 Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-coder")

# Natural-language prompt → generated code
prompt = "Write a Python function that checks if a number is prime:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
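When checking the model's answer to the prompt above, it helps to have a known-correct reference. This is a standard trial-division prime test written by hand (not model output), useful for comparing against what the model generates:

```python
def is_prime(n: int) -> bool:
    """Return True if n is a prime number, using trial division."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    # Check odd divisors up to sqrt(n)
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True
```

Running the model's generated function against a few known primes and composites (e.g. 2, 97 vs. 1, 100) and comparing with this reference is a quick sanity check.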