🧠 DeepSeek-R1-0528-Qwen3-8B Instruct Fine-Tuned with GRPO on Indonesian Legal QA Dataset
Welcome! This repository hosts a fine-tuned version of DeepSeek-R1-0528-Qwen3-8B, trained with Group Relative Policy Optimization (GRPO) on a custom Indonesian legal Q&A dataset. The goal is to enhance the model's reasoning and structured-thinking capabilities for legal question answering. You can try the demo here
🚀 Model Summary
- Intended Use: Research and development
- Base Model: DeepSeek-R1-0528-Qwen3-8B
- Language: Bahasa Indonesia 🇮🇩
- Domain: Legal / Law (Q&A format)
- Purpose: Boost performance in structured, legal reasoning under Indonesian legal context
🏋️ Training Summary
- Fine-tuning Method: Group Relative Policy Optimization (GRPO) combined with Knowledge Distillation
- Pipeline: Cloud to Cloud training
- Dataset: Indonesian legal questions and answers (pertanyaan hukum)
- Compute: NVIDIA RTX 6000 Ada
- Provider: vast.ai
- Training Steps: 2000
- Generations per Step: 16
- Cost: 50 USD
- Distilled Knowledge: DeepSeek_0528_8B_Legal_Distill
🧩 What is GRPO?
Group Relative Policy Optimization (GRPO) is a reinforcement learning fine-tuning technique that:
- Samples a group of candidate completions for each prompt
- Scores each completion with a reward and standardizes the rewards within the group to obtain relative advantages
- Optimizes the policy toward the relatively better completions in each group, without needing a separate value (critic) model
This method leads to:
- Better structured answers
- Improved logical flow
- Greater consistency in domain-specific reasoning (e.g., answering legal queries with relevant laws and regulations)
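The core of GRPO is the group-relative advantage. The exact reward design used for this model is not published, so the snippet below is only a minimal sketch of the idea: each completion's reward is standardized against the other completions sampled for the same prompt.

```python
def group_relative_advantages(rewards, eps=1e-6):
    """Standardize rewards within one group of sampled completions.

    In GRPO, each prompt yields a group of completions; a completion's
    advantage is its reward minus the group mean, divided by the group
    standard deviation (eps avoids division by zero).
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: rewards for a group of 4 sampled answers to one legal question.
# The best answer gets a positive advantage, the worst a negative one.
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because the comparison is within-group, the policy is pushed toward answers that are better than its own alternatives, which matches the "relative improvements, not just raw accuracy" behavior described above.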
🧠 Structured Thinking Enabled
Through GRPO reward shaping, the fine-tuned model is trained to reason in explicit steps:
- Understand the legal context
- Identify the relevant law
- Apply reasoning with facts
- Summarize the legal conclusion clearly
This mimics how law students or practitioners approach legal cases, making the model suitable for:
- Law education
- Legal chatbot assistants
- Indonesian legal exam prep
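One common way to encourage this structured output during GRPO training is a format-based reward. The actual reward functions used here are not published; this is a hypothetical, illustrative example that rewards completions containing a non-empty `<think>` block followed by a final answer.

```python
import re

# Matches a non-empty <think>...</think> block followed by a final answer.
THINK_RE = re.compile(r"<think>\s*(.+?)\s*</think>\s*(\S.*)", re.DOTALL)

def format_reward(completion: str) -> float:
    """Return 1.0 if the completion follows the <think> format, else 0.0."""
    return 1.0 if THINK_RE.search(completion) else 0.0

# A well-formed completion earns the format reward:
format_reward("<think>Periksa Pasal 1618 KUHPerdata...</think> Jawaban akhir: ...")
```

In practice such a format reward would be combined with a correctness or quality reward, so the model learns both the structure and the legal substance.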
💻 How to Use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the tokenizer and model
model_name = "Azzindani/Deepseek_ID_Legal_Preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

# System prompt enforcing Indonesian-language, <think>-structured answers
SYSTEM_PROMPT = """
Anda adalah asisten AI yang ahli di bidang hukum Indonesia. Anda dapat membantu konsultasi hukum, menjawab pertanyaan, dan memberikan analisis berdasarkan peraturan perundang-undangan yang relevan.
Untuk setiap respons, Anda harus berpikir dan menjawab dengan Bahasa Indonesia, serta gunakan format:
<think>
...
</think>
Tuliskan jawaban akhir secara jelas, ringkas, profesional, dan berempati jika diperlukan. Gunakan bahasa hukum yang mudah dipahami. Sertakan referensi hukum Indonesia yang relevan. Selalu rekomendasikan konsultasi dengan ahli hukum untuk keputusan final.
"""

prompt = """
Adakah hukumnya yang mengatur pembagian persentase/laba dalam mendirikan suatu perusahaan?
Dan berapa persenkah yang didapat oleh si pemilik ide untuk mendirikan perusahaan,
jika dia tidak menyetor modal sedikit pun atau hanya menjalankan saja?
"""

# Build the chat-formatted input
input_ids = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": prompt},
    ],
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

result = model.generate(
    input_ids,
    streamer=streamer,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.7,
    min_p=0.1,
    top_p=1.0,
    top_k=20,
)
```