---
language: en
license: gemma
tags:
  - lora
  - peft
  - adapters
  - unsloth
  - gemma-3
  - chess
  - instruction-tuning
  - sft
  - trl
  - 270m
base_model: unsloth/gemma-3-270m-it
library_name: transformers
pipeline_tag: text-generation
datasets:
  - Thytu/ChessInstruct
model-index:
  - name: gemma3-270m-chess-lora
    results: []
---

# Gemma-3 270M — Chess Coach (LoRA Adapters)

**Author:** [@codertrish](https://huggingface.co/codertrish)  
**Base model:** [`unsloth/gemma-3-270m-it`](https://huggingface.co/unsloth/gemma-3-270m-it)  
**Type:** **LoRA adapters** (attach to base at load-time)  
**Task:** Conversational chess tutoring (rules, openings, beginner tactics)

This repo contains **only the LoRA adapter weights** (ΔW). You must also load the **base model** and then **attach** these adapters to reproduce the fine-tuned behavior.

---

## ✨ Intended Use

- **Direct use:** Teach or explain beginner chess concepts, opening principles, and simple tactics in plain English.  
- **Downstream use:** As a lightweight add-on for apps where distributing full weights isn’t desired or allowed.

**Out-of-scope:** Engine-grade move calculation or authoritative evaluations of complex positions. For strong analysis, pair with a chess engine (e.g., Stockfish).

---

## 🔧 How to Use (attach adapters)

### Option A — Unsloth (simplest)
```python
# pip install "unsloth[torch]" transformers peft accelerate bitsandbytes sentencepiece

from unsloth import FastModel
from unsloth.chat_templates import get_chat_template

BASE    = "unsloth/gemma-3-270m-it"                # base checkpoint
ADAPTER = "codertrish/gemma3-270m-chess-lora"      # this repo

model, tok = FastModel.from_pretrained(
    BASE, max_seq_length=2048, load_in_4bit=True, full_finetuning=False
)
tok = get_chat_template(tok, "gemma3")             # Gemma-3 chat formatting
model.load_adapter(ADAPTER)                        # <-- attach LoRA

messages = [
  {"role":"system","content":"You are a helpful chess coach. Answer in plain text."},
  {"role":"user","content":"List 3 opening principles for beginners."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = model.generate(**tok([prompt], return_tensors="pt").to(model.device),
                     max_new_tokens=200, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))