---
license: mit
tags:
  - tinyllama
  - lora
  - cli
  - fine-tuning
  - qna
  - transformers
  - peft
library_name: transformers
datasets:
  - custom
language: en
model_type: causal-lm
---

# 🔧 CLI LoRA-TinyLlama

A version of [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) fine-tuned with **LoRA** (Low-Rank Adaptation) on a custom dataset of command-line Q&A, built for fast, accurate help on common CLI topics.
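
A minimal inference sketch is shown below. It assumes the adapter is loaded with `peft`; the repo id `your-username/cli-lora-tinyllama` is a placeholder, and the prompt format is an assumption.

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Placeholder repo id; substitute the actual adapter location.
adapter_id = "your-username/cli-lora-tinyllama"

# AutoPeftModelForCausalLM reads the adapter config, loads the base
# TinyLlama model, and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Prompt format is an assumption; match whatever format was used in training.
prompt = "Q: How do I extract a .tar.gz archive?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```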

---

## 🧩 Base Model

- Model: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- Fine-Tuning Method: [LoRA](https://arxiv.org/abs/2106.09685)
- Libraries Used: `transformers`, `peft`, `datasets`, `accelerate`

---

## 📚 Dataset

- Custom dataset with **150+ Q&A pairs** covering:
  - `git`, `bash`, `grep`, `tar`, `venv`
- Raw file: `cli_questions.json`
- Tokenized version: `tokenized_dataset/`
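
The exact schema of `cli_questions.json` isn't documented here, so the sketch below assumes simple `question`/`answer` records and a plain Q/A prompt format; adjust the field names to match the raw file.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Load the raw Q&A pairs (assumed schema: {"question": ..., "answer": ...}).
dataset = load_dataset("json", data_files="cli_questions.json", split="train")

def tokenize(example):
    # Join each pair into a single training string.
    text = f"Q: {example['question']}\nA: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)
tokenized.save_to_disk("tokenized_dataset")  # the folder referenced above
```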

---

## 🛠️ Training Configuration

```python
from peft import LoraConfig

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

lora_config = LoraConfig(
    r=16,                  # rank of the low-rank update matrices
    lora_alpha=32,         # scaling factor applied to the LoRA updates
    lora_dropout=0.1,      # dropout on the LoRA layers during training
    bias="none",           # leave the base model's bias terms frozen
    task_type="CAUSAL_LM"  # causal language modeling objective
)
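
# Assumed usage (not shown in the card): get_peft_model wraps the base
# model so that only the LoRA weights receive gradients during training.
from transformers import AutoModelForCausalLM
from peft import get_peft_model

model = AutoModelForCausalLM.from_pretrained(base_model)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of weights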