devanshdhir committed
Commit 859c1de · verified · 1 parent: 0869543

Update README.md

Files changed (1): README.md +23 −116
README.md CHANGED
@@ -1,45 +1,42 @@
 license: mit
 language: en
 base_model: Qwen/Qwen3-0.6B-Base
 tags:
-
- qwen
-
- flask
-
- code-generation
-
- question-answering
-
- lora
-
- peft
 datasets:
-
- custom-flask-qa
-
- Qwen3-0.6B-Flask-Expert
- Model Description
- This model is a fine-tuned version of Qwen/Qwen3-0.6B-Base, specifically adapted to function as a specialized Question & Answering assistant for the Python Flask web framework.

 The model was trained on a high-quality, custom dataset generated by parsing the official Flask source code and documentation. It has been instruction-tuned to understand and answer developer-style questions, explain complex concepts with step-by-step reasoning, and identify when a question is outside its scope of knowledge.

 This project was developed as part of an internship, demonstrating a full fine-tuning pipeline from data creation to evaluation and deployment.

- Intended Use
- The primary intended use of this model is to act as a helpful assistant for developers working with Flask. It can be used for:
-
- Answering technical questions about Flask's API and internal mechanisms.
-
- Providing explanations for core concepts (e.g., application context, blueprints).
-
- Assisting with debugging common errors and understanding framework behavior.
-
- Powering a chatbot or an integrated help tool within a developer environment.
-
- How to Use
- You can use this model directly with the transformers library pipeline for text generation. Make sure to use the provided prompt format for the best results.
-
 from transformers import pipeline
 import torch
@@ -68,94 +65,4 @@ outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_
 answer = outputs[0]["generated_text"].split("### Response:")[1].strip()

 print(f"Question: {question}")
- print(f"Answer: {answer}")
-
- Training Data
- The model was trained on a custom dataset of 600 high-quality Q&A pairs created specifically for this task. The data generation process involved:
-
- Cloning the official Flask GitHub repository.
-
- Parsing all .py source files and .rst documentation files into meaningful chunks (classes, functions, paragraphs); see the sketch after this section.
-
- Using the Gemini API with a series of advanced prompts to generate a diverse set of questions, including:
-
- Conceptual and Chain-of-Thought explanations.
-
- Adversarial and edge-case scenarios.
-
- Beginner and senior developer-level personas.
-
- "Guardrail" prompts to teach the model to refuse off-topic questions.
-
- The final dataset was manually reviewed and curated to ensure quality and factual accuracy.
-
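For context on the parsing step named in the removed section above: the card does not include the chunking code, so the following is only a plausible sketch using Python's standard-library `ast` module. The local clone path and the note about `.rst` handling are assumptions, not the author's actual pipeline.

```python
# Illustrative sketch only; the card does not ship the actual parsing code.
# Splits each Flask .py source file into top-level class/function chunks using
# the standard library's ast module. The .rst docs could be split on
# blank-line paragraph boundaries in a similar pass.
import ast
from pathlib import Path

def chunk_python_file(path: Path) -> list[str]:
    """Return the source text of each top-level class and function."""
    source = path.read_text(encoding="utf-8")
    tree = ast.parse(source)
    return [
        ast.get_source_segment(source, node)
        for node in tree.body
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef))
    ]

# Path assumes a local clone of https://github.com/pallets/flask
for py_file in Path("flask/src/flask").rglob("*.py"):
    for chunk in chunk_python_file(py_file):
        ...  # each chunk would then be sent to the Gemini API for Q&A generation
```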
- Training Procedure
- The model was fine-tuned using Parameter-Efficient Fine-Tuning (PEFT) with the LoRA (Low-Rank Adaptation) method; a sketch of the corresponding setup follows this section.
-
- Frameworks: transformers, peft, bitsandbytes, trl
-
- Hardware: A single NVIDIA RTX 3060 with 6 GB VRAM.
-
- Quantization: The base model was loaded in 4-bit (nf4) to fit within the available memory.
-
- LoRA Configuration:
-
- r (rank): 16
-
- lora_alpha: 32
-
- target_modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
-
- Hyperparameters:
-
- learning_rate: 2e-4
-
- num_train_epochs: 2
-
- per_device_train_batch_size: 2
-
- gradient_accumulation_steps: 8 (effective batch size of 16)
-
- optimizer: paged_adamw_32bit
-
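The removed Training Procedure section maps onto a fairly standard QLoRA setup. Below is a hedged sketch: the quantization settings, LoRA values, and hyperparameters are taken from the card; the dataset file name, text column, and output directory are assumptions for illustration.

```python
# Sketch of the described setup. Values labeled "from the card" come from the
# section above; everything else (dataset path, output dir) is assumed.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

bnb_config = BitsAndBytesConfig(          # 4-bit nf4 quantization (from the card)
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B-Base", quantization_config=bnb_config, device_map="auto"
)

peft_config = LoraConfig(                 # LoRA configuration (from the card)
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(                         # hyperparameters (from the card)
    output_dir="qwen3-flask-expert",      # assumed
    learning_rate=2e-4,
    num_train_epochs=2,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,        # effective batch size 2 * 8 = 16
    optim="paged_adamw_32bit",
    dataset_text_field="text",            # assumes a pre-formatted prompt column
)

dataset = load_dataset("json", data_files="flask_qa.jsonl")["train"]  # assumed file

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()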
- Evaluation Results
- The fine-tuned model showed significant improvement over the base model's few-shot performance on a held-out test set. The most notable gain was a +123.3% increase in the ROUGE-2 score, indicating a much-improved ability to generate correct technical phrases.
-
- | Metric  | Baseline (Untrained) | Fine-Tuned Model | Improvement |
- |---------|----------------------|------------------|-------------|
- | ROUGE-1 | 0.306                | 0.382            | +24.7%      |
- | ROUGE-2 | 0.067                | 0.149            | +123.3%     |
- | ROUGE-L | 0.162                | 0.240            | +48.1%      |
-
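The card does not say which tooling produced the ROUGE numbers in the removed table above. A common way to reproduce such scores is the Hugging Face `evaluate` package, sketched here with placeholder prediction and reference strings.

```python
# Hypothetical scoring snippet; the card does not state how ROUGE was computed.
# Requires: pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")

predictions = ["The application context stores application-level data for a request."]
references = ["The application context keeps track of application-level data during a request."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum scores in [0, 1]
```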
- Limitations & Bias
- This is a 0.6B-parameter model, and its knowledge is limited to the data it was trained on. It can still hallucinate or provide factually incorrect answers, especially for complex or nuanced questions it has not seen before.
-
- The model's answers should be considered a helpful starting point, not a definitive source of truth. Always verify critical information against the official Flask documentation.
-
- The training data is derived from the Flask codebase and documentation, so it may reflect the biases and conventions of that project.
 
+ ---
 license: mit
 language: en
 base_model: Qwen/Qwen3-0.6B-Base
 tags:
+ - qwen
+ - flask
+ - code-generation
+ - question-answering
+ - lora
+ - peft
 datasets:
+ - custom-flask-qa
+ ---
+
+ # Qwen3-0.6B-Flask-Expert

+ ## Model Description
+
+ This model is a fine-tuned version of `Qwen/Qwen3-0.6B-Base`, specifically adapted to function as a specialized Question & Answering assistant for the **Python Flask web framework**.

 The model was trained on a high-quality, custom dataset generated by parsing the official Flask source code and documentation. It has been instruction-tuned to understand and answer developer-style questions, explain complex concepts with step-by-step reasoning, and identify when a question is outside its scope of knowledge.

 This project was developed as part of an internship, demonstrating a full fine-tuning pipeline from data creation to evaluation and deployment.

+ ## Intended Use
+
+ The primary intended use of this model is to act as a helpful assistant for developers working with Flask. It can be used for:
+
+ * Answering technical questions about Flask's API and internal mechanisms.
+ * Providing explanations for core concepts (e.g., application context, blueprints).
+ * Assisting with debugging common errors and understanding framework behavior.
+ * Powering a chatbot or an integrated help tool within a developer environment.
+
+ ## How to Use
+
+ You can use this model directly with the `transformers` library pipeline for text generation. Make sure to use the provided prompt format for the best results.
+
+ ```python
 from transformers import pipeline
 import torch
 
@@ -68,94 +65,4 @@
 answer = outputs[0]["generated_text"].split("### Response:")[1].strip()

 print(f"Question: {question}")
+ print(f"Answer: {answer}")
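Because the diff elides lines 43–64 of the README's example, the snippet shown in the hunks above is incomplete. The following is a reconstructed end-to-end version: the repo id, the Alpaca-style "### Instruction / ### Response" template (inferred from the `split("### Response:")` call), and the `top_p` value (the original line is truncated at `top_`) are assumptions to verify against the full README.

```python
# Reconstructed usage sketch. Assumed: the model repo id, the prompt template,
# and top_p. max_new_tokens, do_sample, and temperature are from the diff.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="devanshdhir/Qwen3-0.6B-Flask-Expert",  # repo id assumed from the commit author
    torch_dtype=torch.float16,
    device_map="auto",
)

question = "What is the purpose of the application context in Flask?"
prompt = f"### Instruction:\n{question}\n\n### Response:\n"  # template assumed

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_p=0.9)
answer = outputs[0]["generated_text"].split("### Response:")[1].strip()

print(f"Question: {question}")
print(f"Answer: {answer}")
```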