---
license: mit
---

# MathGPT-2 (distilgpt2 Fine-Tuned for Arithmetic)

This model is a **fine-tuned version of DistilGPT-2** trained on a custom dataset consisting exclusively of arithmetic problems and their answers. Its goal is to act as a **calculator** that can solve basic arithmetic problems.

## Model Description

The model was trained on a dataset of simple arithmetic expressions covering addition, subtraction, multiplication, and division. The training data was generated with Python and contains **no duplicate expressions**.

### Key Features:
- **Solves basic arithmetic** (addition, subtraction, multiplication, division)
- Can **handle simple problems** like `12 + 5 =`
- Fine-tuned version of `distilgpt2` on a math-specific dataset
- Trained for **10 epochs** (training for more epochs may improve results further)

## Model Details

- **Model architecture**: DistilGPT-2
- **Training duration**: 10 epochs (could be extended further)
- **Dataset**: Generated math expressions like `12 + 5 = 17`
- **Tokenization**: Standard GPT-2 tokenizer
- **Fine-tuned on**: Simple arithmetic operations

## Intended Use

This model is designed to:
- **Answer basic arithmetic problems** (addition, subtraction, multiplication, division).
- **Generate answers** for simple prompts like `12 * 6 = ?`.

### Example:

**Input**:
```
13 + 47 =
```

**Output**:
```
60
```

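The snippet below is a minimal inference sketch using the Hugging Face `transformers` library. The repository path `your-username/mathgpt2` is a placeholder for wherever the fine-tuned weights live, and the generation settings are illustrative assumptions rather than the exact ones used to produce the example above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: replace with the actual model repository or local directory.
model_name = "your-username/mathgpt2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "13 + 47 ="
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the output deterministic; a handful of new tokens
# is enough to hold the numeric answer.
outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "13 + 47 = 60"
```
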
## Training Data

The training dataset was generated with Python and consists of random arithmetic expressions (addition, subtraction, multiplication, division) with operands from 1 to 100. The expressions were formatted as:

```
2 + 3 = 5
100 - 25 = 75
45 * 5 = 225
100 / 25 = 4
```

The dataset contains no duplicate expressions, so every training example the model sees is unique.

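For illustration, a generation script of this kind could look like the sketch below. It follows the description above (operands from 1 to 100, four operators, duplicates removed); the dataset size and the output file name `math_dataset.txt` are assumptions, since the original script is not included here.

```python
import random

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a // b,  # the examples above use whole-number division
}

expressions = set()  # a set guarantees no duplicate expressions
while len(expressions) < 10_000:  # assumed dataset size; not stated in the card
    a, b = random.randint(1, 100), random.randint(1, 100)
    op = random.choice(list(OPS))
    if op == "/" and a % b != 0:
        continue  # skip divisions that would not produce a whole number
    expressions.add(f"{a} {op} {b} = {OPS[op](a, b)}")

# One expression per line, e.g. "12 + 5 = 17"
with open("math_dataset.txt", "w") as f:
    f.write("\n".join(expressions))
```
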
## Fine-Tuning

This model was fine-tuned from the `distilgpt2` base model for 10 epochs.

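A comparable fine-tuning run can be set up with the Hugging Face `Trainer` API, as in the sketch below. The only hyperparameter stated in this card is the number of epochs, so the batch size, learning rate, sequence length, and file names are illustrative assumptions.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# "math_dataset.txt" is the assumed file of "a + b = c" lines, one per line.
dataset = load_dataset("text", data_files={"train": "math_dataset.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=32)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Causal LM objective (mlm=False): the model learns to predict the next token,
# including the digits after the "=" sign.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="mathgpt2",
    num_train_epochs=10,             # the one setting stated in this card
    per_device_train_batch_size=32,  # assumed
    learning_rate=5e-5,              # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```
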
---

## Limitations

- **Basic Arithmetic Only**: The model handles only basic arithmetic (addition, subtraction, multiplication, and division). It does not handle more complex operations such as exponentiation, logarithms, or advanced algebra.
- **Limited Training Duration**: The model was trained for only 10 epochs; more epochs or greater data diversity may improve its performance further.
- **No real-time validation**: The model does not verify its outputs, so its answers to some problems are still inaccurate.