---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- generated_from_trainer
model-index:
- name: lemexp-task1-v2-lemma_object_small_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2
  results: []
---

# lemexp-task1-v2-lemma_object_small_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2

This model is a parameter-efficient (PEFT) fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2403

## Model description

More information needed

## Intended uses & limitations

More information needed
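
No intended-use guidance was provided, but since this repository ships a PEFT adapter on top of `deepseek-ai/deepseek-coder-1.3b-base`, it can typically be loaded as sketched below. This is a minimal sketch: the adapter repo id (`your-namespace/...`) and the prompt are placeholder assumptions, and the version pins come from the Framework versions section of this card.

```python
# pip install "peft==0.14.0" "transformers==4.47.0" torch  # versions listed in this card
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "deepseek-ai/deepseek-coder-1.3b-base"
# Placeholder: replace with the actual Hub path of this adapter repository.
ADAPTER_ID = "your-namespace/lemexp-task1-v2-lemma_object_small_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
# Add torch_dtype=... / device_map=... here as your hardware allows.
base = AutoModelForCausalLM.from_pretrained(BASE_ID)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the trained adapter
model.eval()

# The expected input format is not documented; this prompt is illustrative only.
inputs = tokenizer("theorem example :", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```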

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reproduction sketch follows the list):
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
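
The sketch below mirrors the listed values in a `TrainingArguments` object. The output directory, dataset, data collator, and PEFT/LoRA configuration are not documented in this card, so anything beyond the listed hyperparameters is an assumption.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters above; all other choices are assumptions.
training_args = TrainingArguments(
    output_dir="lemexp-task1-v2-lemma_object_small_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2",
    learning_rate=8e-4,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), epsilon=1e-08 (defaults)
    fp16=True,                       # mixed_precision_training: Native AMP
)
# The total batch size of 16 = 8 GPUs x 2 per device, e.g. launched under DDP via:
#   torchrun --nproc_per_node=8 train.py
```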

### Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.6881        | 0.2002  | 721   | 0.4926          |
| 0.5104        | 0.4003  | 1442  | 0.4505          |
| 0.4353        | 0.6005  | 2163  | 0.4146          |
| 0.4206        | 0.8007  | 2884  | 0.3981          |
| 0.3977        | 1.0008  | 3605  | 0.3817          |
| 0.3672        | 1.2010  | 4326  | 0.3654          |
| 0.3603        | 1.4012  | 5047  | 0.3605          |
| 0.3498        | 1.6013  | 5768  | 0.3574          |
| 0.3510        | 1.8015  | 6489  | 0.3504          |
| 0.3469        | 2.0017  | 7210  | 0.3429          |
| 0.3255        | 2.2018  | 7931  | 0.3355          |
| 0.3206        | 2.4020  | 8652  | 0.3390          |
| 0.3164        | 2.6022  | 9373  | 0.3182          |
| 0.3129        | 2.8023  | 10094 | 0.3280          |
| 0.3136        | 3.0025  | 10815 | 0.3185          |
| 0.2888        | 3.2027  | 11536 | 0.3133          |
| 0.2865        | 3.4028  | 12257 | 0.3089          |
| 0.2874        | 3.6030  | 12978 | 0.3072          |
| 0.2870        | 3.8032  | 13699 | 0.3069          |
| 0.2851        | 4.0033  | 14420 | 0.2991          |
| 0.2627        | 4.2035  | 15141 | 0.2995          |
| 0.264         | 4.4037  | 15862 | 0.3018          |
| 0.2632        | 4.6038  | 16583 | 0.2958          |
| 0.2643        | 4.8040  | 17304 | 0.2872          |
| 0.2633        | 5.0042  | 18025 | 0.2877          |
| 0.2375        | 5.2043  | 18746 | 0.2845          |
| 0.2405        | 5.4045  | 19467 | 0.2876          |
| 0.2409        | 5.6047  | 20188 | 0.2808          |
| 0.2418        | 5.8048  | 20909 | 0.2773          |
| 0.2424        | 6.0050  | 21630 | 0.2718          |
| 0.2217        | 6.2052  | 22351 | 0.2762          |
| 0.2205        | 6.4053  | 23072 | 0.2732          |
| 0.2210        | 6.6055  | 23793 | 0.2692          |
| 0.2220        | 6.8057  | 24514 | 0.2667          |
| 0.2203        | 7.0058  | 25235 | 0.2676          |
| 0.2053        | 7.2060  | 25956 | 0.2639          |
| 0.2015        | 7.4062  | 26677 | 0.2596          |
| 0.2006        | 7.6063  | 27398 | 0.2549          |
| 0.1982        | 7.8065  | 28119 | 0.2545          |
| 0.1982        | 8.0067  | 28840 | 0.2570          |
| 0.1764        | 8.2068  | 29561 | 0.2501          |
| 0.1799        | 8.4070  | 30282 | 0.2530          |
| 0.1796        | 8.6072  | 31003 | 0.2462          |
| 0.1801        | 8.8073  | 31724 | 0.2465          |
| 0.1769        | 9.0075  | 32445 | 0.2489          |
| 0.1565        | 9.2077  | 33166 | 0.2462          |
| 0.1569        | 9.4078  | 33887 | 0.2463          |
| 0.1594        | 9.6080  | 34608 | 0.2465          |
| 0.1605        | 9.8082  | 35329 | 0.2409          |
| 0.1583        | 10.0083 | 36050 | 0.2427          |
| 0.1389        | 10.2085 | 36771 | 0.2476          |
| 0.1378        | 10.4087 | 37492 | 0.2445          |
| 0.1395        | 10.6088 | 38213 | 0.2436          |
| 0.1400        | 10.8090 | 38934 | 0.2418          |
| 0.1375        | 11.0092 | 39655 | 0.2412          |
| 0.1262        | 11.2093 | 40376 | 0.2433          |
| 0.1231        | 11.4095 | 41097 | 0.2439          |
| 0.1240        | 11.6097 | 41818 | 0.2440          |
| 0.1233        | 11.8098 | 42539 | 0.2403          |


### Framework versions

- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
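
If a standalone checkpoint is preferred over base-plus-adapter loading, PEFT adapters can usually be folded into the base weights with `merge_and_unload()`, a standard PEFT API. The repo id and output path below are placeholders.

```python
from peft import AutoPeftModelForCausalLM

# Placeholder repo id; point this at the actual adapter repository.
model = AutoPeftModelForCausalLM.from_pretrained(
    "your-namespace/lemexp-task1-v2-lemma_object_small_nodefs-deepseek-coder-1.3b-base-ddp-8lr-v2"
)
merged = model.merge_and_unload()   # fold adapter weights into the base model
merged.save_pretrained("./merged-model")
```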