deb101 committed · verified
Commit 76b57ac · 1 Parent(s): 2f1c711

Model save

README.md CHANGED
@@ -16,35 +16,35 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the None dataset.
 It achieves the following results on the evaluation set:
- - F1 Micro: 0.0005
- - F1 Macro: 0.0000
- - Precision At 5: 0.2847
- - Recall At 5: 0.0664
- - Precision At 8: 0.2542
- - Recall At 8: 0.0910
- - Precision At 15: 0.1926
- - Recall At 15: 0.1252
- - Rare F1 Micro: 0.0
- - Rare F1 Macro: 0.0
- - Rare Precision: 0.0
- - Rare Recall: 0.0
- - Rare Precision At 5: 0.0263
- - Rare Recall At 5: 0.0069
- - Rare Precision At 8: 0.0267
- - Rare Recall At 8: 0.0118
- - Rare Precision At 15: 0.0257
- - Rare Recall At 15: 0.0219
- - Not Rare F1 Micro: 0.0015
- - Not Rare F1 Macro: 0.0003
- - Not Rare Precision: 0.2576
- - Not Rare Recall: 0.0007
- - Not Rare Precision At 5: 0.2847
- - Not Rare Recall At 5: 0.1756
- - Not Rare Precision At 8: 0.2542
- - Not Rare Recall At 8: 0.2401
- - Not Rare Precision At 15: 0.1926
- - Not Rare Recall At 15: 0.3324
- - Loss: 0.0170
+ - F1 Micro: 0.0062
+ - F1 Macro: 0.0059
+ - Precision At 5: 0.0131
+ - Recall At 5: 0.0040
+ - Precision At 8: 0.0108
+ - Recall At 8: 0.0056
+ - Precision At 15: 0.0124
+ - Recall At 15: 0.0101
+ - Rare F1 Micro: 0.0040
+ - Rare F1 Macro: 0.0040
+ - Rare Precision: 0.0020
+ - Rare Recall: 0.9992
+ - Rare Precision At 5: 0.0055
+ - Rare Recall At 5: 0.0025
+ - Rare Precision At 8: 0.0041
+ - Rare Recall At 8: 0.0029
+ - Rare Precision At 15: 0.0032
+ - Rare Recall At 15: 0.0044
+ - Not Rare F1 Micro: 0.1354
+ - Not Rare F1 Macro: 0.1308
+ - Not Rare Precision: 0.0726
+ - Not Rare Recall: 0.9998
+ - Not Rare Precision At 5: 0.1391
+ - Not Rare Recall At 5: 0.0842
+ - Not Rare Precision At 8: 0.1066
+ - Not Rare Recall At 8: 0.1005
+ - Not Rare Precision At 15: 0.0989
+ - Not Rare Recall At 15: 0.1650
+ - Loss: -2.3104
 
 ## Model description
 
@@ -72,17 +72,18 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
- - num_epochs: 4
+ - num_epochs: 5
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | F1 Micro | F1 Macro | Precision At 5 | Recall At 5 | Precision At 8 | Recall At 8 | Precision At 15 | Recall At 15 | Rare F1 Micro | Rare F1 Macro | Rare Precision | Rare Recall | Rare Precision At 5 | Rare Recall At 5 | Rare Precision At 8 | Rare Recall At 8 | Rare Precision At 15 | Rare Recall At 15 | Not Rare F1 Micro | Not Rare F1 Macro | Not Rare Precision | Not Rare Recall | Not Rare Precision At 5 | Not Rare Recall At 5 | Not Rare Precision At 8 | Not Rare Recall At 8 | Not Rare Precision At 15 | Not Rare Recall At 15 | Validation Loss |
 |:-------------:|:------:|:----:|:--------:|:--------:|:--------------:|:-----------:|:--------------:|:-----------:|:---------------:|:------------:|:-------------:|:-------------:|:--------------:|:-----------:|:-------------------:|:----------------:|:-------------------:|:----------------:|:--------------------:|:-----------------:|:-----------------:|:-----------------:|:------------------:|:---------------:|:-----------------------:|:--------------------:|:-----------------------:|:--------------------:|:------------------------:|:---------------------:|:---------------:|
- | 0.0229 | 0.9981 | 262 | 0.0 | 0.0 | 0.2769 | 0.0644 | 0.2472 | 0.0880 | 0.1883 | 0.1226 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0025 | 0.0075 | 0.0040 | 0.0070 | 0.0065 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2771 | 0.1707 | 0.2473 | 0.2341 | 0.1888 | 0.3243 | 0.0209 |
- | 0.0181 | 1.9981 | 524 | 0.0 | 0.0 | 0.2775 | 0.0651 | 0.2542 | 0.0910 | 0.1934 | 0.1270 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0295 | 0.0091 | 0.0251 | 0.0128 | 0.0230 | 0.0214 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2774 | 0.1715 | 0.2542 | 0.2401 | 0.1933 | 0.3366 | 0.0170 |
- | 0.0176 | 2.9981 | 786 | 0.0005 | 0.0000 | 0.2844 | 0.0665 | 0.2542 | 0.0910 | 0.1908 | 0.1227 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0282 | 0.0084 | 0.0246 | 0.0118 | 0.0273 | 0.0233 | 0.0013 | 0.0003 | 0.2830 | 0.0007 | 0.2844 | 0.1755 | 0.2542 | 0.2401 | 0.1908 | 0.3230 | 0.0170 |
- | 0.0167 | 3.9981 | 1048 | 0.0005 | 0.0000 | 0.2847 | 0.0664 | 0.2542 | 0.0910 | 0.1926 | 0.1252 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0263 | 0.0069 | 0.0267 | 0.0118 | 0.0257 | 0.0219 | 0.0015 | 0.0003 | 0.2576 | 0.0007 | 0.2847 | 0.1756 | 0.2542 | 0.2401 | 0.1926 | 0.3324 | 0.0170 |
+ | -2.5733 | 0.9981 | 262 | 0.0086 | 0.0060 | 0.2032 | 0.0452 | 0.1975 | 0.0694 | 0.1826 | 0.1185 | 0.0051 | 0.0040 | 0.0026 | 0.7894 | 0.0369 | 0.0112 | 0.0329 | 0.0162 | 0.0290 | 0.0270 | 0.1354 | 0.1308 | 0.0726 | 1.0 | 0.2012 | 0.1187 | 0.1963 | 0.1842 | 0.1802 | 0.3115 | -2.1808 |
+ | -2.8745 | 1.9981 | 524 | 0.0070 | 0.0062 | 0.1153 | 0.0311 | 0.1079 | 0.0456 | 0.0933 | 0.0723 | 0.0044 | 0.0041 | 0.0022 | 0.8685 | 0.0391 | 0.0155 | 0.0333 | 0.0210 | 0.0281 | 0.0323 | 0.1399 | 0.1337 | 0.0754 | 0.9720 | 0.1735 | 0.1110 | 0.1544 | 0.1553 | 0.1400 | 0.2550 | -2.2971 |
+ | -3.0665 | 2.9981 | 786 | 0.0064 | 0.0060 | 0.0525 | 0.0148 | 0.0450 | 0.0203 | 0.0392 | 0.0309 | 0.0041 | 0.0040 | 0.0020 | 0.9688 | 0.0150 | 0.0061 | 0.0134 | 0.0086 | 0.0107 | 0.0129 | 0.1376 | 0.1323 | 0.0739 | 0.9840 | 0.1498 | 0.0950 | 0.1236 | 0.1245 | 0.1147 | 0.2041 | -2.3224 |
+ | -3.5627 | 3.9981 | 1048 | 0.0062 | 0.0060 | 0.0182 | 0.0059 | 0.0152 | 0.0075 | 0.0163 | 0.0135 | 0.0040 | 0.0040 | 0.0020 | 0.9920 | 0.0069 | 0.0031 | 0.0052 | 0.0039 | 0.0044 | 0.0062 | 0.1361 | 0.1313 | 0.0730 | 0.9973 | 0.1394 | 0.0855 | 0.1093 | 0.1055 | 0.1022 | 0.1756 | -2.3104 |
+ | -4.0526 | 4.9981 | 1310 | 0.0062 | 0.0059 | 0.0131 | 0.0040 | 0.0108 | 0.0056 | 0.0124 | 0.0101 | 0.0040 | 0.0040 | 0.0020 | 0.9992 | 0.0055 | 0.0025 | 0.0041 | 0.0029 | 0.0032 | 0.0044 | 0.1354 | 0.1308 | 0.0726 | 0.9998 | 0.1391 | 0.0842 | 0.1066 | 0.1005 | 0.0989 | 0.1650 | -2.3104 |
 
 
 ### Framework versions
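
The hyperparameters listed in the diff above map onto a standard `transformers.TrainingArguments` setup. Below is a minimal sketch of that mapping, assuming the Hugging Face `Trainer` API was used; the output directory, learning rate, and batch sizes are not visible in this hunk, so they are placeholders rather than the values actually used.

```python
from transformers import TrainingArguments

# Sketch of the training configuration described in the README diff above.
# Values not shown in the diff (output_dir, learning_rate, batch sizes) are assumptions.
training_args = TrainingArguments(
    output_dir="mistral-7b-instruct-v0.3-finetune",  # hypothetical path
    optim="adamw_torch",                 # adamw_torch with betas=(0.9, 0.999), eps=1e-08
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",          # linear schedule after warmup
    warmup_steps=500,                    # lr_scheduler_warmup_steps: 500
    num_train_epochs=5,                  # raised from 4 to 5 in this commit
    fp16=True,                           # "Native AMP" mixed-precision training
    # learning_rate=...,                 # not shown in this hunk
    # per_device_train_batch_size=...,   # not shown in this hunk
)
```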
eval_loss_plot.png CHANGED
eval_precision_at_15_plot.png CHANGED
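
`eval_precision_at_15_plot.png` tracks one of the ranking metrics reported in the card above. As a reminder of what precision@k and recall@k measure in a multi-label setting, here is a minimal sketch of the usual definitions; it is an illustration, not necessarily the exact evaluation code used for this model.

```python
import numpy as np

def precision_recall_at_k(y_true: np.ndarray, y_score: np.ndarray, k: int):
    """Mean precision@k and recall@k over examples for multi-label data.

    y_true:  (n_examples, n_labels) binary ground-truth matrix
    y_score: (n_examples, n_labels) predicted scores
    """
    # Indices of the k highest-scoring labels per example.
    topk = np.argsort(-y_score, axis=1)[:, :k]
    hits = np.take_along_axis(y_true, topk, axis=1).sum(axis=1)
    precision = hits / k
    # Guard against examples with no positive labels.
    n_pos = y_true.sum(axis=1)
    recall = np.where(n_pos > 0, hits / np.maximum(n_pos, 1), 0.0)
    return precision.mean(), recall.mean()
```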
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:7d3a37a57bfddb569d6b3f50a45e8053e871ac4b0213a69b650ec9ffcb396867
+ oid sha256:478dba6504c6c8c148cec4e26c700ce185e6dabb18d109211530a429080ef95b
 size 4824468367
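
`model.safetensors` is tracked with Git LFS, so the diff above only touches the pointer file: the `oid sha256:` line records the checksum of the new weights blob and `size` its byte count. A minimal sketch for verifying a downloaded copy against the new pointer, using only the standard library (the local file path is an assumption):

```python
import hashlib

EXPECTED = "478dba6504c6c8c148cec4e26c700ce185e6dabb18d109211530a429080ef95b"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large checkpoints fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("model.safetensors") == EXPECTED, "checksum mismatch"
```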
train_loss_plot.png CHANGED
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c60967cbb5c4ceeda10e4014734667df1dd5d5c2918d3cc15dd2280ad349a209
+ oid sha256:20399e0a716a9d18996b1a1352467fbea950665a69d598ea5aef75f443edda72
 size 5496
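
`training_args.bin` is the serialized `TrainingArguments` object that `Trainer` saves next to the weights, which is why its pointer changes whenever a hyperparameter such as `num_epochs` does. A minimal sketch of inspecting a local copy, assuming PyTorch 1.13+ (on recent versions `weights_only=False` is required to unpickle arbitrary objects, so only do this for files you trust):

```python
import torch

# training_args.bin is a pickled transformers.TrainingArguments instance.
args = torch.load("training_args.bin", weights_only=False)
print(args.num_train_epochs, args.warmup_steps, args.lr_scheduler_type)
```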