---
tags:
- generated_from_trainer
base_model: sentence-transformers/multi-qa-MiniLM-L6-cos-v1
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: all_keywords_multi-qa-MiniLM-L6-cos-v1_another
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# all_keywords_multi-qa-MiniLM-L6-cos-v1_another

This model is a fine-tuned version of [sentence-transformers/multi-qa-MiniLM-L6-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1) on an unspecified dataset (a minimal usage sketch follows the evaluation results below).
It achieves the following results on the evaluation set:
- Loss: 2.7780
- Accuracy: 0.5526
- Precision: 0.5526
- Recall: 0.5526
- F1: 0.5526

Accuracy, precision, recall, and F1 are all reported as the same value; in single-label multiclass classification, micro-averaged precision, recall, and F1 coincide with accuracy, so these metrics are presumably micro-averaged.

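The original card ships no usage example. Below is a minimal inference sketch, assuming the checkpoint is a sequence-classification fine-tune of the MiniLM encoder trained with the `Trainer` API; the repository id is a placeholder inferred from the model name and should be replaced with the actual one.

```python
# Minimal inference sketch -- not from the original card.
# Assumptions: the checkpoint carries a sequence-classification head, and the
# repo id below (guessed from the model name) points at the right weights.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "all_keywords_multi-qa-MiniLM-L6-cos-v1_another"  # assumed; prepend the owner namespace

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("text to classify", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, str(pred)))  # label names, if the config defines them
```
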
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

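A sketch of how these values map onto the `Trainer` API of Transformers 4.39 (the version listed below); the output directory and the per-epoch evaluation strategy are assumptions, everything else mirrors the list above.

```python
# Hyperparameter sketch mirroring the list above -- not the original script.
# output_dir and evaluation_strategy are assumptions; the rest is as reported.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="all_keywords_multi-qa-MiniLM-L6-cos-v1_another",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,               # Adam betas/epsilon as reported in the card
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="epoch",  # assumed: the results table logs one eval per epoch
    logging_strategy="epoch",
)
```
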
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.3017        | 1.0   | 712   | 2.0180          | 0.4362   |
| 2.09          | 2.0   | 1424  | 1.8306          | 0.4390   |
| 1.775         | 3.0   | 2136  | 1.7843          | 0.4783   |
| 1.5811        | 4.0   | 2848  | 1.7686          | 0.5175   |
| 1.2665        | 5.0   | 3560  | 1.7257          | 0.5147   |
| 1.0957        | 6.0   | 4272  | 1.8126          | 0.5568   |
| 0.9661        | 7.0   | 4984  | 2.0472          | 0.5386   |
| 0.7399        | 8.0   | 5696  | 2.1375          | 0.5428   |
| 0.6533        | 9.0   | 6408  | 2.2761          | 0.5400   |
| 0.5268        | 10.0  | 7120  | 2.4777          | 0.5400   |
| 0.5067        | 11.0  | 7832  | 2.6160          | 0.5372   |
| 0.4209        | 12.0  | 8544  | 2.6253          | 0.5512   |
| 0.4102        | 13.0  | 9256  | 2.7287          | 0.5442   |
| 0.3405        | 14.0  | 9968  | 2.7607          | 0.5470   |
| 0.3278        | 15.0  | 10680 | 2.7780          | 0.5526   |

Note that validation loss bottoms out at epoch 5 (1.7257) and rises steadily afterwards while training loss keeps falling, a typical overfitting pattern; accuracy nonetheless peaks at epoch 6 (0.5568), slightly above the final epoch-15 value reported at the top of the card.

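The card does not include the metric function. A plausible `compute_metrics` consistent with the identical, presumably micro-averaged figures above (an assumption, not the original code):

```python
# Plausible metric function -- an assumption, not from the original card.
# Micro-averaging makes precision, recall, and F1 equal accuracy, which would
# explain the identical columns reported above.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="micro"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```
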

### Framework versions

- Transformers 4.39.3
- PyTorch 2.2.1+cu118
- Datasets 2.14.7
- Tokenizers 0.15.2