End of training

- README.md +17 -12
- tf_model.h5 +2 -2

README.md
CHANGED
@@ -15,10 +15,10 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss:
-- Validation Loss: 1.
-- Train Accuracy: 0.
-- Epoch:
+- Train Loss: 0.9353
+- Validation Loss: 1.0343
+- Train Accuracy: 0.8667
+- Epoch: 9
 
 ## Model description
 
@@ -37,23 +37,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1680, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: float32
 
 ### Training results
 
 | Train Loss | Validation Loss | Train Accuracy | Epoch |
 |:----------:|:---------------:|:--------------:|:-----:|
-| 2.
-| 2.
-| 1.
-| 1.
-| 1.
+| 2.2697     | 2.1984          | 0.4667         | 0     |
+| 2.1245     | 2.0728          | 0.6            | 1     |
+| 1.9780     | 1.9057          | 0.8            | 2     |
+| 1.8135     | 1.7702          | 0.8667         | 3     |
+| 1.6516     | 1.6121          | 0.8667         | 4     |
+| 1.4854     | 1.4733          | 0.8667         | 5     |
+| 1.3306     | 1.3294          | 0.8667         | 6     |
+| 1.1829     | 1.2269          | 0.8333         | 7     |
+| 1.0596     | 1.1176          | 0.8667         | 8     |
+| 0.9353     | 1.0343          | 0.8667         | 9     |
 
 
 ### Framework versions
 
 - Transformers 4.31.0
-- TensorFlow 2.
-- Datasets 2.14.
+- TensorFlow 2.12.0
+- Datasets 2.14.4
 - Tokenizers 0.13.3
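The optimizer config above uses a `PolynomialDecay` learning-rate schedule with `power=1.0` and `cycle=False`, which is simply a linear ramp from 3e-05 down to 0.0 over 1680 steps. A minimal stdlib-only sketch of that schedule (the helper name `polynomial_decay_lr` is illustrative, not part of the training code):

```python
def polynomial_decay_lr(step: int,
                        initial_lr: float = 3e-05,
                        end_lr: float = 0.0,
                        decay_steps: int = 1680,
                        power: float = 1.0) -> float:
    """Mirror the math of tf.keras.optimizers.schedules.PolynomialDecay
    with cycle=False, using the values logged in the hyperparameters above."""
    step = min(step, decay_steps)  # after decay_steps the schedule holds end_lr
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

# With power=1.0 the decay is linear:
lr_start = polynomial_decay_lr(0)      # 3e-05 at the first step
lr_mid = polynomial_decay_lr(840)      # 1.5e-05 halfway through
lr_end = polynomial_decay_lr(1680)     # 0.0 at the final step
```

If the schedule is meant to span the whole 10-epoch run, 1680 decay steps would correspond to roughly 168 optimizer steps per epoch, though the card does not state the batch size.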
tf_model.h5
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:51b22dad4c7a34a4a719ed74d006353cefe04ebf477904e2583a7357135573a2
+size 343494328
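The `tf_model.h5` entry above is not the model weights themselves but a Git LFS pointer file: three key/value lines giving the spec version, the SHA-256 object ID, and the file size in bytes. A small sketch of reading such a pointer (`parse_lfs_pointer` is an illustrative helper, not part of git-lfs):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each line of a Git LFS pointer file at the first space,
    returning the key/value pairs as a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer text from the diff above:
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:51b22dad4c7a34a4a719ed74d006353cefe04ebf477904e2583a7357135573a2
size 343494328
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
size_mib = int(info["size"]) / 2**20  # ~327.6 MiB of TensorFlow weights
```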