| model_id (string, 7–105 chars) | model_card (string, 1–130k chars) | model_labels (list, 2–80k items) |
|---|---|---|
tiennguyenbnbk/teacher-status-van-tiny-256-2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher-status-van-tiny-256-2
This model is a fine-tuned version of [Visual-Attention-Network/van-tiny](https://huggingface.co/Visual-Attention-Network/van-tiny) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0916
- Accuracy: 0.9759
- F1 Score: 0.9842
- Recall: 0.9757
- Precision: 0.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
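For readers who want to reproduce this configuration, the following is a minimal sketch of how the hyperparameters above map onto the standard `transformers` `TrainingArguments` API; the `output_dir` is an assumption, and this is not the authors' actual training script.

```python
# Hypothetical mapping of the card's hyperparameters onto TrainingArguments
# (standard transformers Trainer API; output_dir is an assumed placeholder).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="teacher-status-van-tiny-256-2",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = total train batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
)
# Adam's default betas (0.9, 0.999) and epsilon 1e-8 match the optimizer line above.
```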
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 0.6896 | 0.99 | 26 | 0.6707 | 0.7701 | 0.8701 | 1.0 | 0.7701 |
| 0.5438 | 1.98 | 52 | 0.4302 | 0.7701 | 0.8701 | 1.0 | 0.7701 |
| 0.3756 | 2.97 | 78 | 0.2762 | 0.8850 | 0.9285 | 0.9688 | 0.8914 |
| 0.3017 | 4.0 | 105 | 0.2002 | 0.9225 | 0.9503 | 0.9618 | 0.9390 |
| 0.257 | 4.99 | 131 | 0.1794 | 0.9385 | 0.9605 | 0.9722 | 0.9492 |
| 0.2345 | 5.98 | 157 | 0.1485 | 0.9358 | 0.9582 | 0.9549 | 0.9615 |
| 0.2318 | 6.97 | 183 | 0.1302 | 0.9439 | 0.9631 | 0.9514 | 0.9751 |
| 0.2173 | 8.0 | 210 | 0.1277 | 0.9519 | 0.9689 | 0.9722 | 0.9655 |
| 0.2058 | 8.99 | 236 | 0.1269 | 0.9572 | 0.9722 | 0.9722 | 0.9722 |
| 0.1955 | 9.98 | 262 | 0.1146 | 0.9572 | 0.9724 | 0.9792 | 0.9658 |
| 0.2083 | 10.97 | 288 | 0.1083 | 0.9652 | 0.9772 | 0.9688 | 0.9859 |
| 0.1886 | 12.0 | 315 | 0.1048 | 0.9599 | 0.9741 | 0.9792 | 0.9691 |
| 0.1618 | 12.99 | 341 | 0.1033 | 0.9626 | 0.9757 | 0.9757 | 0.9757 |
| 0.1908 | 13.98 | 367 | 0.1044 | 0.9599 | 0.9739 | 0.9722 | 0.9756 |
| 0.1594 | 14.97 | 393 | 0.0915 | 0.9626 | 0.9758 | 0.9792 | 0.9724 |
| 0.1474 | 16.0 | 420 | 0.0916 | 0.9759 | 0.9842 | 0.9757 | 0.9929 |
| 0.1734 | 16.99 | 446 | 0.0951 | 0.9652 | 0.9773 | 0.9722 | 0.9825 |
| 0.1484 | 17.98 | 472 | 0.1049 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
| 0.1495 | 18.97 | 498 | 0.0930 | 0.9679 | 0.9791 | 0.9757 | 0.9825 |
| 0.1385 | 20.0 | 525 | 0.0955 | 0.9626 | 0.9759 | 0.9826 | 0.9692 |
| 0.1492 | 20.99 | 551 | 0.0911 | 0.9599 | 0.9741 | 0.9792 | 0.9691 |
| 0.1401 | 21.98 | 577 | 0.0927 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
| 0.1288 | 22.97 | 603 | 0.0940 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
| 0.1304 | 24.0 | 630 | 0.0913 | 0.9652 | 0.9775 | 0.9826 | 0.9725 |
| 0.14 | 24.99 | 656 | 0.0979 | 0.9652 | 0.9776 | 0.9861 | 0.9693 |
| 0.1461 | 25.98 | 682 | 0.0874 | 0.9706 | 0.9810 | 0.9861 | 0.9759 |
| 0.1429 | 26.97 | 708 | 0.0837 | 0.9706 | 0.9808 | 0.9757 | 0.9860 |
| 0.1444 | 28.0 | 735 | 0.0876 | 0.9679 | 0.9792 | 0.9792 | 0.9792 |
| 0.145 | 28.99 | 761 | 0.0903 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
| 0.1445 | 29.71 | 780 | 0.0882 | 0.9679 | 0.9791 | 0.9757 | 0.9825 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"abnormal",
"normal"
] |
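As a usage illustration (not part of the original card), the checkpoint in the row above can be queried with the stock image-classification pipeline; the model id and label names are taken verbatim from this row, while the image path is a placeholder.

```python
# Hedged usage sketch: load the fine-tuned VAN-tiny classifier and score one image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="tiennguyenbnbk/teacher-status-van-tiny-256-2",
)
# "your_image.jpg" is a placeholder path; the labels are "abnormal" and "normal".
print(classifier("your_image.jpg"))
```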
harshkhare/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6844
- Accuracy: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.57 | 1 | 0.9324 | 0.6667 |
| No log | 1.71 | 3 | 0.7241 | 0.75 |
| No log | 2.29 | 4 | 0.6844 | 0.7917 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"cloudy",
"green_area",
"water"
] |
hkivancoral/hushem_40x_beit_large_adamax_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_001_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2476
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3238 | 1.0 | 215 | 0.6915 | 0.7333 |
| 0.1477 | 2.0 | 430 | 1.2081 | 0.6444 |
| 0.0434 | 3.0 | 645 | 1.8202 | 0.6444 |
| 0.0459 | 4.0 | 860 | 1.9604 | 0.6222 |
| 0.0376 | 5.0 | 1075 | 0.7965 | 0.7778 |
| 0.0151 | 6.0 | 1290 | 1.6449 | 0.7111 |
| 0.0084 | 7.0 | 1505 | 2.7172 | 0.6222 |
| 0.0085 | 8.0 | 1720 | 2.4588 | 0.6667 |
| 0.0105 | 9.0 | 1935 | 3.0173 | 0.5333 |
| 0.0465 | 10.0 | 2150 | 1.5242 | 0.7778 |
| 0.0056 | 11.0 | 2365 | 2.2494 | 0.7333 |
| 0.0106 | 12.0 | 2580 | 2.3865 | 0.6889 |
| 0.0614 | 13.0 | 2795 | 1.3048 | 0.7778 |
| 0.0068 | 14.0 | 3010 | 2.7128 | 0.6889 |
| 0.0 | 15.0 | 3225 | 2.3042 | 0.7778 |
| 0.0001 | 16.0 | 3440 | 2.6333 | 0.7333 |
| 0.0483 | 17.0 | 3655 | 2.9792 | 0.7111 |
| 0.0 | 18.0 | 3870 | 2.6692 | 0.7111 |
| 0.0 | 19.0 | 4085 | 2.7990 | 0.7556 |
| 0.0 | 20.0 | 4300 | 2.7968 | 0.7333 |
| 0.0 | 21.0 | 4515 | 2.8289 | 0.7333 |
| 0.0 | 22.0 | 4730 | 2.8734 | 0.7333 |
| 0.0 | 23.0 | 4945 | 2.7220 | 0.7556 |
| 0.0742 | 24.0 | 5160 | 2.8716 | 0.7111 |
| 0.0011 | 25.0 | 5375 | 2.8927 | 0.7333 |
| 0.0 | 26.0 | 5590 | 2.8101 | 0.7333 |
| 0.0 | 27.0 | 5805 | 2.9619 | 0.7111 |
| 0.0 | 28.0 | 6020 | 3.0313 | 0.7111 |
| 0.0 | 29.0 | 6235 | 3.1395 | 0.7111 |
| 0.0 | 30.0 | 6450 | 3.4589 | 0.7111 |
| 0.0 | 31.0 | 6665 | 3.5502 | 0.6889 |
| 0.0 | 32.0 | 6880 | 3.7038 | 0.6667 |
| 0.0 | 33.0 | 7095 | 2.9949 | 0.7111 |
| 0.0 | 34.0 | 7310 | 3.0364 | 0.7111 |
| 0.0 | 35.0 | 7525 | 3.1096 | 0.7111 |
| 0.0 | 36.0 | 7740 | 3.1633 | 0.7333 |
| 0.0 | 37.0 | 7955 | 3.1868 | 0.7333 |
| 0.0 | 38.0 | 8170 | 3.2061 | 0.7333 |
| 0.0 | 39.0 | 8385 | 3.2444 | 0.7333 |
| 0.0 | 40.0 | 8600 | 3.2660 | 0.7333 |
| 0.0 | 41.0 | 8815 | 3.2861 | 0.7333 |
| 0.0 | 42.0 | 9030 | 3.3090 | 0.7333 |
| 0.0 | 43.0 | 9245 | 3.3340 | 0.7333 |
| 0.0 | 44.0 | 9460 | 3.3547 | 0.7333 |
| 0.0 | 45.0 | 9675 | 3.3742 | 0.7333 |
| 0.0 | 46.0 | 9890 | 3.3879 | 0.7333 |
| 0.0 | 47.0 | 10105 | 3.4047 | 0.7333 |
| 0.0 | 48.0 | 10320 | 3.2184 | 0.7333 |
| 0.0 | 49.0 | 10535 | 3.2219 | 0.7333 |
| 0.0 | 50.0 | 10750 | 3.2476 | 0.7333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-q7md9-qtgfu
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.4186987578868866
f1_macro: 0.8215900897948484
f1_micro: 0.881578947368421
f1_weighted: 0.8716990992917834
precision_macro: 0.9090909090909092
precision_micro: 0.881578947368421
precision_weighted: 0.9138755980861244
recall_macro: 0.8132132132132132
recall_micro: 0.881578947368421
recall_weighted: 0.881578947368421
accuracy: 0.881578947368421
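Note that in these AutoTrain cards the micro-averaged F1, precision, and recall all equal the accuracy. For single-label multiclass classification this is an identity, not a coincidence. A small scikit-learn sketch with synthetic labels (our own example, not this card's data) reproduces the pattern:

```python
# Demonstration that micro-averaged F1/precision/recall equal accuracy
# for single-label multiclass problems (the pattern in the card above).
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]  # made-up ground-truth class ids
y_pred = [0, 1, 1, 1, 2, 0, 2, 1]  # made-up predictions

print(accuracy_score(y_true, y_pred))                    # 0.75
print(f1_score(y_true, y_pred, average="micro"))         # 0.75
print(precision_score(y_true, y_pred, average="micro"))  # 0.75
print(recall_score(y_true, y_pred, average="micro"))     # 0.75
print(f1_score(y_true, y_pred, average="macro"))         # differs from accuracy
```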
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_beit_large_adamax_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_001_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4537
- Accuracy: 0.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3184 | 1.0 | 215 | 1.1233 | 0.7333 |
| 0.1436 | 2.0 | 430 | 1.6198 | 0.6889 |
| 0.047 | 3.0 | 645 | 1.8592 | 0.6667 |
| 0.0527 | 4.0 | 860 | 1.5842 | 0.7111 |
| 0.0463 | 5.0 | 1075 | 2.7619 | 0.6889 |
| 0.0137 | 6.0 | 1290 | 1.4680 | 0.7556 |
| 0.0665 | 7.0 | 1505 | 2.2491 | 0.6889 |
| 0.0121 | 8.0 | 1720 | 1.7706 | 0.7556 |
| 0.0005 | 9.0 | 1935 | 2.1035 | 0.7556 |
| 0.0192 | 10.0 | 2150 | 3.0002 | 0.6889 |
| 0.02 | 11.0 | 2365 | 2.1406 | 0.6667 |
| 0.011 | 12.0 | 2580 | 2.2828 | 0.6667 |
| 0.0346 | 13.0 | 2795 | 2.5178 | 0.6667 |
| 0.0045 | 14.0 | 3010 | 2.0578 | 0.7333 |
| 0.0021 | 15.0 | 3225 | 1.4918 | 0.7556 |
| 0.0002 | 16.0 | 3440 | 2.6023 | 0.7111 |
| 0.0007 | 17.0 | 3655 | 2.4242 | 0.7111 |
| 0.0019 | 18.0 | 3870 | 2.8391 | 0.6667 |
| 0.0005 | 19.0 | 4085 | 2.9921 | 0.7556 |
| 0.0 | 20.0 | 4300 | 3.1529 | 0.6667 |
| 0.0 | 21.0 | 4515 | 2.7412 | 0.7556 |
| 0.0 | 22.0 | 4730 | 2.8583 | 0.7333 |
| 0.0 | 23.0 | 4945 | 2.9971 | 0.7333 |
| 0.0 | 24.0 | 5160 | 3.0142 | 0.7556 |
| 0.0 | 25.0 | 5375 | 3.0328 | 0.7556 |
| 0.0 | 26.0 | 5590 | 3.0307 | 0.7778 |
| 0.0 | 27.0 | 5805 | 3.2285 | 0.7556 |
| 0.0 | 28.0 | 6020 | 3.2719 | 0.7111 |
| 0.0 | 29.0 | 6235 | 2.7270 | 0.7778 |
| 0.0 | 30.0 | 6450 | 3.4979 | 0.7111 |
| 0.0 | 31.0 | 6665 | 3.4752 | 0.7333 |
| 0.0 | 32.0 | 6880 | 3.4952 | 0.7333 |
| 0.0 | 33.0 | 7095 | 3.5111 | 0.7333 |
| 0.0 | 34.0 | 7310 | 3.5230 | 0.7333 |
| 0.0 | 35.0 | 7525 | 3.5422 | 0.7333 |
| 0.0 | 36.0 | 7740 | 3.5606 | 0.7333 |
| 0.0 | 37.0 | 7955 | 3.5754 | 0.7333 |
| 0.0 | 38.0 | 8170 | 3.5859 | 0.7333 |
| 0.0 | 39.0 | 8385 | 3.5773 | 0.7333 |
| 0.0 | 40.0 | 8600 | 4.7039 | 0.6 |
| 0.0 | 41.0 | 8815 | 4.7831 | 0.6 |
| 0.0 | 42.0 | 9030 | 4.4812 | 0.6667 |
| 0.0 | 43.0 | 9245 | 4.4224 | 0.6889 |
| 0.0 | 44.0 | 9460 | 4.4294 | 0.6889 |
| 0.0 | 45.0 | 9675 | 4.4285 | 0.6889 |
| 0.0 | 46.0 | 9890 | 4.4304 | 0.6889 |
| 0.0 | 47.0 | 10105 | 4.4476 | 0.6889 |
| 0.0 | 48.0 | 10320 | 4.4513 | 0.6889 |
| 0.0 | 49.0 | 10535 | 4.4531 | 0.6889 |
| 0.0 | 50.0 | 10750 | 4.4537 | 0.6889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-31b7i-w1ict
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.4175724387168884
f1_macro: 0.7720961603314546
f1_micro: 0.8767123287671232
f1_weighted: 0.8574316899860817
precision_macro: 0.8490555071200232
precision_micro: 0.8767123287671232
precision_weighted: 0.8791869200176757
recall_macro: 0.7687687687687688
recall_micro: 0.8767123287671232
recall_weighted: 0.8767123287671232
accuracy: 0.8767123287671232
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_beit_large_adamax_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_001_fold3
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1862
- Accuracy: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3341 | 1.0 | 217 | 1.0412 | 0.6512 |
| 0.1567 | 2.0 | 434 | 0.8679 | 0.7907 |
| 0.1261 | 3.0 | 651 | 0.9519 | 0.7907 |
| 0.0451 | 4.0 | 868 | 1.0258 | 0.7674 |
| 0.0494 | 5.0 | 1085 | 1.2986 | 0.8140 |
| 0.0356 | 6.0 | 1302 | 1.5169 | 0.7442 |
| 0.0142 | 7.0 | 1519 | 1.4785 | 0.7907 |
| 0.048 | 8.0 | 1736 | 1.4460 | 0.8140 |
| 0.0363 | 9.0 | 1953 | 1.0943 | 0.7674 |
| 0.0409 | 10.0 | 2170 | 1.6345 | 0.7907 |
| 0.0021 | 11.0 | 2387 | 1.2558 | 0.8140 |
| 0.0072 | 12.0 | 2604 | 1.1994 | 0.8372 |
| 0.0193 | 13.0 | 2821 | 1.2732 | 0.8372 |
| 0.0006 | 14.0 | 3038 | 1.5708 | 0.7674 |
| 0.0013 | 15.0 | 3255 | 1.1380 | 0.8837 |
| 0.0001 | 16.0 | 3472 | 1.3578 | 0.8837 |
| 0.0 | 17.0 | 3689 | 1.3940 | 0.8837 |
| 0.0 | 18.0 | 3906 | 1.4630 | 0.8605 |
| 0.0 | 19.0 | 4123 | 1.4804 | 0.8140 |
| 0.0 | 20.0 | 4340 | 1.5039 | 0.8372 |
| 0.0 | 21.0 | 4557 | 1.5153 | 0.8605 |
| 0.0 | 22.0 | 4774 | 1.6110 | 0.8372 |
| 0.0 | 23.0 | 4991 | 1.6351 | 0.8372 |
| 0.0 | 24.0 | 5208 | 1.6586 | 0.8372 |
| 0.0 | 25.0 | 5425 | 1.6837 | 0.8605 |
| 0.0 | 26.0 | 5642 | 2.1644 | 0.8140 |
| 0.0 | 27.0 | 5859 | 1.8231 | 0.8372 |
| 0.0 | 28.0 | 6076 | 1.8592 | 0.8837 |
| 0.0 | 29.0 | 6293 | 2.3766 | 0.7907 |
| 0.0004 | 30.0 | 6510 | 2.2802 | 0.7674 |
| 0.0 | 31.0 | 6727 | 2.0919 | 0.7907 |
| 0.0 | 32.0 | 6944 | 2.0989 | 0.7907 |
| 0.0 | 33.0 | 7161 | 2.1214 | 0.7907 |
| 0.0 | 34.0 | 7378 | 2.1583 | 0.7907 |
| 0.0 | 35.0 | 7595 | 2.1876 | 0.7907 |
| 0.0 | 36.0 | 7812 | 2.1795 | 0.7907 |
| 0.0007 | 37.0 | 8029 | 3.1536 | 0.7674 |
| 0.0 | 38.0 | 8246 | 3.0845 | 0.7674 |
| 0.0 | 39.0 | 8463 | 2.9748 | 0.7907 |
| 0.0 | 40.0 | 8680 | 2.9984 | 0.7907 |
| 0.0 | 41.0 | 8897 | 3.0029 | 0.7907 |
| 0.0 | 42.0 | 9114 | 3.0143 | 0.7907 |
| 0.0 | 43.0 | 9331 | 3.0354 | 0.7907 |
| 0.0 | 44.0 | 9548 | 3.0480 | 0.7907 |
| 0.0 | 45.0 | 9765 | 3.0564 | 0.7907 |
| 0.0 | 46.0 | 9982 | 3.1685 | 0.7674 |
| 0.0 | 47.0 | 10199 | 3.1763 | 0.7674 |
| 0.0 | 48.0 | 10416 | 3.1810 | 0.7674 |
| 0.0 | 49.0 | 10633 | 3.1846 | 0.7674 |
| 0.0 | 50.0 | 10850 | 3.1862 | 0.7674 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_001_fold4
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4622 | 1.0 | 219 | 0.6192 | 0.7143 |
| 0.1812 | 2.0 | 438 | 0.3334 | 0.8810 |
| 0.1061 | 3.0 | 657 | 0.3100 | 0.8810 |
| 0.061 | 4.0 | 876 | 0.3909 | 0.9048 |
| 0.07 | 5.0 | 1095 | 0.5029 | 0.8095 |
| 0.0116 | 6.0 | 1314 | 0.1841 | 0.9286 |
| 0.0286 | 7.0 | 1533 | 0.1625 | 0.9524 |
| 0.0589 | 8.0 | 1752 | 0.3628 | 0.9286 |
| 0.0111 | 9.0 | 1971 | 0.1004 | 0.9762 |
| 0.0199 | 10.0 | 2190 | 0.2149 | 0.9524 |
| 0.0026 | 11.0 | 2409 | 0.2299 | 0.9524 |
| 0.003 | 12.0 | 2628 | 0.0798 | 0.9524 |
| 0.0002 | 13.0 | 2847 | 0.3767 | 0.9524 |
| 0.0 | 14.0 | 3066 | 0.3423 | 0.9524 |
| 0.0 | 15.0 | 3285 | 0.3097 | 0.9524 |
| 0.0 | 16.0 | 3504 | 0.3620 | 0.9524 |
| 0.0 | 17.0 | 3723 | 0.3599 | 0.9524 |
| 0.0109 | 18.0 | 3942 | 1.0112 | 0.8810 |
| 0.0058 | 19.0 | 4161 | 0.3536 | 0.9286 |
| 0.0 | 20.0 | 4380 | 0.1749 | 0.9524 |
| 0.0 | 21.0 | 4599 | 0.1549 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.1579 | 0.9762 |
| 0.0001 | 23.0 | 5037 | 0.2020 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.1981 | 0.9524 |
| 0.0 | 25.0 | 5475 | 0.2004 | 0.9524 |
| 0.0 | 26.0 | 5694 | 0.2385 | 0.9524 |
| 0.0 | 27.0 | 5913 | 0.2312 | 0.9762 |
| 0.0 | 28.0 | 6132 | 0.2326 | 0.9524 |
| 0.0 | 29.0 | 6351 | 0.2329 | 0.9762 |
| 0.0 | 30.0 | 6570 | 0.2354 | 0.9762 |
| 0.0 | 31.0 | 6789 | 0.2406 | 0.9762 |
| 0.0 | 32.0 | 7008 | 0.1614 | 0.9524 |
| 0.0 | 33.0 | 7227 | 0.7242 | 0.8810 |
| 0.0 | 34.0 | 7446 | 0.6237 | 0.9048 |
| 0.0 | 35.0 | 7665 | 0.2046 | 0.9762 |
| 0.0 | 36.0 | 7884 | 0.3311 | 0.9524 |
| 0.0 | 37.0 | 8103 | 0.0102 | 1.0 |
| 0.0 | 38.0 | 8322 | 0.0205 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.4064 | 0.9286 |
| 0.0 | 40.0 | 8760 | 0.2152 | 0.9524 |
| 0.0 | 41.0 | 8979 | 0.0320 | 0.9762 |
| 0.0 | 42.0 | 9198 | 0.0414 | 0.9762 |
| 0.0 | 43.0 | 9417 | 0.0410 | 0.9762 |
| 0.0 | 44.0 | 9636 | 0.0475 | 0.9762 |
| 0.0 | 45.0 | 9855 | 0.0475 | 0.9762 |
| 0.0 | 46.0 | 10074 | 0.0463 | 0.9762 |
| 0.0 | 47.0 | 10293 | 0.0463 | 0.9762 |
| 0.0 | 48.0 | 10512 | 0.0476 | 0.9762 |
| 0.0 | 49.0 | 10731 | 0.0481 | 0.9762 |
| 0.0 | 50.0 | 10950 | 0.0452 | 0.9762 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_001_fold5
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1906
- Accuracy: 0.8049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3728 | 1.0 | 220 | 0.2484 | 0.9024 |
| 0.2424 | 2.0 | 440 | 1.0593 | 0.7805 |
| 0.1221 | 3.0 | 660 | 0.9944 | 0.7317 |
| 0.0746 | 4.0 | 880 | 1.4179 | 0.7073 |
| 0.0501 | 5.0 | 1100 | 0.6557 | 0.8049 |
| 0.0914 | 6.0 | 1320 | 1.5051 | 0.7073 |
| 0.0408 | 7.0 | 1540 | 0.1238 | 0.9512 |
| 0.0281 | 8.0 | 1760 | 0.6572 | 0.8537 |
| 0.0024 | 9.0 | 1980 | 0.9478 | 0.8049 |
| 0.0097 | 10.0 | 2200 | 0.6899 | 0.8537 |
| 0.0507 | 11.0 | 2420 | 1.0591 | 0.8049 |
| 0.0001 | 12.0 | 2640 | 0.9070 | 0.8780 |
| 0.0056 | 13.0 | 2860 | 1.1233 | 0.7805 |
| 0.0168 | 14.0 | 3080 | 1.3279 | 0.8049 |
| 0.0205 | 15.0 | 3300 | 1.4696 | 0.8049 |
| 0.0004 | 16.0 | 3520 | 1.8691 | 0.7561 |
| 0.0001 | 17.0 | 3740 | 1.4193 | 0.8293 |
| 0.0029 | 18.0 | 3960 | 1.9471 | 0.8049 |
| 0.0 | 19.0 | 4180 | 1.9190 | 0.7317 |
| 0.0 | 20.0 | 4400 | 2.0689 | 0.7317 |
| 0.0021 | 21.0 | 4620 | 0.3369 | 0.9024 |
| 0.0001 | 22.0 | 4840 | 0.9862 | 0.8537 |
| 0.0001 | 23.0 | 5060 | 0.9863 | 0.8780 |
| 0.0118 | 24.0 | 5280 | 1.0405 | 0.8049 |
| 0.0016 | 25.0 | 5500 | 1.4400 | 0.7805 |
| 0.0379 | 26.0 | 5720 | 1.0773 | 0.8537 |
| 0.0 | 27.0 | 5940 | 0.9902 | 0.8537 |
| 0.0 | 28.0 | 6160 | 0.9125 | 0.8293 |
| 0.0 | 29.0 | 6380 | 0.8492 | 0.8293 |
| 0.0 | 30.0 | 6600 | 1.3170 | 0.8293 |
| 0.0 | 31.0 | 6820 | 1.3145 | 0.7805 |
| 0.0 | 32.0 | 7040 | 0.7274 | 0.8780 |
| 0.0 | 33.0 | 7260 | 0.7992 | 0.8780 |
| 0.0 | 34.0 | 7480 | 0.7001 | 0.9024 |
| 0.0 | 35.0 | 7700 | 0.7059 | 0.9024 |
| 0.0 | 36.0 | 7920 | 0.7509 | 0.9024 |
| 0.0 | 37.0 | 8140 | 0.7646 | 0.9024 |
| 0.0 | 38.0 | 8360 | 1.2149 | 0.8293 |
| 0.0 | 39.0 | 8580 | 1.2146 | 0.8293 |
| 0.0 | 40.0 | 8800 | 1.2180 | 0.8293 |
| 0.0 | 41.0 | 9020 | 1.1864 | 0.8049 |
| 0.0 | 42.0 | 9240 | 1.1736 | 0.8049 |
| 0.0 | 43.0 | 9460 | 1.1601 | 0.8049 |
| 0.0 | 44.0 | 9680 | 1.1683 | 0.8049 |
| 0.0 | 45.0 | 9900 | 1.1682 | 0.8049 |
| 0.0 | 46.0 | 10120 | 1.1690 | 0.8049 |
| 0.0 | 47.0 | 10340 | 1.1691 | 0.8049 |
| 0.0 | 48.0 | 10560 | 1.1738 | 0.8049 |
| 0.0 | 49.0 | 10780 | 1.1753 | 0.8049 |
| 0.0 | 50.0 | 11000 | 1.1906 | 0.8049 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
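The five hushem_40x_beit_large_adamax_001_fold1–fold5 cards above form one 5-fold cross-validation run, but each card reports only its own fold. As a quick aggregation sketch (the accuracies are copied from the cards; the averaging itself is our addition):

```python
# Aggregate the final evaluation accuracies reported by the five fold cards above.
from statistics import mean, stdev

fold_accuracy = {
    "fold1": 0.7333, "fold2": 0.6889, "fold3": 0.7674,
    "fold4": 0.9762, "fold5": 0.8049,
}
print(f"mean:  {mean(fold_accuracy.values()):.4f}")   # ~0.7941
print(f"stdev: {stdev(fold_accuracy.values()):.4f}")
```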
moock/swinv2-tiny-patch4-window8-256-finetuned-gardner-exp-max
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-gardner-exp-max
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5312
- Accuracy: 0.8389
## Model description
Predicts the expansion grade (Gardner score) of an embryo from its image.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6068 | 0.97 | 14 | 1.5809 | 0.5415 |
| 1.56 | 2.0 | 29 | 1.2830 | 0.5415 |
| 1.1852 | 2.97 | 43 | 1.0794 | 0.5415 |
| 1.1132 | 4.0 | 58 | 0.9314 | 0.6488 |
| 0.9416 | 4.97 | 72 | 0.8935 | 0.6341 |
| 0.9143 | 6.0 | 87 | 0.8009 | 0.6829 |
| 0.8243 | 6.97 | 101 | 0.8067 | 0.6634 |
| 0.8171 | 8.0 | 116 | 0.7783 | 0.6780 |
| 0.7901 | 8.97 | 130 | 0.7871 | 0.6585 |
| 0.7944 | 10.0 | 145 | 0.7414 | 0.6976 |
| 0.7669 | 10.97 | 159 | 0.6977 | 0.7122 |
| 0.7478 | 12.0 | 174 | 0.7043 | 0.7122 |
| 0.766 | 12.97 | 188 | 0.7778 | 0.6585 |
| 0.7322 | 14.0 | 203 | 0.7504 | 0.6780 |
| 0.7242 | 14.97 | 217 | 0.7291 | 0.6829 |
| 0.7554 | 16.0 | 232 | 0.7694 | 0.6634 |
| 0.7422 | 16.97 | 246 | 0.7569 | 0.6829 |
| 0.7292 | 18.0 | 261 | 0.7389 | 0.6780 |
| 0.7354 | 18.97 | 275 | 0.6684 | 0.7122 |
| 0.6847 | 20.0 | 290 | 0.6821 | 0.7122 |
| 0.7231 | 20.97 | 304 | 0.6839 | 0.7024 |
| 0.6962 | 22.0 | 319 | 0.6958 | 0.6878 |
| 0.7079 | 22.97 | 333 | 0.7039 | 0.6878 |
| 0.7088 | 24.0 | 348 | 0.6974 | 0.6878 |
| 0.7106 | 24.14 | 350 | 0.6975 | 0.6878 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"0",
"1",
"2",
"3",
"4"
] |
hkivancoral/hushem_40x_beit_large_adamax_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_00001_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6462
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0076 | 1.0 | 215 | 0.3905 | 0.8222 |
| 0.0024 | 2.0 | 430 | 0.2614 | 0.8889 |
| 0.0002 | 3.0 | 645 | 0.3363 | 0.8889 |
| 0.0012 | 4.0 | 860 | 0.2896 | 0.8889 |
| 0.0012 | 5.0 | 1075 | 0.4297 | 0.8667 |
| 0.0001 | 6.0 | 1290 | 0.4692 | 0.8667 |
| 0.0001 | 7.0 | 1505 | 0.4005 | 0.8444 |
| 0.0001 | 8.0 | 1720 | 0.5343 | 0.8444 |
| 0.0004 | 9.0 | 1935 | 0.6104 | 0.8667 |
| 0.0 | 10.0 | 2150 | 0.6182 | 0.8667 |
| 0.0 | 11.0 | 2365 | 0.5923 | 0.8 |
| 0.0 | 12.0 | 2580 | 0.5080 | 0.8667 |
| 0.0 | 13.0 | 2795 | 0.3680 | 0.8444 |
| 0.0 | 14.0 | 3010 | 0.5787 | 0.8667 |
| 0.0 | 15.0 | 3225 | 0.5592 | 0.8667 |
| 0.0 | 16.0 | 3440 | 0.6399 | 0.8667 |
| 0.0 | 17.0 | 3655 | 0.7482 | 0.8444 |
| 0.0 | 18.0 | 3870 | 0.6724 | 0.8444 |
| 0.0 | 19.0 | 4085 | 0.7872 | 0.8222 |
| 0.0 | 20.0 | 4300 | 0.5260 | 0.8667 |
| 0.0 | 21.0 | 4515 | 0.5473 | 0.8667 |
| 0.0 | 22.0 | 4730 | 0.7409 | 0.8222 |
| 0.0 | 23.0 | 4945 | 0.4466 | 0.8667 |
| 0.0 | 24.0 | 5160 | 0.4166 | 0.8889 |
| 0.0 | 25.0 | 5375 | 0.5144 | 0.8667 |
| 0.0 | 26.0 | 5590 | 0.4960 | 0.8889 |
| 0.0 | 27.0 | 5805 | 0.4646 | 0.8889 |
| 0.0 | 28.0 | 6020 | 0.5759 | 0.8444 |
| 0.0 | 29.0 | 6235 | 0.7279 | 0.8444 |
| 0.0 | 30.0 | 6450 | 0.5042 | 0.8889 |
| 0.0 | 31.0 | 6665 | 0.6050 | 0.8667 |
| 0.0 | 32.0 | 6880 | 0.6602 | 0.8222 |
| 0.0 | 33.0 | 7095 | 0.6359 | 0.8222 |
| 0.0 | 34.0 | 7310 | 0.5725 | 0.8667 |
| 0.0 | 35.0 | 7525 | 0.6179 | 0.8444 |
| 0.0 | 36.0 | 7740 | 0.6579 | 0.8889 |
| 0.0 | 37.0 | 7955 | 0.7260 | 0.8222 |
| 0.0 | 38.0 | 8170 | 0.6510 | 0.8667 |
| 0.0 | 39.0 | 8385 | 0.6445 | 0.8667 |
| 0.0 | 40.0 | 8600 | 0.6364 | 0.8444 |
| 0.0001 | 41.0 | 8815 | 0.6206 | 0.8444 |
| 0.0 | 42.0 | 9030 | 0.6766 | 0.8667 |
| 0.0 | 43.0 | 9245 | 0.7659 | 0.8667 |
| 0.0003 | 44.0 | 9460 | 0.7574 | 0.8667 |
| 0.0 | 45.0 | 9675 | 0.7168 | 0.8667 |
| 0.0 | 46.0 | 9890 | 0.6864 | 0.8667 |
| 0.0 | 47.0 | 10105 | 0.6531 | 0.8667 |
| 0.0 | 48.0 | 10320 | 0.6563 | 0.8667 |
| 0.0 | 49.0 | 10535 | 0.6461 | 0.8667 |
| 0.0001 | 50.0 | 10750 | 0.6462 | 0.8667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_00001_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5239
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0134 | 1.0 | 215 | 0.7143 | 0.7556 |
| 0.0005 | 2.0 | 430 | 0.8825 | 0.8444 |
| 0.0002 | 3.0 | 645 | 1.1645 | 0.8 |
| 0.0002 | 4.0 | 860 | 1.1853 | 0.8 |
| 0.0001 | 5.0 | 1075 | 1.2007 | 0.8 |
| 0.0001 | 6.0 | 1290 | 1.1677 | 0.8222 |
| 0.0006 | 7.0 | 1505 | 1.1023 | 0.8222 |
| 0.0001 | 8.0 | 1720 | 1.5156 | 0.7333 |
| 0.0 | 9.0 | 1935 | 1.1716 | 0.8222 |
| 0.0 | 10.0 | 2150 | 1.2763 | 0.8222 |
| 0.0 | 11.0 | 2365 | 1.1176 | 0.8444 |
| 0.0 | 12.0 | 2580 | 1.2233 | 0.8444 |
| 0.0023 | 13.0 | 2795 | 1.5312 | 0.8 |
| 0.0 | 14.0 | 3010 | 1.3548 | 0.8 |
| 0.0 | 15.0 | 3225 | 1.2898 | 0.8222 |
| 0.0 | 16.0 | 3440 | 1.2810 | 0.8222 |
| 0.0 | 17.0 | 3655 | 1.3480 | 0.8222 |
| 0.0 | 18.0 | 3870 | 1.2231 | 0.8444 |
| 0.0 | 19.0 | 4085 | 1.2120 | 0.8444 |
| 0.0 | 20.0 | 4300 | 1.3990 | 0.8222 |
| 0.0 | 21.0 | 4515 | 1.3925 | 0.8222 |
| 0.0 | 22.0 | 4730 | 1.3055 | 0.8444 |
| 0.0 | 23.0 | 4945 | 1.3624 | 0.8222 |
| 0.0 | 24.0 | 5160 | 1.3420 | 0.8222 |
| 0.0 | 25.0 | 5375 | 1.3903 | 0.8222 |
| 0.0 | 26.0 | 5590 | 1.3025 | 0.8444 |
| 0.0 | 27.0 | 5805 | 1.3676 | 0.8444 |
| 0.0 | 28.0 | 6020 | 1.3843 | 0.8444 |
| 0.0 | 29.0 | 6235 | 1.4718 | 0.8 |
| 0.0 | 30.0 | 6450 | 1.4946 | 0.8222 |
| 0.0 | 31.0 | 6665 | 1.5006 | 0.8222 |
| 0.0 | 32.0 | 6880 | 1.5270 | 0.8222 |
| 0.0 | 33.0 | 7095 | 1.6386 | 0.8 |
| 0.0 | 34.0 | 7310 | 1.5335 | 0.8222 |
| 0.0 | 35.0 | 7525 | 1.5020 | 0.8444 |
| 0.0 | 36.0 | 7740 | 1.5220 | 0.8444 |
| 0.0 | 37.0 | 7955 | 1.6305 | 0.8 |
| 0.0 | 38.0 | 8170 | 1.5482 | 0.8 |
| 0.0 | 39.0 | 8385 | 1.5491 | 0.8 |
| 0.0 | 40.0 | 8600 | 1.5716 | 0.8222 |
| 0.0 | 41.0 | 8815 | 1.5929 | 0.8222 |
| 0.0 | 42.0 | 9030 | 1.5745 | 0.8222 |
| 0.0 | 43.0 | 9245 | 1.4702 | 0.8444 |
| 0.0 | 44.0 | 9460 | 1.4777 | 0.8444 |
| 0.0 | 45.0 | 9675 | 1.4961 | 0.8444 |
| 0.0 | 46.0 | 9890 | 1.5108 | 0.8444 |
| 0.0 | 47.0 | 10105 | 1.5228 | 0.8444 |
| 0.0 | 48.0 | 10320 | 1.5215 | 0.8444 |
| 0.0 | 49.0 | 10535 | 1.5246 | 0.8444 |
| 0.0032 | 50.0 | 10750 | 1.5239 | 0.8444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_00001_fold3
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0094
- Accuracy: 0.8837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0088 | 1.0 | 217 | 0.5009 | 0.8605 |
| 0.0048 | 2.0 | 434 | 0.5720 | 0.8837 |
| 0.0002 | 3.0 | 651 | 0.6684 | 0.8605 |
| 0.0005 | 4.0 | 868 | 0.6185 | 0.8605 |
| 0.0001 | 5.0 | 1085 | 0.7115 | 0.8837 |
| 0.0002 | 6.0 | 1302 | 0.7630 | 0.8837 |
| 0.0001 | 7.0 | 1519 | 0.6588 | 0.8837 |
| 0.0 | 8.0 | 1736 | 0.6227 | 0.8837 |
| 0.0001 | 9.0 | 1953 | 0.5468 | 0.9070 |
| 0.0 | 10.0 | 2170 | 0.7021 | 0.8837 |
| 0.0 | 11.0 | 2387 | 0.7605 | 0.8605 |
| 0.0002 | 12.0 | 2604 | 0.7994 | 0.8837 |
| 0.0 | 13.0 | 2821 | 1.0881 | 0.8372 |
| 0.0002 | 14.0 | 3038 | 0.8413 | 0.8605 |
| 0.0002 | 15.0 | 3255 | 0.9237 | 0.8837 |
| 0.0 | 16.0 | 3472 | 0.9623 | 0.8605 |
| 0.0 | 17.0 | 3689 | 0.9912 | 0.8605 |
| 0.0001 | 18.0 | 3906 | 0.7287 | 0.9070 |
| 0.0 | 19.0 | 4123 | 0.9687 | 0.8372 |
| 0.0 | 20.0 | 4340 | 0.6790 | 0.9070 |
| 0.0 | 21.0 | 4557 | 0.8424 | 0.9070 |
| 0.0 | 22.0 | 4774 | 0.7674 | 0.9070 |
| 0.0 | 23.0 | 4991 | 0.8450 | 0.9070 |
| 0.0 | 24.0 | 5208 | 0.8947 | 0.8837 |
| 0.0 | 25.0 | 5425 | 0.8485 | 0.8837 |
| 0.0 | 26.0 | 5642 | 0.9138 | 0.8837 |
| 0.0 | 27.0 | 5859 | 0.9516 | 0.8837 |
| 0.0 | 28.0 | 6076 | 0.8628 | 0.9070 |
| 0.0 | 29.0 | 6293 | 0.9458 | 0.8837 |
| 0.0 | 30.0 | 6510 | 0.9582 | 0.8837 |
| 0.0 | 31.0 | 6727 | 1.1730 | 0.8837 |
| 0.0 | 32.0 | 6944 | 1.0331 | 0.8837 |
| 0.0 | 33.0 | 7161 | 1.1055 | 0.8605 |
| 0.0 | 34.0 | 7378 | 0.9893 | 0.8837 |
| 0.0 | 35.0 | 7595 | 1.0353 | 0.8837 |
| 0.0 | 36.0 | 7812 | 1.0373 | 0.8837 |
| 0.0 | 37.0 | 8029 | 1.0358 | 0.8837 |
| 0.0 | 38.0 | 8246 | 1.0426 | 0.8837 |
| 0.0 | 39.0 | 8463 | 1.1391 | 0.8837 |
| 0.0 | 40.0 | 8680 | 1.0647 | 0.8837 |
| 0.0 | 41.0 | 8897 | 1.0082 | 0.8837 |
| 0.0 | 42.0 | 9114 | 1.0681 | 0.8837 |
| 0.0 | 43.0 | 9331 | 1.0189 | 0.8837 |
| 0.0 | 44.0 | 9548 | 1.0129 | 0.8837 |
| 0.0 | 45.0 | 9765 | 1.0237 | 0.8837 |
| 0.0 | 46.0 | 9982 | 1.0239 | 0.8837 |
| 0.0 | 47.0 | 10199 | 1.0008 | 0.8837 |
| 0.0 | 48.0 | 10416 | 1.0075 | 0.8837 |
| 0.0001 | 49.0 | 10633 | 1.0115 | 0.8837 |
| 0.0 | 50.0 | 10850 | 1.0094 | 0.8837 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_00001_fold4
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0038
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0209 | 1.0 | 219 | 0.0613 | 0.9762 |
| 0.0077 | 2.0 | 438 | 0.0174 | 1.0 |
| 0.0003 | 3.0 | 657 | 0.0464 | 0.9762 |
| 0.0004 | 4.0 | 876 | 0.0760 | 0.9762 |
| 0.0062 | 5.0 | 1095 | 0.0813 | 0.9762 |
| 0.0001 | 6.0 | 1314 | 0.0164 | 1.0 |
| 0.0002 | 7.0 | 1533 | 0.0181 | 1.0 |
| 0.0002 | 8.0 | 1752 | 0.0299 | 0.9762 |
| 0.0 | 9.0 | 1971 | 0.0028 | 1.0 |
| 0.0001 | 10.0 | 2190 | 0.0137 | 1.0 |
| 0.0001 | 11.0 | 2409 | 0.0028 | 1.0 |
| 0.0 | 12.0 | 2628 | 0.0068 | 1.0 |
| 0.0 | 13.0 | 2847 | 0.0011 | 1.0 |
| 0.0 | 14.0 | 3066 | 0.0415 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.0029 | 1.0 |
| 0.0003 | 16.0 | 3504 | 0.0012 | 1.0 |
| 0.0 | 17.0 | 3723 | 0.0002 | 1.0 |
| 0.0 | 18.0 | 3942 | 0.0203 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.0016 | 1.0 |
| 0.0 | 20.0 | 4380 | 0.0412 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.0007 | 1.0 |
| 0.0 | 22.0 | 4818 | 0.0079 | 1.0 |
| 0.0 | 23.0 | 5037 | 0.0005 | 1.0 |
| 0.0001 | 24.0 | 5256 | 0.0050 | 1.0 |
| 0.0 | 25.0 | 5475 | 0.0077 | 1.0 |
| 0.0 | 26.0 | 5694 | 0.0021 | 1.0 |
| 0.0 | 27.0 | 5913 | 0.0004 | 1.0 |
| 0.0 | 28.0 | 6132 | 0.0003 | 1.0 |
| 0.0 | 29.0 | 6351 | 0.0021 | 1.0 |
| 0.0 | 30.0 | 6570 | 0.0005 | 1.0 |
| 0.0 | 31.0 | 6789 | 0.0002 | 1.0 |
| 0.0 | 32.0 | 7008 | 0.0010 | 1.0 |
| 0.0 | 33.0 | 7227 | 0.0045 | 1.0 |
| 0.0 | 34.0 | 7446 | 0.0082 | 1.0 |
| 0.0 | 35.0 | 7665 | 0.0066 | 1.0 |
| 0.0 | 36.0 | 7884 | 0.0009 | 1.0 |
| 0.0 | 37.0 | 8103 | 0.0004 | 1.0 |
| 0.0 | 38.0 | 8322 | 0.0004 | 1.0 |
| 0.0 | 39.0 | 8541 | 0.0101 | 1.0 |
| 0.0 | 40.0 | 8760 | 0.0083 | 1.0 |
| 0.0 | 41.0 | 8979 | 0.0080 | 1.0 |
| 0.0001 | 42.0 | 9198 | 0.0073 | 1.0 |
| 0.0 | 43.0 | 9417 | 0.0042 | 1.0 |
| 0.0 | 44.0 | 9636 | 0.0040 | 1.0 |
| 0.0 | 45.0 | 9855 | 0.0049 | 1.0 |
| 0.0 | 46.0 | 10074 | 0.0031 | 1.0 |
| 0.0 | 47.0 | 10293 | 0.0031 | 1.0 |
| 0.0 | 48.0 | 10512 | 0.0039 | 1.0 |
| 0.0 | 49.0 | 10731 | 0.0040 | 1.0 |
| 0.0 | 50.0 | 10950 | 0.0038 | 1.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
chanhua/autotrain-krvpy-mebgz
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.0846457481384277
f1_macro: 0.26666666666666666
f1_micro: 0.5
f1_weighted: 0.4
precision_macro: 0.2222222222222222
precision_micro: 0.5
precision_weighted: 0.3333333333333333
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
chanhua/autotrain-rnjto-gg00g
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.0826029777526855
f1_macro: 0.5555555555555555
f1_micro: 0.6666666666666666
f1_weighted: 0.5555555555555555
precision_macro: 0.5
precision_micro: 0.6666666666666666
precision_weighted: 0.5
recall_macro: 0.6666666666666666
recall_micro: 0.6666666666666666
recall_weighted: 0.6666666666666666
accuracy: 0.6666666666666666
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_beit_large_adamax_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_00001_fold5
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3633
- Accuracy: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0116 | 1.0 | 220 | 0.3464 | 0.8780 |
| 0.0008 | 2.0 | 440 | 0.2183 | 0.9512 |
| 0.0009 | 3.0 | 660 | 0.2250 | 0.9268 |
| 0.0006 | 4.0 | 880 | 0.2906 | 0.9268 |
| 0.0001 | 5.0 | 1100 | 0.3626 | 0.9268 |
| 0.0004 | 6.0 | 1320 | 0.2649 | 0.9512 |
| 0.0 | 7.0 | 1540 | 0.4436 | 0.8780 |
| 0.0004 | 8.0 | 1760 | 0.4765 | 0.9024 |
| 0.0001 | 9.0 | 1980 | 0.4469 | 0.9024 |
| 0.0 | 10.0 | 2200 | 0.4327 | 0.8780 |
| 0.0 | 11.0 | 2420 | 0.4850 | 0.9268 |
| 0.0 | 12.0 | 2640 | 0.4853 | 0.8780 |
| 0.0 | 13.0 | 2860 | 0.5574 | 0.8537 |
| 0.0 | 14.0 | 3080 | 0.5001 | 0.9024 |
| 0.0 | 15.0 | 3300 | 0.4709 | 0.8537 |
| 0.0 | 16.0 | 3520 | 0.6659 | 0.8293 |
| 0.0 | 17.0 | 3740 | 0.8132 | 0.8293 |
| 0.0 | 18.0 | 3960 | 0.7367 | 0.8780 |
| 0.0005 | 19.0 | 4180 | 0.2607 | 0.9512 |
| 0.0 | 20.0 | 4400 | 0.3217 | 0.9512 |
| 0.0 | 21.0 | 4620 | 0.2845 | 0.9512 |
| 0.0 | 22.0 | 4840 | 0.5419 | 0.8780 |
| 0.0 | 23.0 | 5060 | 0.4106 | 0.9024 |
| 0.0 | 24.0 | 5280 | 0.3477 | 0.9024 |
| 0.0 | 25.0 | 5500 | 0.4515 | 0.8780 |
| 0.0 | 26.0 | 5720 | 0.3857 | 0.9024 |
| 0.0 | 27.0 | 5940 | 0.4374 | 0.9024 |
| 0.0 | 28.0 | 6160 | 0.5116 | 0.8780 |
| 0.0 | 29.0 | 6380 | 0.6248 | 0.8537 |
| 0.0 | 30.0 | 6600 | 0.5380 | 0.8780 |
| 0.0 | 31.0 | 6820 | 0.5231 | 0.8780 |
| 0.0 | 32.0 | 7040 | 0.5186 | 0.8780 |
| 0.0 | 33.0 | 7260 | 0.4301 | 0.9024 |
| 0.0 | 34.0 | 7480 | 0.4552 | 0.9024 |
| 0.0 | 35.0 | 7700 | 0.4309 | 0.9024 |
| 0.0 | 36.0 | 7920 | 0.5631 | 0.8780 |
| 0.0 | 37.0 | 8140 | 0.5187 | 0.8780 |
| 0.0 | 38.0 | 8360 | 0.3960 | 0.9268 |
| 0.0 | 39.0 | 8580 | 0.5497 | 0.9024 |
| 0.0 | 40.0 | 8800 | 0.4890 | 0.9024 |
| 0.0 | 41.0 | 9020 | 0.3987 | 0.9268 |
| 0.0 | 42.0 | 9240 | 0.4184 | 0.9268 |
| 0.0 | 43.0 | 9460 | 0.3286 | 0.9512 |
| 0.0 | 44.0 | 9680 | 0.3483 | 0.9268 |
| 0.0 | 45.0 | 9900 | 0.3614 | 0.9268 |
| 0.0 | 46.0 | 10120 | 0.3697 | 0.9268 |
| 0.0 | 47.0 | 10340 | 0.3577 | 0.9512 |
| 0.0 | 48.0 | 10560 | 0.3575 | 0.9512 |
| 0.0 | 49.0 | 10780 | 0.3626 | 0.9268 |
| 0.0 | 50.0 | 11000 | 0.3633 | 0.9268 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
chanhua/autotrain-izefx-v3qh0
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.9459153413772583
f1_macro: 0.26666666666666666
f1_micro: 0.5
f1_weighted: 0.4
precision_macro: 0.2222222222222222
precision_micro: 0.5
precision_weighted: 0.3333333333333333
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
tfyxj/autotrain-bl992-mguwi
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.12179487179487179
f1_micro: 0.2235294117647059
f1_weighted: 0.08167420814479637
precision_macro: 0.07450980392156863
precision_micro: 0.2235294117647059
precision_weighted: 0.04996539792387543
recall_macro: 0.3333333333333333
recall_micro: 0.2235294117647059
recall_weighted: 0.2235294117647059
accuracy: 0.2235294117647059
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
tiennguyenbnbk/teacher-status-van-tiny-256-0
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher-status-van-tiny-256-0
This model is a fine-tuned version of [Visual-Attention-Network/van-tiny](https://huggingface.co/Visual-Attention-Network/van-tiny) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0672
- Accuracy: 0.9778
- F1 Score: 0.9841
- Recall: 0.9893
- Precision: 0.9789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 0.6788 | 0.99 | 47 | 0.6437 | 0.6933 | 0.8189 | 1.0 | 0.6933 |
| 0.463 | 2.0 | 95 | 0.3406 | 0.8756 | 0.9162 | 0.9808 | 0.8596 |
| 0.3596 | 2.99 | 142 | 0.2072 | 0.9304 | 0.9504 | 0.9615 | 0.9395 |
| 0.3505 | 4.0 | 190 | 0.1564 | 0.9526 | 0.9661 | 0.9744 | 0.9580 |
| 0.2962 | 4.99 | 237 | 0.1262 | 0.9556 | 0.9681 | 0.9722 | 0.9640 |
| 0.2762 | 6.0 | 285 | 0.1038 | 0.9644 | 0.9745 | 0.9808 | 0.9684 |
| 0.2604 | 6.99 | 332 | 0.0932 | 0.9719 | 0.9798 | 0.9829 | 0.9766 |
| 0.2427 | 8.0 | 380 | 0.0928 | 0.9719 | 0.9797 | 0.9786 | 0.9807 |
| 0.2465 | 8.99 | 427 | 0.0898 | 0.9719 | 0.9797 | 0.9786 | 0.9807 |
| 0.2519 | 10.0 | 475 | 0.0913 | 0.9689 | 0.9775 | 0.9765 | 0.9786 |
| 0.2258 | 10.99 | 522 | 0.0847 | 0.9733 | 0.9809 | 0.9872 | 0.9747 |
| 0.2184 | 12.0 | 570 | 0.0812 | 0.9793 | 0.9851 | 0.9893 | 0.9809 |
| 0.2208 | 12.99 | 617 | 0.0693 | 0.9807 | 0.9861 | 0.9872 | 0.9851 |
| 0.2201 | 14.0 | 665 | 0.0628 | 0.9763 | 0.9829 | 0.9850 | 0.9809 |
| 0.2251 | 14.99 | 712 | 0.0811 | 0.9733 | 0.9810 | 0.9915 | 0.9707 |
| 0.2135 | 16.0 | 760 | 0.0718 | 0.9763 | 0.9829 | 0.9850 | 0.9809 |
| 0.1851 | 16.99 | 807 | 0.0791 | 0.9763 | 0.9830 | 0.9872 | 0.9788 |
| 0.2152 | 18.0 | 855 | 0.0737 | 0.9748 | 0.9818 | 0.9808 | 0.9829 |
| 0.1871 | 18.99 | 902 | 0.0814 | 0.9763 | 0.9830 | 0.9872 | 0.9788 |
| 0.1714 | 20.0 | 950 | 0.0692 | 0.9763 | 0.9830 | 0.9893 | 0.9768 |
| 0.188 | 20.99 | 997 | 0.0641 | 0.9778 | 0.9840 | 0.9850 | 0.9829 |
| 0.191 | 22.0 | 1045 | 0.0644 | 0.9793 | 0.9851 | 0.9872 | 0.9830 |
| 0.2025 | 22.99 | 1092 | 0.0675 | 0.9793 | 0.9850 | 0.9829 | 0.9871 |
| 0.1753 | 24.0 | 1140 | 0.0655 | 0.9822 | 0.9872 | 0.9893 | 0.9851 |
| 0.1857 | 24.99 | 1187 | 0.0731 | 0.9793 | 0.9851 | 0.9915 | 0.9789 |
| 0.2007 | 26.0 | 1235 | 0.0677 | 0.9793 | 0.9851 | 0.9915 | 0.9789 |
| 0.2086 | 26.99 | 1282 | 0.0640 | 0.9793 | 0.9851 | 0.9893 | 0.9809 |
| 0.1666 | 28.0 | 1330 | 0.0712 | 0.9778 | 0.9841 | 0.9893 | 0.9789 |
| 0.157 | 28.99 | 1377 | 0.0661 | 0.9807 | 0.9862 | 0.9893 | 0.9830 |
| 0.1758 | 29.68 | 1410 | 0.0672 | 0.9778 | 0.9841 | 0.9893 | 0.9789 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"abnormal",
"normal"
] |
hkivancoral/smids_10x_beit_large_adamax_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_001_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9751
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3471 | 1.0 | 751 | 0.3720 | 0.8631 |
| 0.2879 | 2.0 | 1502 | 0.4078 | 0.8364 |
| 0.2355 | 3.0 | 2253 | 0.4002 | 0.8831 |
| 0.2335 | 4.0 | 3004 | 0.2992 | 0.8831 |
| 0.1816 | 5.0 | 3755 | 0.3290 | 0.8965 |
| 0.1386 | 6.0 | 4506 | 0.3986 | 0.8898 |
| 0.1637 | 7.0 | 5257 | 0.4542 | 0.8681 |
| 0.0627 | 8.0 | 6008 | 0.4567 | 0.8965 |
| 0.0985 | 9.0 | 6759 | 0.3926 | 0.9015 |
| 0.1363 | 10.0 | 7510 | 0.4519 | 0.8848 |
| 0.0463 | 11.0 | 8261 | 0.5853 | 0.8898 |
| 0.023 | 12.0 | 9012 | 0.5711 | 0.8865 |
| 0.0292 | 13.0 | 9763 | 0.5829 | 0.8932 |
| 0.0137 | 14.0 | 10514 | 0.5739 | 0.8965 |
| 0.0034 | 15.0 | 11265 | 0.6922 | 0.8815 |
| 0.0201 | 16.0 | 12016 | 0.6833 | 0.8948 |
| 0.0068 | 17.0 | 12767 | 0.7845 | 0.8898 |
| 0.0084 | 18.0 | 13518 | 0.6851 | 0.8781 |
| 0.0033 | 19.0 | 14269 | 0.6219 | 0.8998 |
| 0.0023 | 20.0 | 15020 | 0.5986 | 0.8982 |
| 0.0011 | 21.0 | 15771 | 0.6825 | 0.8965 |
| 0.0011 | 22.0 | 16522 | 0.7971 | 0.8932 |
| 0.027 | 23.0 | 17273 | 0.5546 | 0.9098 |
| 0.0061 | 24.0 | 18024 | 0.6400 | 0.8932 |
| 0.0001 | 25.0 | 18775 | 0.6875 | 0.8965 |
| 0.0111 | 26.0 | 19526 | 0.7316 | 0.8965 |
| 0.0029 | 27.0 | 20277 | 0.8142 | 0.8865 |
| 0.0004 | 28.0 | 21028 | 0.7441 | 0.8915 |
| 0.0043 | 29.0 | 21779 | 0.7052 | 0.8965 |
| 0.0 | 30.0 | 22530 | 0.7049 | 0.9048 |
| 0.0 | 31.0 | 23281 | 0.8253 | 0.9149 |
| 0.0005 | 32.0 | 24032 | 0.6696 | 0.9065 |
| 0.0001 | 33.0 | 24783 | 0.8050 | 0.9065 |
| 0.0 | 34.0 | 25534 | 0.8833 | 0.9015 |
| 0.0 | 35.0 | 26285 | 0.8344 | 0.9032 |
| 0.0 | 36.0 | 27036 | 0.8190 | 0.8982 |
| 0.0 | 37.0 | 27787 | 0.8357 | 0.9032 |
| 0.0 | 38.0 | 28538 | 0.9401 | 0.9015 |
| 0.0 | 39.0 | 29289 | 0.7726 | 0.9115 |
| 0.0 | 40.0 | 30040 | 0.8975 | 0.8965 |
| 0.0 | 41.0 | 30791 | 0.8489 | 0.9065 |
| 0.0 | 42.0 | 31542 | 0.9519 | 0.8998 |
| 0.0 | 43.0 | 32293 | 0.9084 | 0.9032 |
| 0.0 | 44.0 | 33044 | 0.9097 | 0.9048 |
| 0.0 | 45.0 | 33795 | 0.9438 | 0.9098 |
| 0.0 | 46.0 | 34546 | 0.9461 | 0.9082 |
| 0.0 | 47.0 | 35297 | 0.9632 | 0.9048 |
| 0.0 | 48.0 | 36048 | 0.9598 | 0.9065 |
| 0.0 | 49.0 | 36799 | 0.9723 | 0.9048 |
| 0.0 | 50.0 | 37550 | 0.9751 | 0.9048 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_beit_large_adamax_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_001_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4058
- Accuracy: 0.8536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6414 | 1.0 | 750 | 0.6828 | 0.6639 |
| 0.5428 | 2.0 | 1500 | 0.5438 | 0.7754 |
| 0.4614 | 3.0 | 2250 | 0.4523 | 0.8336 |
| 0.4233 | 4.0 | 3000 | 0.4215 | 0.8236 |
| 0.4304 | 5.0 | 3750 | 0.4599 | 0.7903 |
| 0.3335 | 6.0 | 4500 | 0.4118 | 0.8336 |
| 0.3481 | 7.0 | 5250 | 0.4939 | 0.8253 |
| 0.3092 | 8.0 | 6000 | 0.4308 | 0.8486 |
| 0.2568 | 9.0 | 6750 | 0.4756 | 0.8353 |
| 0.331 | 10.0 | 7500 | 0.4715 | 0.8619 |
| 0.2403 | 11.0 | 8250 | 0.5349 | 0.8469 |
| 0.2162 | 12.0 | 9000 | 0.5922 | 0.8136 |
| 0.2489 | 13.0 | 9750 | 0.5818 | 0.8419 |
| 0.0972 | 14.0 | 10500 | 0.6218 | 0.8419 |
| 0.1212 | 15.0 | 11250 | 0.5371 | 0.8436 |
| 0.1175 | 16.0 | 12000 | 0.6818 | 0.8286 |
| 0.1011 | 17.0 | 12750 | 0.8719 | 0.8120 |
| 0.179 | 18.0 | 13500 | 0.7106 | 0.8486 |
| 0.1325 | 19.0 | 14250 | 0.6119 | 0.8552 |
| 0.111 | 20.0 | 15000 | 0.7905 | 0.8552 |
| 0.0431 | 21.0 | 15750 | 0.8636 | 0.8469 |
| 0.0973 | 22.0 | 16500 | 0.9921 | 0.8403 |
| 0.0529 | 23.0 | 17250 | 0.7563 | 0.8536 |
| 0.1212 | 24.0 | 18000 | 1.1228 | 0.8103 |
| 0.0377 | 25.0 | 18750 | 1.0572 | 0.8386 |
| 0.035 | 26.0 | 19500 | 0.8767 | 0.8536 |
| 0.0591 | 27.0 | 20250 | 0.9535 | 0.8652 |
| 0.0188 | 28.0 | 21000 | 1.1035 | 0.8536 |
| 0.0402 | 29.0 | 21750 | 1.1575 | 0.8586 |
| 0.0333 | 30.0 | 22500 | 1.1473 | 0.8669 |
| 0.0255 | 31.0 | 23250 | 1.0948 | 0.8469 |
| 0.0283 | 32.0 | 24000 | 1.4345 | 0.8419 |
| 0.0262 | 33.0 | 24750 | 1.1277 | 0.8552 |
| 0.0004 | 34.0 | 25500 | 1.2002 | 0.8519 |
| 0.0058 | 35.0 | 26250 | 1.1085 | 0.8586 |
| 0.0265 | 36.0 | 27000 | 1.2506 | 0.8436 |
| 0.0298 | 37.0 | 27750 | 1.1890 | 0.8602 |
| 0.0146 | 38.0 | 28500 | 1.5719 | 0.8486 |
| 0.0266 | 39.0 | 29250 | 1.2137 | 0.8486 |
| 0.0079 | 40.0 | 30000 | 1.2207 | 0.8586 |
| 0.0077 | 41.0 | 30750 | 1.1783 | 0.8636 |
| 0.0004 | 42.0 | 31500 | 1.2606 | 0.8552 |
| 0.0014 | 43.0 | 32250 | 1.6455 | 0.8453 |
| 0.0004 | 44.0 | 33000 | 1.4264 | 0.8436 |
| 0.015 | 45.0 | 33750 | 1.4403 | 0.8536 |
| 0.0002 | 46.0 | 34500 | 1.2419 | 0.8552 |
| 0.002 | 47.0 | 35250 | 1.3338 | 0.8536 |
| 0.0101 | 48.0 | 36000 | 1.5464 | 0.8469 |
| 0.0086 | 49.0 | 36750 | 1.3979 | 0.8536 |
| 0.0061 | 50.0 | 37500 | 1.4058 | 0.8536 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
gehug/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
yuanhuaisen/autotrain-7n06d-9fa4u
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.3695702850818634
f1_macro: 0.9200292729704493
f1_micro: 0.9333333333333333
f1_weighted: 0.9333333333333333
precision_macro: 0.9200292729704493
precision_micro: 0.9333333333333333
precision_weighted: 0.9333333333333333
recall_macro: 0.9200292729704493
recall_micro: 0.9333333333333333
recall_weighted: 0.9333333333333333
accuracy: 0.9333333333333333
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
Sai1212/swin-finetuned-class_mi_a4c
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-finetuned-class_mi_a4c
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 187691964027097262850048.0000
- Accuracy: 0.4324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
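For reference, `reduce_lr_on_plateau` corresponds to PyTorch's `ReduceLROnPlateau`, which lowers the learning rate only when a monitored metric stops improving. A minimal sketch with a placeholder model and metric, not the actual training script:
```python
import torch
from torch import nn

# Placeholder model/optimizer; only the scheduler behavior is being illustrated.
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")

for epoch in range(10):
    val_loss = 1.0            # stand-in for this epoch's validation loss
    scheduler.step(val_loss)  # LR is cut only if val_loss stops improving
```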
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-----------------------------:|:-----:|:----:|:-----------------------------:|:--------:|
| No log | 0.84 | 4 | 187691964027097262850048.0000 | 0.4324 |
| No log | 1.89 | 9 | 187691964027097262850048.0000 | 0.4324 |
| 197383793291431707148288.0000 | 2.95 | 14 | 187691964027097262850048.0000 | 0.4324 |
| 197383793291431707148288.0000 | 4.0 | 19 | 187691964027097262850048.0000 | 0.4324 |
| 201517492465567910592512.0000 | 4.84 | 23 | 187691964027097262850048.0000 | 0.4324 |
| 201517492465567910592512.0000 | 5.89 | 28 | 187691964027097262850048.0000 | 0.4324 |
| 190149859368370083725312.0000 | 6.95 | 33 | 187691964027097262850048.0000 | 0.4324 |
| 190149859368370083725312.0000 | 8.0 | 38 | 187691964027097262850048.0000 | 0.4324 |
| 203584363669914227048448.0000 | 8.42 | 40 | 187691964027097262850048.0000 | 0.4324 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"0",
"1"
] |
hkivancoral/smids_10x_beit_large_adamax_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_001_fold3
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0324
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
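For context, these settings map roughly onto `transformers.TrainingArguments` as sketched below; this is not the author's published training code, and `output_dir` is an assumption:
```python
from transformers import TrainingArguments

# Sketch only: reconstructs the hyperparameters listed above.
args = TrainingArguments(
    output_dir="smids_10x_beit_large_adamax_001_fold3",  # assumed name
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```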
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3438 | 1.0 | 750 | 0.3826 | 0.8517 |
| 0.2931 | 2.0 | 1500 | 0.3034 | 0.89 |
| 0.2025 | 3.0 | 2250 | 0.3971 | 0.8783 |
| 0.2582 | 4.0 | 3000 | 0.3086 | 0.8867 |
| 0.2483 | 5.0 | 3750 | 0.3346 | 0.8917 |
| 0.1606 | 6.0 | 4500 | 0.3908 | 0.8717 |
| 0.1236 | 7.0 | 5250 | 0.4286 | 0.8783 |
| 0.1197 | 8.0 | 6000 | 0.3887 | 0.9 |
| 0.0412 | 9.0 | 6750 | 0.4924 | 0.885 |
| 0.0384 | 10.0 | 7500 | 0.5551 | 0.89 |
| 0.0583 | 11.0 | 8250 | 0.4882 | 0.9017 |
| 0.0806 | 12.0 | 9000 | 0.5902 | 0.88 |
| 0.0489 | 13.0 | 9750 | 0.5212 | 0.88 |
| 0.0353 | 14.0 | 10500 | 0.5171 | 0.9 |
| 0.0094 | 15.0 | 11250 | 0.6341 | 0.895 |
| 0.0154 | 16.0 | 12000 | 0.5409 | 0.9133 |
| 0.0118 | 17.0 | 12750 | 0.6110 | 0.8833 |
| 0.0159 | 18.0 | 13500 | 0.6873 | 0.9033 |
| 0.0026 | 19.0 | 14250 | 0.7871 | 0.8983 |
| 0.0163 | 20.0 | 15000 | 0.6341 | 0.895 |
| 0.0002 | 21.0 | 15750 | 0.7139 | 0.9017 |
| 0.0006 | 22.0 | 16500 | 0.6717 | 0.9033 |
| 0.0266 | 23.0 | 17250 | 0.6268 | 0.895 |
| 0.0051 | 24.0 | 18000 | 0.6425 | 0.905 |
| 0.0 | 25.0 | 18750 | 0.7506 | 0.91 |
| 0.0004 | 26.0 | 19500 | 0.6864 | 0.9017 |
| 0.0002 | 27.0 | 20250 | 0.6111 | 0.9117 |
| 0.0163 | 28.0 | 21000 | 0.6875 | 0.9017 |
| 0.0001 | 29.0 | 21750 | 0.8050 | 0.8967 |
| 0.0002 | 30.0 | 22500 | 0.7397 | 0.8967 |
| 0.0004 | 31.0 | 23250 | 0.8218 | 0.8983 |
| 0.0 | 32.0 | 24000 | 0.8725 | 0.8983 |
| 0.0 | 33.0 | 24750 | 0.9662 | 0.8967 |
| 0.0 | 34.0 | 25500 | 0.9148 | 0.9083 |
| 0.0 | 35.0 | 26250 | 0.8492 | 0.9083 |
| 0.0001 | 36.0 | 27000 | 0.8264 | 0.9067 |
| 0.0 | 37.0 | 27750 | 0.8650 | 0.895 |
| 0.0004 | 38.0 | 28500 | 0.9030 | 0.91 |
| 0.0 | 39.0 | 29250 | 0.9540 | 0.9 |
| 0.0 | 40.0 | 30000 | 1.0292 | 0.8883 |
| 0.0 | 41.0 | 30750 | 1.0282 | 0.8917 |
| 0.0 | 42.0 | 31500 | 1.0128 | 0.8933 |
| 0.0 | 43.0 | 32250 | 1.0147 | 0.8983 |
| 0.0 | 44.0 | 33000 | 0.9709 | 0.8983 |
| 0.0 | 45.0 | 33750 | 0.9643 | 0.9067 |
| 0.0 | 46.0 | 34500 | 0.9770 | 0.9017 |
| 0.0 | 47.0 | 35250 | 1.0000 | 0.8983 |
| 0.0 | 48.0 | 36000 | 1.0223 | 0.9017 |
| 0.0 | 49.0 | 36750 | 1.0291 | 0.9017 |
| 0.0 | 50.0 | 37500 | 1.0324 | 0.9017 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
Dulfary/platzi-vit-model-omar-espejel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-omar-espejel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1062
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1461 | 3.85 | 500 | 0.1062 | 0.9774 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
hkivancoral/smids_10x_beit_large_adamax_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_001_fold4
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6842
- Accuracy: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3361 | 1.0 | 750 | 0.4333 | 0.8367 |
| 0.2968 | 2.0 | 1500 | 0.4495 | 0.8467 |
| 0.288 | 3.0 | 2250 | 0.4264 | 0.8383 |
| 0.2379 | 4.0 | 3000 | 0.4907 | 0.85 |
| 0.1893 | 5.0 | 3750 | 0.4876 | 0.8533 |
| 0.1419 | 6.0 | 4500 | 0.4376 | 0.8667 |
| 0.1288 | 7.0 | 5250 | 0.5742 | 0.84 |
| 0.079 | 8.0 | 6000 | 0.6426 | 0.86 |
| 0.0885 | 9.0 | 6750 | 0.6694 | 0.8617 |
| 0.0513 | 10.0 | 7500 | 0.7772 | 0.8483 |
| 0.0371 | 11.0 | 8250 | 0.7425 | 0.8667 |
| 0.0559 | 12.0 | 9000 | 0.7844 | 0.8633 |
| 0.0437 | 13.0 | 9750 | 0.9475 | 0.8617 |
| 0.0237 | 14.0 | 10500 | 0.8539 | 0.86 |
| 0.0064 | 15.0 | 11250 | 1.1662 | 0.8683 |
| 0.0766 | 16.0 | 12000 | 1.1003 | 0.8683 |
| 0.0045 | 17.0 | 12750 | 1.1294 | 0.8633 |
| 0.0012 | 18.0 | 13500 | 1.0595 | 0.8717 |
| 0.0107 | 19.0 | 14250 | 1.0246 | 0.875 |
| 0.0098 | 20.0 | 15000 | 0.9670 | 0.8633 |
| 0.0227 | 21.0 | 15750 | 1.0829 | 0.8633 |
| 0.0004 | 22.0 | 16500 | 1.0091 | 0.855 |
| 0.0026 | 23.0 | 17250 | 1.0123 | 0.8667 |
| 0.001 | 24.0 | 18000 | 1.0183 | 0.8783 |
| 0.0083 | 25.0 | 18750 | 1.2133 | 0.8533 |
| 0.0076 | 26.0 | 19500 | 1.0638 | 0.865 |
| 0.0045 | 27.0 | 20250 | 1.1546 | 0.8717 |
| 0.0001 | 28.0 | 21000 | 1.0902 | 0.8567 |
| 0.0003 | 29.0 | 21750 | 1.1809 | 0.86 |
| 0.0 | 30.0 | 22500 | 1.2715 | 0.8733 |
| 0.0001 | 31.0 | 23250 | 1.1922 | 0.8767 |
| 0.0 | 32.0 | 24000 | 1.4076 | 0.87 |
| 0.0075 | 33.0 | 24750 | 1.3961 | 0.8617 |
| 0.0 | 34.0 | 25500 | 1.4345 | 0.875 |
| 0.0 | 35.0 | 26250 | 1.6125 | 0.8683 |
| 0.0 | 36.0 | 27000 | 1.5456 | 0.8567 |
| 0.0 | 37.0 | 27750 | 1.5632 | 0.865 |
| 0.0 | 38.0 | 28500 | 1.6349 | 0.8617 |
| 0.0 | 39.0 | 29250 | 1.5362 | 0.8617 |
| 0.0 | 40.0 | 30000 | 1.6434 | 0.8667 |
| 0.0 | 41.0 | 30750 | 1.6815 | 0.87 |
| 0.0 | 42.0 | 31500 | 1.6593 | 0.8667 |
| 0.0 | 43.0 | 32250 | 1.6757 | 0.87 |
| 0.0 | 44.0 | 33000 | 1.6503 | 0.8683 |
| 0.0 | 45.0 | 33750 | 1.6999 | 0.8667 |
| 0.0 | 46.0 | 34500 | 1.6868 | 0.8667 |
| 0.0 | 47.0 | 35250 | 1.6803 | 0.87 |
| 0.0 | 48.0 | 36000 | 1.6872 | 0.8733 |
| 0.0 | 49.0 | 36750 | 1.6911 | 0.8717 |
| 0.0 | 50.0 | 37500 | 1.6842 | 0.8717 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_beit_large_adamax_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_001_fold5
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8836
- Accuracy: 0.905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3665 | 1.0 | 750 | 0.3594 | 0.8583 |
| 0.2964 | 2.0 | 1500 | 0.4126 | 0.8483 |
| 0.2817 | 3.0 | 2250 | 0.2955 | 0.895 |
| 0.2107 | 4.0 | 3000 | 0.4285 | 0.8483 |
| 0.2441 | 5.0 | 3750 | 0.2917 | 0.905 |
| 0.2284 | 6.0 | 4500 | 0.3000 | 0.8933 |
| 0.1417 | 7.0 | 5250 | 0.3775 | 0.9033 |
| 0.1212 | 8.0 | 6000 | 0.4010 | 0.9 |
| 0.1114 | 9.0 | 6750 | 0.3900 | 0.8917 |
| 0.1229 | 10.0 | 7500 | 0.5863 | 0.8833 |
| 0.0978 | 11.0 | 8250 | 0.5114 | 0.8883 |
| 0.019 | 12.0 | 9000 | 0.6596 | 0.9033 |
| 0.0244 | 13.0 | 9750 | 0.6428 | 0.9017 |
| 0.0242 | 14.0 | 10500 | 0.6293 | 0.9 |
| 0.0159 | 15.0 | 11250 | 0.5943 | 0.9067 |
| 0.0287 | 16.0 | 12000 | 0.4876 | 0.9033 |
| 0.0161 | 17.0 | 12750 | 0.7094 | 0.8933 |
| 0.0033 | 18.0 | 13500 | 0.7392 | 0.9117 |
| 0.0133 | 19.0 | 14250 | 0.6855 | 0.9017 |
| 0.0009 | 20.0 | 15000 | 0.7025 | 0.895 |
| 0.033 | 21.0 | 15750 | 0.5767 | 0.895 |
| 0.0007 | 22.0 | 16500 | 0.6533 | 0.8983 |
| 0.0005 | 23.0 | 17250 | 0.8501 | 0.8883 |
| 0.0041 | 24.0 | 18000 | 0.6751 | 0.91 |
| 0.0016 | 25.0 | 18750 | 0.8175 | 0.8983 |
| 0.022 | 26.0 | 19500 | 0.7166 | 0.9067 |
| 0.002 | 27.0 | 20250 | 0.7746 | 0.9033 |
| 0.0002 | 28.0 | 21000 | 0.7048 | 0.91 |
| 0.0002 | 29.0 | 21750 | 0.8217 | 0.9083 |
| 0.0187 | 30.0 | 22500 | 0.7107 | 0.8983 |
| 0.0002 | 31.0 | 23250 | 0.7863 | 0.9133 |
| 0.0 | 32.0 | 24000 | 0.8314 | 0.8983 |
| 0.0 | 33.0 | 24750 | 0.7909 | 0.8967 |
| 0.0003 | 34.0 | 25500 | 0.8566 | 0.905 |
| 0.0 | 35.0 | 26250 | 0.7280 | 0.9117 |
| 0.0 | 36.0 | 27000 | 0.8236 | 0.9017 |
| 0.0068 | 37.0 | 27750 | 0.7886 | 0.92 |
| 0.0 | 38.0 | 28500 | 0.8302 | 0.9017 |
| 0.0 | 39.0 | 29250 | 0.8589 | 0.9067 |
| 0.0 | 40.0 | 30000 | 0.8152 | 0.9017 |
| 0.0 | 41.0 | 30750 | 0.8501 | 0.905 |
| 0.0 | 42.0 | 31500 | 0.8563 | 0.91 |
| 0.0 | 43.0 | 32250 | 0.7690 | 0.9117 |
| 0.0 | 44.0 | 33000 | 0.8007 | 0.9083 |
| 0.0 | 45.0 | 33750 | 0.8622 | 0.9033 |
| 0.0001 | 46.0 | 34500 | 0.8624 | 0.905 |
| 0.0 | 47.0 | 35250 | 0.8665 | 0.9067 |
| 0.0 | 48.0 | 36000 | 0.8739 | 0.9067 |
| 0.0 | 49.0 | 36750 | 0.8825 | 0.9067 |
| 0.0 | 50.0 | 37500 | 0.8836 | 0.905 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
bookworm88/vitbase224
|
Model Trained Using google/vit-base-patch16-224
Problem type: Image Classification
Validation Metrics
Accuracy: 0.8317757009345794
Precision: 0.7731769036116862
Recall: 0.7727598566308244
F1 Score: 0.756015135878913
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
yuanhuaisen/autotrain-r8wab-qlqlg
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.476613849401474
f1_macro: 0.9019633297479075
f1_micro: 0.9210526315789473
f1_weighted: 0.9196370878793735
precision_macro: 0.9094932844932844
precision_micro: 0.9210526315789473
precision_weighted: 0.9195042895700791
recall_macro: 0.8957219251336898
recall_micro: 0.9210526315789473
recall_weighted: 0.9210526315789473
accuracy: 0.9210526315789473
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/smids_10x_beit_large_adamax_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_00001_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8887
- Accuracy: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1288 | 1.0 | 751 | 0.2785 | 0.9065 |
| 0.0676 | 2.0 | 1502 | 0.3146 | 0.9149 |
| 0.0264 | 3.0 | 2253 | 0.4181 | 0.9115 |
| 0.025 | 4.0 | 3004 | 0.5488 | 0.9199 |
| 0.0069 | 5.0 | 3755 | 0.5526 | 0.9182 |
| 0.0049 | 6.0 | 4506 | 0.6296 | 0.9165 |
| 0.0005 | 7.0 | 5257 | 0.7054 | 0.9149 |
| 0.0001 | 8.0 | 6008 | 0.7404 | 0.9182 |
| 0.0362 | 9.0 | 6759 | 0.7520 | 0.9132 |
| 0.0001 | 10.0 | 7510 | 0.8011 | 0.9149 |
| 0.0001 | 11.0 | 8261 | 0.7591 | 0.9199 |
| 0.0002 | 12.0 | 9012 | 0.7216 | 0.9215 |
| 0.0024 | 13.0 | 9763 | 0.8101 | 0.9132 |
| 0.0 | 14.0 | 10514 | 0.8382 | 0.9249 |
| 0.0 | 15.0 | 11265 | 0.8571 | 0.9165 |
| 0.0 | 16.0 | 12016 | 0.8307 | 0.9249 |
| 0.0002 | 17.0 | 12767 | 0.8135 | 0.9098 |
| 0.0 | 18.0 | 13518 | 0.9070 | 0.9132 |
| 0.0 | 19.0 | 14269 | 0.8650 | 0.9115 |
| 0.0 | 20.0 | 15020 | 0.8297 | 0.9265 |
| 0.0 | 21.0 | 15771 | 0.8359 | 0.9282 |
| 0.0 | 22.0 | 16522 | 0.8827 | 0.9265 |
| 0.0 | 23.0 | 17273 | 0.8484 | 0.9215 |
| 0.0 | 24.0 | 18024 | 0.8739 | 0.9182 |
| 0.0004 | 25.0 | 18775 | 0.8728 | 0.9232 |
| 0.0 | 26.0 | 19526 | 0.8742 | 0.9149 |
| 0.0 | 27.0 | 20277 | 0.9029 | 0.9199 |
| 0.0 | 28.0 | 21028 | 0.8812 | 0.9232 |
| 0.0109 | 29.0 | 21779 | 0.9326 | 0.9215 |
| 0.0 | 30.0 | 22530 | 0.9197 | 0.9115 |
| 0.0001 | 31.0 | 23281 | 0.8910 | 0.9215 |
| 0.0 | 32.0 | 24032 | 0.8659 | 0.9215 |
| 0.0 | 33.0 | 24783 | 0.8759 | 0.9232 |
| 0.0 | 34.0 | 25534 | 0.9176 | 0.9199 |
| 0.0 | 35.0 | 26285 | 0.8674 | 0.9249 |
| 0.0 | 36.0 | 27036 | 0.8364 | 0.9249 |
| 0.0 | 37.0 | 27787 | 0.8518 | 0.9265 |
| 0.0 | 38.0 | 28538 | 0.8614 | 0.9232 |
| 0.0 | 39.0 | 29289 | 0.8789 | 0.9215 |
| 0.0 | 40.0 | 30040 | 0.8979 | 0.9215 |
| 0.0 | 41.0 | 30791 | 0.9262 | 0.9199 |
| 0.0107 | 42.0 | 31542 | 0.8969 | 0.9232 |
| 0.0 | 43.0 | 32293 | 0.9021 | 0.9265 |
| 0.0 | 44.0 | 33044 | 0.8921 | 0.9282 |
| 0.0 | 45.0 | 33795 | 0.9002 | 0.9249 |
| 0.0007 | 46.0 | 34546 | 0.9147 | 0.9199 |
| 0.0 | 47.0 | 35297 | 0.8904 | 0.9249 |
| 0.0 | 48.0 | 36048 | 0.8842 | 0.9282 |
| 0.0 | 49.0 | 36799 | 0.8899 | 0.9265 |
| 0.0 | 50.0 | 37550 | 0.8887 | 0.9282 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
AutumnQiu/finetuned-classficatin-fer2013
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-classficatin-fer2013
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the fer2013 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0586
- Accuracy: 0.6952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
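"Native AMP" here refers to PyTorch's `torch.cuda.amp`, which the Trainer uses for mixed-precision training. A minimal self-contained sketch (the tiny model and random batch are placeholders, not this model's data):
```python
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(16, 7).to(device)          # stand-in for the classifier head
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scaler = torch.cuda.amp.GradScaler(enabled=device == "cuda")

x = torch.randn(8, 16, device=device)        # dummy batch
y = torch.randint(0, 7, (8,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=device == "cuda"):  # forward in reduced precision
    loss = nn.functional.cross_entropy(model(x), y)
scaler.scale(loss).backward()                # scale loss to avoid fp16 underflow
scaler.step(optimizer)                       # unscales gradients, then steps
scaler.update()
```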
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5814 | 0.06 | 100 | 1.1124 | 0.6525 |
| 0.3856 | 0.11 | 200 | 1.1988 | 0.6428 |
| 0.4073 | 0.17 | 300 | 1.2319 | 0.6492 |
| 0.3408 | 0.22 | 400 | 1.1527 | 0.6581 |
| 0.4001 | 0.28 | 500 | 1.1861 | 0.6601 |
| 0.3973 | 0.33 | 600 | 1.1161 | 0.6637 |
| 0.3897 | 0.39 | 700 | 1.2955 | 0.6464 |
| 0.4689 | 0.45 | 800 | 1.1492 | 0.6578 |
| 0.4173 | 0.5 | 900 | 1.1538 | 0.6659 |
| 0.3238 | 0.56 | 1000 | 1.1742 | 0.6704 |
| 0.2774 | 0.61 | 1100 | 1.1426 | 0.6771 |
| 0.3948 | 0.67 | 1200 | 1.1533 | 0.6701 |
| 0.3258 | 0.72 | 1300 | 1.1405 | 0.6743 |
| 0.3816 | 0.78 | 1400 | 1.1101 | 0.6838 |
| 0.308 | 0.84 | 1500 | 1.1281 | 0.6871 |
| 0.4592 | 0.89 | 1600 | 1.0971 | 0.6938 |
| 0.3957 | 0.95 | 1700 | 1.0586 | 0.6952 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.13.3
|
[
"angry",
"disgust",
"fear",
"happy",
"sad",
"surprise",
"neutral"
] |
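A minimal sketch (assumed usage) of how a label list like the one above is exposed through a checkpoint's `id2label` mapping, which converts predicted class indices back to emotion names:
```python
from transformers import AutoConfig

# The repo id is taken verbatim from the card above.
config = AutoConfig.from_pretrained("AutumnQiu/finetuned-classficatin-fer2013")
print(config.id2label)  # e.g. {0: "angry", 1: "disgust", ...}
```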
hkivancoral/smids_10x_beit_large_adamax_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_00001_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9196
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1587 | 1.0 | 750 | 0.2691 | 0.9101 |
| 0.0471 | 2.0 | 1500 | 0.3138 | 0.9135 |
| 0.0407 | 3.0 | 2250 | 0.4729 | 0.9118 |
| 0.0287 | 4.0 | 3000 | 0.5798 | 0.9068 |
| 0.012 | 5.0 | 3750 | 0.7233 | 0.9118 |
| 0.0109 | 6.0 | 4500 | 0.7175 | 0.9168 |
| 0.0017 | 7.0 | 5250 | 0.7940 | 0.9085 |
| 0.0129 | 8.0 | 6000 | 0.7917 | 0.9068 |
| 0.0001 | 9.0 | 6750 | 0.8466 | 0.9068 |
| 0.0033 | 10.0 | 7500 | 0.8662 | 0.9002 |
| 0.0001 | 11.0 | 8250 | 0.9262 | 0.9035 |
| 0.0005 | 12.0 | 9000 | 0.8648 | 0.9035 |
| 0.0001 | 13.0 | 9750 | 0.9176 | 0.9101 |
| 0.0001 | 14.0 | 10500 | 0.9531 | 0.8985 |
| 0.0002 | 15.0 | 11250 | 0.9250 | 0.9035 |
| 0.0418 | 16.0 | 12000 | 0.9389 | 0.9085 |
| 0.0 | 17.0 | 12750 | 0.9725 | 0.9035 |
| 0.0001 | 18.0 | 13500 | 0.9072 | 0.9101 |
| 0.0173 | 19.0 | 14250 | 0.9123 | 0.9151 |
| 0.0042 | 20.0 | 15000 | 0.9275 | 0.9068 |
| 0.0 | 21.0 | 15750 | 0.9111 | 0.9101 |
| 0.0243 | 22.0 | 16500 | 0.9348 | 0.9101 |
| 0.0002 | 23.0 | 17250 | 1.0125 | 0.9052 |
| 0.0002 | 24.0 | 18000 | 0.8943 | 0.9101 |
| 0.0 | 25.0 | 18750 | 1.0215 | 0.9035 |
| 0.0001 | 26.0 | 19500 | 0.9907 | 0.9085 |
| 0.0358 | 27.0 | 20250 | 0.9413 | 0.9101 |
| 0.0003 | 28.0 | 21000 | 0.8860 | 0.9201 |
| 0.0 | 29.0 | 21750 | 0.9273 | 0.9218 |
| 0.0 | 30.0 | 22500 | 0.9583 | 0.9068 |
| 0.0 | 31.0 | 23250 | 0.9280 | 0.9218 |
| 0.0 | 32.0 | 24000 | 0.9420 | 0.9168 |
| 0.0 | 33.0 | 24750 | 0.9244 | 0.9185 |
| 0.0 | 34.0 | 25500 | 0.9598 | 0.9085 |
| 0.0 | 35.0 | 26250 | 0.9576 | 0.9101 |
| 0.0 | 36.0 | 27000 | 0.9574 | 0.9101 |
| 0.0013 | 37.0 | 27750 | 0.9671 | 0.9101 |
| 0.0 | 38.0 | 28500 | 0.9627 | 0.9101 |
| 0.0 | 39.0 | 29250 | 0.9639 | 0.9118 |
| 0.0001 | 40.0 | 30000 | 0.9418 | 0.9118 |
| 0.0003 | 41.0 | 30750 | 0.9216 | 0.9135 |
| 0.0 | 42.0 | 31500 | 0.9226 | 0.9185 |
| 0.0 | 43.0 | 32250 | 0.9076 | 0.9218 |
| 0.0 | 44.0 | 33000 | 0.9133 | 0.9151 |
| 0.0006 | 45.0 | 33750 | 0.9164 | 0.9151 |
| 0.0 | 46.0 | 34500 | 0.9118 | 0.9168 |
| 0.0 | 47.0 | 35250 | 0.9173 | 0.9151 |
| 0.0 | 48.0 | 36000 | 0.9178 | 0.9101 |
| 0.0 | 49.0 | 36750 | 0.9196 | 0.9135 |
| 0.0 | 50.0 | 37500 | 0.9196 | 0.9151 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
Optikan/V3_Image_classification__points_durs__google_vit-base-patch16-224-in21k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V3_Image_classification__points_durs__google_vit-base-patch16-224-in21k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
- Accuracy: 0.9927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
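The total train batch size above follows from multiplying the per-device batch size by the gradient accumulation steps; a one-line check:
```python
# Gradients from 4 small batches are accumulated before each optimizer step,
# so the effective (total) train batch size is 16 * 4 = 64.
per_device_train_batch_size = 16
gradient_accumulation_steps = 4
assert per_device_train_batch_size * gradient_accumulation_steps == 64
```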
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6667 | 1.0 | 15 | 0.5893 | 0.9121 |
| 0.4394 | 2.0 | 30 | 0.3294 | 0.9487 |
| 0.2685 | 3.0 | 45 | 0.1365 | 0.9707 |
| 0.0936 | 4.0 | 60 | 0.0752 | 0.9853 |
| 0.0517 | 5.0 | 75 | 0.0553 | 0.9890 |
| 0.0436 | 6.0 | 90 | 0.0556 | 0.9890 |
| 0.018 | 7.0 | 105 | 0.0557 | 0.9890 |
| 0.0189 | 8.0 | 120 | 0.0457 | 0.9890 |
| 0.013 | 9.0 | 135 | 0.0343 | 0.9927 |
| 0.0115 | 10.0 | 150 | 0.0270 | 0.9963 |
| 0.0101 | 11.0 | 165 | 0.0355 | 0.9927 |
| 0.0085 | 12.0 | 180 | 0.0356 | 0.9927 |
| 0.0079 | 13.0 | 195 | 0.0259 | 0.9963 |
| 0.0069 | 14.0 | 210 | 0.0345 | 0.9927 |
| 0.0066 | 15.0 | 225 | 0.0360 | 0.9927 |
| 0.0061 | 16.0 | 240 | 0.0359 | 0.9927 |
| 0.0059 | 17.0 | 255 | 0.0360 | 0.9927 |
| 0.0055 | 18.0 | 270 | 0.0368 | 0.9927 |
| 0.0054 | 19.0 | 285 | 0.0375 | 0.9927 |
| 0.0051 | 20.0 | 300 | 0.0375 | 0.9927 |
| 0.0049 | 21.0 | 315 | 0.0380 | 0.9927 |
| 0.0047 | 22.0 | 330 | 0.0380 | 0.9927 |
| 0.0046 | 23.0 | 345 | 0.0383 | 0.9927 |
| 0.0044 | 24.0 | 360 | 0.0386 | 0.9927 |
| 0.0043 | 25.0 | 375 | 0.0388 | 0.9927 |
| 0.0041 | 26.0 | 390 | 0.0388 | 0.9927 |
| 0.0041 | 27.0 | 405 | 0.0391 | 0.9927 |
| 0.0039 | 28.0 | 420 | 0.0392 | 0.9927 |
| 0.0038 | 29.0 | 435 | 0.0396 | 0.9927 |
| 0.0037 | 30.0 | 450 | 0.0397 | 0.9927 |
| 0.0037 | 31.0 | 465 | 0.0397 | 0.9927 |
| 0.0036 | 32.0 | 480 | 0.0399 | 0.9927 |
| 0.0035 | 33.0 | 495 | 0.0401 | 0.9927 |
| 0.0034 | 34.0 | 510 | 0.0402 | 0.9927 |
| 0.0034 | 35.0 | 525 | 0.0403 | 0.9927 |
| 0.0033 | 36.0 | 540 | 0.0403 | 0.9927 |
| 0.0033 | 37.0 | 555 | 0.0405 | 0.9927 |
| 0.0032 | 38.0 | 570 | 0.0406 | 0.9927 |
| 0.0032 | 39.0 | 585 | 0.0406 | 0.9927 |
| 0.0031 | 40.0 | 600 | 0.0407 | 0.9927 |
| 0.0031 | 41.0 | 615 | 0.0408 | 0.9927 |
| 0.0031 | 42.0 | 630 | 0.0408 | 0.9927 |
| 0.003 | 43.0 | 645 | 0.0409 | 0.9927 |
| 0.003 | 44.0 | 660 | 0.0410 | 0.9927 |
| 0.003 | 45.0 | 675 | 0.0410 | 0.9927 |
| 0.003 | 46.0 | 690 | 0.0410 | 0.9927 |
| 0.003 | 47.0 | 705 | 0.0410 | 0.9927 |
| 0.0029 | 48.0 | 720 | 0.0411 | 0.9927 |
| 0.0029 | 49.0 | 735 | 0.0411 | 0.9927 |
| 0.0029 | 50.0 | 750 | 0.0411 | 0.9927 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.13.3
|
[
"avec_points_durs",
"sans_points_durs"
] |
moock/swinv2-tiny-patch4-window8-256-finetuned-gardner-icm-max
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-gardner-icm-max
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0741
- Accuracy: 0.6429
## Model description
Predicts the Inner Cell Mass (ICM) grade of the Gardner score from an embryo image.
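A minimal inference sketch (assumed usage, not code published with this card); the image path is a placeholder:
```python
from transformers import pipeline

# Loads this checkpoint for image classification.
clf = pipeline(
    "image-classification",
    model="moock/swinv2-tiny-patch4-window8-256-finetuned-gardner-icm-max",
)
print(clf("embryo.jpg"))  # placeholder path; returns labels with scores
```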
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0925 | 0.94 | 11 | 1.0631 | 0.7952 |
| 0.9552 | 1.96 | 23 | 0.6336 | 0.7952 |
| 0.6566 | 2.98 | 35 | 0.5356 | 0.7952 |
| 0.5686 | 4.0 | 47 | 0.5150 | 0.7952 |
| 0.5703 | 4.94 | 58 | 0.5129 | 0.7952 |
| 0.5726 | 5.96 | 70 | 0.5154 | 0.7952 |
| 0.5482 | 6.98 | 82 | 0.5142 | 0.7952 |
| 0.568 | 8.0 | 94 | 0.5109 | 0.7952 |
| 0.5245 | 8.94 | 105 | 0.5134 | 0.7952 |
| 0.5979 | 9.96 | 117 | 0.5238 | 0.7952 |
| 0.5442 | 10.98 | 129 | 0.5076 | 0.7952 |
| 0.545 | 12.0 | 141 | 0.5062 | 0.7952 |
| 0.5514 | 12.94 | 152 | 0.5013 | 0.7952 |
| 0.5377 | 13.96 | 164 | 0.5045 | 0.7952 |
| 0.5282 | 14.98 | 176 | 0.5038 | 0.7952 |
| 0.5389 | 16.0 | 188 | 0.4994 | 0.7952 |
| 0.5039 | 16.94 | 199 | 0.4996 | 0.7952 |
| 0.5348 | 17.96 | 211 | 0.4940 | 0.7952 |
| 0.5426 | 18.72 | 220 | 0.4947 | 0.7952 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"0",
"1",
"2"
] |
moock/swinv2-tiny-patch4-window8-256-finetuned-gardner-te-max
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-gardner-te-max
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8795
- Accuracy: 0.5940
## Model description
Predicts the trophectoderm (TE) grade of the Gardner score from an embryo image.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0943 | 0.94 | 11 | 1.0750 | 0.6325 |
| 0.9996 | 1.96 | 23 | 0.8011 | 0.6325 |
| 0.7731 | 2.98 | 35 | 0.7182 | 0.6325 |
| 0.7564 | 4.0 | 47 | 0.7109 | 0.6325 |
| 0.7331 | 4.94 | 58 | 0.7026 | 0.6325 |
| 0.7336 | 5.96 | 70 | 0.6848 | 0.6325 |
| 0.7305 | 6.98 | 82 | 0.6938 | 0.6325 |
| 0.7314 | 8.0 | 94 | 0.6549 | 0.6325 |
| 0.6905 | 8.94 | 105 | 0.6364 | 0.6867 |
| 0.7315 | 9.96 | 117 | 0.6223 | 0.6687 |
| 0.6839 | 10.98 | 129 | 0.6528 | 0.7530 |
| 0.6931 | 12.0 | 141 | 0.6209 | 0.7410 |
| 0.6705 | 12.94 | 152 | 0.6296 | 0.7169 |
| 0.7227 | 13.96 | 164 | 0.6039 | 0.7108 |
| 0.6695 | 14.98 | 176 | 0.6049 | 0.7530 |
| 0.6981 | 16.0 | 188 | 0.5965 | 0.7048 |
| 0.6566 | 16.94 | 199 | 0.6111 | 0.7410 |
| 0.6828 | 17.96 | 211 | 0.5969 | 0.7530 |
| 0.6632 | 18.72 | 220 | 0.5947 | 0.7530 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"0",
"1",
"2"
] |
hkivancoral/smids_10x_beit_large_adamax_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_00001_fold3
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7342
- Accuracy: 0.93
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1234 | 1.0 | 750 | 0.2380 | 0.9133 |
| 0.0658 | 2.0 | 1500 | 0.2732 | 0.9317 |
| 0.0204 | 3.0 | 2250 | 0.3498 | 0.9217 |
| 0.0213 | 4.0 | 3000 | 0.4104 | 0.925 |
| 0.0054 | 5.0 | 3750 | 0.4509 | 0.9317 |
| 0.0002 | 6.0 | 4500 | 0.5343 | 0.9233 |
| 0.0104 | 7.0 | 5250 | 0.5450 | 0.9267 |
| 0.0001 | 8.0 | 6000 | 0.6214 | 0.9217 |
| 0.0002 | 9.0 | 6750 | 0.5669 | 0.9333 |
| 0.0 | 10.0 | 7500 | 0.5842 | 0.9233 |
| 0.0003 | 11.0 | 8250 | 0.5405 | 0.9267 |
| 0.0007 | 12.0 | 9000 | 0.6365 | 0.9233 |
| 0.0 | 13.0 | 9750 | 0.6437 | 0.9267 |
| 0.0006 | 14.0 | 10500 | 0.6868 | 0.92 |
| 0.0 | 15.0 | 11250 | 0.6484 | 0.93 |
| 0.0 | 16.0 | 12000 | 0.6945 | 0.925 |
| 0.0 | 17.0 | 12750 | 0.6473 | 0.925 |
| 0.0 | 18.0 | 13500 | 0.7329 | 0.9233 |
| 0.0 | 19.0 | 14250 | 0.6697 | 0.9283 |
| 0.0 | 20.0 | 15000 | 0.7054 | 0.9317 |
| 0.0 | 21.0 | 15750 | 0.7229 | 0.9267 |
| 0.0001 | 22.0 | 16500 | 0.6657 | 0.9267 |
| 0.0 | 23.0 | 17250 | 0.6845 | 0.925 |
| 0.0 | 24.0 | 18000 | 0.7071 | 0.9233 |
| 0.0 | 25.0 | 18750 | 0.7119 | 0.9267 |
| 0.0 | 26.0 | 19500 | 0.7250 | 0.9283 |
| 0.0 | 27.0 | 20250 | 0.7491 | 0.93 |
| 0.0 | 28.0 | 21000 | 0.7325 | 0.9267 |
| 0.0 | 29.0 | 21750 | 0.7225 | 0.93 |
| 0.0 | 30.0 | 22500 | 0.7702 | 0.93 |
| 0.0 | 31.0 | 23250 | 0.7702 | 0.93 |
| 0.0 | 32.0 | 24000 | 0.7279 | 0.93 |
| 0.0 | 33.0 | 24750 | 0.7215 | 0.9283 |
| 0.0 | 34.0 | 25500 | 0.7215 | 0.9267 |
| 0.0 | 35.0 | 26250 | 0.7456 | 0.9267 |
| 0.0 | 36.0 | 27000 | 0.7430 | 0.9267 |
| 0.0 | 37.0 | 27750 | 0.7363 | 0.9283 |
| 0.0 | 38.0 | 28500 | 0.7489 | 0.93 |
| 0.0 | 39.0 | 29250 | 0.7854 | 0.9267 |
| 0.0 | 40.0 | 30000 | 0.7378 | 0.9283 |
| 0.0 | 41.0 | 30750 | 0.7334 | 0.93 |
| 0.0 | 42.0 | 31500 | 0.7235 | 0.9333 |
| 0.0 | 43.0 | 32250 | 0.7203 | 0.93 |
| 0.0 | 44.0 | 33000 | 0.7319 | 0.9267 |
| 0.0 | 45.0 | 33750 | 0.7326 | 0.93 |
| 0.0 | 46.0 | 34500 | 0.7443 | 0.93 |
| 0.0 | 47.0 | 35250 | 0.7511 | 0.93 |
| 0.0 | 48.0 | 36000 | 0.7575 | 0.93 |
| 0.0 | 49.0 | 36750 | 0.7357 | 0.93 |
| 0.0 | 50.0 | 37500 | 0.7342 | 0.93 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
kjlkjl/vit-base-patch16-224-in21k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0500
- Accuracy: 0.2143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.0641 | 0.1429 |
| No log | 2.0 | 2 | 2.0558 | 0.2857 |
| No log | 3.0 | 3 | 2.0500 | 0.2143 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"disability",
"healthy",
"osteochondrosis",
"osteoporosis",
"other",
"scoliosis",
"spondylolisthesis",
"vertebral_compression_fracture"
] |
kjlkjl/swin-tiny-patch4-window7-224
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1087
- Accuracy: 0.1429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.1404 | 0.0714 |
| No log | 2.0 | 2 | 2.1244 | 0.1429 |
| No log | 3.0 | 3 | 2.1087 | 0.1429 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"disability",
"healthy",
"osteochondrosis",
"osteoporosis",
"other",
"scoliosis",
"spondylolisthesis",
"vertebral_compression_fracture"
] |
Muinez/artwork-scorer
|
Trained on 120,000 images collected from Pixiv rankings; the score is the normalized ratio of likes to views.
This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384)
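The card does not document the normalization, so the min-max scaling below is only a guess for illustration; the tuples of (likes, views) are made up:
```python
# "Normalized ratio of likes to views" -- exact normalization unspecified,
# so min-max scaling here is an assumption, not the author's method.
ratios = [likes / views for likes, views in [(120, 3000), (45, 900), (800, 10000)]]
lo, hi = min(ratios), max(ratios)
scores = [(r - lo) / (hi - lo) for r in ratios]  # assumed min-max normalization
print(scores)  # [0.0, 0.25, 1.0]
```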
|
[
"score",
"views",
"date"
] |
bookworm88/vit224-2
|
Usage: Image classification
Project: Cover quilt
Labels:
- 11covered_with_a_quilt_and_only_the_head_exposed
- 12covered_with_a_quilt_and_exposed_other_parts_of_the_body

Indicators:
- Accuracy: 0.9591836734693877
- Precision: 0.9545454545454546
- Recall: 0.9655172413793103
- F1 Score: 0.9583333333333333
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body"
] |
enverkulahli/my_awesome_catSound_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_catSound_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9396
- Accuracy: 0.7653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4061 | 0.99 | 74 | 1.3136 | 0.6770 |
| 1.0114 | 2.0 | 149 | 1.0185 | 0.7393 |
| 0.8646 | 2.98 | 222 | 0.9396 | 0.7653 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"angry",
"defence",
"fighting",
"happy",
"huntingmind",
"mating",
"mothercall",
"paining",
"resting",
"warning"
] |
TrieuNguyen/chest_xray_pneumonia
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chest_xray_pneumonia
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2508
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1091 | 0.99 | 81 | 0.2422 | 0.9119 |
| 0.1085 | 2.0 | 163 | 0.2777 | 0.9167 |
| 0.1131 | 2.99 | 244 | 0.1875 | 0.9407 |
| 0.1129 | 4.0 | 326 | 0.2339 | 0.9183 |
| 0.0698 | 4.99 | 407 | 0.2581 | 0.9263 |
| 0.0904 | 6.0 | 489 | 0.2544 | 0.9167 |
| 0.0851 | 6.99 | 570 | 0.2023 | 0.9407 |
| 0.0833 | 8.0 | 652 | 0.2047 | 0.9327 |
| 0.0604 | 8.99 | 733 | 0.2738 | 0.9199 |
| 0.0671 | 9.94 | 810 | 0.2508 | 0.9151 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"normal",
"pneumonia"
] |
yuanhuaisen/autotrain-khvt4-4vmox
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.18480469286441803
f1: 0.962962962962963
precision: 1.0
recall: 0.9285714285714286
auc: 1.0
accuracy: 0.96
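The reported F1 is consistent with the standard harmonic mean of the reported precision and recall:

$$F_1 = \frac{2PR}{P+R} = \frac{2 \cdot 1.0 \cdot 0.9286}{1.0 + 0.9286} \approx 0.9630$$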
|
[
"11train_covered_with_a_quilt_and_only_the_head_exposed",
"12train_covered_with_a_quilt_and_exposed_other_parts_of_the_body"
] |
Optikan/V4_Image_classification__points_durs__google_vit-base-patch16-224-in21k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V4_Image_classification__points_durs__google_vit-base-patch16-224-in21k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2221
- Accuracy: 0.9560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6743 | 1.0 | 13 | 0.6315 | 0.7566 |
| 0.6051 | 2.0 | 26 | 0.4384 | 0.9150 |
| 0.4588 | 3.0 | 39 | 0.2402 | 0.9326 |
| 0.1818 | 4.0 | 52 | 0.1702 | 0.9384 |
| 0.1102 | 5.0 | 65 | 0.1409 | 0.9413 |
| 0.0733 | 6.0 | 78 | 0.1516 | 0.9501 |
| 0.0423 | 7.0 | 91 | 0.1613 | 0.9560 |
| 0.0286 | 8.0 | 104 | 0.1843 | 0.9501 |
| 0.0192 | 9.0 | 117 | 0.1672 | 0.9560 |
| 0.0159 | 10.0 | 130 | 0.1703 | 0.9589 |
| 0.0173 | 11.0 | 143 | 0.1729 | 0.9560 |
| 0.0143 | 12.0 | 156 | 0.1786 | 0.9560 |
| 0.0105 | 13.0 | 169 | 0.1821 | 0.9560 |
| 0.0091 | 14.0 | 182 | 0.1827 | 0.9589 |
| 0.0096 | 15.0 | 195 | 0.1859 | 0.9560 |
| 0.0081 | 16.0 | 208 | 0.1989 | 0.9560 |
| 0.0075 | 17.0 | 221 | 0.2012 | 0.9560 |
| 0.0347 | 18.0 | 234 | 0.2507 | 0.9384 |
| 0.0232 | 19.0 | 247 | 0.2271 | 0.9413 |
| 0.0065 | 20.0 | 260 | 0.1950 | 0.9589 |
| 0.0102 | 21.0 | 273 | 0.2378 | 0.9472 |
| 0.0064 | 22.0 | 286 | 0.2265 | 0.9501 |
| 0.0058 | 23.0 | 299 | 0.2033 | 0.9560 |
| 0.0055 | 24.0 | 312 | 0.2402 | 0.9501 |
| 0.005 | 25.0 | 325 | 0.2500 | 0.9443 |
| 0.0054 | 26.0 | 338 | 0.2450 | 0.9472 |
| 0.0048 | 27.0 | 351 | 0.2431 | 0.9501 |
| 0.0047 | 28.0 | 364 | 0.2439 | 0.9472 |
| 0.0046 | 29.0 | 377 | 0.2445 | 0.9472 |
| 0.0044 | 30.0 | 390 | 0.2434 | 0.9472 |
| 0.0042 | 31.0 | 403 | 0.2441 | 0.9472 |
| 0.0042 | 32.0 | 416 | 0.2426 | 0.9472 |
| 0.0042 | 33.0 | 429 | 0.2414 | 0.9472 |
| 0.004 | 34.0 | 442 | 0.2383 | 0.9472 |
| 0.004 | 35.0 | 455 | 0.2349 | 0.9472 |
| 0.0039 | 36.0 | 468 | 0.2340 | 0.9472 |
| 0.0038 | 37.0 | 481 | 0.2325 | 0.9472 |
| 0.0037 | 38.0 | 494 | 0.2311 | 0.9501 |
| 0.0038 | 39.0 | 507 | 0.2280 | 0.9501 |
| 0.0037 | 40.0 | 520 | 0.2263 | 0.9531 |
| 0.0036 | 41.0 | 533 | 0.2248 | 0.9531 |
| 0.0036 | 42.0 | 546 | 0.2242 | 0.9531 |
| 0.0036 | 43.0 | 559 | 0.2236 | 0.9531 |
| 0.0035 | 44.0 | 572 | 0.2231 | 0.9560 |
| 0.0035 | 45.0 | 585 | 0.2224 | 0.9560 |
| 0.0035 | 46.0 | 598 | 0.2223 | 0.9560 |
| 0.0035 | 47.0 | 611 | 0.2220 | 0.9560 |
| 0.0035 | 48.0 | 624 | 0.2221 | 0.9560 |
| 0.0034 | 49.0 | 637 | 0.2221 | 0.9560 |
| 0.0035 | 50.0 | 650 | 0.2221 | 0.9560 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.13.3
|
[
"hard_points",
"no_hard_points"
] |
yuanhuaisen/autotrain-koz62-88avl
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.5237402319908142
f1: 0.9333333333333333
precision: 0.875
recall: 1.0
auc: 0.9805194805194805
accuracy: 0.92
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body"
] |
kg59/vit-base-patch16-224-finetuned-cedar
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-cedar
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4667
- Accuracy: 0.7883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
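With `warmup_ratio: 0.1` and a linear schedule, roughly the first 10% of optimizer steps ramp the learning rate up before it decays to zero; using the final step count from the table below:
```python
import math

# 216 total optimizer steps (final step in the training results table below).
total_steps = 216
warmup_steps = math.ceil(0.1 * total_steps)  # = 22, then linear decay to 0
```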
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5419 | 1.0 | 54 | 0.5085 | 0.7657 |
| 0.4541 | 2.0 | 108 | 0.4667 | 0.7883 |
| 0.3847 | 3.0 | 162 | 0.5603 | 0.7320 |
| 0.3669 | 4.0 | 216 | 0.4869 | 0.7749 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"full_forg",
"full_org"
] |
99spokes/autotrain-xanso-s7ois
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.8660757541656494
f1_macro: 0.08333333333333334
f1_micro: 0.2923076923076923
f1_weighted: 0.14923076923076925
precision_macro: 0.0890937019969278
precision_micro: 0.2923076923076923
precision_weighted: 0.14193548387096774
recall_macro: 0.15476190476190474
recall_micro: 0.2923076923076923
recall_weighted: 0.2923076923076923
accuracy: 0.2923076923076923
|
[
"action",
"frameset",
"geometry",
"other",
"placeholder",
"studio-other",
"studio-side"
] |
99spokes/autotrain-vit
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.020408163265306124
f1_micro: 0.07692307692307693
f1_weighted: 0.010989010989010992
precision_macro: 0.01098901098901099
precision_micro: 0.07692307692307693
precision_weighted: 0.00591715976331361
recall_macro: 0.14285714285714285
recall_micro: 0.07692307692307693
recall_weighted: 0.07692307692307693
accuracy: 0.07692307692307693
|
[
"action",
"frameset",
"geometry",
"other",
"placeholder",
"studio-other",
"studio-side"
] |
kjlkjl/resnet-50
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0320
- Accuracy: 0.5186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3301 | 1.0 | 32 | 1.3377 | 0.3477 |
| 1.2001 | 2.0 | 64 | 1.2172 | 0.4414 |
| 1.1188 | 3.0 | 96 | 1.1265 | 0.5010 |
| 1.0655 | 4.0 | 128 | 1.1025 | 0.5010 |
| 1.0437 | 5.0 | 160 | 1.0753 | 0.5010 |
| 1.0374 | 6.0 | 192 | 1.0629 | 0.5029 |
| 1.0181 | 7.0 | 224 | 1.0452 | 0.5137 |
| 1.0011 | 8.0 | 256 | 1.0381 | 0.5127 |
| 1.0074 | 9.0 | 288 | 1.0268 | 0.5098 |
| 0.9977 | 10.0 | 320 | 1.0320 | 0.5186 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"mild_demented",
"moderate_demented",
"non_demented",
"very_mild_demented"
] |
hkivancoral/smids_10x_beit_large_adamax_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_00001_fold4
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1774
- Accuracy: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.174 | 1.0 | 750 | 0.3318 | 0.8817 |
| 0.0694 | 2.0 | 1500 | 0.3979 | 0.8833 |
| 0.0385 | 3.0 | 2250 | 0.6069 | 0.8817 |
| 0.0028 | 4.0 | 3000 | 0.7041 | 0.8767 |
| 0.0151 | 5.0 | 3750 | 0.8263 | 0.8783 |
| 0.017 | 6.0 | 4500 | 0.8468 | 0.8917 |
| 0.0004 | 7.0 | 5250 | 0.9156 | 0.8817 |
| 0.0149 | 8.0 | 6000 | 0.9947 | 0.8883 |
| 0.0019 | 9.0 | 6750 | 0.9986 | 0.8833 |
| 0.0 | 10.0 | 7500 | 1.0174 | 0.89 |
| 0.0002 | 11.0 | 8250 | 1.0347 | 0.8983 |
| 0.0006 | 12.0 | 9000 | 1.1212 | 0.8883 |
| 0.0007 | 13.0 | 9750 | 1.1145 | 0.9 |
| 0.002 | 14.0 | 10500 | 1.1511 | 0.895 |
| 0.0113 | 15.0 | 11250 | 1.1891 | 0.8833 |
| 0.0193 | 16.0 | 12000 | 1.1467 | 0.8833 |
| 0.0 | 17.0 | 12750 | 1.2067 | 0.8833 |
| 0.0 | 18.0 | 13500 | 1.1030 | 0.8917 |
| 0.0 | 19.0 | 14250 | 1.2269 | 0.8817 |
| 0.0 | 20.0 | 15000 | 1.2142 | 0.8983 |
| 0.0 | 21.0 | 15750 | 1.2333 | 0.8833 |
| 0.0 | 22.0 | 16500 | 1.2215 | 0.89 |
| 0.0 | 23.0 | 17250 | 1.1755 | 0.88 |
| 0.0001 | 24.0 | 18000 | 1.2025 | 0.89 |
| 0.0 | 25.0 | 18750 | 1.1234 | 0.8967 |
| 0.0 | 26.0 | 19500 | 1.1299 | 0.8933 |
| 0.0 | 27.0 | 20250 | 1.1278 | 0.8933 |
| 0.0 | 28.0 | 21000 | 1.1853 | 0.89 |
| 0.0 | 29.0 | 21750 | 1.1366 | 0.8967 |
| 0.0 | 30.0 | 22500 | 1.2109 | 0.8817 |
| 0.0 | 31.0 | 23250 | 1.2247 | 0.88 |
| 0.0124 | 32.0 | 24000 | 1.2057 | 0.885 |
| 0.0 | 33.0 | 24750 | 1.2082 | 0.8933 |
| 0.0 | 34.0 | 25500 | 1.1875 | 0.8933 |
| 0.0 | 35.0 | 26250 | 1.1823 | 0.8983 |
| 0.0 | 36.0 | 27000 | 1.1794 | 0.8883 |
| 0.0 | 37.0 | 27750 | 1.1760 | 0.8917 |
| 0.0 | 38.0 | 28500 | 1.1363 | 0.895 |
| 0.0 | 39.0 | 29250 | 1.1574 | 0.895 |
| 0.0 | 40.0 | 30000 | 1.1725 | 0.8933 |
| 0.0 | 41.0 | 30750 | 1.1844 | 0.8867 |
| 0.0 | 42.0 | 31500 | 1.1542 | 0.8933 |
| 0.0 | 43.0 | 32250 | 1.1472 | 0.895 |
| 0.0 | 44.0 | 33000 | 1.1640 | 0.8917 |
| 0.0 | 45.0 | 33750 | 1.1642 | 0.89 |
| 0.0 | 46.0 | 34500 | 1.1680 | 0.8933 |
| 0.0 | 47.0 | 35250 | 1.1880 | 0.895 |
| 0.0 | 48.0 | 36000 | 1.1744 | 0.8933 |
| 0.0 | 49.0 | 36750 | 1.1763 | 0.8933 |
| 0.0008 | 50.0 | 37500 | 1.1774 | 0.8933 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_beit_large_adamax_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_adamax_00001_fold5
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8705
- Accuracy: 0.9183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.151 | 1.0 | 750 | 0.2341 | 0.9117 |
| 0.085 | 2.0 | 1500 | 0.2729 | 0.9117 |
| 0.0389 | 3.0 | 2250 | 0.3555 | 0.9183 |
| 0.0354 | 4.0 | 3000 | 0.4728 | 0.92 |
| 0.0161 | 5.0 | 3750 | 0.5494 | 0.9117 |
| 0.0006 | 6.0 | 4500 | 0.5920 | 0.9167 |
| 0.0191 | 7.0 | 5250 | 0.7177 | 0.9083 |
| 0.0025 | 8.0 | 6000 | 0.7193 | 0.9183 |
| 0.0296 | 9.0 | 6750 | 0.7219 | 0.9183 |
| 0.0071 | 10.0 | 7500 | 0.7346 | 0.9067 |
| 0.0001 | 11.0 | 8250 | 0.8516 | 0.9133 |
| 0.0012 | 12.0 | 9000 | 0.7790 | 0.9217 |
| 0.0009 | 13.0 | 9750 | 0.7769 | 0.9117 |
| 0.0 | 14.0 | 10500 | 0.8050 | 0.92 |
| 0.0 | 15.0 | 11250 | 0.7869 | 0.9167 |
| 0.0001 | 16.0 | 12000 | 0.8102 | 0.9133 |
| 0.0588 | 17.0 | 12750 | 0.7913 | 0.9183 |
| 0.0 | 18.0 | 13500 | 0.9080 | 0.9117 |
| 0.0 | 19.0 | 14250 | 0.7883 | 0.915 |
| 0.0 | 20.0 | 15000 | 0.8588 | 0.9183 |
| 0.0001 | 21.0 | 15750 | 0.8772 | 0.9167 |
| 0.0001 | 22.0 | 16500 | 0.8747 | 0.9133 |
| 0.0001 | 23.0 | 17250 | 0.7911 | 0.9217 |
| 0.0 | 24.0 | 18000 | 0.7828 | 0.9217 |
| 0.0 | 25.0 | 18750 | 0.7802 | 0.9233 |
| 0.0 | 26.0 | 19500 | 0.8237 | 0.92 |
| 0.0 | 27.0 | 20250 | 0.8003 | 0.9217 |
| 0.0 | 28.0 | 21000 | 0.8936 | 0.9133 |
| 0.0009 | 29.0 | 21750 | 0.8831 | 0.915 |
| 0.0181 | 30.0 | 22500 | 0.8036 | 0.9217 |
| 0.0 | 31.0 | 23250 | 0.7557 | 0.9267 |
| 0.0 | 32.0 | 24000 | 0.8859 | 0.92 |
| 0.0 | 33.0 | 24750 | 0.8754 | 0.92 |
| 0.0001 | 34.0 | 25500 | 0.8554 | 0.9117 |
| 0.0 | 35.0 | 26250 | 0.8615 | 0.9167 |
| 0.0 | 36.0 | 27000 | 0.8299 | 0.9217 |
| 0.0035 | 37.0 | 27750 | 0.8816 | 0.9167 |
| 0.0 | 38.0 | 28500 | 0.8681 | 0.9233 |
| 0.0 | 39.0 | 29250 | 0.8281 | 0.92 |
| 0.0 | 40.0 | 30000 | 0.8247 | 0.9183 |
| 0.0008 | 41.0 | 30750 | 0.8595 | 0.9183 |
| 0.0 | 42.0 | 31500 | 0.8563 | 0.92 |
| 0.0038 | 43.0 | 32250 | 0.8322 | 0.925 |
| 0.0 | 44.0 | 33000 | 0.8334 | 0.9183 |
| 0.0 | 45.0 | 33750 | 0.8475 | 0.9183 |
| 0.0 | 46.0 | 34500 | 0.8657 | 0.92 |
| 0.0 | 47.0 | 35250 | 0.8614 | 0.9183 |
| 0.0 | 48.0 | 36000 | 0.8662 | 0.92 |
| 0.0 | 49.0 | 36750 | 0.8708 | 0.9183 |
| 0.0 | 50.0 | 37500 | 0.8705 | 0.9183 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
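A minimal inference sketch, assuming the checkpoint follows the standard `transformers` image-classification interface and that `slide_patch.png` is a placeholder input path:
```python
from transformers import pipeline

# Load the fine-tuned BEiT checkpoint as an image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_beit_large_adamax_00001_fold5",
)

# "slide_patch.png" is a placeholder; any PIL-readable image works.
for p in classifier("slide_patch.png"):
    print(f"{p['label']}: {p['score']:.4f}")  # abnormal_sperm / non-sperm / normal_sperm
```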
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
BhavanaMalla/image_classification_food101VITmodel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification_food101VITmodel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5424
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2504 | 0.96 | 12 | 3.4853 | 0.695 |
| 3.1914 | 2.0 | 25 | 2.7080 | 0.695 |
| 2.6501 | 2.88 | 36 | 2.5424 | 0.7 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
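A minimal sketch of how the hyperparameters above map onto `transformers` `TrainingArguments`; the `output_dir` is a placeholder, and the model/dataset setup is omitted:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="image_classification_food101VITmodel",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,  # linear scheduler with 10% warmup
    seed=42,
)
```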
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
yuanhuaisen/autotrain-sfnkd-pexdp
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.5256543159484863
f1: 0.9285714285714286
precision: 0.9285714285714286
recall: 0.9285714285714286
auc: 0.9805194805194806
accuracy: 0.92
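A minimal sketch of how validation metrics like these are computed with scikit-learn, using placeholder arrays of true labels, predicted labels, and positive-class scores:
```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Placeholder arrays; in practice these come from the validation split.
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 1]
y_score = [0.1, 0.6, 0.8, 0.7, 0.9]  # predicted probability of the positive class

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("auc      :", roc_auc_score(y_true, y_score))  # AUC uses scores, not labels
```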
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body"
] |
99spokes/bike-image-classifier-autotrain-resnet
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.029761904761904757
f1_micro: 0.09803921568627451
f1_weighted: 0.01750700280112045
precision_macro: 0.016339869281045753
precision_micro: 0.09803921568627451
precision_weighted: 0.009611687812379853
recall_macro: 0.16666666666666666
recall_micro: 0.09803921568627451
recall_weighted: 0.09803921568627451
accuracy: 0.09803921568627451
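The macro/micro/weighted variants above differ only in how per-class scores are averaged; a minimal sketch with scikit-learn's `average` parameter, using placeholder labels for a 6-class problem:
```python
from sklearn.metrics import f1_score

# Placeholder labels for a 6-class problem (classes 0..5).
y_true = [0, 1, 2, 3, 4, 5, 0, 1]
y_pred = [0, 1, 2, 0, 0, 0, 0, 0]

print("f1_macro   :", f1_score(y_true, y_pred, average="macro"))     # unweighted mean over classes
print("f1_micro   :", f1_score(y_true, y_pred, average="micro"))     # from global TP/FP/FN counts
print("f1_weighted:", f1_score(y_true, y_pred, average="weighted"))  # mean weighted by class support
```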
|
[
"action",
"frameset",
"geometry",
"placeholder",
"studio-other",
"studio-side"
] |
99spokes/autotrain-uzhlw-t4qte
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.029761904761904757
f1_micro: 0.09803921568627451
f1_weighted: 0.01750700280112045
precision_macro: 0.016339869281045753
precision_micro: 0.09803921568627451
precision_weighted: 0.009611687812379853
recall_macro: 0.16666666666666666
recall_micro: 0.09803921568627451
recall_weighted: 0.09803921568627451
accuracy: 0.09803921568627451
|
[
"action",
"frameset",
"geometry",
"placeholder",
"studio-other",
"studio-side"
] |
rsadaphule/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
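A minimal inference sketch without the pipeline helper, assuming the checkpoint exposes the standard ViT classification head and that `flower.jpg` is a placeholder image path:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "rsadaphule/vit-base-patch16-224-finetuned-flower"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("flower.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # one of: daisy, dandelion, roses, sunflowers, tulips
```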
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
yuanhuaisen/autotrain-vz7yn-mlwzp
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.5804179310798645
f1: 0.8275862068965518
precision: 0.8
recall: 0.8571428571428571
auc: 0.9365079365079365
accuracy: 0.782608695652174
|
[
"11covered_with_a_quilt,_only_the_head_and_shoulders_exposed",
"12covered_with_a_quilt,_exposed_head_and_shoulders_except_for_other_organs"
] |
dima806/movie_identification_by_frame
|
Predicts (with about 50% accuracy) which movie a given screenshot comes from, returning a probability for each of the 804 currently supported movies.
See https://www.kaggle.com/code/dima806/movie-identification-by-frame-vit for details.
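The report below follows scikit-learn's `classification_report` format, and the headline F1 matches the macro-average row, so macro averaging is assumed in this minimal sketch with placeholder label arrays:
```python
from sklearn.metrics import accuracy_score, classification_report, f1_score

# Placeholder arrays; in practice these are the true and predicted movie titles
# for every evaluated frame.
y_true = ["300 (2006)", "Pi (1998)", "300 (2006)", "Heat (1995)"]
y_pred = ["300 (2006)", "Pi (1998)", "Heat (1995)", "Heat (1995)"]

print("Accuracy:", round(accuracy_score(y_true, y_pred), 4))
print("F1 Score:", round(f1_score(y_true, y_pred, average="macro"), 4))
print(classification_report(y_true, y_pred, digits=4))
```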
```
Accuracy: 0.4962
F1 Score: 0.4704
Classification report:
precision recall f1-score support
10 Things I Hate About You (1999) 0.4675 0.4489 0.4580 401
12 Monkeys (1995) 0.5800 0.3625 0.4462 400
12 Years a Slave (2013) 0.6916 0.5761 0.6286 401
127 Hours (2010) 0.7647 0.1950 0.3108 400
13 Hours The Secret Soldiers Of Benghazi (2016) 0.4322 0.6758 0.5272 401
1917 (2019) 0.4668 0.7550 0.5769 400
21 Grams (2003) 0.5523 0.6185 0.5835 401
25th Hour (2002) 0.4489 0.5375 0.4892 400
300 (2006) 0.6368 0.9401 0.7593 401
310 to Yuma (2007) 0.6020 0.7600 0.6718 400
500 Days Of Summer (2009) 0.5332 0.6608 0.5902 401
A Beautiful Mind (2001) 0.3358 0.1125 0.1685 400
A Bronx Tale (1993) 0.4377 0.3242 0.3725 401
A Bugs Life (1998) 0.6303 0.7481 0.6842 401
A Few Good Men (1992) 0.4194 0.7531 0.5388 401
A Fish Called Wanda (1988) 0.4833 0.5062 0.4945 401
A Good Person (2023) 0.4541 0.2344 0.3092 401
A History of Violence (2005) 0.3919 0.7525 0.5154 400
A League of Their Own (1992) 0.4160 0.6608 0.5106 401
A Man Called Otto (2022) 0.4394 0.8250 0.5734 400
A Scanner Darkly (2006) 0.6934 0.9476 0.8008 401
A Serious Man (2009) 0.4785 0.6658 0.5568 401
A Single Man (2009) 0.5440 0.6950 0.6103 400
A Star Is Born (2018) 0.5349 0.3450 0.4195 400
A Time To Kill (1996) 0.2938 0.4525 0.3563 400
A Walk to Remember (2002) 0.6078 0.3875 0.4733 400
About Schmidt (2002) 0.3822 0.3317 0.3551 401
About Time (2013) 0.7922 0.3050 0.4404 400
About a Boy (2002) 0.4556 0.5636 0.5039 401
Across the Universe (2007) 0.5882 0.1750 0.2697 400
Adaptation (2002) 0.4970 0.2100 0.2953 400
Air (2023) 0.5606 0.6808 0.6149 401
Aladdin (1992) 0.7545 0.9501 0.8411 401
Aliens Special Edition (1986) 0.5860 0.3150 0.4098 400
Allied (2016) 0.4563 0.1175 0.1869 400
Almost Famous EXTENDED (2000) 0.4524 0.1900 0.2676 400
American Beauty (1999) 0.5031 0.1995 0.2857 401
American Gangster (2007) 0.4225 0.5925 0.4932 400
American History X (1998) 0.5837 0.3575 0.4434 400
American Hustle (2013) 0.3896 0.5250 0.4473 400
American Sniper (2014) 0.5473 0.4775 0.5100 400
An Education (2009) 0.5523 0.7650 0.6415 400
Anastasia (1997) 0.5950 0.6575 0.6247 400
Anchorman The Legend Of Ron Burgundy (2004) 0.6169 0.4539 0.5230 401
Apocalypto (2006) 0.6258 0.7650 0.6884 400
Apollo 13 (1995) 0.4957 0.4289 0.4599 401
Argo (2012) 0.6806 0.3242 0.4392 401
Army of Darkness (1992) 0.4411 0.5137 0.4747 401
As Good as It Gets (1997) 0.5657 0.6350 0.5984 400
Atonement (2007) 0.5414 0.1796 0.2697 401
August Rush (2007) 0.4650 0.2825 0.3515 400
Austin Powers - International Man of Mystery (1997) 0.4349 0.5425 0.4828 400
Avatar (2009) 0.4305 0.5561 0.4853 401
Avatar The Way Of Water (2022) 0.3958 0.7925 0.5279 400
Awakenings (1990) 0.4224 0.7332 0.5360 401
Babel (2006) 0.8529 0.9975 0.9195 401
Baby Driver (2017) 0.4347 0.4825 0.4573 400
Babylon (2022) 0.4830 0.3550 0.4092 400
Back to the Future II (1989) 0.4207 0.3650 0.3909 400
Back to the Future III (1990) 0.4579 0.5575 0.5028 400
Bad Times At The El Royale (2018) 0.4291 0.2718 0.3328 401
Barbie (2023) 0.5725 0.7300 0.6418 400
Basic Instinct (1992) 0.6657 0.5525 0.6038 400
Batman (1989) 0.4143 0.4050 0.4096 400
Batman Begins (2005) 0.4061 0.2643 0.3202 401
Batman Returns (1992) 0.3465 0.1975 0.2516 400
Beauty And The Beast (2017) 0.3610 0.5875 0.4472 400
Beauty and the Beast (1991) 0.6111 0.7132 0.6582 401
Before Midnight (2013) 0.3444 0.8850 0.4958 400
Before Sunrise (1995) 0.5096 0.7930 0.6205 401
Before Sunset (2004) 0.5900 0.9100 0.7158 400
Before The Devil Knows Youre Dead (2007) 0.6827 0.7100 0.6961 400
Begin Again (2013) 0.6944 0.7100 0.7021 400
Being John Malkovich (1999) 0.5675 0.5137 0.5393 401
Ben-Hur (1959) 0.5110 0.8653 0.6426 401
Beveryly Hills Cop (1984) 0.6194 0.2075 0.3109 400
Big (1988) 0.4834 0.4000 0.4378 400
Big Fish (2003) 0.7500 0.0225 0.0437 400
Billy Elliot (2000) 0.4154 0.2700 0.3273 400
Birdman (2014) 0.6000 0.4938 0.5417 401
Black Hawk Down (2001) 0.5740 0.7950 0.6667 400
Black Mirror Bandersnatch (2018) 0.7234 0.1696 0.2747 401
Black Panther (2018) 0.3932 0.1150 0.1779 400
Blade (1998) 0.5460 0.6808 0.6060 401
Blade Runner 2049 (2017) 0.4245 0.3375 0.3760 400
Blow (2001) 0.5455 0.0300 0.0569 400
Blue Jasmine (2013) 0.5518 0.9027 0.6850 401
Blue Valentine (2010) 0.5621 0.4300 0.4873 400
Bohemian Rhapsody (2018) 0.4554 0.1147 0.1833 401
Boogie Nights (1997) 0.4167 0.1621 0.2334 401
Booksmart (2019) 0.3597 0.6600 0.4656 400
Bowling For Columbine (2002) 0.5402 0.3017 0.3872 401
Boyhood (2014) 0.6552 0.0475 0.0886 400
Boys Dont Cry (1999) 0.4224 0.3400 0.3767 400
Boyz n The Hood (1991) 0.5281 0.3525 0.4228 400
Braveheart (1995) 0.5973 0.7750 0.6746 400
Brick (2005) 0.4208 0.5975 0.4938 400
Bridge Of Spies (2015) 0.5356 0.6384 0.5825 401
Bridge to Terabithia (2007) 0.6087 0.6633 0.6348 401
Brokeback Mountain (2005) 0.4667 0.1397 0.2150 401
Broken Flowers (2005) 0.3615 0.4589 0.4044 401
Bronson (2008) 0.6792 0.7250 0.7013 400
Brooklyn (2015) 0.5604 0.5087 0.5333 401
Brothers (2009) 0.4767 0.6150 0.5371 400
Buried (2010) 0.9056 0.5262 0.6656 401
Burn After Reading (2008) 0.5084 0.6060 0.5529 401
CODA (2021) 0.4135 0.7525 0.5337 400
Call Me By Your Name (2017) 0.7761 0.3900 0.5191 400
Cape Fear (1991) 0.3873 0.3350 0.3592 400
Captain America Civil War (2016) 0.2727 0.1272 0.1735 401
Captain Fantastic (2016) 0.7941 0.1347 0.2303 401
Captain Phillips (2013) 0.4706 0.0399 0.0736 401
Carnage (2011) 0.4212 0.9800 0.5892 401
Carol (2015) 0.5092 0.4825 0.4955 400
Cars (2006) 0.6806 0.6500 0.6650 400
Casino (1995) 0.4848 0.3192 0.3850 401
Cast Away (2000) 0.4423 0.0575 0.1018 400
Catch Me If You Can (2002) 0.7660 0.2693 0.3985 401
Changeling (2008) 0.6246 0.9900 0.7660 400
Charlie Wilsons War (2007) 0.5405 0.4000 0.4598 400
Charlie and the Chocolate Factory (2005) 0.6007 0.4325 0.5029 400
Chasing Amy (1997) 0.4301 0.5910 0.4979 401
Chef (2014) 0.3592 0.7157 0.4783 401
Chicago (2002) 0.4568 0.1850 0.2633 400
Chicken Run (2000) 0.5065 0.7800 0.6142 400
Children of Men (2006) 0.6284 0.6300 0.6292 400
Chocolat (2000) 0.4712 0.6125 0.5326 400
Chronicle (2012) 0.6849 0.2500 0.3663 400
Cinderella Man (2005) 0.6649 0.6334 0.6488 401
Clerks 2 (2006) 0.6344 0.7375 0.6821 400
Closer (2004) 0.7283 0.1671 0.2718 401
Cloud Atlas (2012) 0.5375 0.3392 0.4159 401
Cloverfield (2008) 0.0633 0.2125 0.0975 400
Coach Carter (2005) 0.5401 0.3700 0.4392 400
Coherence (2013) 0.7790 0.8878 0.8298 401
Cold Moutians (2003) 0.6169 0.2369 0.3423 401
Collateral (2004) 0.4896 0.5875 0.5341 400
Constantine (2005) 0.5385 0.1400 0.2222 400
Contact (1997) 0.4515 0.3017 0.3617 401
Cop Land (1997) 0.5867 0.3975 0.4739 400
Coraline (2009) 0.4883 0.5200 0.5036 400
Corpse Bride (2005) 0.4240 0.8279 0.5608 401
Crash (2004) 1.0000 0.0150 0.0296 400
Creed (2015) 0.4984 0.3800 0.4312 400
Creed II (2018) 0.4828 0.1746 0.2564 401
Crimson Tide (1995) 0.2789 0.6825 0.3959 400
Cruella (2021) 0.7612 0.3825 0.5092 400
Cube (1997) 0.6314 0.7750 0.6958 400
Dancer In The Dark (2000) 0.8243 0.9476 0.8817 401
Dances with Wolves (1990) 0.3623 0.4165 0.3875 401
Dark City (1998) 0.4119 0.4663 0.4374 401
Darkest Hour (2017) 0.5393 0.8575 0.6622 400
Dawn of the Dead (2004) 0.5874 0.4200 0.4898 400
Dawn of the Planet of the Apes (2014) 0.6397 0.2170 0.3240 401
Dazed and Confused (1993) 0.3855 0.4239 0.4038 401
Dead Man (1995) 0.6473 0.7506 0.6952 401
Death At A Funeral (2007) 0.6453 0.9050 0.7534 400
Death Proof (2007) 0.5085 0.3741 0.4310 401
Definitely Maybe (2008) 0.6334 0.7300 0.6783 400
Deja Vu (2006) 0.4304 0.6858 0.5288 401
Demolition (2015) 0.6358 0.4963 0.5574 401
Desperado (1995) 0.4580 0.5037 0.4798 401
Despicable Me (2010) 0.4851 0.6100 0.5404 400
Die Hard 2 (1990) 0.3333 0.5775 0.4227 400
Die Hard 3 (1995) 0.3422 0.1925 0.2464 400
Die Hard 4 (2007) 0.4777 0.5900 0.5280 400
Dirty Harry (1971) 0.5743 0.1446 0.2311 401
Doctor Strange (2016) 0.2684 0.1272 0.1726 401
Doctor Strange In The Multiverse Of Madness (2022) 0.4114 0.1621 0.2326 401
Dogma (1999) 0.3211 0.6575 0.4315 400
Dogville (2003) 0.6399 0.9375 0.7606 400
Donnie Brasco (1997) 0.4615 0.0150 0.0290 401
Donnie Darko DIRECTORS CUT (2001) 0.5274 0.4564 0.4893 401
Dont Look Up (2021) 0.3617 0.2125 0.2677 400
Doubt (2008) 0.5845 0.8300 0.6860 400
Dr. No (1962) 0.4274 0.6250 0.5076 400
Dr. Strangelove or How I Learned to Stop Worrying and Love the Bomb (1964) 0.8429 0.8828 0.8624 401
Dredd (2012) 0.4962 0.6475 0.5618 400
Drive (2011) 0.6131 0.7232 0.6636 401
Dune (2021) 0.3126 0.5525 0.3993 400
Dungeons Dragons Honor Among Thieves (2023) 0.3658 0.6085 0.4569 401
Dunkirk (2017) 0.4873 0.8155 0.6101 401
Eastern Promises (2007) 0.6000 0.0898 0.1562 401
Election (1999) 0.6210 0.4875 0.5462 400
Elemental (2023) 0.7438 0.8275 0.7834 400
Elf (2003) 0.5519 0.2525 0.3465 400
Elizabeth (1998) 0.5720 0.3475 0.4323 400
Elvis (2022) 0.5714 0.0300 0.0570 400
Encanto (2021) 0.5725 0.5625 0.5675 400
Enchanted (2007) 0.6989 0.3067 0.4263 401
End of Watch (2012) 0.8321 0.2850 0.4246 400
Enemy At The Gates (2001) 0.4373 0.3750 0.4038 400
Enemy of the State (1998) 0.3907 0.3575 0.3734 400
Enter the Dragon (1973) 0.4813 0.7075 0.5729 400
Equilibrium (2002) 0.3710 0.3766 0.3738 401
Erin Brockovich (2000) 0.4535 0.6100 0.5203 400
Escape from New York (1981) 0.5314 0.6550 0.5868 400
Eternal Sunshine of the Spotless Mind (2004) 0.5868 0.2450 0.3457 400
Ever After A Cinderella Story (1998) 0.4197 0.6384 0.5064 401
Everest (2015) 0.5763 0.1700 0.2625 400
Everything Everywhere All At Once (2022) 0.6729 0.4514 0.5403 401
Extraction 2 (2023) 0.5024 0.2575 0.3405 400
Eyes Wide Shut (1999) 0.4530 0.6875 0.5462 400
Face Off (1997) 0.4286 0.0900 0.1488 400
Fahrenheit 9 11 (2004) 0.5141 0.5475 0.5303 400
Falling Down (1993) 0.4659 0.6309 0.5360 401
Fantastic Mr Fox (2009) 0.6060 0.9150 0.7291 400
Fargo (1996) 0.4914 0.5000 0.4957 400
Fear And Loathing In Las Vegas (1998) 0.3647 0.5425 0.4362 400
Fences (2016) 0.6453 0.8050 0.7164 400
Filth (2013) 0.4505 0.7850 0.5725 400
Finding Dory (2016) 0.5707 0.5337 0.5515 401
Finding Nemo (2003) 0.6402 0.6434 0.6418 401
Finding Neverland (2004) 0.6667 0.3791 0.4833 401
First Man (2018) 0.3619 0.2425 0.2904 400
Flags of our Fathers (2006) 0.4214 0.4225 0.4220 400
Flight (2012) 0.6275 0.7200 0.6705 400
Ford V Ferrari (2019) 0.3139 0.5761 0.4063 401
Forgetting Sarah Marshall (2008) 0.5434 0.7050 0.6137 400
Four Weddings And A Funeral (1994) 0.4599 0.4450 0.4524 400
Foxcatcher (2014) 0.7547 0.1995 0.3156 401
Fracture (2007) 0.4901 0.2475 0.3289 400
Frequency (2000) 0.4344 0.6359 0.5162 401
Friday (1995) 0.5417 0.8579 0.6641 401
From Dusk Till Dawn (1996) 0.4107 0.6550 0.5048 400
Frost Nixon (2008) 0.6730 0.3541 0.4641 401
Frozen (2013) 0.5543 0.3700 0.4438 400
Furious 6 (2013) 0.5133 0.5325 0.5227 400
Furious Seven (2015) 0.4783 0.0274 0.0519 401
Galaxy Quest (1999) 0.3480 0.5750 0.4336 400
Gangs of New York (2002) 0.4893 0.7955 0.6059 401
Gattaca (1997) 0.4605 0.1750 0.2536 400
Ghandi (1982) 0.3921 0.5675 0.4637 400
Ghost (1990) 0.4714 0.0825 0.1404 400
Ghost World (2001) 0.3862 0.4200 0.4024 400
Ghostbusters (1984) 0.4612 0.4888 0.4746 401
Ghostbusters Afterlife (2021) 0.4624 0.2000 0.2792 400
Gifted (2017) 0.4340 0.2544 0.3208 401
Girl Interrupted (1999) 0.3745 0.6584 0.4774 401
Gladiator EXTENDED REMASTERED (2000) 0.6000 0.0150 0.0293 400
Glengarry Glen Ross (1992) 0.4643 0.8753 0.6067 401
Goldfinger (1964) 0.7381 0.3092 0.4359 401
Gone Baby Gone (2007) 0.5096 0.5985 0.5505 401
Gone Girl (2014) 0.6461 0.3915 0.4876 401
Good Time (2017) 0.3783 0.4700 0.4192 400
Good Will Hunting (1997) 0.4571 0.2793 0.3467 401
Goodfellas (1990) 0.4783 0.1925 0.2745 400
Gran Torino (2008) 0.5175 0.8875 0.6538 400
Gravity (2013) 0.3351 0.6300 0.4375 400
Grease (1978) 0.4117 0.7750 0.5377 400
Green Book (2018) 0.4980 0.3125 0.3840 400
Green Street Hooligans (2005) 0.7321 0.2045 0.3197 401
Greyhound (2020) 0.3626 0.4750 0.4113 400
Grindhouse (2007) 0.0000 0.0000 0.0000 401
Guardians Of The Galaxy Vol. 2 (2017) 0.3212 0.3100 0.3155 400
Guardians of the Galaxy (2014) 0.2684 0.4539 0.3373 401
Hachiko - A Dogs Tale (2009) 0.9924 0.9800 0.9862 400
Hacksaw Ridge (2016) 0.5899 0.2050 0.3043 400
Hamilton (2020) 0.7315 0.9377 0.8219 401
Happy Gilmore (1996) 0.5366 0.6035 0.5681 401
Harry Potter And The Chamber Of Secrets (2002) 0.3698 0.3541 0.3618 401
Harry Potter And The Half-Blood Prince (2009) 0.5278 0.4275 0.4724 400
Harry Potter And The Prisoner Of Azkaban (2004) 0.5857 0.7581 0.6609 401
Heat (1995) 0.3906 0.6025 0.4739 400
Hell Or High Water (2016) 0.5047 0.6675 0.5748 400
Hellboy The Golden Army (2008) 0.5198 0.7207 0.6040 401
Her (2013) 0.6650 0.6633 0.6642 401
Hidden Figures (2016) 0.4391 0.5050 0.4698 400
High Fidelity (2000) 0.4891 0.3342 0.3970 401
Highlander (1986) 0.5047 0.2675 0.3497 400
Home Alone (1990) 0.4371 0.6775 0.5314 400
Hot Fuzz (2007) 0.5371 0.4525 0.4912 400
Hotel Rawanda (2008) 0.5241 0.5960 0.5578 401
Hotel Transylvania (2012) 0.6493 0.5600 0.6013 400
Hotel Transylvania 4 Transformania (2022) 0.4869 0.4175 0.4495 400
How To Train Your Dragon The Hidden World (2019) 0.7035 0.6983 0.7009 401
How to Train Your Dragon 2 (2014) 0.3696 0.8055 0.5067 401
Hugo (2011) 0.5254 0.8775 0.6573 400
Hustle (2022) 0.6454 0.5037 0.5658 401
I Love You, Man (2009) 0.5434 0.8925 0.6755 400
I Origins (2014) 0.3971 0.6225 0.4849 400
I am Sam (2001) 0.4188 0.5800 0.4864 400
I, Tonya (2017) 0.4985 0.4275 0.4603 400
Identity (2003) 0.3569 0.2525 0.2958 400
Imagine That (2009) 0.5058 0.6550 0.5708 400
In Bruges (2008) 0.6141 0.7581 0.6786 401
In The Line Of Fire (1993) 0.3857 0.6434 0.4822 401
In The Name Of The Father (1993) 0.4167 0.3750 0.3947 400
Independence Day (1996) 0.4118 0.0350 0.0645 400
Indiana Jones And The Temple Of Doom (1984) 0.4417 0.3975 0.4184 400
Indiana Jones and the Last Crusade (1989) 0.5397 0.2550 0.3463 400
Inside Llewyn Davis (2013) 0.4910 0.8155 0.6129 401
Inside Man (2006) 0.5354 0.2650 0.3545 400
Inside Out (2015) 0.5568 0.7225 0.6289 400
Insomnia (2002) 0.4602 0.5337 0.4942 401
Interstellar (2014) 0.7161 0.5675 0.6332 400
Invictus (2009) 0.5784 0.6534 0.6136 401
Iron Man (2008) 0.4757 0.1225 0.1948 400
Isle Of Dogs (2018) 0.4271 0.6209 0.5061 401
Its Kind of a Funny Story (2010) 0.6229 0.7332 0.6735 401
JFK (1991) 0.5198 0.2950 0.3764 400
Jackie Brown (1997) 0.5113 0.5650 0.5368 400
James Bond Casino Royale (2006) 0.4816 0.6200 0.5421 400
James Bond GoldenEye (1995) 0.3321 0.2319 0.2731 401
John Q (2002) 0.4425 0.7581 0.5588 401
John Wick (2014) 0.4337 0.3350 0.3780 400
John Wick Chapter 2 (2017) 0.3462 0.1575 0.2165 400
John Wick Chapter 3 - Parabellum (2019) 0.5728 0.8650 0.6892 400
John Wick Chapter 4 (2023) 0.4349 0.7332 0.5460 401
Jojo Rabbit (2019) 0.4565 0.8000 0.5813 400
Julie and Julia (2009) 0.4080 0.8130 0.5433 401
Jumanji (1995) 0.6978 0.3175 0.4364 400
Jumanji Welcome To The Jungle (2017) 0.3811 0.5650 0.4552 400
Juno (2007) 0.5521 0.4375 0.4881 400
K-PAX (2001) 0.3978 0.2725 0.3234 400
Kick-Ass (2010) 0.4708 0.6450 0.5443 400
Kill Bill Vol 1 (2003) 0.4098 0.1875 0.2573 400
Kill Bill Vol 2 (2004) 0.4803 0.1825 0.2645 400
King Kong (2005) 0.5021 0.3050 0.3795 400
King Richard (2021) 0.6642 0.4489 0.5357 401
Kingdom Of Heaven (2005) 0.4310 0.2575 0.3224 400
Kiss Kiss Bang Bang (2005) 0.5429 0.0475 0.0874 400
Klaus (2019) 0.5604 0.7057 0.6247 401
Kubo And The Two Strings (2016) 0.4464 0.8225 0.5787 400
Kung Fu Panda 2 0.4828 0.4564 0.4692 401
L.A Confidential (1997) 0.3678 0.4000 0.3832 400
La La Land (2016) 0.4234 0.4339 0.4286 401
Lady Bird (2017) 0.3975 0.3150 0.3515 400
Lars and the Real Girl (2007) 0.3608 0.7756 0.4925 401
Lawless (2012) 0.5496 0.5675 0.5584 400
Layer Cake (2004) 0.6143 0.2145 0.3179 401
Leaving Las Vegas (1995) 0.3705 0.3575 0.3639 400
Legends of the Fall (1994) 0.4750 0.1425 0.2192 400
Leon The Professional Extended (1994) 0.4083 0.7900 0.5383 400
Les Misérables (2012) 0.5637 0.7057 0.6268 401
Letters From Iwo Jima (2006) 0.4654 0.8750 0.6076 400
Licorice Pizza (2021) 0.5920 0.5536 0.5722 401
Life of Brian (1979) 720p 0.4094 0.6708 0.5085 401
Life of Pi (2012) 0.8214 0.0574 0.1072 401
Limitless (2011) 0.4527 0.2750 0.3421 400
Lincoln (2012) 0.6193 0.8155 0.7040 401
Lion (2016) 0.8121 0.3025 0.4408 400
Little Children (2006) 0.4741 0.4800 0.4770 400
Little Miss Sunshine (2006) 0.4305 0.6350 0.5131 400
Little Women (2019) 0.7701 0.1675 0.2752 400
Lock Stock and Two Smoking Barrels (1998) 0.5707 0.8775 0.6916 400
Locke (2013) 0.8129 0.9125 0.8598 400
Logan (2017) 0.3846 0.1247 0.1883 401
Logan Lucky (2017) 0.4597 0.5686 0.5084 401
Looper (2012) 0.5689 0.2369 0.3345 401
Lord of War (2005) 0.6066 0.6475 0.6264 400
Lost Highway (1997) 0.4565 0.5250 0.4884 400
Lost in Translation (2003) 0.4158 0.7900 0.5448 400
Love Actually (2003) 0.7599 0.5775 0.6562 400
Love, Simon (2018) 0.7603 0.6025 0.6722 400
Lucky Number Slevin (2006) 0.5444 0.4750 0.5073 400
Mad Max 2 The Road Warrior (1981) 0.4420 0.4575 0.4496 400
Magnolia (1999) 0.4142 0.3200 0.3611 400
Mallrats (1995) 0.7746 0.2750 0.4059 400
Man On The Moon (1999) 0.4678 0.5436 0.5029 401
Man of Steel (2013) 0.4728 0.2175 0.2979 400
Man on Fire (2004) 0.8738 0.9327 0.9023 401
Manchester By The Sea (2016) 0.5994 0.4750 0.5300 400
Margin Call (2011) 0.6493 0.7175 0.6817 400
Marley and Me (2008) 0.4409 0.5225 0.4783 400
Marriage Story (2019) 0.5712 0.8325 0.6775 400
Master and Commander The Far Side of the World (2003) 0.3432 0.4065 0.3721 401
Match Point (2005) 0.5094 0.4713 0.4896 401
Matchstick Men (2003) 0.4962 0.3267 0.3940 401
Matilda (1996) 0.6205 0.6933 0.6549 401
Maverick (1994) 0.4429 0.6200 0.5167 400
Me Before You (2016) 0.4805 0.5835 0.5270 401
Mean Girls (2004) 0.4496 0.7925 0.5738 400
Meet Joe Black (1998) 0.4892 0.7350 0.5874 400
Megamind (2010) 0.5087 0.4375 0.4704 400
Melancholia (2011) 0.7687 0.2575 0.3858 400
Memento (2000) 0.5581 0.3000 0.3902 400
Memoirs of a Geisha (2005) 0.5223 0.4975 0.5096 400
Men of Honor (2000) 0.5177 0.1820 0.2694 401
Michael Clayton (2007) 0.5187 0.6925 0.5931 400
Midnight In Paris (2011) 0.4034 0.9377 0.5641 401
Milk (2008) 0.4388 0.4289 0.4338 401
Millers Crossing (1990) 0.5395 0.4763 0.5060 401
Million Dollar Baby (2004) 0.6497 0.5750 0.6101 400
Misery (1990) 0.3380 0.7875 0.4730 400
Mission Impossible (1996) 0.4398 0.5750 0.4984 400
Mission Impossible - Fallout (2018) 0.4907 0.1322 0.2083 401
Mission Impossible Ghost Protocol (2011) 0.4176 0.4625 0.4389 400
Mission Impossible Rogue Nation (2015) 0.4069 0.2950 0.3420 400
Moana (2016) 0.5353 0.3591 0.4299 401
Mollys Game (2017) 0.4598 0.2575 0.3301 400
Monster (2003) 0.5258 0.3825 0.4428 400
Monsters Inc (2001) 0.5963 0.7257 0.6547 401
Monsters University (2013) 0.7132 0.6775 0.6949 400
Moon (2009) 0.6856 0.3925 0.4992 400
Moonlight (2016) 0.5952 0.4300 0.4993 400
Moonrise Kingdom (2012) 0.6000 0.5175 0.5557 400
Moulin Rouge! (2001) 0.5224 0.6425 0.5762 400
Mr Brooks (2007) 0.6138 0.5325 0.5703 400
Mr Nobody (2009) 0.5854 0.1796 0.2748 401
Mud (2012) 0.4898 0.1800 0.2633 400
Mulan (1998) 0.7275 0.7456 0.7365 401
Mulholland Drive (2001) 0.3871 0.1496 0.2158 401
Munich (2005) 0.5506 0.1225 0.2004 400
My Cousin Vinny (1992) 0.4281 0.6175 0.5056 400
Mystic River (2003) 0.3042 0.6775 0.4198 400
Napoleon Dynamite (2004) 0.3962 0.6758 0.4995 401
National Lampoons Christmas Vacation (1989) 0.5644 0.6025 0.5828 400
Natural Born Killers (1994) 0.4461 0.2269 0.3008 401
Nebraska (2013) 0.7076 0.7925 0.7476 400
Never Let Me Go (2010) 0.5848 0.4050 0.4786 400
Nightcrawler (2014) 0.4161 0.3225 0.3634 400
Nightmare Alley (2021) 0.5771 0.6175 0.5966 400
No Country For Old Men (2007) 0.6327 0.3100 0.4161 400
No Time To Die (2021) 0.3163 0.2469 0.2773 401
Nobody (2021) 0.7236 0.5810 0.6445 401
Nocturnal Animals (2016) 0.4408 0.4638 0.4520 401
Nomadland (2020) 0.4560 0.6600 0.5393 400
Notting Hill (1999) 0.4463 0.4675 0.4567 400
Now You See Me (2013) 0.3529 0.5250 0.4221 400
Oblivion (2013) 0.3903 0.3017 0.3404 401
Oceans Eleven (2001) 0.4604 0.5675 0.5084 400
Okja (2017) 0.5016 0.3850 0.4356 400
Old School (2003) 0.5448 0.3641 0.4365 401
Once (2006) 0.6199 0.2643 0.3706 401
One Day (2011) 0.6077 0.3940 0.4781 401
One Hundred And One Dalmatians (1961) 0.9373 0.9327 0.9350 401
Only Lovers Left Alive (2013) 0.5747 0.4425 0.5000 400
Paddington (2014) 0.5673 0.1471 0.2337 401
Paranorman (2012) 0.4834 0.6550 0.5563 400
Passengers (2016) 0.4547 0.5900 0.5136 400
Past Lives (2023) 0.8246 0.2350 0.3658 400
Patriots Day (2016) 0.5385 0.2275 0.3199 400
Pay It Forward (2000) 0.4623 0.2294 0.3067 401
Payback (1999) 0.4663 0.9350 0.6223 400
Perfume - The Story Of A Murderer (2006) 0.6254 0.5300 0.5737 400
Phantom Thread (2017) 0.5599 0.6409 0.5977 401
Philadelphia (1993) 0.4553 0.5600 0.5022 400
Philomena (2013) 0.5709 0.4015 0.4714 401
Phone Booth (2002) 0.5202 0.9000 0.6593 400
Pi (1998) 0.9211 0.9050 0.9130 400
Pitch Black (2000) 0.4224 0.2450 0.3101 400
Planes, Trains Automobiles (1987) 0.4868 0.0925 0.1555 400
Planet Of The Apes (1968) 0.3930 0.8450 0.5365 400
Planet Terror (2007) 0.6043 0.6933 0.6458 401
Platoon (1986) 0.5235 0.6975 0.5981 400
Pleasantville (1998) 0.4378 0.5436 0.4850 401
Point Break (1991) 0.4038 0.0525 0.0929 400
Precious (2009) 0.5941 0.5985 0.5963 401
Predestination (2014) 0.4545 0.2244 0.3005 401
Pretty Woman (1990) 0.5459 0.7575 0.6346 400
Pride and Prejudice (2005) 0.5903 0.6683 0.6269 401
Primal Fear (1996) 0.4868 0.3675 0.4188 400
Prisoners (2013) 0.4582 0.5062 0.4810 401
Promising Young Woman (2020) 0.2519 0.6608 0.3648 401
Pulp Fiction (1994) 0.3475 0.6933 0.4629 401
Punch Drunk Love (2002) 0.4899 0.8525 0.6223 400
Puss In Boots The Last Wish (2022) 0.5405 0.5000 0.5195 400
Rambo (2008) 0.5714 0.0798 0.1400 401
Rango (2009) 0.5197 0.4950 0.5070 400
Ray (2004) 0.5421 0.7400 0.6258 400
Ready Player One (2018) 0.3582 0.1796 0.2392 401
Real Steel (2011) 0.5627 0.4150 0.4777 400
Red (2010) 0.5368 0.3100 0.3930 400
Red Dragon (2002) 0.4467 0.4500 0.4483 400
Remeber The Titans (2000) 0.5893 0.1650 0.2578 400
Remember Me (2010) 0.4601 0.7182 0.5609 401
Requiem for a Dream DIRECTORS CUT (2000) 0.5714 0.1100 0.1845 400
Rescue Dawn (2006) 0.6337 0.3840 0.4783 401
Reservoir Dogs (1992) 0.5990 0.8625 0.7070 400
Revolutionary Road (2008) 0.5553 0.6135 0.5829 401
Rio (2011) 0.4704 0.4375 0.4534 400
Rio 2 (2014) 0.5060 0.7406 0.6012 401
Road to Predition (2002) 0.4278 0.3925 0.4094 400
RoboCop (1987) 0.5237 0.4700 0.4954 400
Rock n Rolla (2008) 0.6222 0.9100 0.7391 400
Rocketman (2019) 0.4236 0.3325 0.3725 400
Rocky Balboa (2006) 0.7449 0.6350 0.6856 400
Rogue One (2016) 0.2613 0.3325 0.2926 400
Ronin (1998) 0.4873 0.4300 0.4568 400
Room (2015) 0.3964 0.6075 0.4798 400
Rounders (1998) 0.5740 0.6484 0.6089 401
Ruby Gillman Teenage Kraken (2023) 0.4572 0.6275 0.5290 400
Ruby Sparks (2012) 0.5032 0.3900 0.4394 400
Runaway Jury (2003) 0.4435 0.2750 0.3395 400
Running Scared (2006) 0.5732 0.5661 0.5696 401
Rush (2013) 0.4595 0.8925 0.6066 400
Rushmore (1998) 0.4171 0.6475 0.5073 400
Saving Mr. Banks (2013) 0.3780 0.6775 0.4852 400
Saving Private Ryan (1998) 0.5335 0.7575 0.6260 400
Scent of a Woman (1992) 0.4619 0.7107 0.5599 401
Schindlers List (1993) 0.5674 0.5985 0.5825 401
Scott Pilgrim vs the World (2010) 0.6708 0.2693 0.3843 401
Se7en (1995) 0.4619 0.2725 0.3428 400
Searching (2018) 0.4832 0.7925 0.6004 400
Sense And Sensibility (1995) 0.5941 0.4500 0.5121 400
Serenity (2005) 0.5090 0.7100 0.5929 400
Seven Pounds (2008) 0.7059 0.4190 0.5258 401
Seven Psychopaths (2012) 0.5377 0.5350 0.5363 400
Seven Years In Tibet (1997) 0.4008 0.2475 0.3060 400
Shakespeare In Love (1998) 0.5076 0.6700 0.5776 400
Shame (2011) 0.5136 0.6125 0.5587 400
Shaun Of The Dead (2004) 0.4636 0.5237 0.4918 401
Sherlock Holmes (2009) 0.5751 0.7830 0.6631 401
Sherlock Holmes A Game Of Shadows (2011) 0.5191 0.6085 0.5603 401
Shrek (2001) 0.4619 0.6675 0.5460 400
Shrek 2 (2004) 0.5172 0.5262 0.5216 401
Side Effects (2013) 0.3844 0.6400 0.4803 400
Sideways (2004) 0.4768 0.3850 0.4260 400
Silence (2016) 0.5423 0.2718 0.3621 401
Silver Linings Playbook (2012) 0.6556 0.4414 0.5276 401
Sin City EXTENDED and UNRATED (2005) 0.6287 0.9626 0.7606 401
Sing (2016) 0.5808 0.3775 0.4576 400
Sing 2 (2021) 0.4771 0.1820 0.2635 401
Sing Street (2016) 0.6000 0.3516 0.4434 401
Skull (2022) 0.2971 0.3825 0.3344 400
Skyfall (2012) 0.5909 0.1950 0.2932 400
Sleepers (1996) 0.3111 0.0350 0.0629 400
Slumdog Millionaire (2008) 0.7642 0.2344 0.3588 401
Snatch (2000) 0.5456 0.8504 0.6647 401
Snowden (2016) 0.4133 0.0775 0.1305 400
Soul (2020) 0.4853 0.3300 0.3929 400
Sound Of Metal (2019) 0.7364 0.6075 0.6658 400
Source Code (2011) 0.5785 0.9002 0.7044 401
South Park Bigger Longer and Uncut (1999) 0.8966 0.8675 0.8818 400
Southpaw (2015) 0.5204 0.1275 0.2048 400
Speed (1994) 0.4054 0.1875 0.2564 400
Spider Man 2 (2004) 0.5273 0.4350 0.4767 400
Spider-Man Across The Spider-Verse (2023) 0.4315 0.6908 0.5312 401
Spider-Man Into The Spider-Verse (2018) 0.4130 0.3025 0.3492 400
Spider-Man No Way Home (2021) 0.2647 0.1347 0.1785 401
Spirited Away (2001) 0.8676 0.7375 0.7973 400
Spotlight (2015) 0.4403 0.6450 0.5233 400
Spy (2015) 0.3617 0.4250 0.3908 400
Spy Game (2001) 0.4286 0.2843 0.3418 401
St. Vincent (2014) 0.5396 0.3575 0.4301 400
Star Trek (2009) 0.4545 0.0500 0.0901 400
Star Trek Beyond (2016) 0.3930 0.3625 0.3771 400
Star Trek First Contact (1996) 0.3602 0.5686 0.4410 401
Star Trek II The Wrath of Khan (1982) 0.5143 0.7650 0.6151 400
Star Trek Into Darkness (2013) 0.2690 0.6883 0.3868 401
Star Wars Episode III - Revenge Of The Sith (2005) 0.2980 0.2600 0.2777 400
Star Wars Episode IV - A New Hope (1977) 0.4824 0.3775 0.4236 400
Star Wars Episode V - The Empire Strikes Back (1980) 0.2594 0.3092 0.2821 401
Star Wars Episode VI - Return Of The Jedi (1983) 0.3585 0.3666 0.3625 401
Star Wars Episode VII - The Force Awakens (2015) 0.3946 0.1446 0.2117 401
Stardust (2007) 0.4772 0.7307 0.5773 401
Starship Troopers (1997) 0.2371 0.1150 0.1549 400
State Of Play (2009) 0.5658 0.4300 0.4886 400
Steve Jobs (2015) 0.4419 0.2950 0.3538 400
Still Alice (2014) 0.4340 0.5175 0.4721 400
Straight Outta Compton (2015) 0.5160 0.2825 0.3651 400
Stranger Than Fiction (2006) 0.5509 0.6883 0.6120 401
Sunshine (2007) 0.7303 0.5536 0.6298 401
Super 8 (2011) 0.4876 0.2450 0.3261 400
Super Size Me (2004) 0.7339 0.6825 0.7073 400
Superman (1978) 0.3103 0.6075 0.4108 400
T2 Trainspotting (2017) 0.6030 0.8105 0.6915 401
TMNT (2007) 0.6096 0.8529 0.7110 401
Taken (2008) 0.3659 0.6000 0.4545 400
Tangled (2010) 0.4848 0.5561 0.5180 401
Tarzan (1999) 0.9054 0.8375 0.8701 400
Team America World Police (2004) 0.5423 0.7050 0.6130 400
Terminator 2 (1991) 0.5593 0.1646 0.2543 401
Terms And Conditions May Apply (2013) 0.4271 0.3075 0.3576 400
Thank You For Smoking (2005) 0.4132 0.7500 0.5329 400
The Abyss (1989) 0.3123 0.3100 0.3112 400
The Adjustment Bureau (2011) 0.4576 0.7406 0.5657 401
The Adventures of Tintin (2011) 0.5122 0.3150 0.3901 400
The Assassination Of Jesse James By The Coward Robert Ford (2007) 0.7204 0.5475 0.6222 400
The Aviator (2004) 0.5211 0.5860 0.5516 401
The Ballad Of Buster Scruggs (2018) 0.6377 0.2200 0.3271 400
The Bank Job (2008) 0.6955 0.8828 0.7780 401
The Banshees Of Inisherin (2022) 0.4099 0.2275 0.2926 400
The Basketball Diaries (1995) 0.5607 0.4500 0.4993 400
The Batman (2022) 0.6495 0.6300 0.6396 400
The Big Short (2015) 0.4253 0.5550 0.4816 400
The Big Sick (2017) 0.3757 0.8425 0.5197 400
The Blind Side (2009) 0.5442 0.6000 0.5707 400
The Boat That Rocked (2009) 0.5050 0.1275 0.2036 400
The Book Thief (2013) 0.5443 0.5375 0.5409 400
The Boondock Saints (1999) 0.4439 0.2175 0.2919 400
The Bourne Supremacy (2004) 0.6262 0.6309 0.6286 401
The Bourne Ultimatum (2007) 0.4662 0.3267 0.3842 401
The Bourne identity (2002) 0.4932 0.1825 0.2664 400
The Boy in the Striped Pyjamas (2008) 0.5362 0.7600 0.6287 400
The Breakfast Club (1985) 0.5585 0.9075 0.6914 400
The Bucket List (2007) 0.5329 0.6875 0.6004 400
The Butler (2013) 0.5242 0.6209 0.5685 401
The Butterfly Effect (2004) 0.5735 0.0975 0.1667 400
The Chronicles of Narnia - The Lion, The Witch, and The Wardrobe (2005) 0.3104 0.3275 0.3187 400
The Cider House Rules (1999) 0.4052 0.4314 0.4179 401
The Constant Gardener (2005) 0.7692 0.0750 0.1367 400
The Count Of Monte Cristo (2002) 0.3188 0.0549 0.0936 401
The Covenant (2023) 0.5000 0.7175 0.5893 400
The Croods (2013) 0.4897 0.5950 0.5372 400
The Crow (1994) 0.6378 0.6983 0.6667 401
The Curious Case of Benjamin Button (2008) 0.7069 0.1025 0.1790 400
The Curse Of The Were-Rabbit (2005) 0.6110 0.5575 0.5830 400
The Danish Girl (2015) 0.6807 0.7282 0.7036 401
The Darjeeling Limited (2007) 0.6205 0.9075 0.7371 400
The Dark Knight Rises (2012) 0.5973 0.6675 0.6305 400
The Death Of Stalin (2017) 0.6165 0.6150 0.6158 400
The Departed (2006) 0.5228 0.6300 0.5714 400
The Descendants(2011) 0.4817 0.4913 0.4864 401
The Devil All The Time (2020) 0.7143 0.0125 0.0245 401
The Disaster Artist (2017) 0.7800 0.0975 0.1733 400
The Dreamers (2003) 0.4368 0.6650 0.5273 400
The Drop (2014) 0.5186 0.6975 0.5949 400
The Emperors New Groove (2000) 0.7377 0.7506 0.7441 401
The English Patient (1996) 0.4708 0.3217 0.3822 401
The Equalizer (2014) 0.4343 0.5686 0.4924 401
The Fall (2006) 0.5392 0.6683 0.5969 401
The Father (2020) 0.4088 0.9300 0.5679 400
The Favourite (2018) 0.6642 0.6808 0.6724 401
The Fifth Element Remastered (1997) 0.5172 0.2244 0.3130 401
The Fighter (2010) 0.6929 0.7032 0.6980 401
The Florida Project (2017) 0.4411 0.7950 0.5674 400
The Founder (2016) 0.4231 0.6325 0.5070 400
The Fountian (2004) 0.5434 0.5950 0.5680 400
The French Connection (1971) 0.5509 0.4600 0.5014 400
The Fugitive (1993) 0.6111 0.0274 0.0525 401
The Full Monty (1997) 0.5539 0.6409 0.5942 401
The Game (1997) 0.4141 0.4100 0.4121 400
The Gentlemen (2019) 0.5000 0.0773 0.1339 401
The Ghost Writer (2010 0.5058 0.3267 0.3970 401
The Gift (2015) 0.8000 0.0698 0.1284 401
The Girl with the Dragon Tattoo (2011) 0.4755 0.4125 0.4418 400
The Godfather Part 3 (1990) 0.6612 0.9125 0.7668 400
The Grand Budapest Hotel (2014) 0.7329 0.9400 0.8237 400
The Greatest Showman (2017) 0.2654 0.5500 0.3580 400
The Green Mile (1999) 0.4332 0.8025 0.5627 400
The Hateful Eight (2015) 0.6427 0.9400 0.7635 400
The Help (2011) 0.4945 0.6783 0.5720 401
The Hobbit An Unexpected Journey (2012) 0.3852 0.7925 0.5184 400
The Hobbit The Battle of the Five Armies (2014) 0.4437 0.8055 0.5722 401
The Hobbit The Desolation of Smaug (2013) 0.4663 0.6035 0.5261 401
The Hours (2002) 0.4442 0.4364 0.4403 401
The Hunchback of Notre Dame (1996) 0.6394 0.5675 0.6013 400
The Hunger Games (2012) 0.6432 0.3865 0.4829 401
The Hunger Games Catching Fire (2013) 0.7429 0.5850 0.6545 400
The Hunt for Red October (1990) 0.3553 0.4564 0.3996 401
The Hurricane (1999) 0.5380 0.2475 0.3390 400
The Hurt Locker (2008) 0.6114 0.5611 0.5852 401
The Ides of March (2011) 0.5154 0.5450 0.5298 400
The Illusionist (2006) 0.5594 0.8825 0.6848 400
The Imitation Game (2014) 0.3608 0.7325 0.4835 400
The Impossible (2012) 0.6523 0.6409 0.6465 401
The Incredibles (2004) 0.3755 0.4750 0.4194 400
The Intern (2015) 0.5917 0.6434 0.6165 401
The Irishman (2019) 0.5424 0.4000 0.4604 400
The Iron Giant (1999) 0.7792 0.7850 0.7821 400
The Italian Job (2003) 0.4212 0.3267 0.3680 401
The Jacket (2005) 0.6386 0.1322 0.2190 401
The Judge (2014) 0.3123 0.6925 0.4305 400
The Jungle Book (2016) 0.3856 0.7357 0.5060 401
The Karate Kid (1984) 0.5647 0.2394 0.3363 401
The Karate Kid (2010) 0.5909 0.6808 0.6327 401
The Kids Are All Right (2010) 0.5639 0.8825 0.6881 400
The Killing Of A Sacred Deer (2017) 0.4893 0.5112 0.5000 401
The King (2019) 0.5880 0.6100 0.5988 400
The Kingdom (2007) 0.5493 0.5575 0.5533 400
The Kings Speech (2010) 0.5477 0.8475 0.6654 400
The LEGO Batman Movie (2017) 0.5838 0.7382 0.6520 401
The Last Boy Scout (1991) 0.4732 0.3975 0.4321 400
The Last Duel (2021) 0.3384 0.3875 0.3613 400
The Last King of Scotland (2006) 0.6667 0.1995 0.3071 401
The Last Samurai (2003) 0.4664 0.3125 0.3743 400
The Last of the Mohicans DDC (1992) 0.7755 0.9500 0.8539 400
The Lego Movie (2014) 0.6085 0.7132 0.6567 401
The Life Aquatic with Steve Zissou (2004) 0.4473 0.8275 0.5807 400
The Life Of David Gale (2013) 0.4576 0.4450 0.4512 400
The Lighthouse (2019) 0.6576 0.9052 0.7618 401
The Lincoln Lawyer (2011) 0.5935 0.6900 0.6382 400
The Little Mermaid (2023) 0.3161 0.3525 0.3333 400
The Lobster (2015) 0.6875 0.1650 0.2661 400
The Lord Of The Rings The Fellowship Of The Ring (2001) 0.4149 0.0975 0.1579 400
The Lord Of The Rings The Return Of The King (2003) 0.3679 0.3541 0.3609 401
The Lord Of The Rings The Two Towers (2002) 0.2681 0.3525 0.3045 400
The Machinist (2004) 0.4915 0.5775 0.5310 400
The Man From U.N.C.L.E. (2015) 0.4177 0.0825 0.1378 400
The Man From the Earth (2007) 0.3890 0.7032 0.5009 401
The Man Who Wasnt There (2001) 0.6651 0.7132 0.6883 401
The Martian (2015) 0.5208 0.2500 0.3378 400
The Master (2012) 0.7697 0.3175 0.4496 400
The Matrix (1999) 0.4371 0.6600 0.5259 400
The Mitchells Vs The Machines (2021) 0.4017 0.3625 0.3811 400
The Mule (2018) 0.3420 0.4275 0.3800 400
The Mummy (1999) 0.4104 0.3150 0.3564 400
The Next Three Days (2010) 0.4321 0.4763 0.4531 401
The Nightmare Before Christmas (1993) 0.3862 0.5175 0.4423 400
The Northman (2022) 0.3975 0.6250 0.4859 400
The Notebook (2004) 0.4864 0.5786 0.5285 401
The Passion Of The Christ (2004) 0.4553 0.8400 0.5905 400
The Patriot Extended Cut (2000) 0.4876 0.2444 0.3256 401
The Perks of Being a Wallflower (2012) 0.5978 0.6800 0.6363 400
The Phantom of the Opera (2004) 0.4534 0.5225 0.4855 400
The Pianist (2002) 0.4419 0.3791 0.4081 401
The Place Beyond the Pines (2012) 0.8906 0.4264 0.5767 401
The Post (2017) 0.6421 0.4350 0.5186 400
The Prestige (2006) 0.3506 0.4725 0.4026 400
The Prince Of Egypt (1998) 0.6712 0.6175 0.6432 400
The Princess Bride (1987) 0.3838 0.5686 0.4583 401
The Princess and the Frog (2009) 0.6174 0.5325 0.5718 400
The Pursuit of Happyness (2006) 0.5430 0.7575 0.6326 400
The Queen (2006) 0.4670 0.6550 0.5453 400
The Reader (2008) 0.6516 0.5750 0.6109 400
The Revenant (2015) 0.3069 0.7731 0.4394 401
The Road (2009) 0.5704 0.8300 0.6762 400
The Rock (1996) 0.2138 0.1625 0.1847 400
The School of Rock (2003) 0.3681 0.8300 0.5100 400
The Sea Beast (2022) 0.5568 0.2444 0.3397 401
The Secret Life Of Pets (2016) 0.5756 0.5411 0.5578 401
The Secret Life Of Pets 2 (2019) 0.4808 0.5000 0.4902 400
The Shape Of Water (2017) 0.4975 0.7325 0.5925 400
The Silence Of The Lambs (1991) 0.5263 0.2244 0.3147 401
The Simpsons Movie (2007) 0.8548 0.9125 0.8827 400
The Sixth Sense (1999) 0.4706 0.4190 0.4433 401
The Spectacular Now (2013) 0.4679 0.3100 0.3729 400
The Suicide Squad (2021) 0.6483 0.3815 0.4804 401
The Super Mario Bros. Movie (2023) 0.6221 0.6035 0.6127 401
The Talented Mr. Ripley (1999) 0.5183 0.7425 0.6105 400
The Theory of Everything (2014) 0.5816 0.2843 0.3819 401
The Thin Red Line (1998) 0.4088 0.5761 0.4783 401
The Time Travelers Wife (2009) 0.5364 0.5900 0.5619 400
The Town EXTENDED (2010) 0.4769 0.5411 0.5070 401
The Trial Of The Chicago 7 (2020) 0.4139 0.7431 0.5317 401
The Two Popes (2019) 0.9231 0.1500 0.2581 400
The Unforgivable (2021) 0.5938 0.8150 0.6870 400
The Usual Suspects (1995) 0.4131 0.2675 0.3247 400
The Virgin Suicides (1999) 0.4605 0.3342 0.3873 401
The Walk (2015) 0.5257 0.2294 0.3194 401
The Warriors (1979) 0.5491 0.5175 0.5328 400
The Way Back (2010) 0.6550 0.3275 0.4367 400
The Way Way Back (2013) 0.4791 0.5450 0.5099 400
The Wrestler (2008) 0.6657 0.5711 0.6148 401
The X Files (1998) 0.1875 0.0150 0.0277 401
Thelma And Louise (1991) 0.4890 0.3875 0.4324 400
There Will Be Blood (2007) 0.4554 0.1275 0.1992 400
Theres Something About Mary EXTENDED (1998) 0.3823 0.5910 0.4643 401
They Live (1988) 0.8525 0.1297 0.2251 401
This Is England (2006) 0.5714 0.5686 0.5700 401
Thor (2011) 0.3333 0.0274 0.0507 401
Three Billboards Outside Ebbing, Missouri (2017) 0.2487 0.2425 0.2456 400
Three Kings (1999) 0.5838 0.5225 0.5515 400
Tick Tick...Boom (2021) 0.2479 0.0723 0.1120 401
Tinker Tailor Soldier Spy (2011) 0.6074 0.7425 0.6682 400
To All The Boys Ive Loved Before (2018) 0.7866 0.9377 0.8555 401
Tombstone (1993) 0.4777 0.5337 0.5041 401
Total Recall (1990) 0.7984 0.5137 0.6252 401
Traffic (2000) 0.4630 0.5475 0.5017 400
Training Day (2001) 0.4545 0.3000 0.3614 400
Transformers (2007) 0.4972 0.4425 0.4683 400
Treasure Planet (2002) 0.6865 0.7225 0.7040 400
Tremors (1990) 0.3796 0.7506 0.5042 401
Troy (2004) 0.5641 0.2750 0.3697 400
True Grit (2010) 0.6935 0.3225 0.4403 400
True Lies (1994) 0.4106 0.1550 0.2250 400
True Romance (1993) 0.3758 0.3100 0.3397 400
Turning Red (2022) 0.5662 0.7375 0.6406 400
Unbreakable (2000) 0.4908 0.2000 0.2842 400
Unbroken (2014) 0.4465 0.4264 0.4362 401
Uncut Gems (2019) 0.2834 0.5761 0.3799 401
Underworld - Extended Edition (2003) 0.4193 0.7925 0.5484 400
Unforgiven (1992) 0.5133 0.3850 0.4400 400
United 93 (2006) 0.7197 0.2369 0.3565 401
Unleashed (2005) 0.5491 0.6833 0.6089 401
Up (2009) 0.5058 0.3267 0.3970 401
Up In The Air (2009) 0.4252 0.4963 0.4580 401
Upgrade (2018) 0.3430 0.4439 0.3870 401
V for Vendetta (2006) 0.5000 0.1825 0.2674 400
Valkyrie (2008) 0.4815 0.6175 0.5411 400
Vice (2018) 0.8776 0.1075 0.1915 400
Vicky Cristina Barcelona (2008) 0.4326 0.8404 0.5712 401
Walk the Line EXTENDED (2005) 0.3889 0.3491 0.3679 401
War Dogs (2016) 0.8361 0.1272 0.2208 401
War For The Planet Of The Apes (2017) 0.3387 0.1571 0.2147 401
War Horse (2011) 0.5603 0.3242 0.4107 401
Warrior (2011) 0.5812 0.2768 0.3750 401
Watchmen (2009) 0.4901 0.7406 0.5899 401
We Bought a Zoo (2011) 0.6930 0.7431 0.7172 401
We Need to Talk About Kevin (2011) 0.4674 0.6259 0.5352 401
Wedding Crashers (2005) 0.6978 0.6334 0.6641 401
Were Were Soldiers (2002) 0.5098 0.7175 0.5961 400
What We Do in the Shadows (2014) 0.5335 0.7556 0.6254 401
Whats Eating Gilbert Grape (1993) 0.5360 0.8000 0.6419 400
Where The Crawdads Sing (2022) 0.4861 0.0875 0.1483 400
Whiplash (2014) 0.4488 0.6025 0.5144 400
Wild (2014) 0.5556 0.4988 0.5256 401
Willow (1988) 0.2898 0.6359 0.3981 401
Wind River (2017) 0.4966 0.7225 0.5886 400
Winters Bone (2010) 0.7907 0.4250 0.5528 400
Wonder (2017) 0.4894 0.2875 0.3622 400
World War Z (2013) 0.4301 0.2075 0.2799 400
Wrath Of Man (2021) 0.6048 0.5700 0.5869 400
X Men Days of Future Past (2014) 0.3333 0.0025 0.0050 400
X Men First Class (2011) 0.7027 0.0648 0.1187 401
X-Men (2000) 0.2745 0.2450 0.2589 400
X-Men 2 (2003) 0.3333 0.0249 0.0464 401
Zack Snyders Justice League (2021) 0.4468 0.8400 0.5833 400
Zero Dark Thirty (2012) 0.3645 0.2825 0.3183 400
Zodiac (2007) 0.4566 0.7481 0.5671 401
Zootopia (2016) 0.6239 0.3650 0.4606 400
shooter (2007) 0.5710 0.4725 0.5171 400
accuracy 0.4962 321922
macro avg 0.5183 0.4962 0.4704 321922
weighted avg 0.5183 0.4962 0.4704 321922
```
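To try the model on a single frame, a minimal sketch assuming the checkpoint follows the standard `transformers` image-classification interface and that `frame.jpg` is a placeholder screenshot:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="dima806/movie_identification_by_frame",
)

# "frame.jpg" is a placeholder path to a movie screenshot.
for p in classifier("frame.jpg", top_k=5):
    print(f"{p['label']}: {p['score']:.4f}")
```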
|
[
"10 things i hate about you (1999)",
"12 monkeys (1995)",
"12 years a slave (2013)",
"127 hours (2010)",
"13 hours the secret soldiers of benghazi (2016)",
"1917 (2019)",
"21 grams (2003)",
"25th hour (2002)",
"300 (2006)",
"310 to yuma (2007)",
"500 days of summer (2009)",
"a beautiful mind (2001)",
"a bronx tale (1993)",
"a bugs life (1998)",
"a few good men (1992)",
"a fish called wanda (1988)",
"a good person (2023)",
"a history of violence (2005)",
"a league of their own (1992)",
"a man called otto (2022)",
"a scanner darkly (2006)",
"a serious man (2009)",
"a single man (2009)",
"a star is born (2018)",
"a time to kill (1996)",
"a walk to remember (2002)",
"about schmidt (2002)",
"about time (2013)",
"about a boy (2002)",
"across the universe (2007)",
"adaptation (2002)",
"air (2023)",
"aladdin (1992)",
"aliens special edition (1986)",
"allied (2016)",
"almost famous extended (2000)",
"american beauty (1999)",
"american gangster (2007)",
"american history x (1998)",
"american hustle (2013)",
"american sniper (2014)",
"an education (2009)",
"anastasia (1997)",
"anchorman the legend of ron burgundy (2004)",
"apocalypto (2006)",
"apollo 13 (1995)",
"argo (2012)",
"army of darkness (1992)",
"as good as it gets (1997)",
"atonement (2007)",
"august rush (2007)",
"austin powers - international man of mystery (1997)",
"avatar (2009)",
"avatar the way of water (2022)",
"awakenings (1990)",
"babel (2006)",
"baby driver (2017)",
"babylon (2022)",
"back to the future ii (1989)",
"back to the future iii (1990)",
"bad times at the el royale (2018)",
"barbie (2023)",
"basic instinct (1992)",
"batman (1989)",
"batman begins (2005)",
"batman returns (1992)",
"beauty and the beast (2017)",
"beauty and the beast (1991)",
"before midnight (2013)",
"before sunrise (1995)",
"before sunset (2004)",
"before the devil knows youre dead (2007)",
"begin again (2013)",
"being john malkovich (1999)",
"ben-hur (1959)",
"beveryly hills cop (1984)",
"big (1988)",
"big fish (2003)",
"billy elliot (2000)",
"birdman (2014)",
"black hawk down (2001)",
"black mirror bandersnatch (2018)",
"black panther (2018)",
"blade (1998)",
"blade runner 2049 (2017)",
"blow (2001)",
"blue jasmine (2013)",
"blue valentine (2010)",
"bohemian rhapsody (2018)",
"boogie nights (1997)",
"booksmart (2019)",
"bowling for columbine (2002)",
"boyhood (2014)",
"boys dont cry (1999)",
"boyz n the hood (1991)",
"braveheart (1995)",
"brick (2005)",
"bridge of spies (2015)",
"bridge to terabithia (2007)",
"brokeback mountain (2005)",
"broken flowers (2005)",
"bronson (2008)",
"brooklyn (2015)",
"brothers (2009)",
"buried (2010)",
"burn after reading (2008)",
"coda (2021)",
"call me by your name (2017)",
"cape fear (1991)",
"captain america civil war (2016)",
"captain fantastic (2016)",
"captain phillips (2013)",
"carnage (2011)",
"carol (2015)",
"cars (2006)",
"casino (1995)",
"cast away (2000)",
"catch me if you can (2002)",
"changeling (2008)",
"charlie wilsons war (2007)",
"charlie and the chocolate factory (2005)",
"chasing amy (1997)",
"chef (2014)",
"chicago (2002)",
"chicken run (2000)",
"children of men (2006)",
"chocolat (2000)",
"chronicle (2012)",
"cinderella man (2005)",
"clerks 2 (2006)",
"closer (2004)",
"cloud atlas (2012)",
"cloverfield (2008)",
"coach carter (2005)",
"coherence (2013)",
"cold moutians (2003)",
"collateral (2004)",
"constantine (2005)",
"contact (1997)",
"cop land (1997)",
"coraline (2009)",
"corpse bride (2005)",
"crash (2004)",
"creed (2015)",
"creed ii (2018)",
"crimson tide (1995)",
"cruella (2021)",
"cube (1997)",
"dancer in the dark (2000)",
"dances with wolves (1990)",
"dark city (1998)",
"darkest hour (2017)",
"dawn of the dead (2004)",
"dawn of the planet of the apes (2014)",
"dazed and confused (1993)",
"dead man (1995)",
"death at a funeral (2007)",
"death proof (2007)",
"definitely maybe (2008)",
"deja vu (2006)",
"demolition (2015)",
"desperado (1995)",
"despicable me (2010)",
"die hard 2 (1990)",
"die hard 3 (1995)",
"die hard 4 (2007)",
"dirty harry (1971)",
"doctor strange (2016)",
"doctor strange in the multiverse of madness (2022)",
"dogma (1999)",
"dogville (2003)",
"donnie brasco (1997)",
"donnie darko directors cut (2001)",
"dont look up (2021)",
"doubt (2008)",
"dr. no (1962)",
"dr. strangelove or how i learned to stop worrying and love the bomb (1964)",
"dredd (2012)",
"drive (2011)",
"dune (2021)",
"dungeons dragons honor among thieves (2023)",
"dunkirk (2017)",
"eastern promises (2007)",
"election (1999)",
"elemental (2023)",
"elf (2003)",
"elizabeth (1998)",
"elvis (2022)",
"encanto (2021)",
"enchanted (2007)",
"end of watch (2012)",
"enemy at the gates (2001)",
"enemy of the state (1998)",
"enter the dragon (1973)",
"equilibrium (2002)",
"erin brockovich (2000)",
"escape from new york (1981)",
"eternal sunshine of the spotless mind (2004)",
"ever after a cinderella story (1998)",
"everest (2015)",
"everything everywhere all at once (2022)",
"extraction 2 (2023)",
"eyes wide shut (1999)",
"face off (1997)",
"fahrenheit 9 11 (2004)",
"falling down (1993)",
"fantastic mr fox (2009)",
"fargo (1996)",
"fear and loathing in las vegas (1998)",
"fences (2016)",
"filth (2013)",
"finding dory (2016)",
"finding nemo (2003)",
"finding neverland (2004)",
"first man (2018)",
"flags of our fathers (2006)",
"flight (2012)",
"ford v ferrari (2019)",
"forgetting sarah marshall (2008)",
"four weddings and a funeral (1994)",
"foxcatcher (2014)",
"fracture (2007)",
"frequency (2000)",
"friday (1995)",
"from dusk till dawn (1996)",
"frost nixon (2008)",
"frozen (2013)",
"furious 6 (2013)",
"furious seven (2015)",
"galaxy quest (1999)",
"gangs of new york (2002)",
"gattaca (1997)",
"ghandi (1982)",
"ghost (1990)",
"ghost world (2001)",
"ghostbusters (1984)",
"ghostbusters afterlife (2021)",
"gifted (2017)",
"girl interrupted (1999)",
"gladiator extended remastered (2000)",
"glengarry glen ross (1992)",
"goldfinger (1964)",
"gone baby gone (2007)",
"gone girl (2014)",
"good time (2017)",
"good will hunting (1997)",
"goodfellas (1990)",
"gran torino (2008)",
"gravity (2013)",
"grease (1978)",
"green book (2018)",
"green street hooligans (2005)",
"greyhound (2020)",
"grindhouse (2007)",
"guardians of the galaxy vol. 2 (2017)",
"guardians of the galaxy (2014)",
"hachiko - a dogs tale (2009)",
"hacksaw ridge (2016)",
"hamilton (2020)",
"happy gilmore (1996)",
"harry potter and the chamber of secrets (2002)",
"harry potter and the half-blood prince (2009)",
"harry potter and the prisoner of azkaban (2004)",
"heat (1995)",
"hell or high water (2016)",
"hellboy the golden army (2008)",
"her (2013)",
"hidden figures (2016)",
"high fidelity (2000)",
"highlander (1986)",
"home alone (1990)",
"hot fuzz (2007)",
"hotel rawanda (2008)",
"hotel transylvania (2012)",
"hotel transylvania 4 transformania (2022)",
"how to train your dragon the hidden world (2019)",
"how to train your dragon 2 (2014)",
"hugo (2011)",
"hustle (2022)",
"i love you, man (2009)",
"i origins (2014)",
"i am sam (2001)",
"i, tonya (2017)",
"identity (2003)",
"imagine that (2009)",
"in bruges (2008)",
"in the line of fire (1993)",
"in the name of the father (1993)",
"independence day (1996)",
"indiana jones and the temple of doom (1984)",
"indiana jones and the last crusade (1989)",
"inside llewyn davis (2013)",
"inside man (2006)",
"inside out (2015)",
"insomnia (2002)",
"interstellar (2014)",
"invictus (2009)",
"iron man (2008)",
"isle of dogs (2018)",
"its kind of a funny story (2010)",
"jfk (1991)",
"jackie brown (1997)",
"james bond casino royale (2006)",
"james bond goldeneye (1995)",
"john q (2002)",
"john wick (2014)",
"john wick chapter 2 (2017)",
"john wick chapter 3 - parabellum (2019)",
"john wick chapter 4 (2023)",
"jojo rabbit (2019)",
"julie and julia (2009)",
"jumanji (1995)",
"jumanji welcome to the jungle (2017)",
"juno (2007)",
"k-pax (2001)",
"kick-ass (2010)",
"kill bill vol 1 (2003)",
"kill bill vol 2 (2004)",
"king kong (2005)",
"king richard (2021)",
"kingdom of heaven (2005)",
"kiss kiss bang bang (2005)",
"klaus (2019)",
"kubo and the two strings (2016)",
"kung fu panda 2",
"l.a confidential (1997)",
"la la land (2016)",
"lady bird (2017)",
"lars and the real girl (2007)",
"lawless (2012)",
"layer cake (2004)",
"leaving las vegas (1995)",
"legends of the fall (1994)",
"leon the professional extended (1994)",
"les misérables (2012)",
"letters from iwo jima (2006)",
"licorice pizza (2021)",
"life of brian (1979) 720p",
"life of pi (2012)",
"limitless (2011)",
"lincoln (2012)",
"lion (2016)",
"little children (2006)",
"little miss sunshine (2006)",
"little women (2019)",
"lock stock and two smoking barrels (1998)",
"locke (2013)",
"logan (2017)",
"logan lucky (2017)",
"looper (2012)",
"lord of war (2005)",
"lost highway (1997)",
"lost in translation (2003)",
"love actually (2003)",
"love, simon (2018)",
"lucky number slevin (2006)",
"mad max 2 the road warrior (1981)",
"magnolia (1999)",
"mallrats (1995)",
"man on the moon (1999)",
"man of steel (2013)",
"man on fire (2004)",
"manchester by the sea (2016)",
"margin call (2011)",
"marley and me (2008)",
"marriage story (2019)",
"master and commander the far side of the world (2003)",
"match point (2005)",
"matchstick men (2003)",
"matilda (1996)",
"maverick (1994)",
"me before you (2016)",
"mean girls (2004)",
"meet joe black (1998)",
"megamind (2010)",
"melancholia (2011)",
"memento (2000)",
"memoirs of a geisha (2005)",
"men of honor (2000)",
"michael clayton (2007)",
"midnight in paris (2011)",
"milk (2008)",
"millers crossing (1990)",
"million dollar baby (2004)",
"misery (1990)",
"mission impossible (1996)",
"mission impossible - fallout (2018)",
"mission impossible ghost protocol (2011)",
"mission impossible rogue nation (2015)",
"moana (2016)",
"mollys game (2017)",
"monster (2003)",
"monsters inc (2001)",
"monsters university (2013)",
"moon (2009)",
"moonlight (2016)",
"moonrise kingdom (2012)",
"moulin rouge! (2001)",
"mr brooks (2007)",
"mr nobody (2009)",
"mud (2012)",
"mulan (1998)",
"mulholland drive (2001)",
"munich (2005)",
"my cousin vinny (1992)",
"mystic river (2003)",
"napoleon dynamite (2004)",
"national lampoons christmas vacation (1989)",
"natural born killers (1994)",
"nebraska (2013)",
"never let me go (2010)",
"nightcrawler (2014)",
"nightmare alley (2021)",
"no country for old men (2007)",
"no time to die (2021)",
"nobody (2021)",
"nocturnal animals (2016)",
"nomadland (2020)",
"notting hill (1999)",
"now you see me (2013)",
"oblivion (2013)",
"oceans eleven (2001)",
"okja (2017)",
"old school (2003)",
"once (2006)",
"one day (2011)",
"one hundred and one dalmatians (1961)",
"only lovers left alive (2013)",
"paddington (2014)",
"paranorman (2012)",
"passengers (2016)",
"past lives (2023)",
"patriots day (2016)",
"pay it forward (2000)",
"payback (1999)",
"perfume - the story of a murderer (2006)",
"phantom thread (2017)",
"philadelphia (1993)",
"philomena (2013)",
"phone booth (2002)",
"pi (1998)",
"pitch black (2000)",
"planes, trains automobiles (1987)",
"planet of the apes (1968)",
"planet terror (2007)",
"platoon (1986)",
"pleasantville (1998)",
"point break (1991)",
"precious (2009)",
"predestination (2014)",
"pretty woman (1990)",
"pride and prejudice (2005)",
"primal fear (1996)",
"prisoners (2013)",
"promising young woman (2020)",
"pulp fiction (1994)",
"punch drunk love (2002)",
"puss in boots the last wish (2022)",
"rambo (2008)",
"rango (2009)",
"ray (2004)",
"ready player one (2018)",
"real steel (2011)",
"red (2010)",
"red dragon (2002)",
"remeber the titans (2000)",
"remember me (2010)",
"requiem for a dream directors cut (2000)",
"rescue dawn (2006)",
"reservoir dogs (1992)",
"revolutionary road (2008)",
"rio (2011)",
"rio 2 (2014)",
"road to predition (2002)",
"robocop (1987)",
"rock n rolla (2008)",
"rocketman (2019)",
"rocky balboa (2006)",
"rogue one (2016)",
"ronin (1998)",
"room (2015)",
"rounders (1998)",
"ruby gillman teenage kraken (2023)",
"ruby sparks (2012)",
"runaway jury (2003)",
"running scared (2006)",
"rush (2013)",
"rushmore (1998)",
"saving mr. banks (2013)",
"saving private ryan (1998)",
"scent of a woman (1992)",
"schindlers list (1993)",
"scott pilgrim vs the world (2010)",
"se7en (1995)",
"searching (2018)",
"sense and sensibility (1995)",
"serenity (2005)",
"seven pounds (2008)",
"seven psychopaths (2012)",
"seven years in tibet (1997)",
"shakespeare in love (1998)",
"shame (2011)",
"shaun of the dead (2004)",
"sherlock holmes (2009)",
"sherlock holmes a game of shadows (2011)",
"shrek (2001)",
"shrek 2 (2004)",
"side effects (2013)",
"sideways (2004)",
"silence (2016)",
"silver linings playbook (2012)",
"sin city extended and unrated (2005)",
"sing (2016)",
"sing 2 (2021)",
"sing street (2016)",
"skull (2022)",
"skyfall (2012)",
"sleepers (1996)",
"slumdog millionaire (2008)",
"snatch (2000)",
"snowden (2016)",
"soul (2020)",
"sound of metal (2019)",
"source code (2011)",
"south park bigger longer and uncut (1999)",
"southpaw (2015)",
"speed (1994)",
"spider man 2 (2004)",
"spider-man across the spider-verse (2023)",
"spider-man into the spider-verse (2018)",
"spider-man no way home (2021)",
"spirited away (2001)",
"spotlight (2015)",
"spy (2015)",
"spy game (2001)",
"st. vincent (2014)",
"star trek (2009)",
"star trek beyond (2016)",
"star trek first contact (1996)",
"star trek ii the wrath of khan (1982)",
"star trek into darkness (2013)",
"star wars episode iii - revenge of the sith (2005)",
"star wars episode iv - a new hope (1977)",
"star wars episode v - the empire strikes back (1980)",
"star wars episode vi - return of the jedi (1983)",
"star wars episode vii - the force awakens (2015)",
"stardust (2007)",
"starship troopers (1997)",
"state of play (2009)",
"steve jobs (2015)",
"still alice (2014)",
"straight outta compton (2015)",
"stranger than fiction (2006)",
"sunshine (2007)",
"super 8 (2011)",
"super size me (2004)",
"superman (1978)",
"t2 trainspotting (2017)",
"tmnt (2007)",
"taken (2008)",
"tangled (2010)",
"tarzan (1999)",
"team america world police (2004)",
"terminator 2 (1991)",
"terms and conditions may apply (2013)",
"thank you for smoking (2005)",
"the abyss (1989)",
"the adjustment bureau (2011)",
"the adventures of tintin (2011)",
"the assassination of jesse james by the coward robert ford (2007)",
"the aviator (2004)",
"the ballad of buster scruggs (2018)",
"the bank job (2008)",
"the banshees of inisherin (2022)",
"the basketball diaries (1995)",
"the batman (2022)",
"the big short (2015)",
"the big sick (2017)",
"the blind side (2009)",
"the boat that rocked (2009)",
"the book thief (2013)",
"the boondock saints (1999)",
"the bourne supremacy (2004)",
"the bourne ultimatum (2007)",
"the bourne identity (2002)",
"the boy in the striped pyjamas (2008)",
"the breakfast club (1985)",
"the bucket list (2007)",
"the butler (2013)",
"the butterfly effect (2004)",
"the chronicles of narnia - the lion, the witch, and the wardrobe (2005)",
"the cider house rules (1999)",
"the constant gardener (2005)",
"the count of monte cristo (2002)",
"the covenant (2023)",
"the croods (2013)",
"the crow (1994)",
"the curious case of benjamin button (2008)",
"the curse of the were-rabbit (2005)",
"the danish girl (2015)",
"the darjeeling limited (2007)",
"the dark knight rises (2012)",
"the death of stalin (2017)",
"the departed (2006)",
"the descendants(2011)",
"the devil all the time (2020)",
"the disaster artist (2017)",
"the dreamers (2003)",
"the drop (2014)",
"the emperors new groove (2000)",
"the english patient (1996)",
"the equalizer (2014)",
"the fall (2006)",
"the father (2020)",
"the favourite (2018)",
"the fifth element remastered (1997)",
"the fighter (2010)",
"the florida project (2017)",
"the founder (2016)",
"the fountian (2004)",
"the french connection (1971)",
"the fugitive (1993)",
"the full monty (1997)",
"the game (1997)",
"the gentlemen (2019)",
"the ghost writer (2010",
"the gift (2015)",
"the girl with the dragon tattoo (2011)",
"the godfather part 3 (1990)",
"the grand budapest hotel (2014)",
"the greatest showman (2017)",
"the green mile (1999)",
"the hateful eight (2015)",
"the help (2011)",
"the hobbit an unexpected journey (2012)",
"the hobbit the battle of the five armies (2014)",
"the hobbit the desolation of smaug (2013)",
"the hours (2002)",
"the hunchback of notre dame (1996)",
"the hunger games (2012)",
"the hunger games catching fire (2013)",
"the hunt for red october (1990)",
"the hurricane (1999)",
"the hurt locker (2008)",
"the ides of march (2011)",
"the illusionist (2006)",
"the imitation game (2014)",
"the impossible (2012)",
"the incredibles (2004)",
"the intern (2015)",
"the irishman (2019)",
"the iron giant (1999)",
"the italian job (2003)",
"the jacket (2005)",
"the judge (2014)",
"the jungle book (2016)",
"the karate kid (1984)",
"the karate kid (2010)",
"the kids are all right (2010)",
"the killing of a sacred deer (2017)",
"the king (2019)",
"the kingdom (2007)",
"the kings speech (2010)",
"the lego batman movie (2017)",
"the last boy scout (1991)",
"the last duel (2021)",
"the last king of scotland (2006)",
"the last samurai (2003)",
"the last of the mohicans ddc (1992)",
"the lego movie (2014)",
"the life aquatic with steve zissou (2004)",
"the life of david gale (2013)",
"the lighthouse (2019)",
"the lincoln lawyer (2011)",
"the little mermaid (2023)",
"the lobster (2015)",
"the lord of the rings the fellowship of the ring (2001)",
"the lord of the rings the return of the king (2003)",
"the lord of the rings the two towers (2002)",
"the machinist (2004)",
"the man from u.n.c.l.e. (2015)",
"the man from the earth (2007)",
"the man who wasnt there (2001)",
"the martian (2015)",
"the master (2012)",
"the matrix (1999)",
"the mitchells vs the machines (2021)",
"the mule (2018)",
"the mummy (1999)",
"the next three days (2010)",
"the nightmare before christmas (1993)",
"the northman (2022)",
"the notebook (2004)",
"the passion of the christ (2004)",
"the patriot extended cut (2000)",
"the perks of being a wallflower (2012)",
"the phantom of the opera (2004)",
"the pianist (2002)",
"the place beyond the pines (2012)",
"the post (2017)",
"the prestige (2006)",
"the prince of egypt (1998)",
"the princess bride (1987)",
"the princess and the frog (2009)",
"the pursuit of happyness (2006)",
"the queen (2006)",
"the reader (2008)",
"the revenant (2015)",
"the road (2009)",
"the rock (1996)",
"the school of rock (2003)",
"the sea beast (2022)",
"the secret life of pets (2016)",
"the secret life of pets 2 (2019)",
"the shape of water (2017)",
"the silence of the lambs (1991)",
"the simpsons movie (2007)",
"the sixth sense (1999)",
"the spectacular now (2013)",
"the suicide squad (2021)",
"the super mario bros. movie (2023)",
"the talented mr. ripley (1999)",
"the theory of everything (2014)",
"the thin red line (1998)",
"the time travelers wife (2009)",
"the town extended (2010)",
"the trial of the chicago 7 (2020)",
"the two popes (2019)",
"the unforgivable (2021)",
"the usual suspects (1995)",
"the virgin suicides (1999)",
"the walk (2015)",
"the warriors (1979)",
"the way back (2010)",
"the way way back (2013)",
"the wrestler (2008)",
"the x files (1998)",
"thelma and louise (1991)",
"there will be blood (2007)",
"theres something about mary extended (1998)",
"they live (1988)",
"this is england (2006)",
"thor (2011)",
"three billboards outside ebbing, missouri (2017)",
"three kings (1999)",
"tick tick...boom (2021)",
"tinker tailor soldier spy (2011)",
"to all the boys ive loved before (2018)",
"tombstone (1993)",
"total recall (1990)",
"traffic (2000)",
"training day (2001)",
"transformers (2007)",
"treasure planet (2002)",
"tremors (1990)",
"troy (2004)",
"true grit (2010)",
"true lies (1994)",
"true romance (1993)",
"turning red (2022)",
"unbreakable (2000)",
"unbroken (2014)",
"uncut gems (2019)",
"underworld - extended edition (2003)",
"unforgiven (1992)",
"united 93 (2006)",
"unleashed (2005)",
"up (2009)",
"up in the air (2009)",
"upgrade (2018)",
"v for vendetta (2006)",
"valkyrie (2008)",
"vice (2018)",
"vicky cristina barcelona (2008)",
"walk the line extended (2005)",
"war dogs (2016)",
"war for the planet of the apes (2017)",
"war horse (2011)",
"warrior (2011)",
"watchmen (2009)",
"we bought a zoo (2011)",
"we need to talk about kevin (2011)",
"wedding crashers (2005)",
"were were soldiers (2002)",
"what we do in the shadows (2014)",
"whats eating gilbert grape (1993)",
"where the crawdads sing (2022)",
"whiplash (2014)",
"wild (2014)",
"willow (1988)",
"wind river (2017)",
"winters bone (2010)",
"wonder (2017)",
"world war z (2013)",
"wrath of man (2021)",
"x men days of future past (2014)",
"x men first class (2011)",
"x-men (2000)",
"x-men 2 (2003)",
"zack snyders justice league (2021)",
"zero dark thirty (2012)",
"zodiac (2007)",
"zootopia (2016)",
"shooter (2007)"
] |
rsadaphule/vit-base-patch16-224-finetuned-wildcats
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-wildcats
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the wildcat image dataset.
## Model description
A demo is hosted at https://huggingface.co/spaces/rsadaphule/wildcats.
## Intended uses & limitations
Classify wildcat species from images.
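A minimal inference sketch (assuming the standard `transformers` image-classification pipeline; the image path below is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (model id taken from this card).
classifier = pipeline(
    "image-classification",
    model="rsadaphule/vit-base-patch16-224-finetuned-wildcats",
)

# "wildcat.jpg" is a placeholder path; any PIL-readable image works.
predictions = classifier("wildcat.jpg")
print(predictions)  # e.g. [{'label': 'cheetah', 'score': ...}, ...]
```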
## Training and evaluation data
The model was trained on wildcat images.
## Training procedure
The Vision Transformer was fine-tuned on wildcat images.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
- Accuracy: 97%
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"african leopard",
"caracal",
"cheetah",
"clouded leopard",
"jaguar",
"lions",
"ocelot",
"puma",
"snow leopard",
"tiger"
] |
Mansour2002/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
Organika/sdxl-detector
|
# SDXL Detector
This model was created by fine-tuning the [umm-maybe AI art detector](https://huggingface.co/umm-maybe/AI-image-detector) on a dataset of Wikimedia-SDXL image pairs, where each SDXL image was generated from a prompt based on a BLIP-generated caption describing the paired Wikimedia image.
This model demonstrates greatly improved performance over the umm-maybe detector on images generated by more recent diffusion models, as well as on non-artistic imagery (reflecting the broader range of subjects in the random sample drawn from Wikimedia).
However, its performance may be lower for images generated using models other than SDXL. In particular, this model underperforms the original detector on images generated by older models (such as VQGAN+CLIP).
The data used for this fine-tune is either synthetic (generated by SDXL) and therefore non-copyrightable, or downloaded from Wikimedia and therefore meeting their definition of "free data" (see https://commons.wikimedia.org/wiki/Commons:Licensing for details). However, the original umm-maybe AI art detector was trained on data scraped from image links in Reddit posts, some of which may be copyrighted. Therefore this model, as well as its predecessor, should be considered appropriate only for non-commercial (i.e., personal or educational) fair use.
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.08717025071382523
f1: 0.9732620320855615
precision: 0.994535519125683
recall: 0.9528795811518325
auc: 0.9980461893059392
accuracy: 0.9812734082397003
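A minimal detection sketch (assuming the standard `transformers` image-classification pipeline; the image path below is a placeholder):

```python
from transformers import pipeline

# The detector returns scores for the two labels listed below:
# "artificial" and "human".
detector = pipeline("image-classification", model="Organika/sdxl-detector")

# "suspect.png" is a placeholder path for the image to screen.
for prediction in detector("suspect.png"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```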
|
[
"artificial",
"human"
] |
dylanmontoya22/vit_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0069
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1328 | 3.85 | 500 | 0.0069 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
3una/finetuned-AffectNet
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-AffectNet
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8122
- Accuracy: 0.7345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
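These settings map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Effective batch size: 32 per device x 4 accumulation steps = 128.
args = TrainingArguments(
    output_dir="finetuned-AffectNet",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    seed=42,
)
```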
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.0686 | 1.0 | 163 | 2.0963 | 0.1549 |
| 1.7148 | 2.0 | 327 | 1.7250 | 0.2943 |
| 1.4591 | 3.0 | 490 | 1.4418 | 0.4204 |
| 1.3351 | 4.0 | 654 | 1.2648 | 0.5194 |
| 1.1343 | 5.0 | 817 | 1.0728 | 0.5908 |
| 1.1022 | 6.0 | 981 | 0.9741 | 0.6355 |
| 1.0476 | 7.0 | 1144 | 0.9203 | 0.6631 |
| 1.0049 | 8.0 | 1308 | 0.8769 | 0.6760 |
| 0.9561 | 9.0 | 1471 | 0.8438 | 0.6966 |
| 0.9409 | 10.0 | 1635 | 0.8283 | 0.6988 |
| 0.9419 | 11.0 | 1798 | 0.7867 | 0.7164 |
| 0.89 | 12.0 | 1962 | 0.7858 | 0.7139 |
| 0.8761 | 13.0 | 2125 | 0.7704 | 0.7147 |
| 0.8662 | 14.0 | 2289 | 0.7590 | 0.7225 |
| 0.8561 | 15.0 | 2452 | 0.7574 | 0.7199 |
| 0.8234 | 16.0 | 2616 | 0.7457 | 0.7238 |
| 0.844 | 17.0 | 2779 | 0.7416 | 0.7255 |
| 0.7908 | 18.0 | 2943 | 0.7485 | 0.7255 |
| 0.809 | 19.0 | 3106 | 0.7428 | 0.7250 |
| 0.7976 | 20.0 | 3270 | 0.7597 | 0.7203 |
| 0.7691 | 21.0 | 3433 | 0.7333 | 0.7345 |
| 0.7408 | 22.0 | 3597 | 0.7362 | 0.7246 |
| 0.7516 | 23.0 | 3760 | 0.7301 | 0.7298 |
| 0.7887 | 24.0 | 3924 | 0.7263 | 0.7332 |
| 0.7475 | 25.0 | 4087 | 0.7301 | 0.7293 |
| 0.7619 | 26.0 | 4251 | 0.7334 | 0.7298 |
| 0.7509 | 27.0 | 4414 | 0.7332 | 0.7345 |
| 0.7212 | 28.0 | 4578 | 0.7301 | 0.7367 |
| 0.7053 | 29.0 | 4741 | 0.7293 | 0.7328 |
| 0.6634 | 30.0 | 4905 | 0.7412 | 0.7298 |
| 0.677 | 31.0 | 5068 | 0.7221 | 0.7375 |
| 0.6453 | 32.0 | 5232 | 0.7281 | 0.7392 |
| 0.6961 | 33.0 | 5395 | 0.7280 | 0.7392 |
| 0.7135 | 34.0 | 5559 | 0.7348 | 0.7362 |
| 0.6871 | 35.0 | 5722 | 0.7334 | 0.7293 |
| 0.6829 | 36.0 | 5886 | 0.7281 | 0.7328 |
| 0.6742 | 37.0 | 6049 | 0.7332 | 0.7354 |
| 0.6167 | 38.0 | 6213 | 0.7274 | 0.7384 |
| 0.665 | 39.0 | 6376 | 0.7322 | 0.7311 |
| 0.6433 | 40.0 | 6540 | 0.7473 | 0.7345 |
| 0.6661 | 41.0 | 6703 | 0.7358 | 0.7341 |
| 0.6424 | 42.0 | 6867 | 0.7413 | 0.7324 |
| 0.6369 | 43.0 | 7030 | 0.7314 | 0.7414 |
| 0.611 | 44.0 | 7194 | 0.7325 | 0.7388 |
| 0.6556 | 45.0 | 7357 | 0.7485 | 0.7354 |
| 0.6524 | 46.0 | 7521 | 0.7434 | 0.7418 |
| 0.6176 | 47.0 | 7684 | 0.7402 | 0.7410 |
| 0.6142 | 48.0 | 7848 | 0.7480 | 0.7315 |
| 0.5968 | 49.0 | 8011 | 0.7457 | 0.7384 |
| 0.6132 | 50.0 | 8175 | 0.7514 | 0.7328 |
| 0.592 | 51.0 | 8338 | 0.7500 | 0.7375 |
| 0.6347 | 52.0 | 8502 | 0.7533 | 0.7345 |
| 0.5976 | 53.0 | 8665 | 0.7539 | 0.7324 |
| 0.5496 | 54.0 | 8829 | 0.7495 | 0.7388 |
| 0.5845 | 55.0 | 8992 | 0.7550 | 0.7367 |
| 0.5624 | 56.0 | 9156 | 0.7606 | 0.7362 |
| 0.5582 | 57.0 | 9319 | 0.7598 | 0.7341 |
| 0.6206 | 58.0 | 9483 | 0.7608 | 0.7345 |
| 0.5647 | 59.0 | 9646 | 0.7578 | 0.7388 |
| 0.6093 | 60.0 | 9810 | 0.7646 | 0.7358 |
| 0.5625 | 61.0 | 9973 | 0.7622 | 0.7388 |
| 0.6114 | 62.0 | 10137 | 0.7702 | 0.7324 |
| 0.5304 | 63.0 | 10300 | 0.7710 | 0.7367 |
| 0.5646 | 64.0 | 10464 | 0.7807 | 0.7298 |
| 0.5774 | 65.0 | 10627 | 0.7793 | 0.7328 |
| 0.5825 | 66.0 | 10791 | 0.7786 | 0.7375 |
| 0.5111 | 67.0 | 10954 | 0.7742 | 0.7380 |
| 0.5849 | 68.0 | 11118 | 0.7779 | 0.7349 |
| 0.5454 | 69.0 | 11281 | 0.7795 | 0.7367 |
| 0.5158 | 70.0 | 11445 | 0.7806 | 0.7345 |
| 0.5576 | 71.0 | 11608 | 0.7903 | 0.7345 |
| 0.5394 | 72.0 | 11772 | 0.7812 | 0.7380 |
| 0.5099 | 73.0 | 11935 | 0.7808 | 0.7354 |
| 0.5209 | 74.0 | 12099 | 0.7851 | 0.7319 |
| 0.5322 | 75.0 | 12262 | 0.7908 | 0.7401 |
| 0.5351 | 76.0 | 12426 | 0.7960 | 0.7306 |
| 0.5272 | 77.0 | 12589 | 0.7924 | 0.7324 |
| 0.477 | 78.0 | 12753 | 0.7981 | 0.7332 |
| 0.5186 | 79.0 | 12916 | 0.7942 | 0.7341 |
| 0.5366 | 80.0 | 13080 | 0.8016 | 0.7367 |
| 0.4809 | 81.0 | 13243 | 0.8014 | 0.7341 |
| 0.4889 | 82.0 | 13407 | 0.8008 | 0.7354 |
| 0.5287 | 83.0 | 13570 | 0.8010 | 0.7349 |
| 0.4926 | 84.0 | 13734 | 0.8047 | 0.7371 |
| 0.4989 | 85.0 | 13897 | 0.8046 | 0.7384 |
| 0.5483 | 86.0 | 14061 | 0.8022 | 0.7371 |
| 0.5157 | 87.0 | 14224 | 0.8055 | 0.7358 |
| 0.4999 | 88.0 | 14388 | 0.8071 | 0.7319 |
| 0.519 | 89.0 | 14551 | 0.8083 | 0.7362 |
| 0.4534 | 90.0 | 14715 | 0.8082 | 0.7384 |
| 0.429 | 91.0 | 14878 | 0.8103 | 0.7354 |
| 0.5073 | 92.0 | 15042 | 0.8116 | 0.7336 |
| 0.5358 | 93.0 | 15205 | 0.8106 | 0.7341 |
| 0.5049 | 94.0 | 15369 | 0.8111 | 0.7315 |
| 0.4745 | 95.0 | 15532 | 0.8118 | 0.7336 |
| 0.5052 | 96.0 | 15696 | 0.8104 | 0.7371 |
| 0.495 | 97.0 | 15859 | 0.8101 | 0.7354 |
| 0.4752 | 98.0 | 16023 | 0.8117 | 0.7349 |
| 0.4927 | 99.0 | 16186 | 0.8120 | 0.7336 |
| 0.4875 | 99.69 | 16300 | 0.8122 | 0.7345 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
omersubasi/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
BhavanaMalla/distill_ViT_to_MobileNet
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distill_ViT_to_MobileNet
This model is a distillation of a ViT teacher into a MobileNet student (the base checkpoint is unspecified), trained on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -16.9518
- Accuracy: 0.3759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| -17.0436 | 1.0 | 130 | -16.9518 | 0.3759 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"label_0",
"label_1",
"label_2"
] |
21nao3/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.3896
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.91 | 5 | nan | 0.3896 |
| 0.0 | 2.0 | 11 | nan | 0.3896 |
| 0.0 | 2.73 | 15 | nan | 0.3896 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
[
"healthy",
"powdery",
"rust"
] |
MaulikMadhavi/vit-base-flowers102
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-flowers102
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the nelorth/oxford-flowers dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0770
- Accuracy: 0.9853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5779 | 0.22 | 100 | 2.8895 | 0.7775 |
| 1.2226 | 0.45 | 200 | 1.5942 | 0.9255 |
| 0.606 | 0.67 | 300 | 0.8012 | 0.9529 |
| 0.3413 | 0.89 | 400 | 0.4845 | 0.9706 |
| 0.1571 | 1.11 | 500 | 0.2611 | 0.9814 |
| 0.1237 | 1.34 | 600 | 0.1691 | 0.9784 |
| 0.049 | 1.56 | 700 | 0.1146 | 0.9892 |
| 0.0763 | 1.78 | 800 | 0.1209 | 0.9863 |
| 0.0864 | 2.0 | 900 | 0.1223 | 0.9804 |
| 0.0786 | 2.23 | 1000 | 0.1075 | 0.9833 |
| 0.0269 | 2.45 | 1100 | 0.0919 | 0.9843 |
| 0.0178 | 2.67 | 1200 | 0.0795 | 0.9873 |
| 0.0165 | 2.9 | 1300 | 0.0727 | 0.9873 |
| 0.0144 | 3.12 | 1400 | 0.0784 | 0.9853 |
| 0.0138 | 3.34 | 1500 | 0.0759 | 0.9853 |
| 0.0135 | 3.56 | 1600 | 0.0737 | 0.9863 |
| 0.0123 | 3.79 | 1700 | 0.0770 | 0.9853 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
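The labels below are Oxford Flowers class indices stored as strings; a minimal sketch for recovering the index-to-label mapping from the checkpoint config (assuming `transformers.AutoConfig`):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("MaulikMadhavi/vit-base-flowers102")
# id2label maps output indices to the class-index strings listed below.
print(config.num_labels)   # 102
print(config.id2label[0])  # the label name for output index 0
```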
|
[
"1",
"10",
"16",
"98",
"99",
"17",
"18",
"19",
"2",
"20",
"21",
"22",
"23",
"24",
"100",
"25",
"26",
"27",
"28",
"29",
"3",
"30",
"31",
"32",
"33",
"101",
"34",
"35",
"36",
"37",
"38",
"39",
"4",
"40",
"41",
"42",
"102",
"43",
"44",
"45",
"46",
"47",
"48",
"49",
"5",
"50",
"51",
"11",
"52",
"53",
"54",
"55",
"56",
"57",
"58",
"59",
"6",
"60",
"12",
"61",
"62",
"63",
"64",
"65",
"66",
"67",
"68",
"69",
"7",
"13",
"70",
"71",
"72",
"73",
"74",
"75",
"76",
"77",
"78",
"79",
"14",
"8",
"80",
"81",
"82",
"83",
"84",
"85",
"86",
"87",
"88",
"15",
"89",
"9",
"90",
"91",
"92",
"93",
"94",
"95",
"96",
"97"
] |
sooks/id1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# id1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sooks/id1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Accuracy: 0.6535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.6933 | 0.53 | 10000 | 0.6932 | 0.5008 |
| 0.6933 | 1.06 | 20000 | 0.6933 | 0.4992 |
| 0.6933 | 1.59 | 30000 | 0.6931 | 0.5008 |
| 0.6933 | 2.12 | 40000 | 0.6931 | 0.5161 |
| 0.6931 | 2.65 | 50000 | 0.6933 | 0.4991 |
| 0.6932 | 3.19 | 60000 | 0.6932 | 0.4991 |
| 0.6746 | 3.72 | 70000 | 0.6725 | 0.5796 |
| 0.6582 | 4.25 | 80000 | 0.6614 | 0.6032 |
| 0.6455 | 4.78 | 90000 | 0.6466 | 0.6132 |
| 0.6256 | 5.31 | 100000 | 0.6325 | 0.6391 |
| 0.6144 | 5.84 | 110000 | 0.6181 | 0.6535 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"ai",
"human"
] |
TeeA/resnet-50-finetuned-pokemon
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-pokemon-finetuned-pokemon
This model is a fine-tuned version of [TeeA/resnet-50-finetuned-pokemon](https://huggingface.co/TeeA/resnet-50-finetuned-pokemon) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 14.1746
- Accuracy: 0.0849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1894 | 0.99 | 38 | 9.2115 | 0.0137 |
| 1.1389 | 1.99 | 76 | 9.2521 | 0.0129 |
| 1.0432 | 2.98 | 114 | 9.4765 | 0.0144 |
| 1.0625 | 4.0 | 153 | 9.7668 | 0.0137 |
| 1.0805 | 4.99 | 191 | 10.2526 | 0.0137 |
| 1.0353 | 5.99 | 229 | 10.3238 | 0.0129 |
| 0.9747 | 6.98 | 267 | 10.5779 | 0.0165 |
| 0.9708 | 8.0 | 306 | 10.7458 | 0.0180 |
| 0.8886 | 8.99 | 344 | 11.0072 | 0.0194 |
| 0.8408 | 9.99 | 382 | 11.3171 | 0.0223 |
| 0.802 | 10.98 | 420 | 11.5545 | 0.0245 |
| 0.7903 | 12.0 | 459 | 11.7722 | 0.0288 |
| 0.7553 | 12.99 | 497 | 11.9834 | 0.0353 |
| 0.7413 | 13.99 | 535 | 11.9815 | 0.0446 |
| 0.6272 | 14.98 | 573 | 12.0871 | 0.0496 |
| 0.6944 | 16.0 | 612 | 12.3713 | 0.0590 |
| 0.6322 | 16.99 | 650 | 12.6826 | 0.0554 |
| 0.6131 | 17.99 | 688 | 12.4819 | 0.0612 |
| 0.5916 | 18.98 | 726 | 12.6246 | 0.0647 |
| 0.5094 | 20.0 | 765 | 12.6641 | 0.0669 |
| 0.5201 | 20.99 | 803 | 12.8861 | 0.0662 |
| 0.4731 | 21.99 | 841 | 12.7431 | 0.0655 |
| 0.5132 | 22.98 | 879 | 12.7786 | 0.0705 |
| 0.5036 | 24.0 | 918 | 12.9990 | 0.0727 |
| 0.4863 | 24.99 | 956 | 13.0419 | 0.0727 |
| 0.4852 | 25.99 | 994 | 13.0573 | 0.0734 |
| 0.4983 | 26.98 | 1032 | 13.1310 | 0.0719 |
| 0.459 | 28.0 | 1071 | 13.0688 | 0.0748 |
| 0.4556 | 28.99 | 1109 | 13.4128 | 0.0748 |
| 0.4729 | 29.99 | 1147 | 13.3530 | 0.0741 |
| 0.4659 | 30.98 | 1185 | 13.2308 | 0.0763 |
| 0.4337 | 32.0 | 1224 | 13.3264 | 0.0748 |
| 0.456 | 32.99 | 1262 | 13.3506 | 0.0741 |
| 0.4423 | 33.99 | 1300 | 13.3607 | 0.0784 |
| 0.4037 | 34.98 | 1338 | 13.2521 | 0.0734 |
| 0.3891 | 36.0 | 1377 | 13.3702 | 0.0777 |
| 0.3992 | 36.99 | 1415 | 13.4762 | 0.0777 |
| 0.4014 | 37.99 | 1453 | 13.5382 | 0.0791 |
| 0.3549 | 38.98 | 1491 | 13.5550 | 0.0791 |
| 0.4048 | 40.0 | 1530 | 13.6406 | 0.0799 |
| 0.3711 | 40.99 | 1568 | 13.5120 | 0.0777 |
| 0.3834 | 41.99 | 1606 | 13.9230 | 0.0799 |
| 0.3475 | 42.98 | 1644 | 13.8602 | 0.0791 |
| 0.3465 | 44.0 | 1683 | 13.6931 | 0.0806 |
| 0.3682 | 44.99 | 1721 | 13.7774 | 0.0784 |
| 0.3613 | 45.99 | 1759 | 14.0235 | 0.0791 |
| 0.368 | 46.98 | 1797 | 13.9289 | 0.0813 |
| 0.3961 | 48.0 | 1836 | 14.2549 | 0.0806 |
| 0.365 | 48.99 | 1874 | 14.1114 | 0.0813 |
| 0.3259 | 49.99 | 1912 | 13.9710 | 0.0806 |
| 0.2998 | 50.98 | 1950 | 14.0288 | 0.0806 |
| 0.3203 | 52.0 | 1989 | 13.9398 | 0.0813 |
| 0.3104 | 52.99 | 2027 | 14.0255 | 0.0820 |
| 0.3232 | 53.99 | 2065 | 13.9355 | 0.0827 |
| 0.3521 | 54.98 | 2103 | 13.8627 | 0.0806 |
| 0.3322 | 56.0 | 2142 | 14.0179 | 0.0806 |
| 0.3129 | 56.99 | 2180 | 13.9640 | 0.0820 |
| 0.3159 | 57.99 | 2218 | 14.1997 | 0.0799 |
| 0.3118 | 58.98 | 2256 | 14.1639 | 0.0820 |
| 0.3196 | 60.0 | 2295 | 14.0334 | 0.0806 |
| 0.301 | 60.99 | 2333 | 13.9954 | 0.0820 |
| 0.3142 | 61.99 | 2371 | 14.1432 | 0.0799 |
| 0.3192 | 62.98 | 2409 | 14.0269 | 0.0784 |
| 0.3342 | 64.0 | 2448 | 14.0450 | 0.0806 |
| 0.3045 | 64.99 | 2486 | 14.1746 | 0.0849 |
| 0.2991 | 65.99 | 2524 | 14.3192 | 0.0806 |
| 0.3228 | 66.98 | 2562 | 14.1782 | 0.0784 |
| 0.2711 | 68.0 | 2601 | 14.4261 | 0.0849 |
| 0.2473 | 68.99 | 2639 | 14.2303 | 0.0827 |
| 0.3287 | 69.99 | 2677 | 14.2750 | 0.0827 |
| 0.2673 | 70.98 | 2715 | 14.2303 | 0.0820 |
| 0.2843 | 72.0 | 2754 | 14.4086 | 0.0806 |
| 0.3099 | 72.99 | 2792 | 14.5184 | 0.0827 |
| 0.3102 | 73.99 | 2830 | 14.2768 | 0.0835 |
| 0.2911 | 74.98 | 2868 | 14.1010 | 0.0835 |
| 0.2927 | 76.0 | 2907 | 14.4618 | 0.0813 |
| 0.2967 | 76.99 | 2945 | 14.3581 | 0.0820 |
| 0.2446 | 77.99 | 2983 | 14.4562 | 0.0835 |
| 0.3035 | 78.98 | 3021 | 14.2681 | 0.0835 |
| 0.2989 | 80.0 | 3060 | 14.2768 | 0.0827 |
| 0.2486 | 80.99 | 3098 | 14.4242 | 0.0820 |
| 0.2622 | 81.99 | 3136 | 14.3810 | 0.0835 |
| 0.2892 | 82.98 | 3174 | 14.4637 | 0.0827 |
| 0.2668 | 84.0 | 3213 | 14.4597 | 0.0835 |
| 0.2527 | 84.99 | 3251 | 14.3098 | 0.0820 |
| 0.2636 | 85.99 | 3289 | 14.3741 | 0.0835 |
| 0.247 | 86.98 | 3327 | 14.5369 | 0.0842 |
| 0.2693 | 88.0 | 3366 | 14.4039 | 0.0835 |
| 0.2692 | 88.99 | 3404 | 14.6161 | 0.0835 |
| 0.28 | 89.99 | 3442 | 14.5244 | 0.0835 |
| 0.2535 | 90.98 | 3480 | 14.4062 | 0.0842 |
| 0.2887 | 92.0 | 3519 | 14.4113 | 0.0806 |
| 0.257 | 92.99 | 3557 | 14.3442 | 0.0842 |
| 0.2627 | 93.99 | 3595 | 14.4693 | 0.0835 |
| 0.2804 | 94.98 | 3633 | 14.3223 | 0.0835 |
| 0.2529 | 96.0 | 3672 | 14.3844 | 0.0835 |
| 0.2327 | 96.99 | 3710 | 14.4284 | 0.0835 |
| 0.2643 | 97.99 | 3748 | 14.5567 | 0.0835 |
| 0.284 | 98.98 | 3786 | 14.6738 | 0.0813 |
| 0.2503 | 99.35 | 3800 | 14.5363 | 0.0842 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"porygon",
"goldeen",
"hitmonlee",
"hitmonchan",
"gloom",
"aerodactyl",
"mankey",
"seadra",
"gengar",
"venonat",
"articuno",
"seaking",
"dugtrio",
"machop",
"jynx",
"oddish",
"dodrio",
"dragonair",
"weedle",
"golduck",
"flareon",
"krabby",
"parasect",
"ninetales",
"nidoqueen",
"kabutops",
"drowzee",
"caterpie",
"jigglypuff",
"machamp",
"clefairy",
"kangaskhan",
"dragonite",
"weepinbell",
"fearow",
"bellsprout",
"grimer",
"nidorina",
"staryu",
"horsea",
"electabuzz",
"dratini",
"machoke",
"magnemite",
"squirtle",
"gyarados",
"pidgeot",
"bulbasaur",
"nidoking",
"golem",
"dewgong",
"moltres",
"zapdos",
"poliwrath",
"vulpix",
"beedrill",
"charmander",
"abra",
"zubat",
"golbat",
"wigglytuff",
"charizard",
"slowpoke",
"poliwag",
"tentacruel",
"rhyhorn",
"onix",
"butterfree",
"exeggcute",
"sandslash",
"pinsir",
"rattata",
"growlithe",
"haunter",
"pidgey",
"ditto",
"farfetchd",
"pikachu",
"raticate",
"wartortle",
"vaporeon",
"cloyster",
"hypno",
"arbok",
"metapod",
"tangela",
"kingler",
"exeggutor",
"kadabra",
"seel",
"voltorb",
"chansey",
"venomoth",
"ponyta",
"vileplume",
"koffing",
"blastoise",
"tentacool",
"lickitung",
"paras",
"clefable",
"cubone",
"marowak",
"nidorino",
"jolteon",
"muk",
"magikarp",
"slowbro",
"tauros",
"kabuto",
"spearow",
"sandshrew",
"eevee",
"kakuna",
"omastar",
"ekans",
"geodude",
"magmar",
"snorlax",
"meowth",
"pidgeotto",
"venusaur",
"persian",
"rhydon",
"starmie",
"charmeleon",
"lapras",
"alakazam",
"graveler",
"psyduck",
"rapidash",
"doduo",
"magneton",
"arcanine",
"electrode",
"omanyte",
"poliwhirl",
"mew",
"alolan sandslash",
"mewtwo",
"weezing",
"gastly",
"victreebel",
"ivysaur",
"mrmime",
"shellder",
"scyther",
"diglett",
"primeape",
"raichu"
] |
Nusri7/Age_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Nusri7/Age_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1550
- Validation Loss: 0.1649
- Train Accuracy: 0.933
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
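The serialized optimizer above corresponds roughly to `transformers.create_optimizer` with a polynomial decay of power 1.0 (i.e., linear) from 3e-05 to 0.0 over 20,000 steps (a sketch; argument names follow the transformers TF API):

```python
from transformers import create_optimizer

# AdamWeightDecay with weight decay rate 0.01 and a linear learning-rate
# decay from 3e-5 to 0.0 over 20,000 steps, no warmup.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=20_000,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```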
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3846 | 0.3390 | 0.891 | 0 |
| 0.2197 | 0.1807 | 0.936 | 1 |
| 0.1885 | 0.1659 | 0.935 | 2 |
| 0.1706 | 0.1495 | 0.946 | 3 |
| 0.1550 | 0.1649 | 0.933 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"age_0-20",
"age_20-40",
"age_40-60",
"age_60-80"
] |
spolivin/food-vit-tutorial
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# food-vit-tutorial
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0267
- Accuracy: 0.916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7889 | 0.99 | 62 | 2.5577 | 0.838 |
| 1.7142 | 2.0 | 125 | 1.6126 | 0.879 |
| 1.2887 | 2.99 | 187 | 1.2513 | 0.903 |
| 1.0307 | 4.0 | 250 | 1.0673 | 0.922 |
| 1.0022 | 4.96 | 310 | 1.0267 | 0.916 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
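A minimal inference sketch for this checkpoint (assuming the standard `transformers` image-classification pipeline; the image path below is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="spolivin/food-vit-tutorial",
)

# "meal.jpg" is a placeholder path; top_k=5 returns the five most likely
# of the 101 food classes listed below.
print(classifier("meal.jpg", top_k=5))
```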
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
johncban/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
alirzb/S1_M1_R1_vit_42498800
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R1_vit_42498800
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0086
- Accuracy: 0.9978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1858 | 0.99 | 57 | 0.2279 | 0.9253 |
| 0.0313 | 1.99 | 115 | 0.0156 | 0.9968 |
| 0.0126 | 3.0 | 173 | 0.0210 | 0.9957 |
| 0.0039 | 4.0 | 231 | 0.0083 | 0.9989 |
| 0.0034 | 4.94 | 285 | 0.0086 | 0.9978 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
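A minimal inference sketch (assuming the standard `transformers` image-classification pipeline; the same call applies to the other checkpoints in this alirzb seizure series, each of which uses the two labels listed below):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="alirzb/S1_M1_R1_vit_42498800",
)

# "frame.png" is a placeholder path; the output scores the two labels
# "none_seizures" and "seizures".
print(classifier("frame.png"))
```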
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R2_vit_42498972
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R2_vit_42498972
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0113
- Accuracy: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1457 | 0.99 | 66 | 0.1152 | 0.9661 |
| 0.038 | 2.0 | 133 | 0.0171 | 0.9972 |
| 0.0083 | 2.99 | 199 | 0.0122 | 0.9972 |
| 0.0045 | 4.0 | 266 | 0.0116 | 0.9972 |
| 0.0025 | 4.96 | 330 | 0.0113 | 0.9981 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R3_vit_42499444
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R3_vit_42499444
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
- Accuracy: 0.9983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0799 | 0.99 | 73 | 0.0444 | 0.9958 |
| 0.0309 | 1.99 | 147 | 0.0085 | 0.9992 |
| 0.0072 | 3.0 | 221 | 0.0090 | 0.9983 |
| 0.0021 | 4.0 | 295 | 0.0076 | 0.9992 |
| 0.0018 | 4.95 | 365 | 0.0076 | 0.9983 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S2_M1_R1_vit_42499480
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S2_M1_R1_vit_42499480
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0083
- Accuracy: 0.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1704 | 1.0 | 58 | 0.1195 | 0.9784 |
| 0.0533 | 2.0 | 116 | 0.0143 | 0.9978 |
| 0.0184 | 3.0 | 174 | 0.0051 | 1.0 |
| 0.0044 | 4.0 | 232 | 0.0031 | 1.0 |
| 0.0027 | 5.0 | 290 | 0.0083 | 0.9989 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S2_M1_R2_vit_42499499
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S2_M1_R2_vit_42499499
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0063
- Accuracy: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1394 | 0.99 | 66 | 0.0669 | 0.9915 |
| 0.0058 | 2.0 | 133 | 0.0206 | 0.9953 |
| 0.0118 | 2.99 | 199 | 0.0100 | 0.9981 |
| 0.0037 | 4.0 | 266 | 0.0097 | 0.9981 |
| 0.002 | 4.96 | 330 | 0.0063 | 0.9981 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S2_M1_R3_vit_42499514
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S2_M1_R3_vit_42499514
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0101
- Accuracy: 0.9975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0599 | 0.99 | 73 | 0.0336 | 0.9983 |
| 0.0232 | 1.99 | 147 | 0.0114 | 0.9975 |
| 0.0036 | 3.0 | 221 | 0.0147 | 0.9966 |
| 0.0027 | 4.0 | 295 | 0.0120 | 0.9975 |
| 0.002 | 4.95 | 365 | 0.0101 | 0.9975 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
tiennguyenbnbk/teacher-status-van-tiny-256-1-2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher-status-van-tiny-256-1-2
This model is a fine-tuned version of [Visual-Attention-Network/van-tiny](https://huggingface.co/Visual-Attention-Network/van-tiny) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0859
- Accuracy: 0.9717
- F1 Score: 0.9778
- Recall: 0.9754
- Precision: 0.9802
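A sketch of a `compute_metrics` function that would produce these four numbers in a binary Trainer run (assuming scikit-learn; default binary averaging over the two labels listed below):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "recall": recall_score(labels, preds),
        "precision": precision_score(labels, preds),
    }
```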
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 0.6722 | 0.99 | 33 | 0.6499 | 0.6401 | 0.7806 | 1.0 | 0.6401 |
| 0.5431 | 2.0 | 67 | 0.4164 | 0.7817 | 0.8531 | 0.9902 | 0.7494 |
| 0.393 | 2.99 | 100 | 0.2833 | 0.8877 | 0.9078 | 0.8639 | 0.9564 |
| 0.354 | 4.0 | 134 | 0.1930 | 0.9276 | 0.9436 | 0.9459 | 0.9413 |
| 0.3007 | 4.99 | 167 | 0.1585 | 0.9370 | 0.9511 | 0.9557 | 0.9464 |
| 0.2898 | 6.0 | 201 | 0.1445 | 0.9465 | 0.9581 | 0.9557 | 0.9605 |
| 0.2824 | 6.99 | 234 | 0.1353 | 0.9465 | 0.9580 | 0.9525 | 0.9635 |
| 0.2763 | 8.0 | 268 | 0.1359 | 0.9486 | 0.9603 | 0.9721 | 0.9488 |
| 0.2473 | 8.99 | 301 | 0.1213 | 0.9570 | 0.9664 | 0.9672 | 0.9656 |
| 0.2598 | 10.0 | 335 | 0.1091 | 0.9570 | 0.9665 | 0.9705 | 0.9626 |
| 0.2476 | 10.99 | 368 | 0.1041 | 0.9633 | 0.9714 | 0.9754 | 0.9675 |
| 0.2376 | 12.0 | 402 | 0.0997 | 0.9601 | 0.9686 | 0.9623 | 0.9751 |
| 0.2402 | 12.99 | 435 | 0.0972 | 0.9622 | 0.9704 | 0.9672 | 0.9736 |
| 0.2324 | 14.0 | 469 | 0.0950 | 0.9664 | 0.9739 | 0.9803 | 0.9676 |
| 0.2256 | 14.99 | 502 | 0.0909 | 0.9706 | 0.9770 | 0.9754 | 0.9786 |
| 0.21 | 16.0 | 536 | 0.0922 | 0.9622 | 0.9703 | 0.9656 | 0.9752 |
| 0.217 | 16.99 | 569 | 0.0933 | 0.9612 | 0.9695 | 0.9656 | 0.9736 |
| 0.2092 | 18.0 | 603 | 0.0891 | 0.9664 | 0.9738 | 0.9754 | 0.9722 |
| 0.2063 | 18.99 | 636 | 0.0913 | 0.9654 | 0.9730 | 0.9738 | 0.9722 |
| 0.2217 | 20.0 | 670 | 0.0917 | 0.9643 | 0.9720 | 0.9672 | 0.9768 |
| 0.1952 | 20.99 | 703 | 0.0859 | 0.9717 | 0.9778 | 0.9754 | 0.9802 |
| 0.2068 | 22.0 | 737 | 0.0907 | 0.9685 | 0.9755 | 0.9770 | 0.9739 |
| 0.1914 | 22.99 | 770 | 0.0847 | 0.9696 | 0.9763 | 0.9787 | 0.9739 |
| 0.1961 | 24.0 | 804 | 0.0870 | 0.9685 | 0.9755 | 0.9770 | 0.9739 |
| 0.1911 | 24.99 | 837 | 0.0884 | 0.9664 | 0.9739 | 0.9770 | 0.9707 |
| 0.1961 | 26.0 | 871 | 0.0870 | 0.9685 | 0.9754 | 0.9738 | 0.9770 |
| 0.1978 | 26.99 | 904 | 0.0871 | 0.9685 | 0.9754 | 0.9754 | 0.9754 |
| 0.1854 | 28.0 | 938 | 0.0858 | 0.9685 | 0.9755 | 0.9770 | 0.9739 |
| 0.1733 | 28.99 | 971 | 0.0860 | 0.9685 | 0.9754 | 0.9738 | 0.9770 |
| 0.1762 | 29.55 | 990 | 0.0858 | 0.9664 | 0.9738 | 0.9738 | 0.9738 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"abnormal",
"normal"
] |
alirzb/S5_M1_fold1_vit_42499955
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold1_vit_42499955
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0169
- Accuracy: 0.9968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0731 | 1.0 | 79 | 0.0361 | 0.9945 |
| 0.0164 | 1.99 | 158 | 0.0198 | 0.9961 |
| 0.0087 | 2.99 | 237 | 0.0215 | 0.9953 |
| 0.0018 | 4.0 | 317 | 0.0206 | 0.9968 |
| 0.0016 | 4.98 | 395 | 0.0169 | 0.9968 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold2_vit_42499968
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold2_vit_42499968
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0081
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0883 | 1.0 | 79 | 0.0413 | 0.9945 |
| 0.0258 | 1.99 | 158 | 0.0134 | 0.9968 |
| 0.0033 | 2.99 | 237 | 0.0133 | 0.9968 |
| 0.0022 | 4.0 | 317 | 0.0080 | 0.9984 |
| 0.0015 | 4.98 | 395 | 0.0081 | 0.9976 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold3_vit_42499983
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold3_vit_42499983
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0057
- Accuracy: 0.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the gradient-accumulation scheme is sketched after the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
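A minimal sketch of the gradient accumulation implied above: four batches of 16 are accumulated before each optimizer step, for an effective batch size of 64. The model, data, and loop length are placeholders.
```python
import torch

accumulation_steps = 4
model = torch.nn.Linear(10, 2)                 # stand-in for the classifier
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(8):                          # placeholder loop over batches
    x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
    loss = loss_fn(model(x), y) / accumulation_steps
    loss.backward()                            # gradients accumulate across calls
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```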
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0823 | 1.0 | 79 | 0.0786 | 0.9834 |
| 0.0209 | 1.99 | 158 | 0.0370 | 0.9913 |
| 0.0074 | 2.99 | 237 | 0.0062 | 0.9984 |
| 0.0018 | 4.0 | 317 | 0.0057 | 0.9984 |
| 0.0016 | 4.98 | 395 | 0.0057 | 0.9984 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold4_vit_42499997
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold4_vit_42499997
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0063
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the Adam update rule is given after the list for reference):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
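For reference, the standard Adam update behind the optimizer line above, with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and learning rate $\eta = 5 \times 10^{-5}$:

$$
m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2
$$

$$
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \eta \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
$$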
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1207 | 1.0 | 79 | 0.0699 | 0.9834 |
| 0.014 | 1.99 | 158 | 0.0094 | 0.9984 |
| 0.0027 | 2.99 | 237 | 0.0070 | 0.9992 |
| 0.002 | 4.0 | 317 | 0.0091 | 0.9984 |
| 0.0016 | 4.98 | 395 | 0.0063 | 0.9992 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold5_vit_42500027
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold5_vit_42500027
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0080
- Accuracy: 0.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.087 | 1.0 | 79 | 0.0385 | 0.9961 |
| 0.0116 | 1.99 | 158 | 0.0212 | 0.9953 |
| 0.0235 | 2.99 | 237 | 0.0064 | 0.9992 |
| 0.007 | 4.0 | 317 | 0.0068 | 0.9992 |
| 0.0016 | 4.98 | 395 | 0.0080 | 0.9984 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R2_swint_42500764
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R2_swint_42500764
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0047
- Accuracy: 0.9981
## Model description
More information needed
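A hedged sketch of how a two-class head is typically attached to the Swin backbone named above; the label names come from this record's label list, and the id-to-label order is an assumption.
```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-base-patch4-window7-224",
    num_labels=2,
    id2label={0: "none_seizures", 1: "seizures"},   # order is an assumption
    label2id={"none_seizures": 0, "seizures": 1},
    ignore_mismatched_sizes=True,  # swap out the 1000-class ImageNet head
)
```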
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0625 | 0.99 | 66 | 0.0473 | 0.9868 |
| 0.006 | 2.0 | 133 | 0.0091 | 0.9962 |
| 0.0023 | 2.99 | 199 | 0.0180 | 0.9962 |
| 0.0117 | 4.0 | 266 | 0.0049 | 0.9991 |
| 0.0175 | 4.96 | 330 | 0.0047 | 0.9981 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R3_swint_42500766
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R3_swint_42500766
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0133
- Accuracy: 0.9975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.034 | 0.99 | 73 | 0.0062 | 0.9966 |
| 0.012 | 1.99 | 147 | 0.0010 | 1.0 |
| 0.0144 | 3.0 | 221 | 0.0112 | 0.9975 |
| 0.0006 | 4.0 | 295 | 0.0134 | 0.9975 |
| 0.0192 | 4.95 | 365 | 0.0133 | 0.9975 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R1_swint_42500767
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R1_swint_42500767
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0079
- Accuracy: 0.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0579 | 0.99 | 57 | 0.0150 | 0.9968 |
| 0.018 | 1.99 | 115 | 0.0076 | 0.9978 |
| 0.0251 | 3.0 | 173 | 0.0160 | 0.9957 |
| 0.0011 | 4.0 | 231 | 0.0055 | 0.9989 |
| 0.0011 | 4.94 | 285 | 0.0079 | 0.9989 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold3_swint_42500769
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold3_swint_42500769
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0158
- Accuracy: 0.9968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0382 | 1.0 | 79 | 0.0472 | 0.9897 |
| 0.0065 | 1.99 | 158 | 0.0091 | 0.9968 |
| 0.0496 | 2.99 | 237 | 0.0103 | 0.9961 |
| 0.0003 | 4.0 | 317 | 0.0107 | 0.9968 |
| 0.0002 | 4.98 | 395 | 0.0158 | 0.9968 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold2_swint_42500768
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold2_swint_42500768
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0071
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0485 | 1.0 | 79 | 0.0253 | 0.9937 |
| 0.0072 | 1.99 | 158 | 0.0075 | 0.9984 |
| 0.0096 | 2.99 | 237 | 0.0070 | 0.9992 |
| 0.0003 | 4.0 | 317 | 0.0150 | 0.9961 |
| 0.0069 | 4.98 | 395 | 0.0071 | 0.9992 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold4_swint_42500770
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold4_swint_42500770
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0109
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0316 | 1.0 | 79 | 0.0222 | 0.9953 |
| 0.0042 | 1.99 | 158 | 0.0357 | 0.9905 |
| 0.0107 | 2.99 | 237 | 0.0092 | 0.9961 |
| 0.0003 | 4.0 | 317 | 0.0059 | 0.9984 |
| 0.0019 | 4.98 | 395 | 0.0109 | 0.9976 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold5_swint_42500771
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold5_swint_42500771
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0073
- Accuracy: 0.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0816 | 1.0 | 79 | 0.0136 | 0.9968 |
| 0.0189 | 1.99 | 158 | 0.0124 | 0.9976 |
| 0.0061 | 2.99 | 237 | 0.0085 | 0.9984 |
| 0.0003 | 4.0 | 317 | 0.0044 | 0.9992 |
| 0.0149 | 4.98 | 395 | 0.0073 | 0.9984 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R2_deit_42502103
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R2_deit_42502103
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0223
- Accuracy: 0.9934
## Model description
More information needed
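A hedged sketch of a manual forward pass through the base checkpoint named above (`example_frame.png` is a placeholder path):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = AutoModelForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224")

image = Image.open("example_frame.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```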
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0669 | 0.99 | 66 | 0.0171 | 0.9896 |
| 0.0082 | 2.0 | 133 | 0.0229 | 0.9934 |
| 0.0003 | 2.99 | 199 | 0.0231 | 0.9953 |
| 0.0056 | 4.0 | 266 | 0.0216 | 0.9925 |
| 0.0001 | 4.96 | 330 | 0.0223 | 0.9934 |
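The accuracy column above is typically produced by a `compute_metrics` callback passed to the `Trainer`; a hedged sketch using the `evaluate` library (assumed to be installed):
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```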
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R3_deit_42502104
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R3_deit_42502104
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0106 | 0.99 | 73 | 0.0038 | 0.9992 |
| 0.0365 | 1.99 | 147 | 0.0084 | 0.9983 |
| 0.0048 | 3.0 | 221 | 0.0009 | 1.0 |
| 0.0016 | 4.0 | 295 | 0.0016 | 0.9992 |
| 0.0 | 4.95 | 365 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S1_M1_R1_deit_42502105
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R1_deit_42502105
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0090
- Accuracy: 0.9968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.033 | 0.99 | 57 | 0.0372 | 0.9870 |
| 0.0069 | 1.99 | 115 | 0.1166 | 0.9719 |
| 0.0008 | 3.0 | 173 | 0.0089 | 0.9978 |
| 0.0 | 4.0 | 231 | 0.0099 | 0.9978 |
| 0.0 | 4.94 | 285 | 0.0090 | 0.9968 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold2_deit_42502106
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold2_deit_42502106
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0079
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0182 | 1.0 | 79 | 0.0090 | 0.9984 |
| 0.0176 | 1.99 | 158 | 0.0263 | 0.9913 |
| 0.0091 | 2.99 | 237 | 0.0271 | 0.9937 |
| 0.0 | 4.0 | 317 | 0.0080 | 0.9968 |
| 0.0 | 4.98 | 395 | 0.0079 | 0.9976 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold3_deit_42502107
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold3_deit_42502107
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0154
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0606 | 1.0 | 79 | 0.0476 | 0.9866 |
| 0.0023 | 1.99 | 158 | 0.0305 | 0.9913 |
| 0.0001 | 2.99 | 237 | 0.0103 | 0.9976 |
| 0.0 | 4.0 | 317 | 0.0147 | 0.9976 |
| 0.0 | 4.98 | 395 | 0.0154 | 0.9976 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold4_deit_42502108
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold4_deit_42502108
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0060
- Accuracy: 0.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0171 | 1.0 | 79 | 0.0099 | 0.9968 |
| 0.0279 | 1.99 | 158 | 0.0310 | 0.9929 |
| 0.0038 | 2.99 | 237 | 0.0052 | 0.9976 |
| 0.0 | 4.0 | 317 | 0.0106 | 0.9984 |
| 0.0 | 4.98 | 395 | 0.0060 | 0.9984 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |
alirzb/S5_M1_fold5_deit_42502109
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold5_deit_42502109
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0121
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0411 | 1.0 | 79 | 0.0058 | 0.9984 |
| 0.0079 | 1.99 | 158 | 0.0085 | 0.9976 |
| 0.0013 | 2.99 | 237 | 0.0107 | 0.9984 |
| 0.0019 | 4.0 | 317 | 0.0124 | 0.9976 |
| 0.0 | 4.98 | 395 | 0.0121 | 0.9976 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"none_seizures",
"seizures"
] |