| model_id | model_card | model_labels |
|---|---|---|
bdpc/vit-base_rvl_cdip-N1K_aAURC_32
|
# vit-base_rvl_cdip-N1K_aAURC_32
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5215
- Accuracy: 0.888
- Brier Loss: 0.1918
- NLL: 0.9026
- F1 Micro: 0.888
- F1 Macro: 0.8883
- ECE: 0.0880
- AURC: 0.0205
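Brier loss, NLL, ECE, and AURC are confidence/calibration metrics rather than plain accuracy. As a rough sketch of how the two most common ones are typically computed over held-out softmax outputs (the 10-bin equal-width ECE here is an assumption, not necessarily the exact formulation this training script used):
```python
import numpy as np

def brier_loss(probs, labels):
    """Multiclass Brier score: mean squared error against one-hot targets."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))

def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted average gap between confidence and accuracy over equal-width bins."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```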
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
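The hyperparameters above map one-to-one onto `transformers.TrainingArguments`; a minimal sketch of the presumed setup, with `output_dir` as a placeholder:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="vit-base_rvl_cdip-N1K_aAURC_32",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```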
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.1629 | 1.0 | 500 | 0.3779 | 0.8875 | 0.1721 | 1.1899 | 0.8875 | 0.8877 | 0.0531 | 0.0201 |
| 0.1234 | 2.0 | 1000 | 0.4074 | 0.8868 | 0.1790 | 1.1333 | 0.8868 | 0.8874 | 0.0647 | 0.0213 |
| 0.0616 | 3.0 | 1500 | 0.4257 | 0.888 | 0.1813 | 1.0677 | 0.888 | 0.8879 | 0.0695 | 0.0201 |
| 0.0303 | 4.0 | 2000 | 0.4595 | 0.885 | 0.1869 | 1.0256 | 0.885 | 0.8856 | 0.0776 | 0.0222 |
| 0.0133 | 5.0 | 2500 | 0.4902 | 0.8848 | 0.1922 | 0.9983 | 0.8848 | 0.8849 | 0.0831 | 0.0228 |
| 0.0083 | 6.0 | 3000 | 0.4941 | 0.8862 | 0.1903 | 0.9464 | 0.8862 | 0.8868 | 0.0850 | 0.0211 |
| 0.0051 | 7.0 | 3500 | 0.5116 | 0.8875 | 0.1928 | 0.9118 | 0.8875 | 0.8873 | 0.0875 | 0.0207 |
| 0.0043 | 8.0 | 4000 | 0.5154 | 0.8882 | 0.1910 | 0.9138 | 0.8882 | 0.8887 | 0.0864 | 0.0205 |
| 0.0041 | 9.0 | 4500 | 0.5221 | 0.8865 | 0.1924 | 0.9101 | 0.8865 | 0.8868 | 0.0896 | 0.0206 |
| 0.0037 | 10.0 | 5000 | 0.5215 | 0.888 | 0.1918 | 0.9026 | 0.888 | 0.8883 | 0.0880 | 0.0205 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
bdpc/vit-base_rvl_cdip-N1K_aAURC_16
|
# vit-base_rvl_cdip-N1K_aAURC_16
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5629
- Accuracy: 0.8892
- Brier Loss: 0.1995
- NLL: 0.8643
- F1 Micro: 0.8892
- F1 Macro: 0.8898
- ECE: 0.0923
- AURC: 0.0215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.1794 | 1.0 | 1000 | 0.3827 | 0.8815 | 0.1829 | 1.1942 | 0.8815 | 0.8822 | 0.0573 | 0.0226 |
| 0.1415 | 2.0 | 2000 | 0.4705 | 0.8698 | 0.2118 | 1.1615 | 0.8698 | 0.8686 | 0.0859 | 0.0259 |
| 0.0725 | 3.0 | 3000 | 0.4582 | 0.8768 | 0.1996 | 1.0476 | 0.8768 | 0.8771 | 0.0845 | 0.0234 |
| 0.0388 | 4.0 | 4000 | 0.4958 | 0.879 | 0.2024 | 1.0000 | 0.879 | 0.8798 | 0.0877 | 0.0259 |
| 0.0153 | 5.0 | 5000 | 0.5171 | 0.8815 | 0.2047 | 0.9580 | 0.8815 | 0.8815 | 0.0942 | 0.0229 |
| 0.0069 | 6.0 | 6000 | 0.5334 | 0.8845 | 0.2021 | 0.9350 | 0.8845 | 0.8854 | 0.0922 | 0.0230 |
| 0.005 | 7.0 | 7000 | 0.5412 | 0.8905 | 0.1964 | 0.9179 | 0.8905 | 0.8907 | 0.0886 | 0.0218 |
| 0.0043 | 8.0 | 8000 | 0.5497 | 0.8892 | 0.1985 | 0.8970 | 0.8892 | 0.8900 | 0.0901 | 0.0225 |
| 0.0023 | 9.0 | 9000 | 0.5610 | 0.8878 | 0.1994 | 0.8679 | 0.8878 | 0.8883 | 0.0932 | 0.0220 |
| 0.0024 | 10.0 | 10000 | 0.5629 | 0.8892 | 0.1995 | 0.8643 | 0.8892 | 0.8898 | 0.0923 | 0.0215 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
bdpc/vit-base_rvl_cdip-N1K_aAURC_8
|
# vit-base_rvl_cdip-N1K_aAURC_8
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5639
- Accuracy: 0.8882
- Brier Loss: 0.2015
- NLL: 0.8676
- F1 Micro: 0.8882
- F1 Macro: 0.8882
- ECE: 0.0953
- AURC: 0.0259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.2121 | 1.0 | 2000 | 0.4148 | 0.862 | 0.2185 | 1.2055 | 0.8620 | 0.8637 | 0.0821 | 0.0286 |
| 0.1797 | 2.0 | 4000 | 0.4259 | 0.8738 | 0.2073 | 1.1106 | 0.8738 | 0.8729 | 0.0842 | 0.0269 |
| 0.0999 | 3.0 | 6000 | 0.4691 | 0.874 | 0.2161 | 1.0691 | 0.874 | 0.8739 | 0.0941 | 0.0287 |
| 0.0532 | 4.0 | 8000 | 0.5251 | 0.872 | 0.2218 | 1.1401 | 0.872 | 0.8726 | 0.0995 | 0.0287 |
| 0.0197 | 5.0 | 10000 | 0.5723 | 0.871 | 0.2303 | 1.0391 | 0.871 | 0.8710 | 0.1085 | 0.0297 |
| 0.0118 | 6.0 | 12000 | 0.5253 | 0.8845 | 0.2070 | 0.9140 | 0.8845 | 0.8847 | 0.0953 | 0.0246 |
| 0.0095 | 7.0 | 14000 | 0.5969 | 0.8718 | 0.2284 | 0.9640 | 0.8718 | 0.8717 | 0.1094 | 0.0249 |
| 0.0063 | 8.0 | 16000 | 0.5702 | 0.8848 | 0.2087 | 0.8868 | 0.8848 | 0.8846 | 0.1003 | 0.0258 |
| 0.0014 | 9.0 | 18000 | 0.5810 | 0.8825 | 0.2115 | 0.8615 | 0.8825 | 0.8828 | 0.0998 | 0.0271 |
| 0.0025 | 10.0 | 20000 | 0.5639 | 0.8882 | 0.2015 | 0.8676 | 0.8882 | 0.8882 | 0.0953 | 0.0259 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
bdpc/vit-base_rvl_cdip-N1K_aAURC_4
|
# vit-base_rvl_cdip-N1K_aAURC_4
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5297
- Accuracy: 0.874
- Brier Loss: 0.2289
- NLL: 0.9943
- F1 Micro: 0.874
- F1 Macro: 0.8744
- ECE: 0.1117
- AURC: 0.0291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.2467 | 1.0 | 4000 | 0.3863 | 0.8403 | 0.2519 | 1.2558 | 0.8403 | 0.8407 | 0.0998 | 0.0394 |
| 0.1931 | 2.0 | 8000 | 0.4295 | 0.8482 | 0.2575 | 1.2140 | 0.8482 | 0.8486 | 0.1120 | 0.0362 |
| 0.1278 | 3.0 | 12000 | 0.4308 | 0.86 | 0.2406 | 1.1212 | 0.8600 | 0.8601 | 0.1063 | 0.0332 |
| 0.0798 | 4.0 | 16000 | 0.5079 | 0.853 | 0.2588 | 1.2528 | 0.853 | 0.8523 | 0.1221 | 0.0348 |
| 0.0422 | 5.0 | 20000 | 0.5064 | 0.8638 | 0.2443 | 1.1013 | 0.8638 | 0.8635 | 0.1165 | 0.0315 |
| 0.0123 | 6.0 | 24000 | 0.5186 | 0.8672 | 0.2378 | 1.0551 | 0.8672 | 0.8668 | 0.1155 | 0.0328 |
| 0.0048 | 7.0 | 28000 | 0.5372 | 0.8752 | 0.2306 | 1.1080 | 0.8752 | 0.8756 | 0.1101 | 0.0310 |
| 0.0098 | 8.0 | 32000 | 0.5395 | 0.8732 | 0.2325 | 1.0344 | 0.8732 | 0.8732 | 0.1135 | 0.0306 |
| 0.0019 | 9.0 | 36000 | 0.5249 | 0.875 | 0.2283 | 1.0203 | 0.875 | 0.8751 | 0.1099 | 0.0290 |
| 0.002 | 10.0 | 40000 | 0.5297 | 0.874 | 0.2289 | 0.9943 | 0.874 | 0.8744 | 0.1117 | 0.0291 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
bdpc/vit-base_rvl_cdip-N1K_aAURC_2
|
# vit-base_rvl_cdip-N1K_aAURC_2
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3354
- Accuracy: 0.8788
- Brier Loss: 0.2243
- NLL: 1.0945
- F1 Micro: 0.8788
- F1 Macro: 0.8793
- ECE: 0.1094
- AURC: 0.0303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.1996 | 1.0 | 8000 | 0.2395 | 0.8413 | 0.2593 | 1.3668 | 0.8413 | 0.8446 | 0.1016 | 0.0414 |
| 0.1357 | 2.0 | 16000 | 0.2446 | 0.8545 | 0.2435 | 1.2587 | 0.8545 | 0.8550 | 0.1066 | 0.0364 |
| 0.1083 | 3.0 | 24000 | 0.2702 | 0.8515 | 0.2584 | 1.2968 | 0.8515 | 0.8528 | 0.1126 | 0.0379 |
| 0.0578 | 4.0 | 32000 | 0.3397 | 0.8327 | 0.2931 | 1.3827 | 0.8327 | 0.8322 | 0.1391 | 0.0494 |
| 0.0294 | 5.0 | 40000 | 0.3407 | 0.8538 | 0.2662 | 1.3685 | 0.8537 | 0.8536 | 0.1280 | 0.0378 |
| 0.0099 | 6.0 | 48000 | 0.3489 | 0.8585 | 0.2602 | 1.2438 | 0.8585 | 0.8599 | 0.1254 | 0.0350 |
| 0.0058 | 7.0 | 56000 | 0.3433 | 0.868 | 0.2418 | 1.2328 | 0.868 | 0.8666 | 0.1172 | 0.0362 |
| 0.0026 | 8.0 | 64000 | 0.3414 | 0.87 | 0.2401 | 1.1910 | 0.87 | 0.8704 | 0.1162 | 0.0304 |
| 0.0026 | 9.0 | 72000 | 0.3460 | 0.8728 | 0.2357 | 1.1567 | 0.8728 | 0.8729 | 0.1143 | 0.0308 |
| 0.0024 | 10.0 | 80000 | 0.3354 | 0.8788 | 0.2243 | 1.0945 | 0.8788 | 0.8793 | 0.1094 | 0.0303 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
tangocrazyguy/resnet-50-finetuned-cats_vs_dogs
|
# resnet-50-finetuned-cats_vs_dogs
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0889
- Accuracy: 0.9893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4648 | 1.0 | 128 | 0.3423 | 0.9781 |
| 0.2417 | 2.0 | 256 | 0.1214 | 0.9866 |
| 0.2032 | 2.99 | 384 | 0.0889 | 0.9893 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
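If the checkpoint is published on the Hub under this id, inference reduces to the standard image-classification pipeline; a minimal sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Assumes the checkpoint is publicly available on the Hugging Face Hub.
classifier = pipeline(
    "image-classification",
    model="tangocrazyguy/resnet-50-finetuned-cats_vs_dogs",
)
print(classifier("some_pet_photo.jpg"))  # e.g. [{'label': 'cat', 'score': 0.99}, ...]
```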
|
[
"cat",
"dog"
] |
PedroSampaio/Vit-Food-101
|
# PedroSampaio/Vit-Food-101
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1413
- Validation Loss: 0.9888
- Train Accuracy: 0.7487
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 303000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
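The serialized optimizer above matches what `transformers.create_optimizer` builds for TensorFlow (AdamWeightDecay over a linear `PolynomialDecay`); a sketch reconstructing it, assuming `decay_steps` equals the total number of training steps:
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear decay from 3e-05 to 0, matching the config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=303_000,  # decay_steps from the serialized schedule
    num_warmup_steps=0,       # the schedule above has no warmup
    weight_decay_rate=0.01,
)
```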
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 3.8226 | 2.8342 | 0.4591 | 0 |
| 2.3305 | 1.6515 | 0.6385 | 1 |
| 1.5993 | 1.2386 | 0.7017 | 2 |
| 1.3010 | 1.0929 | 0.7265 | 3 |
| 1.1413 | 0.9888 | 0.7487 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Mahendra42/swin-tiny-patch4-window7-224_RCC_Classifier
|
# swin-tiny-patch4-window7-224_RCC_Classifier
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0575
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.003 | 1.0 | 118 | 8.2459 | 0.0 |
| 0.0001 | 2.0 | 237 | 8.1140 | 0.0 |
| 0.0 | 2.99 | 354 | 8.0575 | 0.0 |
### Framework versions
- Transformers 4.34.1
- Pytorch 1.12.1
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"clear cell rcc",
"non clear cell"
] |
Pavarissy/ConvNextV2-large-DogBreed
|
# ConvNextV2-large-DogBreed
This model is a fine-tuned version of [facebook/convnextv2-large-22k-224](https://huggingface.co/facebook/convnextv2-large-22k-224) on a dog breed classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5469
- Accuracy: 0.9139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.8578 | 1.0 | 13 | 4.6940 | 0.0671 |
| 4.6332 | 1.99 | 26 | 4.4169 | 0.1784 |
| 4.4095 | 2.99 | 39 | 4.1105 | 0.3485 |
| 3.8841 | 3.98 | 52 | 3.7581 | 0.5198 |
| 3.5964 | 4.98 | 65 | 3.3647 | 0.6647 |
| 3.2781 | 5.97 | 78 | 2.9442 | 0.7677 |
| 2.6006 | 6.97 | 91 | 2.5252 | 0.8180 |
| 2.2638 | 7.96 | 104 | 2.1256 | 0.8467 |
| 1.9609 | 8.96 | 117 | 1.7626 | 0.8766 |
| 1.3962 | 9.95 | 130 | 1.4453 | 0.9042 |
| 1.143 | 10.95 | 143 | 1.1818 | 0.9102 |
| 0.9423 | 11.94 | 156 | 0.9697 | 0.9138 |
| 0.7674 | 12.94 | 169 | 0.8097 | 0.9174 |
| 0.5007 | 13.93 | 182 | 0.6922 | 0.9186 |
| 0.4097 | 14.93 | 195 | 0.5999 | 0.9162 |
| 0.3392 | 16.0 | 209 | 0.5174 | 0.9269 |
| 0.2285 | 17.0 | 222 | 0.4685 | 0.9257 |
| 0.184 | 17.99 | 235 | 0.4337 | 0.9210 |
| 0.1587 | 18.99 | 248 | 0.4058 | 0.9257 |
| 0.1112 | 19.98 | 261 | 0.3824 | 0.9222 |
| 0.0967 | 20.98 | 274 | 0.3712 | 0.9150 |
| 0.0838 | 21.97 | 287 | 0.3584 | 0.9186 |
| 0.0665 | 22.97 | 300 | 0.3468 | 0.9174 |
| 0.0589 | 23.96 | 313 | 0.3428 | 0.9186 |
| 0.0551 | 24.96 | 326 | 0.3364 | 0.9186 |
| 0.0512 | 25.95 | 339 | 0.3334 | 0.9162 |
| 0.0441 | 26.95 | 352 | 0.3278 | 0.9210 |
| 0.0428 | 27.94 | 365 | 0.3275 | 0.9150 |
| 0.0387 | 28.94 | 378 | 0.3237 | 0.9210 |
| 0.036 | 29.93 | 391 | 0.3242 | 0.9150 |
| 0.0337 | 30.93 | 404 | 0.3204 | 0.9186 |
| 0.0328 | 32.0 | 418 | 0.3176 | 0.9198 |
| 0.0304 | 33.0 | 431 | 0.3183 | 0.9162 |
| 0.0283 | 33.99 | 444 | 0.3150 | 0.9210 |
| 0.029 | 34.99 | 457 | 0.3168 | 0.9174 |
| 0.0264 | 35.98 | 470 | 0.3146 | 0.9174 |
| 0.0259 | 36.98 | 483 | 0.3162 | 0.9174 |
| 0.0258 | 37.97 | 496 | 0.3126 | 0.9186 |
| 0.0251 | 38.97 | 509 | 0.3131 | 0.9174 |
| 0.0239 | 39.96 | 522 | 0.3145 | 0.9186 |
| 0.0234 | 40.96 | 535 | 0.3120 | 0.9198 |
| 0.023 | 41.95 | 548 | 0.3102 | 0.9198 |
| 0.0226 | 42.95 | 561 | 0.3123 | 0.9198 |
| 0.0222 | 43.94 | 574 | 0.3140 | 0.9186 |
| 0.0225 | 44.94 | 587 | 0.3119 | 0.9186 |
| 0.0215 | 45.93 | 600 | 0.3106 | 0.9198 |
| 0.0209 | 46.93 | 613 | 0.3113 | 0.9198 |
| 0.0212 | 48.0 | 627 | 0.3115 | 0.9198 |
| 0.021 | 49.0 | 640 | 0.3113 | 0.9198 |
| 0.0212 | 49.76 | 650 | 0.3113 | 0.9198 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"german_shepherd_dog",
"keeshond",
"doberman_pinscher",
"italian_greyhound",
"boston_terrier",
"lowchen",
"newfoundland",
"pharaoh_hound",
"belgian_tervuren",
"briard",
"beagle",
"collie",
"great_dane",
"komondor",
"bluetick_coonhound",
"great_pyrenees",
"mastiff",
"manchester_terrier",
"dogue_de_bordeaux",
"english_toy_spaniel",
"yorkshire_terrier",
"english_springer_spaniel",
"bull_terrier",
"norwegian_lundehund",
"leonberger",
"greater_swiss_mountain_dog",
"bloodhound",
"boykin_spaniel",
"bulldog",
"irish_terrier",
"anatolian_shepherd_dog",
"cairn_terrier",
"boxer",
"brussels_griffon",
"border_terrier",
"plott",
"cavalier_king_charles_spaniel",
"pomeranian",
"nova_scotia_duck_tolling_retriever",
"dalmatian",
"border_collie",
"norfolk_terrier",
"kuvasz",
"american_foxhound",
"chesapeake_bay_retriever",
"australian_terrier",
"belgian_malinois",
"greyhound",
"japanese_chin",
"chow_chow",
"icelandic_sheepdog",
"bernese_mountain_dog",
"chinese_shar-pei",
"finnish_spitz",
"giant_schnauzer",
"tibetan_mastiff",
"xoloitzcuintli",
"pointer",
"chinese_crested",
"norwich_terrier",
"norwegian_buhund",
"flat-coated_retriever",
"brittany",
"old_english_sheepdog",
"lakeland_terrier",
"german_shorthaired_pointer",
"english_cocker_spaniel",
"portuguese_water_dog",
"smooth_fox_terrier",
"bullmastiff",
"german_pinscher",
"norwegian_elkhound",
"bichon_frise",
"curly-coated_retriever",
"petit_basset_griffon_vendeen",
"entlebucher_mountain_dog",
"borzoi",
"parson_russell_terrier",
"french_bulldog",
"basenji",
"alaskan_malamute",
"clumber_spaniel",
"kerry_blue_terrier",
"silky_terrier",
"airedale_terrier",
"havanese",
"labrador_retriever",
"english_setter",
"dandie_dinmont_terrier",
"pembroke_welsh_corgi",
"australian_cattle_dog",
"cardigan_welsh_corgi",
"otterhound",
"basset_hound",
"afghan_hound",
"irish_water_spaniel",
"akita",
"field_spaniel",
"miniature_schnauzer",
"irish_wolfhound",
"affenpinscher",
"chihuahua",
"dachshund",
"german_wirehaired_pointer",
"welsh_springer_spaniel",
"golden_retriever",
"maltese",
"irish_setter",
"poodle",
"neapolitan_mastiff",
"cocker_spaniel",
"wirehaired_pointing_griffon",
"pekingese",
"beauceron",
"canaan_dog",
"black_and_tan_coonhound",
"australian_shepherd",
"bedlington_terrier",
"gordon_setter",
"irish_red_and_white_setter",
"bearded_collie",
"papillon",
"saint_bernard",
"ibizan_hound",
"lhasa_apso",
"american_eskimo_dog",
"bouvier_des_flandres",
"cane_corso",
"black_russian_terrier",
"belgian_sheepdog",
"american_staffordshire_terrier",
"glen_of_imaal_terrier",
"american_water_spaniel"
] |
kirlek/vit-base-patch16-224-finetuned-flower
|
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
theophilusijiebor1/Text2Image_PyData_23
|
# Text2Image_PyData_23
This model is a fine-tuned version of [juliensimon/autotrain-chest-xray-demo-1677859324](https://huggingface.co/juliensimon/autotrain-chest-xray-demo-1677859324) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3421
- Accuracy: 0.8333
- F1: [0.71584699 0.88208617]
- Precision: [0.99242424 0.79065041]
- Recall: [0.55982906 0.9974359 ]
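The bracketed pairs are per-class scores in label order (`normal`, `pneumonia`), printed as NumPy arrays; they look like scikit-learn output with `average=None`, as in this sketch (`y_true`/`y_pred` are placeholders):
```python
from sklearn.metrics import precision_recall_fscore_support

# Placeholder predictions; 0 = normal, 1 = pneumonia.
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 1]
# average=None keeps one score per class, in label order.
precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, average=None
)
print(precision, recall, f1)
```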
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------------:|:-----------------------:|:-----------------------:|
| 0.0451 | 0.98 | 40 | 0.6974 | 0.7933 | [0.62170088 0.85777288] | [0.99065421 0.75241779] | [0.45299145 0.9974359 ] |
| 0.036 | 1.99 | 81 | 0.3557 | 0.8958 | [0.84107579 0.92252682] | [0.98285714 0.86191537] | [0.73504274 0.99230769] |
| 0.043 | 2.99 | 122 | 0.4253 | 0.9006 | [0.84803922 0.92619048] | [0.99425287 0.86444444] | [0.73931624 0.9974359 ] |
| 0.0225 | 4.0 | 163 | 0.8776 | 0.8349 | [0.71934605 0.8830874 ] | [0.9924812 0.79226069] | [0.56410256 0.9974359 ] |
| 0.0153 | 4.98 | 203 | 0.7095 | 0.8670 | [0.78552972 0.90360046] | [0.99346405 0.82590234] | [0.64957265 0.9974359 ] |
| 0.0107 | 5.99 | 244 | 0.8537 | 0.8446 | [0.73994638 0.88914286] | [0.99280576 0.80206186] | [0.58974359 0.9974359 ] |
| 0.0052 | 6.99 | 285 | 1.0167 | 0.8462 | [0.74331551 0.89016018] | [0.99285714 0.80371901] | [0.59401709 0.9974359 ] |
| 0.0049 | 8.0 | 326 | 1.3230 | 0.8045 | [0.64942529 0.86444444] | [0.99122807 0.7627451 ] | [0.48290598 0.9974359 ] |
| 0.0061 | 8.98 | 366 | 1.2652 | 0.8269 | [0.70165746 0.87810384] | [0.9921875 0.78427419] | [0.54273504 0.9974359 ] |
| 0.004 | 9.99 | 407 | 1.4846 | 0.8157 | [0.67605634 0.8712206 ] | [0.99173554 0.77335984] | [0.51282051 0.9974359 ] |
| 0.0005 | 10.99 | 448 | 1.5685 | 0.8109 | [0.66477273 0.86830357] | [0.99152542 0.7687747 ] | [0.5 0.9974359] |
| 0.0029 | 12.0 | 489 | 1.2547 | 0.8397 | [0.72972973 0.88610478] | [0.99264706 0.79713115] | [0.57692308 0.9974359 ] |
| 0.0015 | 12.98 | 529 | 1.4026 | 0.8285 | [0.70523416 0.87909605] | [0.99224806 0.78585859] | [0.54700855 0.9974359 ] |
| 0.0012 | 13.99 | 570 | 1.4444 | 0.8237 | [0.69444444 0.87612613] | [0.99206349 0.7811245 ] | [0.53418803 0.9974359 ] |
| 0.0039 | 14.72 | 600 | 1.3421 | 0.8333 | [0.71584699 0.88208617] | [0.99242424 0.79065041] | [0.55982906 0.9974359 ] |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"normal",
"pneumonia"
] |
Cenlaroll/food_classifier
|
# Cenlaroll/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3075
- Validation Loss: 1.4640
- Train Accuracy: 0.805
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.3075 | 1.4640 | 0.805 | 0 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
dima806/30_plant_types_image_detection
|
Predicts the plant type in an image with about 93% accuracy.
See https://www.kaggle.com/code/dima806/30-plant-types-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
guava 0.9846 0.9600 0.9722 200
galangal 0.9418 0.8900 0.9152 200
bilimbi 0.9949 0.9750 0.9848 200
paddy 0.9731 0.9050 0.9378 200
eggplant 0.9848 0.9700 0.9773 200
cucumber 0.9561 0.9800 0.9679 200
cassava 0.9899 0.9800 0.9849 200
papaya 0.9851 0.9950 0.9900 200
banana 0.9950 0.9900 0.9925 200
orange 0.9534 0.9200 0.9364 200
cantaloupe 0.5271 0.3400 0.4134 200
coconut 0.9950 1.0000 0.9975 200
soybeans 0.9754 0.9900 0.9826 200
pomelo 0.9563 0.9850 0.9704 200
pineapple 0.9703 0.9800 0.9751 200
melon 0.5000 0.6150 0.5516 200
shallot 0.9949 0.9750 0.9848 200
peperchili 0.9755 0.9950 0.9851 200
spinach 0.9231 0.9600 0.9412 200
tobacco 0.9151 0.9700 0.9417 200
aloevera 0.9949 0.9800 0.9874 200
curcuma 0.9005 0.8600 0.8798 200
corn 0.9610 0.9850 0.9728 200
ginger 0.8551 0.8850 0.8698 200
sweetpotatoes 1.0000 0.9950 0.9975 200
kale 0.9268 0.9500 0.9383 200
longbeans 0.9850 0.9850 0.9850 200
watermelon 0.9252 0.9900 0.9565 200
mango 0.9239 0.9100 0.9169 200
waterapple 0.8807 0.9600 0.9187 200
accuracy 0.9292 6000
macro avg 0.9282 0.9292 0.9275 6000
weighted avg 0.9282 0.9292 0.9275 6000
```
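Reports in this format are what `sklearn.metrics.classification_report` prints; a sketch of the presumed call (`y_true`, `y_pred`, and the label names are placeholders):
```python
from sklearn.metrics import classification_report

# Placeholder integer class ids; target_names maps ids to label strings.
y_true = [0, 1, 2, 2]
y_pred = [0, 1, 2, 1]
print(classification_report(y_true, y_pred,
                            target_names=["guava", "galangal", "bilimbi"],
                            digits=4))
```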
|
[
"guava",
"galangal",
"bilimbi",
"paddy",
"eggplant",
"cucumber",
"cassava",
"papaya",
"banana",
"orange",
"cantaloupe",
"coconut",
"soybeans",
"pomelo",
"pineapple",
"melon",
"shallot",
"peperchili",
"spinach",
"tobacco",
"aloevera",
"curcuma",
"corn",
"ginger",
"sweetpotatoes",
"kale",
"longbeans",
"watermelon",
"mango",
"waterapple"
] |
arieg/fma_genre_classifier
|
# arieg/fma_genre_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2507
- Validation Loss: 1.5488
- Train Accuracy: 0.4275
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 32000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.7655 | 1.6490 | 0.4525 | 0 |
| 1.5913 | 1.5925 | 0.4325 | 1 |
| 1.4669 | 1.5805 | 0.4125 | 2 |
| 1.3545 | 1.5728 | 0.405 | 3 |
| 1.2507 | 1.5488 | 0.4275 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"electronic",
"experimental",
"folk",
"hip-hop",
"instrumental",
"international",
"pop",
"rock"
] |
csiztom/vit-base-patch16-224-in21k-street-view
|
# csiztom/vit-base-patch16-224-in21k-street-view
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.3425
- Train Accuracy: 0.3008
- Train Top-3-accuracy: 0.5072
- Validation Loss: 3.8645
- Validation Accuracy: 0.1618
- Validation Top-3-accuracy: 0.2830
- Epoch: 3
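Top-3 accuracy counts a prediction as correct when the true class appears among the three highest-scoring classes. Since this card was trained with Keras, a sketch with the built-in metric (the inputs are placeholders):
```python
import tensorflow as tf

# Placeholder logits for 4 samples over 5 classes; y_true holds integer class ids.
y_true = tf.constant([0, 1, 2, 3])
y_pred = tf.random.uniform((4, 5))
top3 = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top-3-accuracy")
top3.update_state(y_true, y_pred)
print(top3.result().numpy())
```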
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 6e-05, 'decay_steps': 5250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 4.6754 | 0.0403 | 0.0977 | 4.4868 | 0.0723 | 0.1510 | 0 |
| 4.2813 | 0.1050 | 0.2225 | 4.2261 | 0.0996 | 0.2077 | 1 |
| 3.8606 | 0.1848 | 0.3483 | 4.0354 | 0.1300 | 0.2513 | 2 |
| 3.3425 | 0.3008 | 0.5072 | 3.8645 | 0.1618 | 0.2830 | 3 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"svendborg kommune",
"faaborg-midtfyn kommune",
"høje-taastrup kommune",
"egedal kommune",
"odense kommune0",
"sorø kommune",
"ærø kommune",
"rudersdal kommune",
"kolding kommune0",
"nordfyns kommune",
"none",
"struer kommune",
"lolland kommune",
"lyngby-taarbæk kommune",
"brønderslev kommune",
"bornholms regionskommune",
"ballerup kommune1",
"københavns kommune0000",
"randers kommune1",
"kalundborg kommune",
"esbjerg kommune1",
"sønderborg kommune",
"vesthimmerlands kommune",
"odsherred kommune",
"københavns kommune0010",
"hørsholm kommune",
"aalborg kommune1",
"ringsted kommune",
"esbjerg kommune0",
"randers kommune0",
"ballerup kommune0",
"herning kommune",
"københavns kommune0001",
"københavns kommune111",
"syddjurs kommune",
"københavns kommune1100",
"dragør kommune",
"næstved kommune",
"solrød kommune",
"helsingør kommune",
"københavns kommune100",
"aarhus kommune10001",
"københavns kommune1101",
"kerteminde kommune",
"københavns kommune1010",
"slagelse kommune",
"fredensborg kommune",
"københavns kommune010",
"brøndby kommune",
"assens kommune",
"favrskov kommune",
"københavns kommune1011",
"lemvig kommune",
"københavns kommune0011",
"aarhus kommune10000",
"samsø kommune",
"aalborg kommune0011",
"københavns kommune011",
"langeland kommune",
"fanø kommune",
"aarhus kommune01",
"rødovre kommune",
"tønder kommune",
"roskilde kommune",
"thisted kommune",
"ringkøbing-skjern kommune",
"vejle kommune0",
"holstebro kommune",
"aalborg kommune01",
"odder kommune",
"nyborg kommune",
"gentofte kommune",
"aarhus kommune00",
"skive kommune",
"hvidovre kommune",
"tårnby kommune",
"vejle kommune1",
"frederikshavn kommune",
"hillerød kommune",
"mariagerfjord kommune",
"vejen kommune",
"middelfart kommune",
"silkeborg kommune1",
"halsnæs kommune",
"frederiksberg kommune",
"furesø kommune",
"morsø kommune",
"ikast-brande kommune",
"ishøj kommune",
"silkeborg kommune0",
"norddjurs kommune",
"guldborgsund kommune",
"aarhus kommune1001",
"gladsaxe kommune",
"allerød kommune",
"jammerbugt kommune",
"varde kommune",
"frederikssund kommune",
"vallensbæk kommune",
"gribskov kommune",
"aalborg kommune00101",
"albertslund kommune",
"greve kommune",
"læsø kommune",
"haderslev kommune",
"rebild kommune",
"faxe kommune",
"holbæk kommune",
"horsens kommune",
"odense kommune11",
"viborg kommune1",
"aarhus kommune11",
"aarhus kommune101",
"fredericia kommune",
"glostrup kommune",
"aalborg kommune00100",
"køge kommune",
"lejre kommune",
"vordingborg kommune",
"hedensted kommune",
"odense kommune10",
"viborg kommune0",
"stevns kommune",
"aabenraa kommune",
"hjørring kommune0",
"aalborg kommune000",
"skanderborg kommune",
"billund kommune",
"kolding kommune1",
"hjørring kommune1"
] |
dima806/14_flower_types_image_detection
|
Predicts the flower type in an image with about 99% accuracy.
See https://www.kaggle.com/code/dima806/14-flowers-image-detection-vit for more details.

```
Classification report:
precision recall f1-score support
rose 0.9951 0.9737 0.9843 419
astilbe 0.9952 0.9905 0.9928 419
carnation 0.9627 0.9857 0.9741 419
tulip 0.9929 1.0000 0.9964 420
water_lily 1.0000 0.9905 0.9952 419
bellflower 0.9811 0.9905 0.9857 419
coreopsis 0.9881 0.9881 0.9881 419
common_daisy 0.9858 0.9928 0.9893 419
iris 0.9953 1.0000 0.9976 420
dandelion 0.9905 1.0000 0.9952 419
sunflower 0.9976 0.9976 0.9976 419
california_poppy 0.9951 0.9761 0.9855 419
black_eyed_susan 0.9882 1.0000 0.9941 419
calendula 0.9854 0.9667 0.9760 420
accuracy 0.9894 5869
macro avg 0.9895 0.9894 0.9894 5869
weighted avg 0.9895 0.9894 0.9894 5869
```
|
[
"rose",
"astilbe",
"carnation",
"tulip",
"water_lily",
"bellflower",
"coreopsis",
"common_daisy",
"iris",
"dandelion",
"sunflower",
"california_poppy",
"black_eyed_susan",
"calendula"
] |
dima806/75_butterfly_types_image_detection
|
Predicts the butterfly type in an image with about 97% accuracy.
See https://www.kaggle.com/code/dima806/75-butterfly-types-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
GREY HAIRSTREAK 0.9623 0.9808 0.9714 52
COMMON BANDED AWL 0.9804 0.9434 0.9615 53
CHESTNUT 0.9815 1.0000 0.9907 53
EASTERN DAPPLE WHITE 0.9362 0.8462 0.8889 52
COMMON WOOD-NYMPH 0.9123 1.0000 0.9541 52
CLEOPATRA 1.0000 0.9808 0.9903 52
ELBOWED PIERROT 1.0000 0.9808 0.9903 52
MILBERTS TORTOISESHELL 0.9434 0.9434 0.9434 53
PEACOCK 1.0000 1.0000 1.0000 52
MALACHITE 1.0000 1.0000 1.0000 52
RED ADMIRAL 0.9423 0.9245 0.9333 53
INDRA SWALLOW 0.9804 0.9615 0.9709 52
MOURNING CLOAK 1.0000 0.9808 0.9903 52
CRECENT 1.0000 0.9808 0.9903 52
AN 88 1.0000 1.0000 1.0000 52
BECKERS WHITE 0.9455 1.0000 0.9720 52
ATALA 1.0000 1.0000 1.0000 52
PURPLISH COPPER 0.9259 0.9615 0.9434 52
SILVER SPOT SKIPPER 0.9286 1.0000 0.9630 52
ZEBRA LONG WING 1.0000 1.0000 1.0000 52
RED POSTMAN 0.9455 1.0000 0.9720 52
TROPICAL LEAFWING 0.9623 0.9808 0.9714 52
JULIA 0.9444 0.9808 0.9623 52
DANAID EGGFLY 0.9767 0.8077 0.8842 52
AMERICAN SNOOT 0.9615 0.9434 0.9524 53
BANDED ORANGE HELICONIAN 0.9800 0.9245 0.9515 53
ULYSES 1.0000 0.9623 0.9808 53
LARGE MARBLE 0.9057 0.9231 0.9143 52
RED SPOTTED PURPLE 0.9811 1.0000 0.9905 52
EASTERN PINE ELFIN 0.9636 1.0000 0.9815 53
ADONIS 0.9811 0.9811 0.9811 53
CLOUDED SULPHUR 0.8519 0.8679 0.8598 53
CABBAGE WHITE 0.9630 1.0000 0.9811 52
BLUE SPOTTED CROW 1.0000 0.9808 0.9903 52
GOLD BANDED 0.9815 1.0000 0.9907 53
VICEROY 1.0000 0.9811 0.9905 53
MANGROVE SKIPPER 0.9804 0.9615 0.9709 52
MESTRA 1.0000 0.9038 0.9495 52
CAIRNS BIRDWING 1.0000 1.0000 1.0000 53
BLACK HAIRSTREAK 0.9800 0.9423 0.9608 52
PAPER KITE 1.0000 1.0000 1.0000 52
ORCHARD SWALLOW 0.9615 0.9615 0.9615 52
ORANGE OAKLEAF 1.0000 1.0000 1.0000 52
PIPEVINE SWALLOW 1.0000 1.0000 1.0000 52
SCARCE SWALLOW 0.9811 0.9811 0.9811 53
PURPLE HAIRSTREAK 0.9615 0.9434 0.9524 53
PAINTED LADY 0.9630 1.0000 0.9811 52
EASTERN COMA 0.8033 0.9423 0.8673 52
CHECQUERED SKIPPER 1.0000 0.8846 0.9388 52
SOUTHERN DOGFACE 0.9057 0.9057 0.9057 53
CRIMSON PATCH 1.0000 1.0000 1.0000 52
YELLOW SWALLOW TAIL 0.9464 1.0000 0.9725 53
POPINJAY 1.0000 1.0000 1.0000 53
BLUE MORPHO 0.9811 1.0000 0.9905 52
COPPER TAIL 0.9184 0.8654 0.8911 52
BROWN SIPROETA 0.9811 1.0000 0.9905 52
GREEN CELLED CATTLEHEART 1.0000 0.9623 0.9808 53
PINE WHITE 1.0000 0.9808 0.9903 52
WOOD SATYR 0.9630 0.9811 0.9720 53
QUESTION MARK 0.9302 0.7692 0.8421 52
RED CRACKER 1.0000 0.9808 0.9903 52
ORANGE TIP 0.9815 1.0000 0.9907 53
SLEEPY ORANGE 0.9623 0.9623 0.9623 53
AFRICAN GIANT SWALLOWTAIL 1.0000 0.9811 0.9905 53
BANDED PEACOCK 1.0000 1.0000 1.0000 53
GREAT EGGFLY 0.8387 1.0000 0.9123 52
SOOTYWING 0.9630 0.9811 0.9720 53
IPHICLUS SISTER 1.0000 1.0000 1.0000 53
TWO BARRED FLASHER 0.9298 1.0000 0.9636 53
CLODIUS PARNASSIAN 0.9811 1.0000 0.9905 52
APPOLLO 0.9811 0.9811 0.9811 53
MONARCH 0.9811 1.0000 0.9905 52
STRAITED QUEEN 0.9630 1.0000 0.9811 52
METALMARK 0.9600 0.9057 0.9320 53
GREAT JAY 1.0000 0.9623 0.9808 53
accuracy 0.9674 3930
macro avg 0.9685 0.9674 0.9673 3930
weighted avg 0.9685 0.9674 0.9673 3930
```
|
[
"grey hairstreak",
"common banded awl",
"chestnut",
"eastern dapple white",
"common wood-nymph",
"cleopatra",
"elbowed pierrot",
"milberts tortoiseshell",
"peacock",
"malachite",
"red admiral",
"indra swallow",
"mourning cloak",
"crecent",
"an 88",
"beckers white",
"atala",
"purplish copper",
"silver spot skipper",
"zebra long wing",
"red postman",
"tropical leafwing",
"julia",
"danaid eggfly",
"american snoot",
"banded orange heliconian",
"ulyses",
"large marble",
"red spotted purple",
"eastern pine elfin",
"adonis",
"clouded sulphur",
"cabbage white",
"blue spotted crow",
"gold banded",
"viceroy",
"mangrove skipper",
"mestra",
"cairns birdwing",
"black hairstreak",
"paper kite",
"orchard swallow",
"orange oakleaf",
"pipevine swallow",
"scarce swallow",
"purple hairstreak",
"painted lady",
"eastern coma",
"checquered skipper",
"southern dogface",
"crimson patch",
"yellow swallow tail",
"popinjay",
"blue morpho",
"copper tail",
"brown siproeta",
"green celled cattleheart",
"pine white",
"wood satyr",
"question mark",
"red cracker",
"orange tip",
"sleepy orange",
"african giant swallowtail",
"banded peacock",
"great eggfly",
"sootywing",
"iphiclus sister",
"two barred flasher",
"clodius parnassian",
"appollo",
"monarch",
"straited queen",
"metalmark",
"great jay"
] |
Nana/Plantaide
|
This repository will host machine learning models for plant disease detection.
|
[
"maize_leaf_blight (mlb)",
"bean_rust (br)",
"tomato_spider_mite",
"bean_angular_leaf_spot (als)",
"cassava_brown_streak_disease (cbsd)",
"bean_healthy",
"maize_lethal_necrosis (mln)",
"maize_fall_army_worm (faw)",
"tomato_mosaic_virus",
"tomato_early_blight",
"cassava_healthy",
"tomato_bacterial_spot",
"tomato_yellowleaf_curl_virus",
"tomato_healthy",
"maize_healthy",
"maize_cercospora_leaf_spot (cls)",
"tomato_target_spot",
"tomato_leaf_mold",
"maize_streak_virus (msv)",
"tomato_late_blight",
"tomato_septoria_leaf_spot",
"cassava_mosaic_disease (cmd)"
] |
rossning92/swin-tiny-patch4-window7-224-finetuned-eurosat
|
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0846
- F1: 0.5965
- ROC AUC: 0.7500
- Accuracy: 0.3659
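The exact-match accuracy sitting far below the F1 suggests a multi-label setup, where ROC AUC is typically macro-averaged over labels; a sketch under that assumption (`y_true`/`y_score` are placeholder arrays):
```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder multi-label targets (N samples x L labels) and sigmoid scores.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_score = np.random.rand(4, 3)
print(roc_auc_score(y_true, y_score, average="macro"))
```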
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 0.91 | 8 | 0.6685 | 0.0992 | 0.5341 | 0.0 |
| 0.7041 | 1.94 | 17 | 0.4597 | 0.1245 | 0.5440 | 0.0 |
| 0.5413 | 2.97 | 26 | 0.1970 | 0.0287 | 0.5067 | 0.0081 |
| 0.2531 | 4.0 | 35 | 0.1487 | 0.0099 | 0.5022 | 0.0 |
| 0.15 | 4.91 | 43 | 0.1465 | 0.0566 | 0.5142 | 0.0407 |
| 0.145 | 5.94 | 52 | 0.1433 | 0.1166 | 0.5312 | 0.0569 |
| 0.138 | 6.97 | 61 | 0.1412 | 0.2140 | 0.5629 | 0.0976 |
| 0.1374 | 8.0 | 70 | 0.1377 | 0.2698 | 0.5827 | 0.1138 |
| 0.1374 | 8.91 | 78 | 0.1319 | 0.2410 | 0.5726 | 0.1057 |
| 0.1309 | 9.94 | 87 | 0.1284 | 0.3100 | 0.6014 | 0.1382 |
| 0.1256 | 10.97 | 96 | 0.1228 | 0.2667 | 0.5824 | 0.1220 |
| 0.1196 | 12.0 | 105 | 0.1201 | 0.3500 | 0.6186 | 0.1463 |
| 0.116 | 12.91 | 113 | 0.1169 | 0.3732 | 0.6286 | 0.1707 |
| 0.1102 | 13.94 | 122 | 0.1137 | 0.3650 | 0.6220 | 0.1951 |
| 0.1062 | 14.97 | 131 | 0.1082 | 0.3843 | 0.6316 | 0.2195 |
| 0.1019 | 16.0 | 140 | 0.1048 | 0.4630 | 0.6751 | 0.2602 |
| 0.1019 | 16.91 | 148 | 0.1033 | 0.4475 | 0.6614 | 0.2602 |
| 0.0965 | 17.94 | 157 | 0.1046 | 0.4890 | 0.6899 | 0.2846 |
| 0.0935 | 18.97 | 166 | 0.1014 | 0.4651 | 0.6711 | 0.2358 |
| 0.0928 | 20.0 | 175 | 0.0998 | 0.4877 | 0.6918 | 0.2520 |
| 0.0897 | 20.91 | 183 | 0.0959 | 0.5145 | 0.6961 | 0.2683 |
| 0.0843 | 21.94 | 192 | 0.0933 | 0.5296 | 0.7080 | 0.2927 |
| 0.0829 | 22.97 | 201 | 0.0919 | 0.5610 | 0.7255 | 0.3171 |
| 0.0804 | 24.0 | 210 | 0.0917 | 0.5644 | 0.7257 | 0.3496 |
| 0.0804 | 24.91 | 218 | 0.0898 | 0.6036 | 0.7505 | 0.3577 |
| 0.0797 | 25.94 | 227 | 0.0886 | 0.5758 | 0.7331 | 0.3333 |
| 0.0762 | 26.97 | 236 | 0.0865 | 0.5740 | 0.7330 | 0.3415 |
| 0.0757 | 28.0 | 245 | 0.0879 | 0.5893 | 0.7429 | 0.3577 |
| 0.0736 | 28.91 | 253 | 0.0866 | 0.5875 | 0.7427 | 0.3415 |
| 0.0716 | 29.94 | 262 | 0.0855 | 0.5910 | 0.7430 | 0.3659 |
| 0.0722 | 30.97 | 271 | 0.0857 | 0.5917 | 0.7452 | 0.3577 |
| 0.0716 | 32.0 | 280 | 0.0864 | 0.5868 | 0.7405 | 0.3415 |
| 0.0716 | 32.91 | 288 | 0.0850 | 0.5917 | 0.7452 | 0.3577 |
| 0.0701 | 33.94 | 297 | 0.0849 | 0.5965 | 0.7500 | 0.3577 |
| 0.0701 | 34.97 | 306 | 0.0844 | 0.5875 | 0.7427 | 0.3496 |
| 0.0704 | 36.0 | 315 | 0.0846 | 0.5982 | 0.7501 | 0.3659 |
| 0.0695 | 36.57 | 320 | 0.0846 | 0.5965 | 0.7500 | 0.3659 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"alina-ruppel",
"alla-moisey",
"anya",
"backbend",
"cls-diana",
"cls-elizabeth",
"contortion-naked",
"contortion-passive",
"elizabeth",
"entse",
"flexshow-julia",
"front-splits",
"frontbend",
"ivanka",
"ksenia-rom",
"leg-shouldering",
"leopard-girl",
"mariana",
"marinelli-bend",
"mylittleyoxi3",
"nadya",
"needle-scale",
"olesya",
"oversplits",
"regina",
"starred",
"straddle-splits",
"straddle-splits-on-head",
"tanya",
"totaltoning",
"triple-fold",
"ula",
"zlata",
"zolboo"
] |
damiacc2/food_classifier
|
# damiacc2/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2269
- Train Accuracy: 0.926
- Validation Loss: 0.2786
- Validation Accuracy: 0.9260
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4267 | 0.909 | 0.3843 | 0.9090 | 0 |
| 0.3467 | 0.917 | 0.3304 | 0.9170 | 1 |
| 0.2926 | 0.913 | 0.3178 | 0.9130 | 2 |
| 0.2469 | 0.917 | 0.3025 | 0.9170 | 3 |
| 0.2269 | 0.926 | 0.2786 | 0.9260 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
bdpc/vit-base_rvl_cdip-N1K_ce_2
|
# vit-base_rvl_cdip-N1K_ce_2
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1280
- Accuracy: 0.871
- Brier Loss: 0.2424
- NLL: 1.0979
- F1 Micro: 0.871
- F1 Macro: 0.8714
- ECE: 0.1202
- AURC: 0.0321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.6617 | 1.0 | 8000 | 0.8187 | 0.8305 | 0.2858 | 1.3679 | 0.8305 | 0.8317 | 0.1182 | 0.0482 |
| 0.4446 | 2.0 | 16000 | 0.7450 | 0.8598 | 0.2427 | 1.2010 | 0.8598 | 0.8593 | 0.1100 | 0.0344 |
| 0.3853 | 3.0 | 24000 | 0.9422 | 0.8452 | 0.2771 | 1.3313 | 0.8452 | 0.8459 | 0.1288 | 0.0413 |
| 0.2259 | 4.0 | 32000 | 0.9828 | 0.8498 | 0.2702 | 1.3463 | 0.8498 | 0.8475 | 0.1267 | 0.0426 |
| 0.1824 | 5.0 | 40000 | 1.0204 | 0.8578 | 0.2603 | 1.2439 | 0.8578 | 0.8580 | 0.1263 | 0.0369 |
| 0.0343 | 6.0 | 48000 | 1.1306 | 0.8532 | 0.2716 | 1.1966 | 0.8532 | 0.8533 | 0.1340 | 0.0367 |
| 0.0288 | 7.0 | 56000 | 1.1081 | 0.8638 | 0.2552 | 1.2206 | 0.8638 | 0.8642 | 0.1243 | 0.0345 |
| 0.0122 | 8.0 | 64000 | 1.1533 | 0.8625 | 0.2579 | 1.2019 | 0.8625 | 0.8620 | 0.1260 | 0.0348 |
| 0.004 | 9.0 | 72000 | 1.1360 | 0.868 | 0.2487 | 1.0798 | 0.868 | 0.8691 | 0.1228 | 0.0330 |
| 0.0027 | 10.0 | 80000 | 1.1280 | 0.871 | 0.2424 | 1.0979 | 0.871 | 0.8714 | 0.1202 | 0.0321 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
sck/vca
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vca
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
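The effective batch size above follows from pairing the per-device batch with gradient accumulation (64 × 4 = 256). A minimal sketch of that mechanic, using a toy model and random data rather than the actual ViT:

```python
import torch
from torch import nn

# Toy stand-ins; the real training uses a ViT on the imagefolder dataset.
model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-05)
criterion = nn.CrossEntropyLoss()
accumulation_steps = 4  # gradient_accumulation_steps from the config above

for step in range(8):  # yields two optimizer updates
    x = torch.randn(64, 10)              # per-device batch of 64
    y = torch.randint(0, 2, (64,))
    loss = criterion(model(x), y) / accumulation_steps  # keep the average consistent
    loss.backward()                      # gradients accumulate across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                 # one update per 256 effective samples
        optimizer.zero_grad()
```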
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.95 | 5 | 0.4596 | 0.0 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"0",
"1"
] |
Hafiz47/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hafiz47/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3692
- Validation Loss: 0.3328
- Train Accuracy: 0.926
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
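The serialized optimizer above is Keras' `AdamWeightDecay` with a linear `PolynomialDecay` schedule. A hedged sketch of building an equivalent optimizer/schedule pair with `transformers.create_optimizer`, copying the values from that config:

```python
from transformers import create_optimizer

# Values taken from the serialized config above; everything else is default.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,           # initial_learning_rate
    num_train_steps=20_000,  # decay_steps
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```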
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7777 | 1.6234 | 0.834 | 0 |
| 1.1884 | 0.7782 | 0.911 | 1 |
| 0.6717 | 0.5104 | 0.908 | 2 |
| 0.4754 | 0.4022 | 0.914 | 3 |
| 0.3692 | 0.3328 | 0.926 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
jayanta/vit-base-patch16-224-in21k-face-recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-face-recognition
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Accuracy: 1.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0368 | 1.0 | 372 | 0.0346 | 1.0000 |
| 0.0094 | 2.0 | 744 | 0.0092 | 1.0000 |
| 0.0046 | 3.0 | 1116 | 0.0047 | 1.0000 |
| 0.0029 | 4.0 | 1488 | 0.0029 | 1.0 |
| 0.0022 | 5.0 | 1860 | 0.0023 | 0.9999 |
| 0.0017 | 6.0 | 2232 | 0.0017 | 1.0 |
| 0.0015 | 7.0 | 2604 | 0.0015 | 1.0 |
| 0.0014 | 8.0 | 2976 | 0.0015 | 1.0000 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.2
- Tokenizers 0.11.0
|
[
"al-faiz_ali",
"aryan",
"ranveer",
"riktom",
"tejas",
"divyanshu",
"jayanta",
"sai_dushwanth",
"unknown"
] |
bdpc/vit-base_rvl_cdip-N1K_aAURC_256
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_cdip-N1K_aAURC_256
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4424
- Accuracy: 0.8932
- Brier Loss: 0.1751
- Nll: 1.0235
- F1 Micro: 0.8932
- F1 Macro: 0.8934
- Ece: 0.0697
- Aurc: 0.0181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 63 | 0.3624 | 0.8972 | 0.1554 | 1.1865 | 0.8972 | 0.8975 | 0.0426 | 0.0165 |
| No log | 2.0 | 126 | 0.3717 | 0.8965 | 0.1582 | 1.1519 | 0.8965 | 0.8967 | 0.0460 | 0.0170 |
| No log | 3.0 | 189 | 0.3988 | 0.8922 | 0.1690 | 1.1118 | 0.8922 | 0.8927 | 0.0578 | 0.0177 |
| No log | 4.0 | 252 | 0.4032 | 0.8942 | 0.1677 | 1.0854 | 0.8942 | 0.8946 | 0.0590 | 0.0177 |
| No log | 5.0 | 315 | 0.4195 | 0.894 | 0.1706 | 1.0664 | 0.894 | 0.8942 | 0.0628 | 0.0179 |
| No log | 6.0 | 378 | 0.4251 | 0.8955 | 0.1711 | 1.0462 | 0.8955 | 0.8957 | 0.0637 | 0.0179 |
| No log | 7.0 | 441 | 0.4341 | 0.8925 | 0.1726 | 1.0210 | 0.8925 | 0.8927 | 0.0682 | 0.0181 |
| 0.057 | 8.0 | 504 | 0.4379 | 0.893 | 0.1744 | 1.0253 | 0.893 | 0.8932 | 0.0687 | 0.0180 |
| 0.057 | 9.0 | 567 | 0.4411 | 0.8928 | 0.1748 | 1.0200 | 0.8928 | 0.8929 | 0.0712 | 0.0181 |
| 0.057 | 10.0 | 630 | 0.4424 | 0.8932 | 0.1751 | 1.0235 | 0.8932 | 0.8934 | 0.0697 | 0.0181 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
mhajjaj/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2816
- Accuracy: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.93 | 7 | 0.6878 | 0.7596 |
| 0.8517 | 2.0 | 15 | 0.3490 | 0.8462 |
| 0.3824 | 2.8 | 21 | 0.2816 | 0.9327 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
meeen/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"milddemented",
"moderatedemented",
"nondemented",
"verymilddemented"
] |
amey6056/resnet-50-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- Accuracy: 0.9648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7502 | 1.0 | 549 | 1.3285 | 0.7954 |
| 0.6126 | 2.0 | 1098 | 0.3205 | 0.9301 |
| 0.4063 | 3.0 | 1647 | 0.1893 | 0.9551 |
| 0.3333 | 4.0 | 2197 | 0.1515 | 0.9624 |
| 0.3365 | 5.0 | 2745 | 0.1363 | 0.9648 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple___apple_scab",
"apple___black_rot",
"apple___cedar_apple_rust",
"apple___healthy",
"blueberry___healthy",
"cherry_(including_sour)___powdery_mildew",
"cherry_(including_sour)___healthy",
"corn_(maize)___cercospora_leaf_spot gray_leaf_spot",
"corn_(maize)___common_rust_",
"corn_(maize)___northern_leaf_blight",
"corn_(maize)___healthy",
"grape___black_rot",
"grape___esca_(black_measles)",
"grape___leaf_blight_(isariopsis_leaf_spot)",
"grape___healthy",
"orange___haunglongbing_(citrus_greening)",
"peach___bacterial_spot",
"peach___healthy",
"pepper,_bell___bacterial_spot",
"pepper,_bell___healthy",
"potato___early_blight",
"potato___late_blight",
"potato___healthy",
"raspberry___healthy",
"soybean___healthy",
"squash___powdery_mildew",
"strawberry___leaf_scorch",
"strawberry___healthy",
"tomato___bacterial_spot",
"tomato___early_blight",
"tomato___late_blight",
"tomato___leaf_mold",
"tomato___septoria_leaf_spot",
"tomato___spider_mites two-spotted_spider_mite",
"tomato___target_spot",
"tomato___tomato_yellow_leaf_curl_virus",
"tomato___tomato_mosaic_virus",
"tomato___healthy"
] |
PedroSampaio/vit-base-patch16-224-in21k-finetuned-lora-food101-awesome
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
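Since the card itself omits the snippet, here is a hedged sketch of loading this LoRA adapter with PEFT. The base checkpoint and label count are inferred from the repository name (ViT-base in21k, Food-101), not confirmed by the card:

```python
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base_id = "google/vit-base-patch16-224-in21k"   # assumed base model
adapter_id = "PedroSampaio/vit-base-patch16-224-in21k-finetuned-lora-food101-awesome"

# A fresh 101-way head is assumed, as in the standard PEFT image-classification recipe.
base = AutoModelForImageClassification.from_pretrained(
    base_id, num_labels=101, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
processor = AutoImageProcessor.from_pretrained(base_id)
```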
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
Franman/clasificador-de-pesos
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-de-pesos
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0739
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0999 | 0.96 | 13 | 0.7987 | 0.6440 |
| 0.6342 | 2.0 | 27 | 0.2414 | 0.9424 |
| 0.4882 | 2.96 | 40 | 0.1646 | 0.9634 |
| 0.463 | 4.0 | 54 | 0.2528 | 0.9424 |
| 0.4609 | 4.96 | 67 | 0.1130 | 0.9791 |
| 0.4251 | 6.0 | 81 | 0.1304 | 0.9634 |
| 0.3802 | 6.96 | 94 | 0.0739 | 0.9843 |
| 0.4147 | 7.7 | 104 | 0.0675 | 0.9843 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"100",
"1000",
"200"
] |
dima806/10_ship_types_image_detection
|
Predicts the ship type of an input image with about 99.6% accuracy.
See https://www.kaggle.com/code/dima806/ship-type-detection-vit for more details.
```
Classification report:
                  precision    recall  f1-score   support

         Bulkers     0.9927    1.0000    0.9963       409
    Recreational     0.9902    0.9927    0.9915       409
        Sailboat     0.9975    0.9853    0.9914       409
             DDG     0.9976    1.0000    0.9988       409
  Container Ship     1.0000    0.9951    0.9975       409
             Tug     0.9951    0.9927    0.9939       410
Aircraft Carrier     1.0000    0.9976    0.9988       409
          Cruise     1.0000    1.0000    1.0000       409
       Submarine     0.9927    1.0000    0.9964       410
     Car Carrier     0.9951    0.9976    0.9963       409

        accuracy                         0.9961      4092
       macro avg     0.9961    0.9961    0.9961      4092
    weighted avg     0.9961    0.9961    0.9961      4092
```
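A hedged usage sketch for the checkpoint above via the `image-classification` pipeline; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="dima806/10_ship_types_image_detection",
)
# Returns the top predicted ship types with scores for the given image.
print(classifier("ship_photo.jpg"))
```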
|
[
"bulkers",
"recreational",
"sailboat",
"ddg",
"container ship",
"tug",
"aircraft carrier",
"cruise",
"submarine",
"car carrier"
] |
PedroSampaio/vit-base-patch16-224-in21k-food101-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-food101-16-7
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3600
- Accuracy: 0.9080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9168 | 1.0 | 1183 | 1.6711 | 0.8177 |
| 0.9489 | 2.0 | 2367 | 0.6971 | 0.8659 |
| 0.6068 | 3.0 | 3551 | 0.4862 | 0.8894 |
| 0.5981 | 4.0 | 4735 | 0.4238 | 0.8948 |
| 0.6099 | 5.0 | 5918 | 0.3905 | 0.8994 |
| 0.4873 | 6.0 | 7102 | 0.3715 | 0.9028 |
| 0.459 | 7.0 | 8281 | 0.3600 | 0.9080 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
PedroSampaio/swin-base-patch4-window7-224-in22k-food101-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-food101-16-7
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2515
- Accuracy: 0.9292
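A hedged sketch of re-checking such an accuracy figure on a slice of the food101 validation split; the batch size and slice length are illustrative choices, not taken from the card:

```python
import evaluate
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "PedroSampaio/swin-base-patch4-window7-224-in22k-food101-16-7"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo).eval()
metric = evaluate.load("accuracy")

ds = load_dataset("food101", split="validation[:64]")  # small slice for the sketch
for batch in ds.iter(batch_size=16):
    inputs = processor(images=batch["image"], return_tensors="pt")
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(-1)
    metric.add_batch(predictions=preds, references=batch["label"])
print(metric.compute())  # {'accuracy': ...}
```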
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8296 | 1.0 | 1183 | 0.4354 | 0.8731 |
| 0.6811 | 2.0 | 2367 | 0.3406 | 0.8999 |
| 0.4531 | 3.0 | 3551 | 0.2902 | 0.9154 |
| 0.5265 | 4.0 | 4735 | 0.2751 | 0.9199 |
| 0.4338 | 5.0 | 5918 | 0.2689 | 0.9227 |
| 0.3443 | 6.0 | 7102 | 0.2538 | 0.9276 |
| 0.3871 | 7.0 | 8281 | 0.2515 | 0.9292 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
PedroSampaio/vit-base-patch16-224-food101-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-food101-16-7
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3293
- Accuracy: 0.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
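The "linear" scheduler with `warmup_ratio: 0.1` above means the learning rate climbs over the first 10% of optimizer steps and then decays linearly to zero. A minimal sketch with `get_linear_schedule_with_warmup`; the dummy parameter exists only so the optimizer can be constructed:

```python
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]      # dummy parameter
optimizer = torch.optim.AdamW(params, lr=5e-05)
num_training_steps = 8281                          # final step count from the table below
num_warmup_steps = int(0.1 * num_training_steps)   # warmup_ratio of 0.1
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps, num_training_steps
)
```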
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9326 | 1.0 | 1183 | 0.5737 | 0.8566 |
| 0.6632 | 2.0 | 2367 | 0.4265 | 0.884 |
| 0.4608 | 3.0 | 3551 | 0.3747 | 0.8958 |
| 0.5356 | 4.0 | 4735 | 0.3557 | 0.8992 |
| 0.483 | 5.0 | 5918 | 0.3431 | 0.9044 |
| 0.3975 | 6.0 | 7102 | 0.3343 | 0.9071 |
| 0.3716 | 7.0 | 8281 | 0.3293 | 0.9081 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
PedroSampaio/swin-base-patch4-window7-224-food101-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-food101-16-7
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2478
- Accuracy: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8681 | 1.0 | 1183 | 0.4437 | 0.8731 |
| 0.6919 | 2.0 | 2367 | 0.3323 | 0.9038 |
| 0.4668 | 3.0 | 3551 | 0.2928 | 0.9158 |
| 0.5488 | 4.0 | 4735 | 0.2752 | 0.9209 |
| 0.4527 | 5.0 | 5918 | 0.2600 | 0.9255 |
| 0.3692 | 6.0 | 7102 | 0.2519 | 0.9272 |
| 0.3731 | 7.0 | 8281 | 0.2478 | 0.9289 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
PedroSampaio/fruits-360-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fruits-360-16-7
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0026
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0718 | 1.0 | 1057 | 0.0188 | 0.9976 |
| 0.0135 | 2.0 | 2115 | 0.0055 | 0.9992 |
| 0.0236 | 3.0 | 3173 | 0.0077 | 0.9976 |
| 0.0082 | 4.0 | 4231 | 0.0026 | 0.9992 |
| 0.004 | 5.0 | 5288 | 0.0036 | 0.9988 |
| 0.0067 | 6.0 | 6346 | 0.0024 | 0.9991 |
| 0.0005 | 7.0 | 7399 | 0.0022 | 0.9992 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple braeburn",
"apple crimson snow",
"apple golden",
"apple granny smith",
"apple pink lady",
"apple red",
"apple red delicious",
"apple red yellow",
"apricot",
"avocado",
"avocado ripe",
"banana",
"banana lady finger",
"banana red",
"beetroot",
"blueberry",
"cactus fruit",
"cantaloupe",
"carambula",
"cauliflower",
"cherry",
"cherry rainier",
"cherry wax black",
"cherry wax red",
"cherry wax yellow",
"chestnut",
"clementine",
"cocos",
"corn",
"corn husk",
"cucumber ripe",
"dates",
"eggplant",
"fig",
"ginger root",
"granadilla",
"grape blue",
"grape pink",
"grape white",
"grapefruit pink",
"grapefruit white",
"guava",
"hazelnut",
"huckleberry",
"kaki",
"kiwi",
"kohlrabi",
"kumquats",
"lemon",
"lemon meyer",
"limes",
"lychee",
"mandarine",
"mango",
"mango red",
"mangostan",
"maracuja",
"melon piel de sapo",
"mulberry",
"nectarine",
"nectarine flat",
"nut forest",
"nut pecan",
"onion red",
"onion red peeled",
"onion white",
"orange",
"papaya",
"passion fruit",
"peach",
"peach flat",
"pear",
"pear abate",
"pear forelle",
"pear kaiser",
"pear monster",
"pear red",
"pear stone",
"pear williams",
"pepino",
"pepper green",
"pepper orange",
"pepper red",
"pepper yellow",
"physalis",
"physalis with husk",
"pineapple",
"pineapple mini",
"pitahaya red",
"plum",
"pomegranate",
"pomelo sweetie",
"potato red",
"potato red washed",
"potato sweet",
"potato white",
"quince",
"rambutan",
"raspberry",
"redcurrant",
"salak",
"strawberry",
"strawberry wedge",
"tamarillo",
"tangelo",
"tomato",
"tomato cherry red",
"tomato heart",
"tomato maroon",
"tomato yellow",
"tomato not ripened",
"walnut",
"watermelon"
] |
KazuSuzuki/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KazuSuzuki/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3898
- Validation Loss: 0.3488
- Train Accuracy: 0.907
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7694 | 1.6101 | 0.831 | 0 |
| 1.2109 | 0.7967 | 0.899 | 1 |
| 0.7029 | 0.5165 | 0.908 | 2 |
| 0.4933 | 0.4298 | 0.895 | 3 |
| 0.3898 | 0.3488 | 0.907 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Akshay0706/Cinnamon-Plant-Model-Final
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cinnamon-Plant-Model-Final
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0348
- eval_accuracy: 0.9796
- eval_runtime: 8.12
- eval_samples_per_second: 6.034
- eval_steps_per_second: 1.601
- epoch: 187.0
- step: 1683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"0",
"1"
] |
Akshay0706/Flower-Image-Classification-Model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flower-Image-Classification-Model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5898
- Accuracy: 0.9876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8603 | 1.0 | 1443 | 0.5898 | 0.9876 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10",
"11",
"12",
"13",
"14",
"15",
"16",
"17",
"18",
"19",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"27",
"28",
"29",
"30",
"31",
"32",
"33",
"34",
"35",
"36",
"37"
] |
PedroSampaio/vit-base-patch16-224-fruits-360-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-fruits-360-16-7
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.04 | 1.0 | 1057 | 0.0216 | 0.9953 |
| 0.0053 | 2.0 | 2115 | 0.0081 | 0.9974 |
| 0.0252 | 3.0 | 3173 | 0.0043 | 0.9991 |
| 0.0221 | 4.0 | 4231 | 0.0038 | 0.9991 |
| 0.0116 | 5.0 | 5288 | 0.0010 | 1.0 |
| 0.0014 | 6.0 | 6346 | 0.0013 | 0.9997 |
| 0.0003 | 7.0 | 7399 | 0.0011 | 0.9996 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple braeburn",
"apple crimson snow",
"apple golden",
"apple granny smith",
"apple pink lady",
"apple red",
"apple red delicious",
"apple red yellow",
"apricot",
"avocado",
"avocado ripe",
"banana",
"banana lady finger",
"banana red",
"beetroot",
"blueberry",
"cactus fruit",
"cantaloupe",
"carambula",
"cauliflower",
"cherry",
"cherry rainier",
"cherry wax black",
"cherry wax red",
"cherry wax yellow",
"chestnut",
"clementine",
"cocos",
"corn",
"corn husk",
"cucumber ripe",
"dates",
"eggplant",
"fig",
"ginger root",
"granadilla",
"grape blue",
"grape pink",
"grape white",
"grapefruit pink",
"grapefruit white",
"guava",
"hazelnut",
"huckleberry",
"kaki",
"kiwi",
"kohlrabi",
"kumquats",
"lemon",
"lemon meyer",
"limes",
"lychee",
"mandarine",
"mango",
"mango red",
"mangostan",
"maracuja",
"melon piel de sapo",
"mulberry",
"nectarine",
"nectarine flat",
"nut forest",
"nut pecan",
"onion red",
"onion red peeled",
"onion white",
"orange",
"papaya",
"passion fruit",
"peach",
"peach flat",
"pear",
"pear abate",
"pear forelle",
"pear kaiser",
"pear monster",
"pear red",
"pear stone",
"pear williams",
"pepino",
"pepper green",
"pepper orange",
"pepper red",
"pepper yellow",
"physalis",
"physalis with husk",
"pineapple",
"pineapple mini",
"pitahaya red",
"plum",
"pomegranate",
"pomelo sweetie",
"potato red",
"potato red washed",
"potato sweet",
"potato white",
"quince",
"rambutan",
"raspberry",
"redcurrant",
"salak",
"strawberry",
"strawberry wedge",
"tamarillo",
"tangelo",
"tomato",
"tomato cherry red",
"tomato heart",
"tomato maroon",
"tomato yellow",
"tomato not ripened",
"walnut",
"watermelon"
] |
PedroSampaio/vit-base-patch16-224-in21k-fruits-360-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-fruits-360-16-7
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0131
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4608 | 1.0 | 1057 | 0.4181 | 0.9983 |
| 0.0699 | 2.0 | 2115 | 0.0649 | 0.9953 |
| 0.0313 | 3.0 | 3173 | 0.0243 | 0.9986 |
| 0.0143 | 4.0 | 4231 | 0.0131 | 0.9992 |
| 0.0121 | 5.0 | 5288 | 0.0103 | 0.9989 |
| 0.009 | 6.0 | 6346 | 0.0095 | 0.9988 |
| 0.0037 | 7.0 | 7399 | 0.0090 | 0.9989 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple braeburn",
"apple crimson snow",
"apple golden",
"apple granny smith",
"apple pink lady",
"apple red",
"apple red delicious",
"apple red yellow",
"apricot",
"avocado",
"avocado ripe",
"banana",
"banana lady finger",
"banana red",
"beetroot",
"blueberry",
"cactus fruit",
"cantaloupe",
"carambula",
"cauliflower",
"cherry",
"cherry rainier",
"cherry wax black",
"cherry wax red",
"cherry wax yellow",
"chestnut",
"clementine",
"cocos",
"corn",
"corn husk",
"cucumber ripe",
"dates",
"eggplant",
"fig",
"ginger root",
"granadilla",
"grape blue",
"grape pink",
"grape white",
"grapefruit pink",
"grapefruit white",
"guava",
"hazelnut",
"huckleberry",
"kaki",
"kiwi",
"kohlrabi",
"kumquats",
"lemon",
"lemon meyer",
"limes",
"lychee",
"mandarine",
"mango",
"mango red",
"mangostan",
"maracuja",
"melon piel de sapo",
"mulberry",
"nectarine",
"nectarine flat",
"nut forest",
"nut pecan",
"onion red",
"onion red peeled",
"onion white",
"orange",
"papaya",
"passion fruit",
"peach",
"peach flat",
"pear",
"pear abate",
"pear forelle",
"pear kaiser",
"pear monster",
"pear red",
"pear stone",
"pear williams",
"pepino",
"pepper green",
"pepper orange",
"pepper red",
"pepper yellow",
"physalis",
"physalis with husk",
"pineapple",
"pineapple mini",
"pitahaya red",
"plum",
"pomegranate",
"pomelo sweetie",
"potato red",
"potato red washed",
"potato sweet",
"potato white",
"quince",
"rambutan",
"raspberry",
"redcurrant",
"salak",
"strawberry",
"strawberry wedge",
"tamarillo",
"tangelo",
"tomato",
"tomato cherry red",
"tomato heart",
"tomato maroon",
"tomato yellow",
"tomato not ripened",
"walnut",
"watermelon"
] |
PedroSampaio/swin-base-patch4-window7-224-fruits-360-16-7
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-fruits-360-16-7
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0631 | 1.0 | 1057 | 0.0149 | 0.9956 |
| 0.0254 | 2.0 | 2115 | 0.0042 | 0.9986 |
| 0.026 | 3.0 | 3173 | 0.0021 | 0.9990 |
| 0.0092 | 4.0 | 4231 | 0.0035 | 0.9986 |
| 0.0151 | 5.0 | 5288 | 0.0014 | 0.9997 |
| 0.0075 | 6.0 | 6346 | 0.0047 | 0.9989 |
| 0.0001 | 7.0 | 7399 | 0.0010 | 0.9998 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple braeburn",
"apple crimson snow",
"apple golden",
"apple granny smith",
"apple pink lady",
"apple red",
"apple red delicious",
"apple red yellow",
"apricot",
"avocado",
"avocado ripe",
"banana",
"banana lady finger",
"banana red",
"beetroot",
"blueberry",
"cactus fruit",
"cantaloupe",
"carambula",
"cauliflower",
"cherry",
"cherry rainier",
"cherry wax black",
"cherry wax red",
"cherry wax yellow",
"chestnut",
"clementine",
"cocos",
"corn",
"corn husk",
"cucumber ripe",
"dates",
"eggplant",
"fig",
"ginger root",
"granadilla",
"grape blue",
"grape pink",
"grape white",
"grapefruit pink",
"grapefruit white",
"guava",
"hazelnut",
"huckleberry",
"kaki",
"kiwi",
"kohlrabi",
"kumquats",
"lemon",
"lemon meyer",
"limes",
"lychee",
"mandarine",
"mango",
"mango red",
"mangostan",
"maracuja",
"melon piel de sapo",
"mulberry",
"nectarine",
"nectarine flat",
"nut forest",
"nut pecan",
"onion red",
"onion red peeled",
"onion white",
"orange",
"papaya",
"passion fruit",
"peach",
"peach flat",
"pear",
"pear abate",
"pear forelle",
"pear kaiser",
"pear monster",
"pear red",
"pear stone",
"pear williams",
"pepino",
"pepper green",
"pepper orange",
"pepper red",
"pepper yellow",
"physalis",
"physalis with husk",
"pineapple",
"pineapple mini",
"pitahaya red",
"plum",
"pomegranate",
"pomelo sweetie",
"potato red",
"potato red washed",
"potato sweet",
"potato white",
"quince",
"rambutan",
"raspberry",
"redcurrant",
"salak",
"strawberry",
"strawberry wedge",
"tamarillo",
"tangelo",
"tomato",
"tomato cherry red",
"tomato heart",
"tomato maroon",
"tomato yellow",
"tomato not ripened",
"walnut",
"watermelon"
] |
JiachengZhu/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0843
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3117 | 1.0 | 130 | 0.2071 | 0.9774 |
| 0.2063 | 2.0 | 260 | 0.1341 | 0.9699 |
| 0.1807 | 3.0 | 390 | 0.1080 | 0.9774 |
| 0.0836 | 4.0 | 520 | 0.0987 | 0.9774 |
| 0.1266 | 5.0 | 650 | 0.0843 | 0.9850 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
henrico219/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# henrico219/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3847
- Validation Loss: 0.3365
- Train Accuracy: 0.926
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7731 | 1.5928 | 0.83 | 0 |
| 1.2255 | 0.8033 | 0.903 | 1 |
| 0.7124 | 0.5400 | 0.906 | 2 |
| 0.5017 | 0.4041 | 0.911 | 3 |
| 0.3847 | 0.3365 | 0.926 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
bkkthon/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bkkthon/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6611
- Validation Loss: 1.0448
- Train Accuracy: 0.873
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6611 | 1.0448 | 0.873 | 0 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
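### How to use
As with the other Keras cards here, no usage example is given; below is a minimal TensorFlow inference sketch, assuming the checkpoint and its image processor are published under this repo id (the image path is a placeholder):
```python
# Minimal TensorFlow inference sketch (image path is hypothetical).
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "bkkthon/food_classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("dish.jpg"), return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```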
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Kengi/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Kengi/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3700
- Validation Loss: 0.3118
- Train Accuracy: 0.924
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7522 | 1.5990 | 0.833 | 0 |
| 1.2011 | 0.7689 | 0.889 | 1 |
| 0.6871 | 0.5054 | 0.907 | 2 |
| 0.4777 | 0.3800 | 0.91 | 3 |
| 0.3700 | 0.3118 | 0.924 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
kyvu/mushroom-classification
|
<h1>Mushroom classification</h1>
<p>
The model predicts which of the species listed below a mushroom image belongs to.
</p>
<ul>
<li>Agaricus</li>
<li>Amanita</li>
<li>Boletus</li>
<li>Cortinarius</li>
<li>Entoloma</li>
<li>Exidia</li>
<li>Hygrocybe</li>
<li>Inocybe</li>
<li>Lactarius</li>
<li>Russula</li>
<li>Suillus</li>
</ul>
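The card gives no loading instructions; here is a minimal sketch under the assumption that the checkpoint works with the standard image-classification pipeline (the image path is a placeholder):
```python
# Sketch: top-3 species predictions (repo id taken from the card).
from transformers import pipeline

classifier = pipeline("image-classification", model="kyvu/mushroom-classification")
print(classifier("mushroom.jpg", top_k=3))
```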
|
[
"agaricus",
"amanita",
"suillus",
"boletus",
"cortinarius",
"entoloma",
"exidia",
"hygrocybe",
"inocybe",
"lactarius",
"russula"
] |
Mahendra42/swin-tiny-patch4-window7-224_RCC_Classifierv3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224_RCC_Classifierv3
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 7.6560
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.0095 | 1.0 | 118 | 2.6285 | 0.0 |
| 0.0038 | 2.0 | 236 | 5.5074 | 0.0 |
| 0.0001 | 3.0 | 354 | 5.7253 | 0.0 |
| 0.0111 | 4.0 | 472 | 6.6322 | 0.0 |
| 0.003 | 5.0 | 590 | 7.3159 | 0.0 |
| 0.0002 | 6.0 | 708 | 7.8263 | 0.0 |
| 0.0112 | 7.0 | 826 | 7.8469 | 0.0 |
| 0.0021 | 8.0 | 944 | 7.4348 | 0.0 |
| 0.0 | 9.0 | 1062 | 7.6901 | 0.0 |
| 0.0002 | 10.0 | 1180 | 7.6560 | 0.0 |
### Framework versions
- Transformers 4.34.1
- Pytorch 1.12.1
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"clear cell rcc",
"non clear cell"
] |
immohit/vit-fine-tuned
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-fine-tuned
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2311
- Accuracy: 0.9163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2366 | 1.0 | 84 | 0.2311 | 0.9163 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"ai",
"human"
] |
ksukrit/convnextv2-tiny-1k-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-tiny-1k-224-finetuned-eurosat
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5454
- Accuracy: 0.7513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6352 | 1.0 | 14 | 0.5507 | 0.7462 |
| 0.5508 | 2.0 | 28 | 0.5477 | 0.7487 |
| 0.5313 | 3.0 | 42 | 0.5454 | 0.7513 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"bad",
"good"
] |
gcperk20/swin-small-patch4-window7-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-small-patch4-window7-224-finetuned-piid
This model is a fine-tuned version of [microsoft/swin-small-patch4-window7-224](https://huggingface.co/microsoft/swin-small-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Accuracy: 0.7763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
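A sketch of how these hyperparameters map onto `TrainingArguments` (the `output_dir` is an assumption; everything else is read off the list above):
```python
# Sketch: TrainingArguments reconstructed from the hyperparameter list above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="swin-small-patch4-window7-224-finetuned-piid",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 * 4 = total train batch size of 32
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```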
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2327 | 0.98 | 20 | 1.1687 | 0.5114 |
| 0.7354 | 2.0 | 41 | 0.7696 | 0.6712 |
| 0.602 | 2.98 | 61 | 0.7198 | 0.7078 |
| 0.5809 | 4.0 | 82 | 0.5824 | 0.7397 |
| 0.4989 | 4.98 | 102 | 0.5331 | 0.7489 |
| 0.4364 | 6.0 | 123 | 0.6137 | 0.7489 |
| 0.3321 | 6.98 | 143 | 0.5839 | 0.7717 |
| 0.3 | 8.0 | 164 | 0.5246 | 0.7763 |
| 0.3024 | 8.98 | 184 | 0.5557 | 0.7717 |
| 0.3433 | 10.0 | 205 | 0.5258 | 0.7900 |
| 0.258 | 10.98 | 225 | 0.6354 | 0.7489 |
| 0.1595 | 12.0 | 246 | 0.5492 | 0.8219 |
| 0.2295 | 12.98 | 266 | 0.5889 | 0.7900 |
| 0.1956 | 14.0 | 287 | 0.5670 | 0.7900 |
| 0.2028 | 14.98 | 307 | 0.5460 | 0.7900 |
| 0.1514 | 16.0 | 328 | 0.6587 | 0.7900 |
| 0.0934 | 16.98 | 348 | 0.6131 | 0.7945 |
| 0.1323 | 18.0 | 369 | 0.6615 | 0.7900 |
| 0.1213 | 18.98 | 389 | 0.6192 | 0.7671 |
| 0.1028 | 19.51 | 400 | 0.6168 | 0.7763 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"1",
"2",
"3",
"4"
] |
gcperk20/swin-base-patch4-window7-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-finetuned-piid
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6630
- Accuracy: 0.8128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1815 | 0.98 | 20 | 1.0441 | 0.5251 |
| 0.6548 | 2.0 | 41 | 0.8150 | 0.6393 |
| 0.6083 | 2.98 | 61 | 0.6395 | 0.6986 |
| 0.4925 | 4.0 | 82 | 0.6273 | 0.6804 |
| 0.4448 | 4.98 | 102 | 0.4812 | 0.8174 |
| 0.3387 | 6.0 | 123 | 0.5868 | 0.7945 |
| 0.2622 | 6.98 | 143 | 0.7868 | 0.7260 |
| 0.2656 | 8.0 | 164 | 0.4432 | 0.8128 |
| 0.2259 | 8.98 | 184 | 0.6553 | 0.7489 |
| 0.1997 | 10.0 | 205 | 0.5143 | 0.7854 |
| 0.1892 | 10.98 | 225 | 0.5657 | 0.7945 |
| 0.1522 | 12.0 | 246 | 0.7339 | 0.7580 |
| 0.1309 | 12.98 | 266 | 0.6064 | 0.8174 |
| 0.1482 | 14.0 | 287 | 0.5875 | 0.8128 |
| 0.1459 | 14.98 | 307 | 0.6443 | 0.7900 |
| 0.1224 | 16.0 | 328 | 0.6521 | 0.8037 |
| 0.0533 | 16.98 | 348 | 0.5915 | 0.8493 |
| 0.1133 | 18.0 | 369 | 0.6152 | 0.8265 |
| 0.0923 | 18.98 | 389 | 0.6819 | 0.7854 |
| 0.086 | 19.51 | 400 | 0.6630 | 0.8128 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"1",
"2",
"3",
"4"
] |
gcperk20/deit-base-patch16-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-piid
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6109
- Accuracy: 0.7443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.881 | 0.98 | 20 | 0.8373 | 0.6164 |
| 0.5554 | 2.0 | 41 | 0.7144 | 0.7169 |
| 0.509 | 2.98 | 61 | 0.6241 | 0.7489 |
| 0.3925 | 4.0 | 82 | 0.6171 | 0.7352 |
| 0.3738 | 4.88 | 100 | 0.6109 | 0.7443 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"1",
"2",
"3",
"4"
] |
ksukrit/convnextv2-base-22k-224-finetuned-hand_class
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-base-22k-224-finetuned-hand_class
This model is a fine-tuned version of [facebook/convnextv2-base-22k-224](https://huggingface.co/facebook/convnextv2-base-22k-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5846
- Accuracy: 0.7337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6258 | 1.0 | 14 | 0.5879 | 0.7136 |
| 0.5574 | 2.0 | 28 | 0.5707 | 0.7286 |
| 0.5062 | 3.0 | 42 | 0.5633 | 0.7186 |
| 0.4812 | 4.0 | 56 | 0.5761 | 0.7136 |
| 0.4418 | 5.0 | 70 | 0.5644 | 0.7312 |
| 0.4167 | 6.0 | 84 | 0.5756 | 0.7236 |
| 0.4091 | 7.0 | 98 | 0.5751 | 0.7337 |
| 0.379 | 8.0 | 112 | 0.5727 | 0.7312 |
| 0.3717 | 9.0 | 126 | 0.5877 | 0.7387 |
| 0.346 | 10.0 | 140 | 0.5846 | 0.7337 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"bad",
"good"
] |
ksukrit/convnextv2-tiny-1k-224-finetuned-hand_class
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-tiny-1k-224-finetuned-hand_class
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5260
- Accuracy: 0.7739
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.663 | 1.0 | 14 | 0.5878 | 0.6809 |
| 0.5594 | 2.0 | 28 | 0.5597 | 0.7337 |
| 0.5364 | 3.0 | 42 | 0.5466 | 0.7412 |
| 0.545 | 4.0 | 56 | 0.5489 | 0.7462 |
| 0.5267 | 5.0 | 70 | 0.5484 | 0.7462 |
| 0.506 | 6.0 | 84 | 0.5320 | 0.7663 |
| 0.5196 | 7.0 | 98 | 0.5328 | 0.7638 |
| 0.4933 | 8.0 | 112 | 0.5293 | 0.7663 |
| 0.4809 | 9.0 | 126 | 0.5280 | 0.7739 |
| 0.4836 | 10.0 | 140 | 0.5260 | 0.7739 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"bad",
"good"
] |
ksukrit/convnextv2-tiny-1k-224-finetuned-hand-final
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-tiny-1k-224-finetuned-hand-final
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6638
- Accuracy: 0.7563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6669 | 1.0 | 14 | 0.6050 | 0.6834 |
| 0.5796 | 2.0 | 28 | 0.5599 | 0.7362 |
| 0.5417 | 3.0 | 42 | 0.5486 | 0.7437 |
| 0.5466 | 4.0 | 56 | 0.5528 | 0.7387 |
| 0.5213 | 5.0 | 70 | 0.5673 | 0.7462 |
| 0.493 | 6.0 | 84 | 0.5432 | 0.7613 |
| 0.5051 | 7.0 | 98 | 0.5457 | 0.7513 |
| 0.4656 | 8.0 | 112 | 0.5444 | 0.7563 |
| 0.4399 | 9.0 | 126 | 0.5430 | 0.7613 |
| 0.4213 | 10.0 | 140 | 0.5507 | 0.7613 |
| 0.4118 | 11.0 | 154 | 0.5619 | 0.7538 |
| 0.4015 | 12.0 | 168 | 0.5383 | 0.7513 |
| 0.3785 | 13.0 | 182 | 0.5567 | 0.7563 |
| 0.3487 | 14.0 | 196 | 0.5972 | 0.7462 |
| 0.3401 | 15.0 | 210 | 0.6059 | 0.7462 |
| 0.3215 | 16.0 | 224 | 0.6051 | 0.7563 |
| 0.3171 | 17.0 | 238 | 0.6228 | 0.7513 |
| 0.2971 | 18.0 | 252 | 0.6529 | 0.7563 |
| 0.3111 | 19.0 | 266 | 0.6309 | 0.7588 |
| 0.2722 | 20.0 | 280 | 0.6444 | 0.7588 |
| 0.2677 | 21.0 | 294 | 0.6373 | 0.7588 |
| 0.2721 | 22.0 | 308 | 0.6393 | 0.7538 |
| 0.2694 | 23.0 | 322 | 0.6382 | 0.7613 |
| 0.2731 | 24.0 | 336 | 0.6543 | 0.7613 |
| 0.257 | 25.0 | 350 | 0.6638 | 0.7563 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"bad",
"good"
] |
alperenoguz/smids_deit_base_f1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_deit_base_f1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4898
- Accuracy: 0.8748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3398 | 1.0 | 375 | 0.4288 | 0.8164 |
| 0.2944 | 2.0 | 750 | 0.4228 | 0.8297 |
| 0.1957 | 3.0 | 1125 | 0.4014 | 0.8497 |
| 0.176 | 4.0 | 1501 | 0.4565 | 0.8514 |
| 0.1333 | 5.0 | 1876 | 0.3698 | 0.8731 |
| 0.1322 | 6.0 | 2251 | 0.5002 | 0.8481 |
| 0.0952 | 7.0 | 2626 | 0.4711 | 0.8648 |
| 0.0941 | 8.0 | 3002 | 0.4872 | 0.8698 |
| 0.0946 | 9.0 | 3377 | 0.5003 | 0.8564 |
| 0.0911 | 9.99 | 3750 | 0.4898 | 0.8748 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
Krithiga/finetuned-indian-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2313
- Accuracy: 0.9458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4578 | 0.3 | 100 | 0.3982 | 0.8842 |
| 0.3823 | 0.6 | 200 | 0.4436 | 0.8863 |
| 0.4317 | 0.9 | 300 | 0.4027 | 0.8820 |
| 0.3051 | 1.2 | 400 | 0.3762 | 0.8895 |
| 0.1829 | 1.5 | 500 | 0.3679 | 0.9086 |
| 0.2193 | 1.8 | 600 | 0.3046 | 0.9235 |
| 0.1673 | 2.1 | 700 | 0.3170 | 0.9224 |
| 0.2694 | 2.4 | 800 | 0.2726 | 0.9341 |
| 0.1209 | 2.7 | 900 | 0.2777 | 0.9288 |
| 0.146 | 3.0 | 1000 | 0.2415 | 0.9384 |
| 0.1515 | 3.3 | 1100 | 0.2313 | 0.9458 |
| 0.1645 | 3.6 | 1200 | 0.2394 | 0.9437 |
| 0.1142 | 3.9 | 1300 | 0.2325 | 0.9447 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Tokenizers 0.14.1
|
[
"burger",
"butter_naan",
"kaathi_rolls",
"kadai_paneer",
"kulfi",
"masala_dosa",
"momos",
"paani_puri",
"pakode",
"pav_bhaji",
"pizza",
"samosa",
"chai",
"chapati",
"chole_bhature",
"dal_makhani",
"dhokla",
"fried_rice",
"idli",
"jalebi"
] |
hkivancoral/hushem_40x_beit_base_f4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_base_f4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0524 | 1.0 | 109 | 0.3589 | 0.8571 |
| 0.0437 | 2.0 | 218 | 0.0457 | 0.9762 |
| 0.0078 | 2.99 | 327 | 0.1689 | 0.9762 |
| 0.0011 | 4.0 | 437 | 0.0860 | 0.9762 |
| 0.0006 | 5.0 | 546 | 0.0005 | 1.0 |
| 0.0001 | 6.0 | 655 | 0.0005 | 1.0 |
| 0.0001 | 6.99 | 764 | 0.1512 | 0.9762 |
| 0.0 | 8.0 | 874 | 0.0016 | 1.0 |
| 0.0001 | 9.0 | 983 | 0.0005 | 1.0 |
| 0.0 | 9.98 | 1090 | 0.0004 | 1.0 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
lpodina/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Accuracy: 0.9781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2876 | 1.0 | 190 | 0.1039 | 0.9659 |
| 0.2083 | 2.0 | 380 | 0.0740 | 0.9722 |
| 0.1625 | 3.0 | 570 | 0.0628 | 0.9781 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
sarabi1005/vit-base-beans_50
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans_50
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1514
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 468 | 0.1514 | 0.9439 |
| 0.2863 | 2.0 | 936 | 0.1917 | 0.9303 |
| 0.2377 | 3.0 | 1404 | 0.1725 | 0.9333 |
| 0.2142 | 4.0 | 1872 | 0.1782 | 0.9288 |
| 0.2058 | 5.0 | 2340 | 0.1788 | 0.9273 |
| 0.1899 | 6.0 | 2808 | 0.1824 | 0.9318 |
| 0.1838 | 7.0 | 3276 | 0.1879 | 0.9333 |
| 0.1757 | 8.0 | 3744 | 0.2391 | 0.9333 |
| 0.1852 | 9.0 | 4212 | 0.1725 | 0.9409 |
| 0.1634 | 10.0 | 4680 | 0.1762 | 0.9394 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"beany",
"nobean"
] |
akashmaggon/vit-base-age-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-age-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the fair_face dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0743
- Accuracy: 0.9879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2011 | 1.0 | 385 | 1.0297 | 0.5664 |
| 0.8578 | 2.0 | 770 | 0.7667 | 0.6936 |
| 0.5961 | 3.0 | 1155 | 0.4088 | 0.8703 |
| 0.3073 | 4.0 | 1540 | 0.1689 | 0.9581 |
| 0.1146 | 5.0 | 1925 | 0.0743 | 0.9879 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"0-2",
"3-9",
"10-19",
"20-29",
"30-39",
"40-49",
"50-59",
"60-69",
"more than 70"
] |
hkivancoral/hushem_40x_beit_base_f5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_base_f5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2681
- Accuracy: 0.9512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0802 | 1.0 | 110 | 0.6945 | 0.8049 |
| 0.028 | 2.0 | 220 | 0.5751 | 0.9024 |
| 0.002 | 3.0 | 330 | 0.3641 | 0.9268 |
| 0.0009 | 4.0 | 440 | 0.5616 | 0.8780 |
| 0.0004 | 5.0 | 550 | 0.2822 | 0.9024 |
| 0.0003 | 6.0 | 660 | 0.7387 | 0.8537 |
| 0.0018 | 7.0 | 770 | 0.1999 | 0.9512 |
| 0.0001 | 8.0 | 880 | 0.3046 | 0.9512 |
| 0.0011 | 9.0 | 990 | 0.2897 | 0.9268 |
| 0.0002 | 10.0 | 1100 | 0.2681 | 0.9512 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
DrydenDev/autotrain-n64-cartridge-recognition-99270147297
|
# Disclaimer
- This is a proof-of-concept model; it hasn't been trained on enough N64 games to be considered reliable.
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 99270147297
- CO2 Emissions (in grams): 0.5176
## Validation Metrics
- Loss: 0.256
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
|
[
"fake",
"legit"
] |
hkivancoral/hushem_40x_beit_base_f1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_base_f1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9231
- Accuracy: 0.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0764 | 1.0 | 107 | 0.7220 | 0.8 |
| 0.0168 | 2.0 | 214 | 1.0516 | 0.8 |
| 0.0193 | 2.99 | 321 | 1.1697 | 0.7556 |
| 0.0111 | 4.0 | 429 | 0.9218 | 0.8222 |
| 0.0033 | 5.0 | 536 | 1.0001 | 0.8444 |
| 0.0048 | 6.0 | 643 | 1.0798 | 0.8222 |
| 0.0 | 6.99 | 750 | 0.9561 | 0.8667 |
| 0.0 | 8.0 | 858 | 0.9979 | 0.8444 |
| 0.0 | 9.0 | 965 | 0.9770 | 0.8667 |
| 0.0 | 9.98 | 1070 | 0.9231 | 0.8889 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_base_f2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_base_f2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7078
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0596 | 1.0 | 107 | 1.1201 | 0.7333 |
| 0.0359 | 2.0 | 214 | 1.2931 | 0.7778 |
| 0.0216 | 2.99 | 321 | 1.0533 | 0.8 |
| 0.0167 | 4.0 | 429 | 1.8378 | 0.6667 |
| 0.0012 | 5.0 | 536 | 1.2704 | 0.8222 |
| 0.0008 | 6.0 | 643 | 1.5019 | 0.7778 |
| 0.0001 | 6.99 | 750 | 1.6378 | 0.7111 |
| 0.0 | 8.0 | 858 | 1.6578 | 0.7333 |
| 0.0001 | 9.0 | 965 | 1.7710 | 0.7556 |
| 0.0001 | 9.98 | 1070 | 1.7078 | 0.7556 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_base_f3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_base_f3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9422
- Accuracy: 0.8837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0594 | 1.0 | 108 | 0.4441 | 0.8372 |
| 0.0595 | 2.0 | 217 | 0.8725 | 0.8605 |
| 0.0173 | 3.0 | 325 | 0.5866 | 0.9070 |
| 0.0084 | 4.0 | 434 | 0.6360 | 0.8605 |
| 0.0005 | 5.0 | 542 | 0.6191 | 0.8837 |
| 0.0008 | 6.0 | 651 | 0.6635 | 0.9070 |
| 0.0008 | 7.0 | 759 | 0.8772 | 0.8837 |
| 0.0001 | 8.0 | 868 | 0.8012 | 0.9070 |
| 0.0 | 9.0 | 976 | 0.9139 | 0.8837 |
| 0.0001 | 9.95 | 1080 | 0.9422 | 0.8837 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_base_n_f1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7205
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0686 | 1.0 | 107 | 0.6578 | 0.8222 |
| 0.006 | 2.0 | 214 | 1.2420 | 0.8 |
| 0.0236 | 2.99 | 321 | 0.9909 | 0.7778 |
| 0.0001 | 4.0 | 429 | 0.7158 | 0.7333 |
| 0.0001 | 5.0 | 536 | 0.7001 | 0.7778 |
| 0.0001 | 6.0 | 643 | 0.7068 | 0.8 |
| 0.0 | 6.99 | 750 | 0.7117 | 0.8 |
| 0.0 | 8.0 | 858 | 0.7163 | 0.8 |
| 0.0 | 9.0 | 965 | 0.7199 | 0.8 |
| 0.0 | 9.98 | 1070 | 0.7205 | 0.8 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_base_n_f2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4506
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.039 | 1.0 | 107 | 0.9415 | 0.7333 |
| 0.0142 | 2.0 | 214 | 1.1180 | 0.7556 |
| 0.0006 | 2.99 | 321 | 1.2484 | 0.8 |
| 0.0002 | 4.0 | 429 | 1.2754 | 0.7556 |
| 0.0001 | 5.0 | 536 | 1.3350 | 0.7556 |
| 0.0 | 6.0 | 643 | 1.3982 | 0.7556 |
| 0.0 | 6.99 | 750 | 1.4243 | 0.7556 |
| 0.0 | 8.0 | 858 | 1.4408 | 0.7556 |
| 0.0 | 9.0 | 965 | 1.4480 | 0.7556 |
| 0.0 | 9.98 | 1070 | 1.4506 | 0.7556 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_base_n_f3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4024
- Accuracy: 0.9070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1057 | 1.0 | 108 | 0.5226 | 0.8372 |
| 0.0057 | 2.0 | 217 | 0.4792 | 0.9070 |
| 0.0066 | 3.0 | 325 | 0.2934 | 0.9302 |
| 0.0008 | 4.0 | 434 | 0.2441 | 0.9535 |
| 0.0001 | 5.0 | 542 | 0.3621 | 0.9070 |
| 0.0 | 6.0 | 651 | 0.3864 | 0.9070 |
| 0.0 | 7.0 | 759 | 0.3930 | 0.9070 |
| 0.0 | 8.0 | 868 | 0.3984 | 0.9070 |
| 0.0 | 9.0 | 976 | 0.4017 | 0.9070 |
| 0.0 | 9.95 | 1080 | 0.4024 | 0.9070 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
jsphelps12/autotrain-n-64-cartridge-classifier-99354147308
|
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 99354147308
- CO2 Emissions (in grams): 0.7381
## Validation Metrics
- Loss: 0.371
- Accuracy: 0.833
- Precision: 0.800
- Recall: 1.000
- AUC: 1.000
- F1: 0.889
|
[
"fake",
"legit"
] |
Braywayc/autotrain-n-64-image-classifier-99356147309
|
Fake Nintendo-64 cartridge detector
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 99356147309
- CO2 Emissions (in grams): 0.3922
## Validation Metrics
- Loss: 0.429
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
|
[
"fake",
"legit"
] |
dima806/facial_age_image_detection
|
Returns an age bin based on a facial image.
See https://www.kaggle.com/code/dima806/facial-age-years-detection-vit for more details.

```
Classification report:

              precision    recall  f1-score   support

          01     0.7341    0.9056    0.8109       445
          02     0.4494    0.6787    0.5407       445
          03     0.6978    0.2854    0.4051       445
          04     0.8421    0.1438    0.2457       445
          05     0.5707    0.9618    0.7163       445
       06-07     0.7030    0.5798    0.6355       445
       08-09     0.6500    0.8180    0.7244       445
       10-12     0.6993    0.7056    0.7025       445
       13-15     0.8034    0.7438    0.7725       445
       16-20     0.7006    0.7416    0.7205       445
       21-25     0.6796    0.6292    0.6534       445
       26-30     0.4241    0.5843    0.4915       445
       31-35     0.4654    0.2270    0.3051       445
       36-40     0.4606    0.3416    0.3923       445
       41-45     0.5074    0.6944    0.5863       445
       46-50     0.4896    0.5811    0.5314       444
       51-55     0.5158    0.5506    0.5326       445
       56-60     0.5000    0.3491    0.4111       444
       61-65     0.7083    0.1910    0.3009       445
       66-70     0.4778    0.7995    0.5981       444
       71-80     0.7687    0.7169    0.7419       445
       81-90     0.8425    0.9978    0.9136       445
         90+     0.9978    1.0000    0.9989       444

    accuracy                         0.6185     10231
   macro avg     0.6386    0.6185    0.5970     10231
weighted avg     0.6386    0.6185    0.5970     10231
```
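A report in this format can be produced with scikit-learn's `classification_report`; a toy sketch follows (the label ids and predictions here are stand-ins, not the card's data):
```python
# Toy sketch of generating a per-class report like the one above.
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 1, 2, 2]  # stand-in labels from an evaluation loop
y_pred = [0, 1, 1, 1, 2, 2]
print(classification_report(y_true, y_pred, target_names=["01", "02", "03"], digits=4))
```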
|
[
"01",
"02",
"03",
"04",
"05",
"06-07",
"08-09",
"10-12",
"13-15",
"16-20",
"21-25",
"26-30",
"31-35",
"36-40",
"41-45",
"46-50",
"51-55",
"56-60",
"61-65",
"66-70",
"71-80",
"81-90",
"90+"
] |
hkivancoral/hushem_40x_deit_base_n_f4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1221
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0638 | 1.0 | 109 | 0.3320 | 0.8810 |
| 0.0062 | 2.0 | 218 | 0.2092 | 0.9048 |
| 0.0052 | 2.99 | 327 | 0.0666 | 0.9762 |
| 0.0022 | 4.0 | 437 | 0.1838 | 0.9524 |
| 0.0002 | 5.0 | 546 | 0.0452 | 0.9762 |
| 0.0 | 6.0 | 655 | 0.1681 | 0.9524 |
| 0.0 | 6.99 | 764 | 0.1386 | 0.9762 |
| 0.0 | 8.0 | 874 | 0.1281 | 0.9762 |
| 0.0 | 9.0 | 983 | 0.1236 | 0.9762 |
| 0.0 | 9.98 | 1090 | 0.1221 | 0.9762 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
saketsarin/vit-base-patch16-224-in21k_brain_tumor_diagnosis
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k_brain_tumor_diagnosis
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Accuracy: 0.9858
- F1: 0.9858
- Recall: 0.9858
- Precision: 0.9858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 1.379 | 1.0 | 352 | 0.2159 | 0.9310 | 0.9310 | 0.9310 | 0.9390 |
| 0.239 | 2.0 | 704 | 0.0814 | 0.9765 | 0.9766 | 0.9765 | 0.9767 |
| 0.0748 | 3.0 | 1056 | 0.0822 | 0.9808 | 0.9808 | 0.9808 | 0.9812 |
| 0.0748 | 4.0 | 1408 | 0.0651 | 0.9858 | 0.9858 | 0.9858 | 0.9858 |
| 0.0125 | 5.0 | 1760 | 0.0630 | 0.9858 | 0.9858 | 0.9858 | 0.9858 |
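The accuracy/F1/recall/precision columns above are consistent with a `compute_metrics` callback along these lines (weighted averaging is an assumption; the card does not state which averaging was used):
```python
# Sketch: a Trainer compute_metrics producing the four metric columns above.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"  # averaging choice is an assumption
    )
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1, "recall": recall, "precision": precision}
```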
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"glioma",
"meningioma",
"notumor",
"pituitary"
] |
NatnichaYw/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NatnichaYw/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7988
- Validation Loss: 1.6494
- Train Accuracy: 0.837
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7988 | 1.6494 | 0.837 | 0 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
hkivancoral/hushem_40x_deit_base_n_f5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_n_f5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4453
- Accuracy: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0209 | 1.0 | 110 | 0.5124 | 0.8049 |
| 0.0043 | 2.0 | 220 | 0.6220 | 0.8049 |
| 0.0003 | 3.0 | 330 | 0.5631 | 0.8293 |
| 0.0001 | 4.0 | 440 | 0.6476 | 0.8049 |
| 0.0001 | 5.0 | 550 | 0.4557 | 0.8293 |
| 0.0001 | 6.0 | 660 | 0.5177 | 0.8780 |
| 0.0001 | 7.0 | 770 | 0.4360 | 0.8780 |
| 0.0 | 8.0 | 880 | 0.4399 | 0.8780 |
| 0.0 | 9.0 | 990 | 0.4439 | 0.8537 |
| 0.0 | 10.0 | 1100 | 0.4453 | 0.8537 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
dima806/weather_types_image_detection
|
Returns weather type given an image with about 96% accuracy.
See https://www.kaggle.com/code/dima806/weather-types-image-prediction-vit for more details.
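A minimal usage sketch with the `transformers` pipeline API (the image path is illustrative, and we assume the checkpoint ships an image processor config so the pipeline can load it directly):
```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="dima806/weather_types_image_detection")
print(classifier("example_sky.jpg"))  # e.g. [{'label': 'rain', 'score': ...}, ...]
```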
```
Classification report:
precision recall f1-score support
dew 0.9795 0.9897 0.9846 290
fogsmog 0.9715 0.9414 0.9562 290
frost 0.9674 0.9207 0.9435 290
glaze 0.8855 0.9069 0.8961 290
hail 0.9966 0.9966 0.9966 290
lightning 1.0000 1.0000 1.0000 290
rain 0.9561 0.9759 0.9659 290
rainbow 1.0000 1.0000 1.0000 290
rime 0.9078 0.8828 0.8951 290
sandstorm 0.9759 0.9759 0.9759 290
snow 0.9049 0.9517 0.9277 290
accuracy 0.9583 3190
macro avg 0.9587 0.9583 0.9583 3190
weighted avg 0.9587 0.9583 0.9583 3190
```
|
[
"dew",
"fogsmog",
"frost",
"glaze",
"hail",
"lightning",
"rain",
"rainbow",
"rime",
"sandstorm",
"snow"
] |
jordyvl/resnet101_rvl-cdip-cnn_rvl_cdip_100_examples_per_class_og_simkd_test
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip_100_examples_per_class_og_simkd_test
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 400 | 8965.8223 | 0.08 | 1.0296 | 6.6066 | 0.08 | 0.0326 | 0.2344 | 0.9219 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.12.1
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
chriamue/bird-species-classifier
|
# Model Card for "Bird Species Classifier"
## Model Description
The "Bird Species Classifier" is a state-of-the-art image classification model designed to identify various bird species from images. It uses the EfficientNet architecture and has been fine-tuned to achieve high accuracy in recognizing a wide range of bird species.
### How to Use
You can easily use the model in your Python environment with the following code:
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
extractor = AutoFeatureExtractor.from_pretrained("chriamue/bird-species-classifier")
model = AutoModelForImageClassification.from_pretrained("chriamue/bird-species-classifier")
```
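Continuing from the snippet above, a minimal end-to-end inference sketch (the image path is a placeholder; `id2label` is the standard config field on classification checkpoints):
```python
import torch
from PIL import Image

image = Image.open("bird.jpg")                 # placeholder path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(model.config.id2label[pred])             # predicted species name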
### Applications
- Bird species identification for educational or ecological research.
- Assistance in biodiversity monitoring and conservation efforts.
- Enhancing user experience in nature apps and platforms.
## Training Data
The model was trained on the "Bird Species" dataset, which is a comprehensive collection of bird images. Key features of this dataset include:
- **Total Species**: 525 bird species.
- **Training Images**: 84,635 images.
- **Validation Images**: 2,625 images.
- **Test Images**: 2,625 images.
- **Image Format**: Color images (224x224x3) in JPG format.
- **Source**: Sourced from Kaggle.
## Training Results
The model achieved the following results after 6 epochs of training:
- **Accuracy**: 96.8%
- **Loss**: 0.1379
- **Runtime**: 136.81 seconds
- **Samples per Second**: 19.188
- **Steps per Second**: 1.206
- **Total Training Steps**: 31,740
These metrics indicate a high level of performance, making the model reliable for practical applications.
## Limitations and Bias
- The performance of the model might vary under different lighting conditions or image qualities.
- The model's accuracy is dependent on the diversity and representation in the training dataset. It may perform less effectively on bird species not well represented in the dataset.
## Ethical Considerations
This model should be used responsibly, considering privacy and environmental impacts. It should not be used for harmful purposes such as targeting endangered species or violating wildlife protection laws.
## Acknowledgements
We would like to acknowledge the creators of the dataset on Kaggle for providing a rich source of data that made this model possible.
## See also
- [Bird Species Dataset](https://huggingface.co/datasets/chriamue/bird-species-dataset)
- [Kaggle Dataset](https://www.kaggle.com/datasets/gpiosenka/100-bird-species/data)
- [Bird Species Classifier](https://huggingface.co/dennisjooo/Birds-Classifier-EfficientNetB2)
|
[
"abbotts babbler",
"abbotts booby",
"alberts towhee",
"blue heron",
"blue malkoha",
"blue throated piping guan",
"blue throated toucanet",
"bobolink",
"bornean bristlehead",
"bornean leafbird",
"bornean pheasant",
"brandt cormarant",
"brewers blackbird",
"alexandrine parakeet",
"brown crepper",
"brown headed cowbird",
"brown noody",
"brown thrasher",
"bufflehead",
"bulwers pheasant",
"burchells courser",
"bush turkey",
"caatinga cacholote",
"cabots tragopan",
"alpine chough",
"cactus wren",
"california condor",
"california gull",
"california quail",
"campo flicker",
"canary",
"canvasback",
"cape glossy starling",
"cape longclaw",
"cape may warbler",
"altamira yellowthroat",
"cape rock thrush",
"capped heron",
"capuchinbird",
"carmine bee-eater",
"caspian tern",
"cassowary",
"cedar waxwing",
"cerulean warbler",
"chara de collar",
"chattering lory",
"american avocet",
"chestnet bellied euphonia",
"chestnut winged cuckoo",
"chinese bamboo partridge",
"chinese pond heron",
"chipping sparrow",
"chucao tapaculo",
"chukar partridge",
"cinnamon attila",
"cinnamon flycatcher",
"cinnamon teal",
"american bittern",
"clarks grebe",
"clarks nutcracker",
"cock of the rock",
"cockatoo",
"collared aracari",
"collared crescentchest",
"common firecrest",
"common grackle",
"common house martin",
"common iora",
"american coot",
"common loon",
"common poorwill",
"common starling",
"coppersmith barbet",
"coppery tailed coucal",
"crab plover",
"crane hawk",
"cream colored woodpecker",
"crested auklet",
"crested caracara",
"american dipper",
"crested coua",
"crested fireback",
"crested kingfisher",
"crested nuthatch",
"crested oropendola",
"crested serpent eagle",
"crested shriketit",
"crested wood partridge",
"crimson chat",
"crimson sunbird",
"american flamingo",
"crow",
"cuban tody",
"cuban trogon",
"curl crested aracuri",
"d-arnauds barbet",
"dalmatian pelican",
"darjeeling woodpecker",
"dark eyed junco",
"daurian redstart",
"demoiselle crane",
"american goldfinch",
"double barred finch",
"double brested cormarant",
"double eyed fig parrot",
"downy woodpecker",
"dunlin",
"dusky lory",
"dusky robin",
"eared pita",
"eastern bluebird",
"eastern bluebonnet",
"abyssinian ground hornbill",
"american kestrel",
"eastern golden weaver",
"eastern meadowlark",
"eastern rosella",
"eastern towee",
"eastern wip poor will",
"eastern yellow robin",
"ecuadorian hillstar",
"egyptian goose",
"elegant trogon",
"elliots pheasant",
"american pipit",
"emerald tanager",
"emperor penguin",
"emu",
"enggano myna",
"eurasian bullfinch",
"eurasian golden oriole",
"eurasian magpie",
"european goldfinch",
"european turtle dove",
"evening grosbeak",
"american redstart",
"fairy bluebird",
"fairy penguin",
"fairy tern",
"fan tailed widow",
"fasciated wren",
"fiery minivet",
"fiordland penguin",
"fire tailled myzornis",
"flame bowerbird",
"flame tanager",
"american robin",
"forest wagtail",
"frigate",
"frill back pigeon",
"gambels quail",
"gang gang cockatoo",
"gila woodpecker",
"gilded flicker",
"glossy ibis",
"go away bird",
"gold wing warbler",
"american wigeon",
"golden bower bird",
"golden cheeked warbler",
"golden chlorophonia",
"golden eagle",
"golden parakeet",
"golden pheasant",
"golden pipit",
"gouldian finch",
"grandala",
"gray catbird",
"amethyst woodstar",
"gray kingbird",
"gray partridge",
"great argus",
"great gray owl",
"great jacamar",
"great kiskadee",
"great potoo",
"great tinamou",
"great xenops",
"greater pewee",
"andean goose",
"greater prairie chicken",
"greator sage grouse",
"green broadbill",
"green jay",
"green magpie",
"green winged dove",
"grey cuckooshrike",
"grey headed chachalaca",
"grey headed fish eagle",
"grey plover",
"andean lapwing",
"groved billed ani",
"guinea turaco",
"guineafowl",
"gurneys pitta",
"gyrfalcon",
"hamerkop",
"harlequin duck",
"harlequin quail",
"harpy eagle",
"hawaiian goose",
"andean siskin",
"hawfinch",
"helmet vanga",
"hepatic tanager",
"himalayan bluetail",
"himalayan monal",
"hoatzin",
"hooded merganser",
"hoopoes",
"horned guan",
"horned lark",
"anhinga",
"horned sungem",
"house finch",
"house sparrow",
"hyacinth macaw",
"iberian magpie",
"ibisbill",
"imperial shaq",
"inca tern",
"indian bustard",
"indian pitta",
"african crowned crane",
"anianiau",
"indian roller",
"indian vulture",
"indigo bunting",
"indigo flycatcher",
"inland dotterel",
"ivory billed aracari",
"ivory gull",
"iwi",
"jabiru",
"jack snipe",
"annas hummingbird",
"jacobin pigeon",
"jandaya parakeet",
"japanese robin",
"java sparrow",
"jocotoco antpitta",
"kagu",
"kakapo",
"killdear",
"king eider",
"king vulture",
"antbird",
"kiwi",
"knob billed duck",
"kookaburra",
"lark bunting",
"laughing gull",
"lazuli bunting",
"lesser adjutant",
"lilac roller",
"limpkin",
"little auk",
"antillean euphonia",
"loggerhead shrike",
"long-eared owl",
"looney birds",
"lucifer hummingbird",
"magpie goose",
"malabar hornbill",
"malachite kingfisher",
"malagasy white eye",
"maleo",
"mallard duck",
"apapane",
"mandrin duck",
"mangrove cuckoo",
"marabou stork",
"masked bobwhite",
"masked booby",
"masked lapwing",
"mckays bunting",
"merlin",
"mikado pheasant",
"military macaw",
"apostlebird",
"mourning dove",
"myna",
"nicobar pigeon",
"noisy friarbird",
"northern beardless tyrannulet",
"northern cardinal",
"northern flicker",
"northern fulmar",
"northern gannet",
"northern goshawk",
"araripe manakin",
"northern jacana",
"northern mockingbird",
"northern parula",
"northern red bishop",
"northern shoveler",
"ocellated turkey",
"oilbird",
"okinawa rail",
"orange breasted trogon",
"orange brested bunting",
"ashy storm petrel",
"oriental bay owl",
"ornate hawk eagle",
"osprey",
"ostrich",
"ovenbird",
"oyster catcher",
"painted bunting",
"palila",
"palm nut vulture",
"paradise tanager",
"ashy thrushbird",
"parakett auklet",
"parus major",
"patagonian sierra finch",
"peacock",
"peregrine falcon",
"phainopepla",
"philippine eagle",
"pink robin",
"plush crested jay",
"pomarine jaeger",
"asian crested ibis",
"puffin",
"puna teal",
"purple finch",
"purple gallinule",
"purple martin",
"purple swamphen",
"pygmy kingfisher",
"pyrrhuloxia",
"quetzal",
"rainbow lorikeet",
"african emerald cuckoo",
"asian dollard bird",
"razorbill",
"red bearded bee eater",
"red bellied pitta",
"red billed tropicbird",
"red browed finch",
"red crossbill",
"red faced cormorant",
"red faced warbler",
"red fody",
"red headed duck",
"asian green bee eater",
"red headed woodpecker",
"red knot",
"red legged honeycreeper",
"red naped trogon",
"red shouldered hawk",
"red tailed hawk",
"red tailed thrush",
"red winged blackbird",
"red wiskered bulbul",
"regent bowerbird",
"asian openbill stork",
"ring-necked pheasant",
"roadrunner",
"rock dove",
"rose breasted cockatoo",
"rose breasted grosbeak",
"roseate spoonbill",
"rosy faced lovebird",
"rough leg buzzard",
"royal flycatcher",
"ruby crowned kinglet",
"auckland shaq",
"ruby throated hummingbird",
"ruddy shelduck",
"rudy kingfisher",
"rufous kingfisher",
"rufous trepe",
"rufuos motmot",
"samatran thrush",
"sand martin",
"sandhill crane",
"satyr tragopan",
"austral canastero",
"says phoebe",
"scarlet crowned fruit dove",
"scarlet faced liocichla",
"scarlet ibis",
"scarlet macaw",
"scarlet tanager",
"shoebill",
"short billed dowitcher",
"smiths longspur",
"snow goose",
"australasian figbird",
"snow partridge",
"snowy egret",
"snowy owl",
"snowy plover",
"snowy sheathbill",
"sora",
"spangled cotinga",
"splendid wren",
"spoon biled sandpiper",
"spotted catbird",
"avadavat",
"spotted whistling duck",
"squacco heron",
"sri lanka blue magpie",
"steamer duck",
"stork billed kingfisher",
"striated caracara",
"striped owl",
"stripped manakin",
"stripped swallow",
"sunbittern",
"azaras spinetail",
"superb starling",
"surf scoter",
"swinhoes pheasant",
"tailorbird",
"taiwan magpie",
"takahe",
"tasmanian hen",
"tawny frogmouth",
"teal duck",
"tit mouse",
"azure breasted pitta",
"touchan",
"townsends warbler",
"tree swallow",
"tricolored blackbird",
"tropical kingbird",
"trumpter swan",
"turkey vulture",
"turquoise motmot",
"umbrella bird",
"varied thrush",
"azure jay",
"veery",
"venezuelian troupial",
"verdin",
"vermilion flycather",
"victoria crowned pigeon",
"violet backed starling",
"violet cuckoo",
"violet green swallow",
"violet turaco",
"visayan hornbill",
"african firefinch",
"azure tanager",
"vulturine guineafowl",
"wall creaper",
"wattled curassow",
"wattled lapwing",
"whimbrel",
"white breasted waterhen",
"white browed crake",
"white cheeked turaco",
"white crested hornbill",
"white eared hummingbird",
"azure tit",
"white necked raven",
"white tailed tropic",
"white throated bee eater",
"wild turkey",
"willow ptarmigan",
"wilsons bird of paradise",
"wood duck",
"wood thrush",
"woodland kingfisher",
"wrentit",
"baikal teal",
"yellow bellied flowerpecker",
"yellow breasted chat",
"yellow cacique",
"yellow headed blackbird",
"zebra dove",
"bald eagle",
"bald ibis",
"bali starling",
"baltimore oriole",
"bananaquit",
"band tailed guan",
"banded broadbill",
"african oyster catcher",
"banded pita",
"banded stilt",
"bar-tailed godwit",
"barn owl",
"barn swallow",
"barred puffbird",
"barrows goldeneye",
"bay-breasted warbler",
"bearded barbet",
"bearded bellbird",
"african pied hornbill",
"bearded reedling",
"belted kingfisher",
"bird of paradise",
"black and yellow broadbill",
"black baza",
"black breasted puffbird",
"black cockato",
"black faced spoonbill",
"black francolin",
"black headed caique",
"african pygmy goose",
"black necked stilt",
"black skimmer",
"black swan",
"black tail crake",
"black throated bushtit",
"black throated huet",
"black throated warbler",
"black vented shearwater",
"black vulture",
"black-capped chickadee",
"albatross",
"black-necked grebe",
"black-throated sparrow",
"blackburniam warbler",
"blonde crested woodpecker",
"blood pheasant",
"blue coau",
"blue dacnis",
"blue gray gnatcatcher",
"blue grosbeak",
"blue grouse"
] |
NSYok/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NSYok/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0392
- Validation Loss: 0.6724
- Train Accuracy: 0.8888
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 12800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.6324 | 1.3778 | 0.8625 | 0 |
| 1.0392 | 0.6724 | 0.8888 | 1 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
jordyvl/resnet101_rvl-cdip-cnn_rvl_cdip_100_examples_per_class_simkd_test
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip_100_examples_per_class_simkd_test
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 400 | 1578.6876 | 0.045 | 1.0031 | 21.7938 | 0.045 | 0.0210 | 0.1721 | 0.9563 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.12.1
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
dima806/100_butterfly_types_image_detection
|
Predicts butterfly type given an image with about 96% accuracy.
See https://www.kaggle.com/code/dima806/100-butterfly-type-image-detection-vit for more details.
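A hedged sketch for retrieving the top five candidate species with the pipeline API (file name illustrative; we assume this `transformers` version supports the `top_k` call argument):
```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="dima806/100_butterfly_types_image_detection")
for pred in classifier("butterfly.jpg", top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```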
```
Classification report:
precision recall f1-score support
ADONIS 0.9348 0.8776 0.9053 49
AFRICAN GIANT SWALLOWTAIL 0.9800 1.0000 0.9899 49
AMERICAN SNOOT 0.9792 0.9400 0.9592 50
AN 88 1.0000 1.0000 1.0000 50
APPOLLO 0.9600 0.9796 0.9697 49
ARCIGERA FLOWER MOTH 0.9792 0.9592 0.9691 49
ATALA 1.0000 0.9592 0.9792 49
ATLAS MOTH 0.9057 0.9796 0.9412 49
BANDED ORANGE HELICONIAN 1.0000 1.0000 1.0000 49
BANDED PEACOCK 0.9792 0.9592 0.9691 49
BANDED TIGER MOTH 0.8936 0.8571 0.8750 49
BECKERS WHITE 0.9245 1.0000 0.9608 49
BIRD CHERRY ERMINE MOTH 1.0000 1.0000 1.0000 49
BLACK HAIRSTREAK 0.9583 0.9388 0.9485 49
BLUE MORPHO 0.9600 0.9796 0.9697 49
BLUE SPOTTED CROW 0.9792 0.9592 0.9691 49
BROOKES BIRDWING 1.0000 1.0000 1.0000 49
BROWN ARGUS 0.9074 0.9800 0.9423 50
BROWN SIPROETA 0.9800 1.0000 0.9899 49
CABBAGE WHITE 0.9800 0.9800 0.9800 50
CAIRNS BIRDWING 0.9804 1.0000 0.9901 50
CHALK HILL BLUE 0.8679 0.9200 0.8932 50
CHECQUERED SKIPPER 0.9796 0.9600 0.9697 50
CHESTNUT 0.9600 0.9796 0.9697 49
CINNABAR MOTH 1.0000 1.0000 1.0000 49
CLEARWING MOTH 0.8909 1.0000 0.9423 49
CLEOPATRA 0.9796 0.9796 0.9796 49
CLODIUS PARNASSIAN 0.9600 0.9600 0.9600 50
CLOUDED SULPHUR 0.8537 0.7143 0.7778 49
COMET MOTH 1.0000 0.9796 0.9897 49
COMMON BANDED AWL 0.9565 0.8980 0.9263 49
COMMON WOOD-NYMPH 0.9796 0.9796 0.9796 49
COPPER TAIL 0.9706 0.6735 0.7952 49
CRECENT 0.9796 0.9600 0.9697 50
CRIMSON PATCH 0.9804 1.0000 0.9901 50
DANAID EGGFLY 0.9792 0.9400 0.9592 50
EASTERN COMA 0.7458 0.8980 0.8148 49
EASTERN DAPPLE WHITE 0.8039 0.8367 0.8200 49
EASTERN PINE ELFIN 1.0000 0.9600 0.9796 50
ELBOWED PIERROT 1.0000 0.9600 0.9796 50
EMPEROR GUM MOTH 0.9388 0.9388 0.9388 49
GARDEN TIGER MOTH 0.8654 0.9184 0.8911 49
GIANT LEOPARD MOTH 1.0000 1.0000 1.0000 50
GLITTERING SAPPHIRE 1.0000 0.9796 0.9897 49
GOLD BANDED 0.9796 0.9796 0.9796 49
GREAT EGGFLY 0.8889 0.9796 0.9320 49
GREAT JAY 0.9375 0.9000 0.9184 50
GREEN CELLED CATTLEHEART 0.9796 0.9796 0.9796 49
GREEN HAIRSTREAK 1.0000 1.0000 1.0000 49
GREY HAIRSTREAK 0.9231 0.9796 0.9505 49
HERCULES MOTH 0.9167 0.8980 0.9072 49
HUMMING BIRD HAWK MOTH 1.0000 0.8571 0.9231 49
INDRA SWALLOW 1.0000 0.9592 0.9792 49
IO MOTH 1.0000 0.9388 0.9684 49
Iphiclus sister 1.0000 1.0000 1.0000 49
JULIA 1.0000 1.0000 1.0000 49
LARGE MARBLE 0.8723 0.8200 0.8454 50
LUNA MOTH 1.0000 0.9592 0.9792 49
MADAGASCAN SUNSET MOTH 1.0000 0.9796 0.9897 49
MALACHITE 1.0000 1.0000 1.0000 50
MANGROVE SKIPPER 0.9600 0.9796 0.9697 49
MESTRA 1.0000 0.9600 0.9796 50
METALMARK 0.9792 0.9592 0.9691 49
MILBERTS TORTOISESHELL 1.0000 0.9184 0.9574 49
MONARCH 0.9245 1.0000 0.9608 49
MOURNING CLOAK 1.0000 1.0000 1.0000 49
OLEANDER HAWK MOTH 1.0000 1.0000 1.0000 49
ORANGE OAKLEAF 0.9434 1.0000 0.9709 50
ORANGE TIP 0.9783 0.9184 0.9474 49
ORCHARD SWALLOW 1.0000 0.9796 0.9897 49
PAINTED LADY 0.9608 1.0000 0.9800 49
PAPER KITE 1.0000 0.9796 0.9897 49
PEACOCK 1.0000 1.0000 1.0000 49
PINE WHITE 0.9796 0.9796 0.9796 49
PIPEVINE SWALLOW 0.9074 0.9800 0.9423 50
POLYPHEMUS MOTH 0.8824 0.9184 0.9000 49
POPINJAY 1.0000 0.9796 0.9897 49
PURPLE HAIRSTREAK 0.9583 0.9388 0.9485 49
PURPLISH COPPER 0.8033 1.0000 0.8909 49
QUESTION MARK 0.8684 0.6735 0.7586 49
RED ADMIRAL 1.0000 0.9796 0.9897 49
RED CRACKER 0.9792 0.9592 0.9691 49
RED POSTMAN 0.9608 1.0000 0.9800 49
RED SPOTTED PURPLE 0.9800 1.0000 0.9899 49
ROSY MAPLE MOTH 0.9615 1.0000 0.9804 50
SCARCE SWALLOW 0.9412 0.9796 0.9600 49
SILVER SPOT SKIPPER 0.9074 1.0000 0.9515 49
SIXSPOT BURNET MOTH 1.0000 1.0000 1.0000 50
SLEEPY ORANGE 0.9057 0.9796 0.9412 49
SOOTYWING 0.9783 0.9184 0.9474 49
SOUTHERN DOGFACE 0.8148 0.8980 0.8544 49
STRAITED QUEEN 0.9796 0.9796 0.9796 49
TROPICAL LEAFWING 0.8889 0.9600 0.9231 50
TWO BARRED FLASHER 1.0000 0.9592 0.9792 49
ULYSES 1.0000 0.9592 0.9792 49
VICEROY 1.0000 0.9592 0.9792 49
WHITE LINED SPHINX MOTH 0.9615 1.0000 0.9804 50
WOOD SATYR 0.9412 0.9796 0.9600 49
YELLOW SWALLOW TAIL 0.9583 0.9388 0.9485 49
ZEBRA LONG WING 1.0000 0.9800 0.9899 50
accuracy 0.9561 4925
macro avg 0.9577 0.9561 0.9558 4925
weighted avg 0.9578 0.9561 0.9559 4925
```
|
[
"adonis",
"african giant swallowtail",
"american snoot",
"an 88",
"appollo",
"arcigera flower moth",
"atala",
"atlas moth",
"banded orange heliconian",
"banded peacock",
"banded tiger moth",
"beckers white",
"bird cherry ermine moth",
"black hairstreak",
"blue morpho",
"blue spotted crow",
"brookes birdwing",
"brown argus",
"brown siproeta",
"cabbage white",
"cairns birdwing",
"chalk hill blue",
"checquered skipper",
"chestnut",
"cinnabar moth",
"clearwing moth",
"cleopatra",
"clodius parnassian",
"clouded sulphur",
"comet moth",
"common banded awl",
"common wood-nymph",
"copper tail",
"crecent",
"crimson patch",
"danaid eggfly",
"eastern coma",
"eastern dapple white",
"eastern pine elfin",
"elbowed pierrot",
"emperor gum moth",
"garden tiger moth",
"giant leopard moth",
"glittering sapphire",
"gold banded",
"great eggfly",
"great jay",
"green celled cattleheart",
"green hairstreak",
"grey hairstreak",
"hercules moth",
"humming bird hawk moth",
"indra swallow",
"io moth",
"iphiclus sister",
"julia",
"large marble",
"luna moth",
"madagascan sunset moth",
"malachite",
"mangrove skipper",
"mestra",
"metalmark",
"milberts tortoiseshell",
"monarch",
"mourning cloak",
"oleander hawk moth",
"orange oakleaf",
"orange tip",
"orchard swallow",
"painted lady",
"paper kite",
"peacock",
"pine white",
"pipevine swallow",
"polyphemus moth",
"popinjay",
"purple hairstreak",
"purplish copper",
"question mark",
"red admiral",
"red cracker",
"red postman",
"red spotted purple",
"rosy maple moth",
"scarce swallow",
"silver spot skipper",
"sixspot burnet moth",
"sleepy orange",
"sootywing",
"southern dogface",
"straited queen",
"tropical leafwing",
"two barred flasher",
"ulyses",
"viceroy",
"white lined sphinx moth",
"wood satyr",
"yellow swallow tail",
"zebra long wing"
] |
jerryteps/resnet-50
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1947
- Accuracy: 0.5408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5588 | 1.0 | 252 | 1.4406 | 0.4558 |
| 1.4831 | 2.0 | 505 | 1.3683 | 0.4790 |
| 1.4776 | 3.0 | 757 | 1.3199 | 0.4937 |
| 1.4246 | 4.0 | 1010 | 1.2881 | 0.5068 |
| 1.4102 | 5.0 | 1262 | 1.2469 | 0.5247 |
| 1.3806 | 6.0 | 1515 | 1.2276 | 0.5258 |
| 1.3861 | 7.0 | 1767 | 1.2121 | 0.5411 |
| 1.3791 | 8.0 | 2020 | 1.2075 | 0.5433 |
| 1.3683 | 9.0 | 2272 | 1.2011 | 0.5422 |
| 1.4119 | 9.98 | 2520 | 1.1947 | 0.5408 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.13.3
|
[
"angry",
"disgusted",
"fearful",
"happy",
"neutral",
"sad",
"surprised"
] |
Akshay0706/Rice-Image-Classification-Model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Rice-Image-Classification-Model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1592
- eval_accuracy: 0.9816
- eval_runtime: 34.3485
- eval_samples_per_second: 9.491
- eval_steps_per_second: 2.387
- epoch: 186.0
- step: 10788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"0",
"1",
"2",
"3",
"4",
"5"
] |
crasyangel/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2768
- Accuracy: 0.921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4414 | 1.0 | 625 | 0.4034 | 0.9011 |
| 0.2976 | 2.0 | 1250 | 0.3157 | 0.9102 |
| 0.2345 | 3.0 | 1875 | 0.2768 | 0.921 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.1.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
hkivancoral/hushem_40x_deit_tiny_f1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_f1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6177
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1057 | 1.0 | 107 | 0.4673 | 0.8222 |
| 0.0156 | 2.0 | 214 | 0.4217 | 0.8667 |
| 0.0006 | 2.99 | 321 | 0.3052 | 0.8889 |
| 0.0004 | 4.0 | 429 | 0.7953 | 0.8222 |
| 0.0004 | 5.0 | 536 | 0.2677 | 0.8667 |
| 0.0001 | 6.0 | 643 | 0.5139 | 0.8444 |
| 0.0 | 6.99 | 750 | 0.5825 | 0.8444 |
| 0.0 | 8.0 | 858 | 0.6043 | 0.8444 |
| 0.0 | 9.0 | 965 | 0.6155 | 0.8444 |
| 0.0 | 9.98 | 1070 | 0.6177 | 0.8444 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_f2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_f2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5181
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1217 | 1.0 | 107 | 1.3584 | 0.6 |
| 0.0223 | 2.0 | 214 | 1.3123 | 0.7111 |
| 0.0028 | 2.99 | 321 | 1.5329 | 0.6667 |
| 0.0063 | 4.0 | 429 | 1.6403 | 0.6889 |
| 0.0001 | 5.0 | 536 | 1.5983 | 0.6667 |
| 0.0 | 6.0 | 643 | 1.5035 | 0.6667 |
| 0.0 | 6.99 | 750 | 1.5067 | 0.6444 |
| 0.0 | 8.0 | 858 | 1.5121 | 0.6667 |
| 0.0 | 9.0 | 965 | 1.5168 | 0.6667 |
| 0.0 | 9.98 | 1070 | 1.5181 | 0.6667 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_f3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_f3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7420
- Accuracy: 0.9070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1605 | 1.0 | 108 | 0.5491 | 0.7674 |
| 0.067 | 2.0 | 217 | 0.3900 | 0.9070 |
| 0.0289 | 3.0 | 325 | 0.7123 | 0.8372 |
| 0.0006 | 4.0 | 434 | 0.6304 | 0.9302 |
| 0.0039 | 5.0 | 542 | 0.7304 | 0.8837 |
| 0.0003 | 6.0 | 651 | 0.9750 | 0.8372 |
| 0.0 | 7.0 | 759 | 0.7131 | 0.8837 |
| 0.0 | 8.0 | 868 | 0.7257 | 0.9070 |
| 0.0 | 9.0 | 976 | 0.7388 | 0.9070 |
| 0.0 | 9.95 | 1080 | 0.7420 | 0.9070 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_f4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_f4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1784 | 1.0 | 109 | 0.2924 | 0.9048 |
| 0.0478 | 2.0 | 218 | 0.2362 | 0.8810 |
| 0.0048 | 2.99 | 327 | 0.2393 | 0.9286 |
| 0.0107 | 4.0 | 437 | 0.2679 | 0.8810 |
| 0.0008 | 5.0 | 546 | 0.1124 | 0.9524 |
| 0.0001 | 6.0 | 655 | 0.4513 | 0.9048 |
| 0.0 | 6.99 | 764 | 0.0770 | 0.9524 |
| 0.0 | 8.0 | 874 | 0.1185 | 0.9524 |
| 0.0 | 9.0 | 983 | 0.1295 | 0.9524 |
| 0.0 | 9.98 | 1090 | 0.1343 | 0.9524 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_f5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_f5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7504
- Accuracy: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0726 | 1.0 | 110 | 0.3341 | 0.9024 |
| 0.0333 | 2.0 | 220 | 0.9022 | 0.7561 |
| 0.0011 | 3.0 | 330 | 0.5944 | 0.8293 |
| 0.0026 | 4.0 | 440 | 0.6847 | 0.8293 |
| 0.0012 | 5.0 | 550 | 1.1544 | 0.8049 |
| 0.0005 | 6.0 | 660 | 0.4633 | 0.8537 |
| 0.0031 | 7.0 | 770 | 0.5821 | 0.8780 |
| 0.0 | 8.0 | 880 | 0.7434 | 0.8780 |
| 0.0 | 9.0 | 990 | 0.7497 | 0.8537 |
| 0.0 | 10.0 | 1100 | 0.7504 | 0.8537 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
Raihan004/Action_all_10_class
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Action_all_10_class
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Action_small_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4725
- Accuracy: 0.8517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
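A minimal `TrainingArguments` sketch matching the list above (argument names as in the Transformers 4.x API; `fp16=True` is assumed to correspond to "Native AMP", and the output directory is illustrative):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Action_all_10_class",   # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                          # "Native AMP" mixed precision
)
```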
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2411 | 0.36 | 100 | 1.1517 | 0.7546 |
| 0.8932 | 0.72 | 200 | 0.7856 | 0.7975 |
| 0.6907 | 1.08 | 300 | 0.6636 | 0.8221 |
| 0.5841 | 1.43 | 400 | 0.6388 | 0.8160 |
| 0.5425 | 1.79 | 500 | 0.5871 | 0.8436 |
| 0.5929 | 2.15 | 600 | 0.5646 | 0.8211 |
| 0.4406 | 2.51 | 700 | 0.5439 | 0.8405 |
| 0.4541 | 2.87 | 800 | 0.5318 | 0.8415 |
| 0.3835 | 3.23 | 900 | 0.5225 | 0.8344 |
| 0.3924 | 3.58 | 1000 | 0.5515 | 0.8303 |
| 0.5741 | 3.94 | 1100 | 0.5519 | 0.8252 |
| 0.3991 | 4.3 | 1200 | 0.4990 | 0.8446 |
| 0.4732 | 4.66 | 1300 | 0.5336 | 0.8303 |
| 0.3324 | 5.02 | 1400 | 0.5351 | 0.8282 |
| 0.3433 | 5.38 | 1500 | 0.4725 | 0.8517 |
| 0.2187 | 5.73 | 1600 | 0.5042 | 0.8466 |
| 0.2952 | 6.09 | 1700 | 0.5240 | 0.8548 |
| 0.2687 | 6.45 | 1800 | 0.5523 | 0.8364 |
| 0.3111 | 6.81 | 1900 | 0.5304 | 0.8497 |
| 0.2431 | 7.17 | 2000 | 0.5104 | 0.8569 |
| 0.3265 | 7.53 | 2100 | 0.5085 | 0.8691 |
| 0.2595 | 7.89 | 2200 | 0.5015 | 0.8569 |
| 0.1825 | 8.24 | 2300 | 0.4920 | 0.8620 |
| 0.2602 | 8.6 | 2400 | 0.5016 | 0.8620 |
| 0.2628 | 8.96 | 2500 | 0.4746 | 0.8681 |
| 0.1024 | 9.32 | 2600 | 0.4818 | 0.8691 |
| 0.1468 | 9.68 | 2700 | 0.4765 | 0.8681 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"কথা_বলা",
"কম্পিউটার_ব্যবহার_করা",
"খাওয়া",
"খেলা_করা",
"ঘুমানো",
"পান_করা",
"পড়া",
"রান্না_করা",
"লেখা",
"হাঁটা"
] |
atitat/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# atitat/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3826
- Validation Loss: 0.4117
- Train Accuracy: 0.891
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8051 | 1.6299 | 0.837 | 0 |
| 1.2333 | 0.8371 | 0.9 | 1 |
| 0.7305 | 0.5069 | 0.922 | 2 |
| 0.4848 | 0.3805 | 0.927 | 3 |
| 0.3826 | 0.4117 | 0.891 | 4 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Giecom/giecom-vit-model-clasification-waste
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# giecom-vit-model-clasification-waste
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset, fine-tuned by Miguel Calderon.
It achieves the following results on the evaluation set:
- Loss: 0.0066
- Accuracy: 0.9974
## Model description
The giecom-vit-model-clasification-waste model is a fine-tuned version of google/vit-base-patch16-224 trained on the viola77data/recycling-dataset dataset. It is designed specifically for classifying images of recyclable waste using the Transformer architecture, and has proven highly effective, reaching an accuracy of 99.74% and a loss of 0.0066 on the evaluation set.
## Intended uses & limitations
The model was trained specifically on images of waste, so its effectiveness may drop when applied to different contexts or datasets.
## Training and evaluation data
The model was trained with specific hyperparameters, including a learning rate of 0.0002 and a batch size of 8, using the Adam optimizer. It was trained for 4 epochs, showing steady improvement in accuracy and a reduction in loss on the validation set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7872 | 1.29 | 500 | 0.3043 | 0.9047 |
| 0.2279 | 2.57 | 1000 | 0.0463 | 0.9871 |
| 0.0406 | 3.86 | 1500 | 0.0066 | 0.9974 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"aluminium",
"batteries",
"takeaway cups",
"cardboard",
"disposable plates",
"glass",
"hard plastic",
"paper",
"paper towel",
"polystyrene",
"soft plastics"
] |
bdpc/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_simkd
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_simkd
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4602
- Accuracy: 0.769
- Brier Loss: 0.3252
- Nll: 2.1002
- F1 Micro: 0.769
- F1 Macro: 0.7667
- Ece: 0.0388
- Aurc: 0.0678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 1.0910 | 0.059 | 0.9372 | 6.6175 | 0.059 | 0.0236 | 0.0366 | 0.9408 |
| 1.0976 | 2.0 | 500 | 1.0013 | 0.0838 | 0.9335 | 4.2665 | 0.0838 | 0.0443 | 0.0391 | 0.9208 |
| 1.0976 | 3.0 | 750 | 0.9171 | 0.1335 | 0.9308 | 2.8791 | 0.1335 | 0.0985 | 0.0770 | 0.8928 |
| 0.9312 | 4.0 | 1000 | 0.8701 | 0.1822 | 0.9243 | 2.7464 | 0.1822 | 0.1497 | 0.1142 | 0.8582 |
| 0.9312 | 5.0 | 1250 | 0.8306 | 0.274 | 0.8635 | 5.8805 | 0.274 | 0.2059 | 0.1347 | 0.6733 |
| 0.8353 | 6.0 | 1500 | 0.7791 | 0.396 | 0.7897 | 5.0905 | 0.396 | 0.3620 | 0.1762 | 0.4569 |
| 0.8353 | 7.0 | 1750 | 0.7452 | 0.47 | 0.7200 | 4.3882 | 0.47 | 0.4357 | 0.1822 | 0.3485 |
| 0.7569 | 8.0 | 2000 | 0.7148 | 0.5635 | 0.6470 | 3.6418 | 0.5635 | 0.5444 | 0.2022 | 0.2564 |
| 0.7569 | 9.0 | 2250 | 0.6847 | 0.6092 | 0.5626 | 3.0490 | 0.6092 | 0.5904 | 0.1508 | 0.1932 |
| 0.6953 | 10.0 | 2500 | 0.6552 | 0.648 | 0.5117 | 2.7913 | 0.648 | 0.6309 | 0.1312 | 0.1622 |
| 0.6953 | 11.0 | 2750 | 0.6369 | 0.662 | 0.4778 | 2.6400 | 0.662 | 0.6468 | 0.0959 | 0.1471 |
| 0.6357 | 12.0 | 3000 | 0.6074 | 0.6863 | 0.4436 | 2.4974 | 0.6863 | 0.6724 | 0.0734 | 0.1274 |
| 0.6357 | 13.0 | 3250 | 0.5915 | 0.6975 | 0.4226 | 2.4214 | 0.6975 | 0.6843 | 0.0607 | 0.1173 |
| 0.5943 | 14.0 | 3500 | 0.5811 | 0.7055 | 0.4080 | 2.3606 | 0.7055 | 0.6923 | 0.0487 | 0.1093 |
| 0.5943 | 15.0 | 3750 | 0.5694 | 0.7177 | 0.3947 | 2.2689 | 0.7178 | 0.7087 | 0.0553 | 0.1016 |
| 0.5665 | 16.0 | 4000 | 0.5555 | 0.7225 | 0.3866 | 2.2797 | 0.7225 | 0.7130 | 0.0394 | 0.0981 |
| 0.5665 | 17.0 | 4250 | 0.5502 | 0.725 | 0.3821 | 2.2616 | 0.7250 | 0.7166 | 0.0441 | 0.0957 |
| 0.5446 | 18.0 | 4500 | 0.5425 | 0.7345 | 0.3704 | 2.1992 | 0.7345 | 0.7277 | 0.0401 | 0.0893 |
| 0.5446 | 19.0 | 4750 | 0.5325 | 0.731 | 0.3670 | 2.1856 | 0.731 | 0.7257 | 0.0401 | 0.0872 |
| 0.5268 | 20.0 | 5000 | 0.5272 | 0.738 | 0.3661 | 2.2345 | 0.738 | 0.7335 | 0.0467 | 0.0865 |
| 0.5268 | 21.0 | 5250 | 0.5199 | 0.745 | 0.3582 | 2.1676 | 0.745 | 0.7407 | 0.0388 | 0.0827 |
| 0.5107 | 22.0 | 5500 | 0.5146 | 0.748 | 0.3530 | 2.1726 | 0.748 | 0.7446 | 0.0417 | 0.0802 |
| 0.5107 | 23.0 | 5750 | 0.5101 | 0.7482 | 0.3516 | 2.1670 | 0.7482 | 0.7445 | 0.0398 | 0.0799 |
| 0.4973 | 24.0 | 6000 | 0.5076 | 0.7455 | 0.3533 | 2.1814 | 0.7455 | 0.7431 | 0.0396 | 0.0807 |
| 0.4973 | 25.0 | 6250 | 0.4971 | 0.7512 | 0.3476 | 2.1618 | 0.7513 | 0.7469 | 0.0414 | 0.0780 |
| 0.484 | 26.0 | 6500 | 0.4934 | 0.753 | 0.3464 | 2.1725 | 0.753 | 0.7497 | 0.0473 | 0.0780 |
| 0.484 | 27.0 | 6750 | 0.4916 | 0.756 | 0.3415 | 2.1408 | 0.756 | 0.7527 | 0.0480 | 0.0753 |
| 0.4709 | 28.0 | 7000 | 0.4886 | 0.7582 | 0.3405 | 2.1415 | 0.7582 | 0.7547 | 0.0410 | 0.0746 |
| 0.4709 | 29.0 | 7250 | 0.4844 | 0.7582 | 0.3377 | 2.1252 | 0.7582 | 0.7556 | 0.0483 | 0.0742 |
| 0.4617 | 30.0 | 7500 | 0.4831 | 0.757 | 0.3372 | 2.1383 | 0.757 | 0.7540 | 0.0425 | 0.0731 |
| 0.4617 | 31.0 | 7750 | 0.4781 | 0.759 | 0.3344 | 2.1035 | 0.7590 | 0.7572 | 0.0404 | 0.0718 |
| 0.4529 | 32.0 | 8000 | 0.4794 | 0.7562 | 0.3375 | 2.1457 | 0.7562 | 0.7545 | 0.0385 | 0.0731 |
| 0.4529 | 33.0 | 8250 | 0.4777 | 0.7625 | 0.3336 | 2.0834 | 0.7625 | 0.7607 | 0.0433 | 0.0717 |
| 0.4462 | 34.0 | 8500 | 0.4730 | 0.7598 | 0.3328 | 2.1058 | 0.7598 | 0.7566 | 0.0496 | 0.0716 |
| 0.4462 | 35.0 | 8750 | 0.4730 | 0.761 | 0.3324 | 2.0874 | 0.761 | 0.7600 | 0.0461 | 0.0712 |
| 0.4404 | 36.0 | 9000 | 0.4692 | 0.7635 | 0.3309 | 2.0914 | 0.7635 | 0.7616 | 0.0481 | 0.0703 |
| 0.4404 | 37.0 | 9250 | 0.4691 | 0.7618 | 0.3298 | 2.0866 | 0.7618 | 0.7598 | 0.0457 | 0.0703 |
| 0.4351 | 38.0 | 9500 | 0.4666 | 0.762 | 0.3294 | 2.0963 | 0.762 | 0.7593 | 0.0428 | 0.0700 |
| 0.4351 | 39.0 | 9750 | 0.4639 | 0.7668 | 0.3265 | 2.1028 | 0.7668 | 0.7652 | 0.0453 | 0.0688 |
| 0.4309 | 40.0 | 10000 | 0.4627 | 0.7675 | 0.3287 | 2.0981 | 0.7675 | 0.7658 | 0.0449 | 0.0694 |
| 0.4309 | 41.0 | 10250 | 0.4634 | 0.765 | 0.3264 | 2.1151 | 0.765 | 0.7631 | 0.0441 | 0.0684 |
| 0.4269 | 42.0 | 10500 | 0.4626 | 0.7658 | 0.3260 | 2.0977 | 0.7658 | 0.7644 | 0.0414 | 0.0684 |
| 0.4269 | 43.0 | 10750 | 0.4609 | 0.7672 | 0.3259 | 2.0944 | 0.7672 | 0.7656 | 0.0420 | 0.0681 |
| 0.4248 | 44.0 | 11000 | 0.4616 | 0.7662 | 0.3253 | 2.0942 | 0.7663 | 0.7652 | 0.0458 | 0.0678 |
| 0.4248 | 45.0 | 11250 | 0.4605 | 0.7658 | 0.3258 | 2.1447 | 0.7658 | 0.7629 | 0.0408 | 0.0678 |
| 0.4233 | 46.0 | 11500 | 0.4604 | 0.7662 | 0.3266 | 2.1007 | 0.7663 | 0.7640 | 0.0493 | 0.0686 |
| 0.4233 | 47.0 | 11750 | 0.4601 | 0.7652 | 0.3252 | 2.0893 | 0.7652 | 0.7633 | 0.0463 | 0.0684 |
| 0.4221 | 48.0 | 12000 | 0.4600 | 0.7645 | 0.3255 | 2.0695 | 0.7645 | 0.7629 | 0.0472 | 0.0683 |
| 0.4221 | 49.0 | 12250 | 0.4605 | 0.7662 | 0.3257 | 2.0778 | 0.7663 | 0.7640 | 0.0425 | 0.0682 |
| 0.4211 | 50.0 | 12500 | 0.4602 | 0.769 | 0.3252 | 2.1002 | 0.769 | 0.7667 | 0.0388 | 0.0678 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
bdpc/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_og_simkd
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_og_simkd
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3748
- Accuracy: 0.8023
- Brier Loss: 0.2845
- Nll: 1.8818
- F1 Micro: 0.8023
- F1 Macro: 0.8020
- Ece: 0.0375
- Aurc: 0.0534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 0.8880 | 0.1955 | 0.8872 | 5.3865 | 0.1955 | 0.1551 | 0.0582 | 0.7111 |
| 0.9199 | 2.0 | 500 | 0.6464 | 0.407 | 0.7284 | 5.2363 | 0.4070 | 0.3745 | 0.0770 | 0.4284 |
| 0.9199 | 3.0 | 750 | 0.5608 | 0.5945 | 0.5337 | 3.5976 | 0.5945 | 0.5912 | 0.0561 | 0.1950 |
| 0.563 | 4.0 | 1000 | 0.4962 | 0.6905 | 0.4235 | 2.6948 | 0.6905 | 0.6885 | 0.0474 | 0.1170 |
| 0.563 | 5.0 | 1250 | 0.4613 | 0.7177 | 0.3858 | 2.5472 | 0.7178 | 0.7181 | 0.0512 | 0.0964 |
| 0.4567 | 6.0 | 1500 | 0.4372 | 0.742 | 0.3584 | 2.3396 | 0.7420 | 0.7425 | 0.0527 | 0.0824 |
| 0.4567 | 7.0 | 1750 | 0.4271 | 0.7595 | 0.3406 | 2.2123 | 0.7595 | 0.7596 | 0.0459 | 0.0756 |
| 0.4103 | 8.0 | 2000 | 0.4129 | 0.7658 | 0.3308 | 2.1667 | 0.7658 | 0.7666 | 0.0439 | 0.0704 |
| 0.4103 | 9.0 | 2250 | 0.4070 | 0.7678 | 0.3296 | 2.1663 | 0.7678 | 0.7692 | 0.0485 | 0.0699 |
| 0.3836 | 10.0 | 2500 | 0.4017 | 0.7725 | 0.3209 | 2.1207 | 0.7725 | 0.7732 | 0.0426 | 0.0667 |
| 0.3836 | 11.0 | 2750 | 0.3984 | 0.7768 | 0.3153 | 2.0353 | 0.7768 | 0.7771 | 0.0454 | 0.0651 |
| 0.3645 | 12.0 | 3000 | 0.3961 | 0.7752 | 0.3124 | 2.0755 | 0.7752 | 0.7754 | 0.0428 | 0.0642 |
| 0.3645 | 13.0 | 3250 | 0.3961 | 0.786 | 0.3071 | 1.9949 | 0.786 | 0.7861 | 0.0407 | 0.0612 |
| 0.3497 | 14.0 | 3500 | 0.3899 | 0.7823 | 0.3053 | 1.9769 | 0.7823 | 0.7823 | 0.0435 | 0.0606 |
| 0.3497 | 15.0 | 3750 | 0.3873 | 0.7853 | 0.3021 | 1.9881 | 0.7853 | 0.7849 | 0.0479 | 0.0594 |
| 0.3378 | 16.0 | 4000 | 0.3861 | 0.7833 | 0.3026 | 1.9263 | 0.7833 | 0.7834 | 0.0431 | 0.0593 |
| 0.3378 | 17.0 | 4250 | 0.3853 | 0.7913 | 0.2970 | 1.9108 | 0.7913 | 0.7917 | 0.0390 | 0.0571 |
| 0.3271 | 18.0 | 4500 | 0.3840 | 0.7903 | 0.2978 | 1.9643 | 0.7903 | 0.7902 | 0.0377 | 0.0576 |
| 0.3271 | 19.0 | 4750 | 0.3828 | 0.7915 | 0.2967 | 1.9332 | 0.7915 | 0.7914 | 0.0393 | 0.0572 |
| 0.3186 | 20.0 | 5000 | 0.3806 | 0.7913 | 0.2938 | 1.9410 | 0.7913 | 0.7909 | 0.0410 | 0.0563 |
| 0.3186 | 21.0 | 5250 | 0.3815 | 0.7953 | 0.2921 | 1.9285 | 0.7953 | 0.7949 | 0.0387 | 0.0566 |
| 0.3111 | 22.0 | 5500 | 0.3838 | 0.7895 | 0.2949 | 1.9126 | 0.7895 | 0.7894 | 0.0382 | 0.0570 |
| 0.3111 | 23.0 | 5750 | 0.3799 | 0.7955 | 0.2902 | 1.9332 | 0.7955 | 0.7955 | 0.0373 | 0.0558 |
| 0.305 | 24.0 | 6000 | 0.3796 | 0.7947 | 0.2912 | 1.8615 | 0.7947 | 0.7940 | 0.0418 | 0.0561 |
| 0.305 | 25.0 | 6250 | 0.3805 | 0.7947 | 0.2912 | 1.8999 | 0.7947 | 0.7940 | 0.0413 | 0.0558 |
| 0.2993 | 26.0 | 6500 | 0.3842 | 0.7925 | 0.2913 | 1.9451 | 0.7925 | 0.7927 | 0.0339 | 0.0559 |
| 0.2993 | 27.0 | 6750 | 0.3784 | 0.794 | 0.2908 | 1.9151 | 0.7940 | 0.7942 | 0.0389 | 0.0553 |
| 0.2943 | 28.0 | 7000 | 0.3779 | 0.7957 | 0.2895 | 1.8758 | 0.7957 | 0.7957 | 0.0392 | 0.0549 |
| 0.2943 | 29.0 | 7250 | 0.3776 | 0.7955 | 0.2892 | 1.8785 | 0.7955 | 0.7947 | 0.0445 | 0.0549 |
| 0.2905 | 30.0 | 7500 | 0.3775 | 0.7973 | 0.2879 | 1.8786 | 0.7973 | 0.7972 | 0.0379 | 0.0550 |
| 0.2905 | 31.0 | 7750 | 0.3773 | 0.7945 | 0.2903 | 1.9039 | 0.7945 | 0.7942 | 0.0405 | 0.0551 |
| 0.2863 | 32.0 | 8000 | 0.3764 | 0.7963 | 0.2880 | 1.8569 | 0.7963 | 0.7962 | 0.0375 | 0.0549 |
| 0.2863 | 33.0 | 8250 | 0.3775 | 0.7925 | 0.2884 | 1.9070 | 0.7925 | 0.7917 | 0.0411 | 0.0544 |
| 0.2831 | 34.0 | 8500 | 0.3762 | 0.7935 | 0.2873 | 1.8608 | 0.7935 | 0.7933 | 0.0389 | 0.0547 |
| 0.2831 | 35.0 | 8750 | 0.3765 | 0.7973 | 0.2868 | 1.9316 | 0.7973 | 0.7970 | 0.0385 | 0.0540 |
| 0.28 | 36.0 | 9000 | 0.3750 | 0.7967 | 0.2857 | 1.8871 | 0.7967 | 0.7965 | 0.0375 | 0.0540 |
| 0.28 | 37.0 | 9250 | 0.3761 | 0.793 | 0.2874 | 1.8977 | 0.793 | 0.7926 | 0.0405 | 0.0543 |
| 0.2775 | 38.0 | 9500 | 0.3760 | 0.7983 | 0.2861 | 1.8613 | 0.7983 | 0.7987 | 0.0422 | 0.0540 |
| 0.2775 | 39.0 | 9750 | 0.3761 | 0.7955 | 0.2870 | 1.8744 | 0.7955 | 0.7957 | 0.0412 | 0.0545 |
| 0.2755 | 40.0 | 10000 | 0.3753 | 0.8007 | 0.2852 | 1.8640 | 0.8007 | 0.8006 | 0.0345 | 0.0532 |
| 0.2755 | 41.0 | 10250 | 0.3753 | 0.8023 | 0.2857 | 1.8637 | 0.8023 | 0.8025 | 0.0363 | 0.0535 |
| 0.2735 | 42.0 | 10500 | 0.3751 | 0.7995 | 0.2851 | 1.9134 | 0.7995 | 0.7994 | 0.0403 | 0.0531 |
| 0.2735 | 43.0 | 10750 | 0.3753 | 0.8 | 0.2857 | 1.8832 | 0.8000 | 0.7996 | 0.0406 | 0.0538 |
| 0.2717 | 44.0 | 11000 | 0.3746 | 0.7985 | 0.2851 | 1.8545 | 0.7985 | 0.7982 | 0.0432 | 0.0532 |
| 0.2717 | 45.0 | 11250 | 0.3747 | 0.7985 | 0.2847 | 1.8730 | 0.7985 | 0.7984 | 0.0400 | 0.0534 |
| 0.2701 | 46.0 | 11500 | 0.3744 | 0.801 | 0.2843 | 1.8783 | 0.801 | 0.8007 | 0.0411 | 0.0532 |
| 0.2701 | 47.0 | 11750 | 0.3744 | 0.798 | 0.2852 | 1.8843 | 0.798 | 0.7975 | 0.0420 | 0.0535 |
| 0.2694 | 48.0 | 12000 | 0.3753 | 0.7993 | 0.2857 | 1.8875 | 0.7993 | 0.7988 | 0.0405 | 0.0532 |
| 0.2694 | 49.0 | 12250 | 0.3758 | 0.7965 | 0.2868 | 1.8927 | 0.7965 | 0.7964 | 0.0415 | 0.0539 |
| 0.2684 | 50.0 | 12500 | 0.3748 | 0.8023 | 0.2845 | 1.8818 | 0.8023 | 0.8020 | 0.0375 | 0.0534 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
bdpc/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_hint
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_hint
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 20.4893
- Accuracy: 0.7622
- Brier Loss: 0.3995
- Nll: 2.6673
- F1 Micro: 0.7622
- F1 Macro: 0.7619
- Ece: 0.1742
- Aurc: 0.0853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
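The `_hint` suffix in the model name suggests FitNets-style hint training, where a student's intermediate features are regressed onto a teacher's; the card itself never states the objective, so the sketch below only illustrates that technique, with every name hypothetical:

```python
import torch.nn as nn

class HintLoss(nn.Module):
    """FitNets-style hint loss: MSE between teacher 'hint' features and the
    student's 'guided' features after a learned 1x1 projection (illustrative)."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # The regressor aligns the student's channel count with the teacher's.
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)
        self.mse = nn.MSELoss()

    def forward(self, student_feat, teacher_feat):
        return self.mse(self.regressor(student_feat), teacher_feat.detach())

# Hypothetical use inside a training step:
# loss = ce(student_logits, labels) + beta * hint(f_student, f_teacher)
```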
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 27.0152 | 0.144 | 0.9329 | 8.3774 | 0.144 | 0.1293 | 0.0760 | 0.8496 |
| 26.9201 | 2.0 | 500 | 25.8022 | 0.4547 | 0.8625 | 4.1098 | 0.4547 | 0.4194 | 0.3292 | 0.3673 |
| 26.9201 | 3.0 | 750 | 24.5485 | 0.5617 | 0.6135 | 3.0722 | 0.5617 | 0.5439 | 0.1557 | 0.2257 |
| 24.565 | 4.0 | 1000 | 23.9825 | 0.6388 | 0.5062 | 2.7343 | 0.6388 | 0.6354 | 0.1084 | 0.1537 |
| 24.565 | 5.0 | 1250 | 23.8483 | 0.6747 | 0.4518 | 2.5930 | 0.6747 | 0.6686 | 0.0597 | 0.1289 |
| 23.3904 | 6.0 | 1500 | 23.2280 | 0.7137 | 0.3953 | 2.4736 | 0.7138 | 0.7117 | 0.0486 | 0.0997 |
| 23.3904 | 7.0 | 1750 | 23.0275 | 0.725 | 0.3781 | 2.3823 | 0.7250 | 0.7238 | 0.0414 | 0.0911 |
| 22.6462 | 8.0 | 2000 | 22.8213 | 0.7358 | 0.3699 | 2.3745 | 0.7358 | 0.7351 | 0.0539 | 0.0881 |
| 22.6462 | 9.0 | 2250 | 22.6219 | 0.7468 | 0.3629 | 2.3056 | 0.7468 | 0.7465 | 0.0617 | 0.0852 |
| 22.0944 | 10.0 | 2500 | 22.4746 | 0.751 | 0.3593 | 2.3500 | 0.751 | 0.7523 | 0.0637 | 0.0846 |
| 22.0944 | 11.0 | 2750 | 22.3503 | 0.752 | 0.3624 | 2.4245 | 0.752 | 0.7533 | 0.0810 | 0.0834 |
| 21.6411 | 12.0 | 3000 | 22.2263 | 0.7545 | 0.3693 | 2.4277 | 0.7545 | 0.7547 | 0.0972 | 0.0885 |
| 21.6411 | 13.0 | 3250 | 22.1353 | 0.7522 | 0.3740 | 2.4647 | 0.7522 | 0.7532 | 0.1141 | 0.0862 |
| 21.2742 | 14.0 | 3500 | 22.1122 | 0.7475 | 0.3868 | 2.5369 | 0.7475 | 0.7495 | 0.1250 | 0.0922 |
| 21.2742 | 15.0 | 3750 | 22.0040 | 0.7508 | 0.3842 | 2.5364 | 0.7508 | 0.7501 | 0.1304 | 0.0911 |
| 20.9515 | 16.0 | 4000 | 21.8795 | 0.758 | 0.3772 | 2.5474 | 0.7580 | 0.7578 | 0.1324 | 0.0846 |
| 20.9515 | 17.0 | 4250 | 21.7554 | 0.754 | 0.3892 | 2.5498 | 0.754 | 0.7543 | 0.1420 | 0.0923 |
| 20.6695 | 18.0 | 4500 | 21.6863 | 0.749 | 0.3981 | 2.6337 | 0.749 | 0.7507 | 0.1510 | 0.0922 |
| 20.6695 | 19.0 | 4750 | 21.6123 | 0.7498 | 0.4007 | 2.5993 | 0.7498 | 0.7499 | 0.1551 | 0.0921 |
| 20.4239 | 20.0 | 5000 | 21.5128 | 0.7595 | 0.3845 | 2.5510 | 0.7595 | 0.7590 | 0.1498 | 0.0870 |
| 20.4239 | 21.0 | 5250 | 21.4770 | 0.7542 | 0.4005 | 2.6396 | 0.7542 | 0.7547 | 0.1623 | 0.0932 |
| 20.2131 | 22.0 | 5500 | 21.3497 | 0.7612 | 0.3892 | 2.5117 | 0.7612 | 0.7609 | 0.1539 | 0.0891 |
| 20.2131 | 23.0 | 5750 | 21.3489 | 0.7572 | 0.3956 | 2.5227 | 0.7572 | 0.7570 | 0.1608 | 0.0883 |
| 20.0332 | 24.0 | 6000 | 21.2609 | 0.7585 | 0.3939 | 2.5487 | 0.7585 | 0.7595 | 0.1629 | 0.0860 |
| 20.0332 | 25.0 | 6250 | 21.2046 | 0.7552 | 0.3982 | 2.6283 | 0.7552 | 0.7559 | 0.1663 | 0.0878 |
| 19.8699 | 26.0 | 6500 | 21.1515 | 0.7528 | 0.4038 | 2.6730 | 0.7528 | 0.7536 | 0.1721 | 0.0858 |
| 19.8699 | 27.0 | 6750 | 21.0789 | 0.7562 | 0.4003 | 2.6027 | 0.7562 | 0.7575 | 0.1683 | 0.0876 |
| 19.7228 | 28.0 | 7000 | 21.0357 | 0.7565 | 0.3996 | 2.6490 | 0.7565 | 0.7561 | 0.1707 | 0.0844 |
| 19.7228 | 29.0 | 7250 | 20.9975 | 0.758 | 0.3971 | 2.6300 | 0.7580 | 0.7574 | 0.1704 | 0.0835 |
| 19.589 | 30.0 | 7500 | 20.9221 | 0.7568 | 0.4007 | 2.5841 | 0.7568 | 0.7567 | 0.1714 | 0.0860 |
| 19.589 | 31.0 | 7750 | 20.8725 | 0.7562 | 0.3996 | 2.5775 | 0.7562 | 0.7562 | 0.1752 | 0.0847 |
| 19.4738 | 32.0 | 8000 | 20.8438 | 0.7572 | 0.3999 | 2.6441 | 0.7572 | 0.7570 | 0.1693 | 0.0877 |
| 19.4738 | 33.0 | 8250 | 20.8337 | 0.755 | 0.4052 | 2.6660 | 0.755 | 0.7555 | 0.1743 | 0.0868 |
| 19.3704 | 34.0 | 8500 | 20.7635 | 0.7575 | 0.4022 | 2.6885 | 0.7575 | 0.7583 | 0.1764 | 0.0868 |
| 19.3704 | 35.0 | 8750 | 20.7705 | 0.7608 | 0.4001 | 2.6415 | 0.7608 | 0.7601 | 0.1735 | 0.0856 |
| 19.2791 | 36.0 | 9000 | 20.7221 | 0.7632 | 0.3984 | 2.7139 | 0.7632 | 0.7640 | 0.1706 | 0.0857 |
| 19.2791 | 37.0 | 9250 | 20.6873 | 0.7622 | 0.3986 | 2.6743 | 0.7622 | 0.7625 | 0.1715 | 0.0838 |
| 19.2036 | 38.0 | 9500 | 20.6757 | 0.7618 | 0.3990 | 2.6225 | 0.7618 | 0.7620 | 0.1735 | 0.0852 |
| 19.2036 | 39.0 | 9750 | 20.6421 | 0.7588 | 0.4018 | 2.6342 | 0.7588 | 0.7579 | 0.1761 | 0.0870 |
| 19.1398 | 40.0 | 10000 | 20.6432 | 0.761 | 0.4057 | 2.6595 | 0.761 | 0.7610 | 0.1760 | 0.0868 |
| 19.1398 | 41.0 | 10250 | 20.5778 | 0.7672 | 0.3981 | 2.6180 | 0.7672 | 0.7674 | 0.1680 | 0.0850 |
| 19.0835 | 42.0 | 10500 | 20.5628 | 0.764 | 0.3981 | 2.6309 | 0.764 | 0.7625 | 0.1726 | 0.0851 |
| 19.0835 | 43.0 | 10750 | 20.5530 | 0.7632 | 0.3995 | 2.6470 | 0.7632 | 0.7628 | 0.1733 | 0.0868 |
| 19.0398 | 44.0 | 11000 | 20.5625 | 0.761 | 0.4029 | 2.6650 | 0.761 | 0.7608 | 0.1764 | 0.0864 |
| 19.0398 | 45.0 | 11250 | 20.5637 | 0.7628 | 0.4010 | 2.6709 | 0.7628 | 0.7623 | 0.1760 | 0.0850 |
| 19.0073 | 46.0 | 11500 | 20.5378 | 0.7628 | 0.3998 | 2.6522 | 0.7628 | 0.7631 | 0.1749 | 0.0859 |
| 19.0073 | 47.0 | 11750 | 20.5199 | 0.7615 | 0.4010 | 2.6406 | 0.7615 | 0.7619 | 0.1748 | 0.0867 |
| 18.9818 | 48.0 | 12000 | 20.5378 | 0.761 | 0.4031 | 2.6434 | 0.761 | 0.7616 | 0.1767 | 0.0856 |
| 18.9818 | 49.0 | 12250 | 20.4962 | 0.7652 | 0.3962 | 2.6250 | 0.7652 | 0.7653 | 0.1720 | 0.0853 |
| 18.9734 | 50.0 | 12500 | 20.4893 | 0.7622 | 0.3995 | 2.6673 | 0.7622 | 0.7619 | 0.1742 | 0.0853 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
xxChrisYang/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xxChrisYang/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3677
- Validation Loss: 0.3606
- Train Accuracy: 0.904
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
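The serialized optimizer config above maps one-to-one onto a linear `PolynomialDecay` schedule wrapped in `AdamWeightDecay`; a sketch that rebuilds it explicitly (in practice `transformers.create_optimizer` produces the same object):

```python
import tensorflow as tf
from transformers import AdamWeightDecay

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=20000,
    end_learning_rate=0.0,
    power=1.0,       # power=1.0 makes the decay linear
    cycle=False,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```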
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7467 | 1.6168 | 0.832 | 0 |
| 1.1704 | 0.7672 | 0.907 | 1 |
| 0.6836 | 0.5157 | 0.913 | 2 |
| 0.4500 | 0.4047 | 0.914 | 3 |
| 0.3677 | 0.3606 | 0.904 | 4 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
paulac9/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8263
- Accuracy: 0.6462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
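With `gradient_accumulation_steps: 4`, gradients from four batches of 32 are summed before each optimizer step, giving the reported effective batch size of 32 × 4 = 128. A self-contained toy sketch of the same mechanic in plain PyTorch (all shapes and modules are placeholders):

```python
import torch
from torch import nn

model = nn.Linear(8, 4)                       # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-05)
criterion = nn.CrossEntropyLoss()
accum = 4                                     # gradient_accumulation_steps

batches = [(torch.randn(32, 8), torch.randint(0, 4, (32,))) for _ in range(8)]
optimizer.zero_grad()
for step, (x, y) in enumerate(batches):
    loss = criterion(model(x), y) / accum     # scale so summed grads average out
    loss.backward()
    if (step + 1) % accum == 0:               # optimizer steps every 4 batches
        optimizer.step()                      # effective batch = 32 * 4 = 128
        optimizer.zero_grad()
```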
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.1611 | 0.5077 |
| No log | 2.0 | 3 | 1.1465 | 0.5077 |
| No log | 3.0 | 5 | 1.1057 | 0.5077 |
| No log | 4.0 | 6 | 1.1134 | 0.5077 |
| No log | 5.0 | 7 | 1.0864 | 0.5077 |
| No log | 6.0 | 9 | 0.9838 | 0.5385 |
| 0.5404 | 7.0 | 11 | 0.9655 | 0.5538 |
| 0.5404 | 8.0 | 12 | 0.9630 | 0.5692 |
| 0.5404 | 9.0 | 13 | 0.9631 | 0.5538 |
| 0.5404 | 10.0 | 15 | 1.0177 | 0.5385 |
| 0.5404 | 11.0 | 17 | 1.0124 | 0.5538 |
| 0.5404 | 12.0 | 18 | 0.9905 | 0.5692 |
| 0.5404 | 13.0 | 19 | 0.9473 | 0.6154 |
| 0.5207 | 14.0 | 21 | 0.9549 | 0.6 |
| 0.5207 | 15.0 | 23 | 0.9348 | 0.5846 |
| 0.5207 | 16.0 | 24 | 0.9019 | 0.5846 |
| 0.5207 | 17.0 | 25 | 0.8687 | 0.5846 |
| 0.5207 | 18.0 | 27 | 0.8462 | 0.5846 |
| 0.5207 | 19.0 | 29 | 0.8418 | 0.6154 |
| 0.5146 | 20.0 | 30 | 0.8419 | 0.6 |
| 0.5146 | 21.0 | 31 | 0.8435 | 0.5692 |
| 0.5146 | 22.0 | 33 | 0.8415 | 0.5538 |
| 0.5146 | 23.0 | 35 | 0.8293 | 0.6154 |
| 0.5146 | 24.0 | 36 | 0.8254 | 0.6 |
| 0.5146 | 25.0 | 37 | 0.8219 | 0.6154 |
| 0.5146 | 26.0 | 39 | 0.8195 | 0.6462 |
| 0.4352 | 27.0 | 41 | 0.8192 | 0.6462 |
| 0.4352 | 28.0 | 42 | 0.8198 | 0.6308 |
| 0.4352 | 29.0 | 43 | 0.8230 | 0.6615 |
| 0.4352 | 30.0 | 45 | 0.8264 | 0.6462 |
| 0.4352 | 31.0 | 47 | 0.8268 | 0.6462 |
| 0.4352 | 32.0 | 48 | 0.8266 | 0.6462 |
| 0.4352 | 33.0 | 49 | 0.8263 | 0.6462 |
| 0.4724 | 33.33 | 50 | 0.8263 | 0.6462 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"mild_demented",
"moderate_demented",
"non_demented",
"very_mild_demented"
] |
arpanl/custom
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# custom
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3082
- Accuracy: 0.8922
- F1: 0.7977
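The card reports a single F1 value alongside accuracy without stating the averaging; a sketch of a `compute_metrics` callback that would yield such a pair (macro averaging is an assumption, consistent with F1 sitting well below accuracy on an imbalanced label set):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),  # assumed averaging
    }
```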
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"faces",
"faces_easy",
"beaver",
"yin_yang",
"binocular",
"bonsai",
"brain",
"brontosaurus",
"buddha",
"butterfly",
"camera",
"cannon",
"car_side",
"leopards",
"ceiling_fan",
"cellphone",
"chair",
"chandelier",
"cougar_body",
"cougar_face",
"crab",
"crayfish",
"crocodile",
"crocodile_head",
"motorbikes",
"cup",
"dalmatian",
"dollar_bill",
"dolphin",
"dragonfly",
"electric_guitar",
"elephant",
"emu",
"euphonium",
"ewer",
"accordion",
"ferry",
"flamingo",
"flamingo_head",
"garfield",
"gerenuk",
"gramophone",
"grand_piano",
"hawksbill",
"headphone",
"hedgehog",
"airplanes",
"helicopter",
"ibis",
"inline_skate",
"joshua_tree",
"kangaroo",
"ketch",
"lamp",
"laptop",
"llama",
"lobster",
"anchor",
"lotus",
"mandolin",
"mayfly",
"menorah",
"metronome",
"minaret",
"nautilus",
"octopus",
"okapi",
"pagoda",
"ant",
"panda",
"pigeon",
"pizza",
"platypus",
"pyramid",
"revolver",
"rhino",
"rooster",
"saxophone",
"schooner",
"barrel",
"scissors",
"scorpion",
"sea_horse",
"snoopy",
"soccer_ball",
"stapler",
"starfish",
"stegosaurus",
"stop_sign",
"strawberry",
"bass",
"sunflower",
"tick",
"trilobite",
"umbrella",
"watch",
"water_lilly",
"wheelchair",
"wild_cat",
"windsor_chair",
"wrench"
] |