model_id | model_card | model_labels
---|---|---
Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da-colab
|
# swinv2-tiny-patch4-window8-256-DMAE-da-colab
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9394
- Accuracy: 0.7391
## Model description
More information needed
## Intended uses & limitations
More information needed
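In the absence of documented usage, a minimal sketch with the standard `transformers` image-classification pipeline (the model id is taken from this card; the image path is a placeholder):

```python
from transformers import pipeline

# Load this checkpoint with the generic image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da-colab",
)

# "image.jpg" is a placeholder path to an input image.
predictions = classifier("image.jpg")
print(predictions)  # e.g. [{"label": "no dmae", "score": ...}, ...]
```

The four class labels (`avanzada`, `leve`, `moderada`, `no dmae`) are listed in the model_labels column of this row.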
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
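For reference, a minimal sketch of how the values above map onto `transformers.TrainingArguments` (a reconstruction, not the original training script; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-DMAE-da-colab",  # placeholder
    learning_rate=4e-5,
    per_device_train_batch_size=16,  # train_batch_size above
    per_device_eval_batch_size=16,   # eval_batch_size above
    seed=42,
    gradient_accumulation_steps=4,   # 16 * 4 = 64, the total_train_batch_size above
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio above
    num_train_epochs=40,
)
```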
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.3823 | 0.9565 | 11 | 1.4058 | 0.1957 |
| 1.3366 | 2.0 | 23 | 1.4482 | 0.1957 |
| 1.2352 | 2.9565 | 34 | 1.2309 | 0.4565 |
| 1.1374 | 4.0 | 46 | 1.1031 | 0.6087 |
| 1.0344 | 4.9565 | 57 | 1.0230 | 0.5870 |
| 0.8772 | 6.0 | 69 | 0.9115 | 0.6522 |
| 0.7321 | 6.9565 | 80 | 0.8858 | 0.6522 |
| 0.6319 | 8.0 | 92 | 0.8665 | 0.6522 |
| 0.6438 | 8.9565 | 103 | 0.7738 | 0.7174 |
| 0.4714 | 10.0 | 115 | 0.8492 | 0.6304 |
| 0.433 | 10.9565 | 126 | 0.8386 | 0.6957 |
| 0.4793 | 12.0 | 138 | 0.9394 | 0.7391 |
| 0.4769 | 12.9565 | 149 | 0.9471 | 0.6522 |
| 0.3872 | 14.0 | 161 | 1.1526 | 0.6087 |
| 0.3906 | 14.9565 | 172 | 1.0575 | 0.6522 |
| 0.3798 | 16.0 | 184 | 1.0593 | 0.6957 |
| 0.3377 | 16.9565 | 195 | 1.0783 | 0.6087 |
| 0.3919 | 18.0 | 207 | 1.1067 | 0.6522 |
| 0.3631 | 18.9565 | 218 | 1.1018 | 0.6739 |
| 0.2762 | 20.0 | 230 | 1.1479 | 0.6522 |
| 0.2935 | 20.9565 | 241 | 1.1055 | 0.6957 |
| 0.3029 | 22.0 | 253 | 1.1203 | 0.6739 |
| 0.2857 | 22.9565 | 264 | 1.2820 | 0.6304 |
| 0.2603 | 24.0 | 276 | 1.2550 | 0.6304 |
| 0.2162 | 24.9565 | 287 | 1.1655 | 0.6739 |
| 0.2465 | 26.0 | 299 | 1.2511 | 0.6739 |
| 0.2238 | 26.9565 | 310 | 1.3461 | 0.6304 |
| 0.2271 | 28.0 | 322 | 1.3472 | 0.6304 |
| 0.2694 | 28.9565 | 333 | 1.4501 | 0.6304 |
| 0.1903 | 30.0 | 345 | 1.4629 | 0.6304 |
| 0.2054 | 30.9565 | 356 | 1.4672 | 0.6304 |
| 0.199 | 32.0 | 368 | 1.4725 | 0.6304 |
| 0.2034 | 32.9565 | 379 | 1.4507 | 0.6522 |
| 0.2048 | 34.0 | 391 | 1.4330 | 0.6304 |
| 0.1767 | 34.9565 | 402 | 1.4638 | 0.6304 |
| 0.1799 | 36.0 | 414 | 1.4232 | 0.6304 |
| 0.1903 | 36.9565 | 425 | 1.4508 | 0.6304 |
| 0.1864 | 38.0 | 437 | 1.4460 | 0.6304 |
| 0.1818 | 38.2609 | 440 | 1.4456 | 0.6304 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da2-colab
|
# swinv2-tiny-patch4-window8-256-DMAE-da2-colab
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9368
- Accuracy: 0.7609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.4149 | 0.9565 | 11 | 1.3905 | 0.2174 |
| 1.3431 | 1.9348 | 22 | 1.3828 | 0.3043 |
| 1.2396 | 2.9130 | 33 | 1.2675 | 0.4348 |
| 1.1377 | 3.9783 | 45 | 1.2067 | 0.3478 |
| 1.0144 | 4.9565 | 56 | 0.9060 | 0.6087 |
| 0.9016 | 5.9348 | 67 | 0.8025 | 0.6739 |
| 0.7941 | 6.9130 | 78 | 0.7812 | 0.6957 |
| 0.6986 | 7.9783 | 90 | 0.9441 | 0.5870 |
| 0.6245 | 8.9565 | 101 | 0.8641 | 0.6957 |
| 0.6044 | 9.9348 | 112 | 0.8648 | 0.6087 |
| 0.536 | 10.9130 | 123 | 0.8800 | 0.5870 |
| 0.4825 | 11.9783 | 135 | 0.8388 | 0.7391 |
| 0.4972 | 12.9565 | 146 | 0.8763 | 0.7174 |
| 0.4284 | 13.9348 | 157 | 0.8228 | 0.6957 |
| 0.3961 | 14.9130 | 168 | 0.8260 | 0.7174 |
| 0.3877 | 15.9783 | 180 | 0.9368 | 0.7609 |
| 0.3744 | 16.9565 | 191 | 1.1221 | 0.6304 |
| 0.3266 | 17.9348 | 202 | 1.0177 | 0.6739 |
| 0.3257 | 18.9130 | 213 | 1.0300 | 0.6957 |
| 0.3164 | 19.9783 | 225 | 1.1344 | 0.6957 |
| 0.2965 | 20.9565 | 236 | 0.9283 | 0.7391 |
| 0.293 | 21.9348 | 247 | 1.0128 | 0.6957 |
| 0.2929 | 22.9130 | 258 | 1.0450 | 0.7609 |
| 0.2878 | 23.9783 | 270 | 1.1482 | 0.7174 |
| 0.2447 | 24.9565 | 281 | 1.0716 | 0.7174 |
| 0.2601 | 25.9348 | 292 | 1.0770 | 0.6957 |
| 0.2299 | 26.9130 | 303 | 1.1769 | 0.7391 |
| 0.2401 | 27.9783 | 315 | 1.1407 | 0.7174 |
| 0.2347 | 28.9565 | 326 | 1.1929 | 0.6957 |
| 0.2584 | 29.9348 | 337 | 1.0957 | 0.6739 |
| 0.2204 | 30.9130 | 348 | 1.1721 | 0.6739 |
| 0.2031 | 31.9783 | 360 | 1.0843 | 0.6739 |
| 0.2241 | 32.9565 | 371 | 1.1350 | 0.6957 |
| 0.1798 | 33.9348 | 382 | 1.2419 | 0.6957 |
| 0.2435 | 34.9130 | 393 | 1.1522 | 0.6957 |
| 0.1857 | 35.9783 | 405 | 1.1207 | 0.6957 |
| 0.1889 | 36.9565 | 416 | 1.1711 | 0.6957 |
| 0.2043 | 37.9348 | 427 | 1.1978 | 0.6957 |
| 0.1951 | 38.9130 | 438 | 1.2107 | 0.7174 |
| 0.1901 | 39.1087 | 440 | 1.2108 | 0.7174 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da2-colab2
|
# swinv2-tiny-patch4-window8-256-DMAE-da2-colab2
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7389
- Accuracy: 0.7174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.3473 | 0.9565 | 11 | 1.3763 | 0.2609 |
| 1.2385 | 1.9348 | 22 | 1.2596 | 0.5 |
| 1.1683 | 2.9130 | 33 | 1.2476 | 0.5 |
| 1.0809 | 3.9783 | 45 | 1.1625 | 0.5435 |
| 1.0147 | 4.9565 | 56 | 1.0490 | 0.5435 |
| 0.9755 | 5.9348 | 67 | 0.9155 | 0.6739 |
| 0.89 | 6.9130 | 78 | 0.9343 | 0.6304 |
| 0.832 | 7.9783 | 90 | 0.7973 | 0.6522 |
| 0.7846 | 8.9565 | 101 | 0.8325 | 0.6087 |
| 0.74 | 9.9348 | 112 | 0.8328 | 0.6087 |
| 0.679 | 10.9130 | 123 | 0.7195 | 0.7174 |
| 0.6487 | 11.9783 | 135 | 0.6740 | 0.7826 |
| 0.6774 | 12.9565 | 146 | 0.7157 | 0.6957 |
| 0.5727 | 13.9348 | 157 | 0.6934 | 0.7174 |
| 0.5872 | 14.9130 | 168 | 0.7335 | 0.6739 |
| 0.6116 | 15.9783 | 180 | 0.6892 | 0.7609 |
| 0.5783 | 16.9565 | 191 | 0.6796 | 0.7174 |
| 0.5658 | 17.9348 | 202 | 0.6966 | 0.7609 |
| 0.5447 | 18.9130 | 213 | 0.6708 | 0.7174 |
| 0.5452 | 19.9783 | 225 | 0.7297 | 0.6957 |
| 0.509 | 20.9565 | 236 | 0.7137 | 0.6522 |
| 0.527 | 21.9348 | 247 | 0.7047 | 0.7174 |
| 0.5382 | 22.9130 | 258 | 0.7737 | 0.6739 |
| 0.508 | 23.9783 | 270 | 0.7073 | 0.7174 |
| 0.4579 | 24.9565 | 281 | 0.7355 | 0.6739 |
| 0.492 | 25.9348 | 292 | 0.7152 | 0.7174 |
| 0.4273 | 26.9130 | 303 | 0.7243 | 0.7174 |
| 0.4823 | 27.9783 | 315 | 0.7477 | 0.6739 |
| 0.4659 | 28.9565 | 326 | 0.7193 | 0.6957 |
| 0.4401 | 29.9348 | 337 | 0.7509 | 0.6957 |
| 0.4717 | 30.9130 | 348 | 0.7327 | 0.6957 |
| 0.4139 | 31.9783 | 360 | 0.7041 | 0.7174 |
| 0.4365 | 32.9565 | 371 | 0.7286 | 0.6957 |
| 0.4088 | 33.9348 | 382 | 0.7495 | 0.7174 |
| 0.4505 | 34.9130 | 393 | 0.7399 | 0.7174 |
| 0.4143 | 35.9783 | 405 | 0.7160 | 0.7174 |
| 0.4093 | 36.9565 | 416 | 0.7210 | 0.7174 |
| 0.43 | 37.9348 | 427 | 0.7342 | 0.7174 |
| 0.424 | 38.9130 | 438 | 0.7386 | 0.7174 |
| 0.4169 | 39.1087 | 440 | 0.7389 | 0.7174 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da-colab2
|
# swinv2-tiny-patch4-window8-256-DMAE-da-colab2
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8835
- Accuracy: 0.7609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.357 | 0.9565 | 11 | 1.3906 | 0.3913 |
| 1.2964 | 2.0 | 23 | 1.2819 | 0.4348 |
| 1.1609 | 2.9565 | 34 | 1.1804 | 0.4783 |
| 1.0747 | 4.0 | 46 | 1.0911 | 0.6087 |
| 1.027 | 4.9565 | 57 | 1.0176 | 0.6304 |
| 0.8985 | 6.0 | 69 | 0.8963 | 0.6739 |
| 0.8031 | 6.9565 | 80 | 0.9867 | 0.6739 |
| 0.7744 | 8.0 | 92 | 0.8710 | 0.6522 |
| 0.7488 | 8.9565 | 103 | 0.8845 | 0.6957 |
| 0.6767 | 10.0 | 115 | 0.8693 | 0.6957 |
| 0.6082 | 10.9565 | 126 | 0.8133 | 0.6739 |
| 0.6354 | 12.0 | 138 | 0.8771 | 0.6739 |
| 0.6422 | 12.9565 | 149 | 0.8137 | 0.7174 |
| 0.584 | 14.0 | 161 | 0.8861 | 0.6522 |
| 0.5763 | 14.9565 | 172 | 0.8459 | 0.7391 |
| 0.5238 | 16.0 | 184 | 0.8590 | 0.7174 |
| 0.528 | 16.9565 | 195 | 0.8705 | 0.7174 |
| 0.5626 | 18.0 | 207 | 0.8636 | 0.7174 |
| 0.5395 | 18.9565 | 218 | 0.8794 | 0.6957 |
| 0.4696 | 20.0 | 230 | 0.8835 | 0.7609 |
| 0.488 | 20.9565 | 241 | 0.8889 | 0.7391 |
| 0.4764 | 22.0 | 253 | 0.9109 | 0.7174 |
| 0.4668 | 22.9565 | 264 | 0.8893 | 0.7391 |
| 0.4676 | 24.0 | 276 | 0.9082 | 0.6957 |
| 0.4619 | 24.9565 | 287 | 0.9353 | 0.7174 |
| 0.4727 | 26.0 | 299 | 0.9331 | 0.7174 |
| 0.4461 | 26.9565 | 310 | 0.8937 | 0.7391 |
| 0.428 | 28.0 | 322 | 0.9175 | 0.7174 |
| 0.4694 | 28.9565 | 333 | 0.9340 | 0.6957 |
| 0.3812 | 30.0 | 345 | 0.9722 | 0.6739 |
| 0.4252 | 30.9565 | 356 | 0.9433 | 0.7174 |
| 0.3883 | 32.0 | 368 | 0.9420 | 0.7391 |
| 0.4228 | 32.9565 | 379 | 0.9483 | 0.6739 |
| 0.4288 | 34.0 | 391 | 0.9529 | 0.7174 |
| 0.3982 | 34.9565 | 402 | 0.9506 | 0.7174 |
| 0.3935 | 36.0 | 414 | 0.9539 | 0.6739 |
| 0.3974 | 36.9565 | 425 | 0.9599 | 0.6957 |
| 0.3893 | 38.0 | 437 | 0.9608 | 0.6957 |
| 0.4201 | 38.2609 | 440 | 0.9608 | 0.6957 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
alem-147/poisoned-baseline-vit-base
|
# poisoned-baseline-vit-base
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4929
- Accuracy: 0.8271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
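Here "Native AMP" refers to PyTorch's built-in automatic mixed precision; as a hedged sketch, it is typically enabled via the `fp16` flag of `TrainingArguments` (again a reconstruction, not the original script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="poisoned-baseline-vit-base",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```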
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1789 | 1.0 | 130 | 1.2838 | 0.4887 |
| 0.8468 | 2.0 | 260 | 0.6919 | 0.7068 |
| 0.6958 | 3.0 | 390 | 0.7107 | 0.6842 |
| 0.6643 | 4.0 | 520 | 0.5809 | 0.7744 |
| 0.5287 | 5.0 | 650 | 0.5954 | 0.7444 |
| 0.4707 | 6.0 | 780 | 0.4929 | 0.8271 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
alem-147/poisoned-baseline-vit-base-pretrained
|
# poisoned-baseline-vit-base-pretrained
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3126 | 1.0 | 130 | 0.0421 | 1.0 |
| 0.1395 | 2.0 | 260 | 0.1107 | 0.9699 |
| 0.0526 | 3.0 | 390 | 0.1821 | 0.9474 |
| 0.0228 | 4.0 | 520 | 0.0476 | 0.9850 |
| 0.0141 | 5.0 | 650 | 0.0366 | 0.9925 |
| 0.0036 | 6.0 | 780 | 0.0036 | 1.0 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
alem-147/bad-beans-vit-base
|
# bad-beans-vit-base
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6612
- Accuracy: 0.7143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1432 | 1.0 | 130 | 1.0374 | 0.5338 |
| 0.9194 | 2.0 | 260 | 0.8384 | 0.6165 |
| 0.7836 | 3.0 | 390 | 0.7307 | 0.6617 |
| 0.6775 | 4.0 | 520 | 0.6612 | 0.7143 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
alem-147/poison-distill-ViT
|
# poison-distill-ViT
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -53.3057
- Accuracy: 0.7218
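The negative loss values here (and in the table below) indicate that the objective is not plain cross-entropy; the model name suggests a distillation setup. Purely as a hypothetical illustration, and not the authors' documented objective, a negated teacher-student agreement term is one kind of loss that can grow increasingly negative over training:

```python
import torch

def agreement_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    # Hypothetical distillation-style objective: maximizing the (unnormalized)
    # inner product between student and teacher logits drives the loss toward
    # ever more negative values, consistent with the numbers in this card.
    return -(student_logits * teacher_logits).sum(dim=-1).mean()
```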
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| -13.7561 | 1.0 | 130 | -15.7562 | 0.5789 |
| -20.5455 | 2.0 | 260 | -18.6536 | 0.6165 |
| -25.3189 | 3.0 | 390 | -28.8659 | 0.6241 |
| -32.4562 | 4.0 | 520 | -31.9035 | 0.5940 |
| -37.0539 | 5.0 | 650 | -40.0929 | 0.7068 |
| -43.0244 | 6.0 | 780 | -41.5399 | 0.6466 |
| -46.1567 | 7.0 | 910 | -47.8440 | 0.6692 |
| -51.1963 | 8.0 | 1040 | -51.4154 | 0.6692 |
| -54.7388 | 9.0 | 1170 | -53.5994 | 0.7293 |
| -56.1867 | 10.0 | 1300 | -53.2331 | 0.7293 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
flxowens/celebrity-classifier-alpha-1
|
# celebrity-classifier-alpha-1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5674
- Accuracy: 0.5012
## Model description
More information needed
## Intended uses & limitations
More information needed
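In the absence of documented usage, and given the large number of identity classes, top-k predictions are usually more informative than a single label; a minimal sketch (model id from this card, image path a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="flxowens/celebrity-classifier-alpha-1",
)

# "photo.jpg" is a placeholder; top_k=5 returns the five most likely identities.
for pred in classifier("photo.jpg", top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```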
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.89 | 1.0 | 57 | 6.8778 | 0.0008 |
| 6.7604 | 2.0 | 114 | 6.7367 | 0.0187 |
| 6.5063 | 3.0 | 171 | 6.4866 | 0.0467 |
| 6.2493 | 4.0 | 228 | 6.2322 | 0.0800 |
| 5.9905 | 5.0 | 285 | 6.0155 | 0.1039 |
| 5.7537 | 6.0 | 342 | 5.7997 | 0.1361 |
| 5.5712 | 7.0 | 399 | 5.6379 | 0.1529 |
| 5.384 | 8.0 | 456 | 5.4450 | 0.1936 |
| 5.1517 | 9.0 | 513 | 5.2739 | 0.2150 |
| 4.9379 | 10.0 | 570 | 5.1161 | 0.2530 |
| 4.8069 | 11.0 | 627 | 4.9782 | 0.2673 |
| 4.6418 | 12.0 | 684 | 4.8380 | 0.3005 |
| 4.4666 | 13.0 | 741 | 4.6963 | 0.3132 |
| 4.3445 | 14.0 | 798 | 4.5707 | 0.3346 |
| 4.1866 | 15.0 | 855 | 4.4440 | 0.3660 |
| 4.0571 | 16.0 | 912 | 4.3320 | 0.3926 |
| 3.9432 | 17.0 | 969 | 4.2483 | 0.3899 |
| 3.8203 | 18.0 | 1026 | 4.1406 | 0.4058 |
| 3.7025 | 19.0 | 1083 | 4.0536 | 0.4262 |
| 3.6165 | 20.0 | 1140 | 3.9738 | 0.4311 |
| 3.5122 | 21.0 | 1197 | 3.9039 | 0.4517 |
| 3.4541 | 22.0 | 1254 | 3.8438 | 0.4603 |
| 3.3528 | 23.0 | 1311 | 3.7834 | 0.4625 |
| 3.3077 | 24.0 | 1368 | 3.7017 | 0.4820 |
| 3.263 | 25.0 | 1425 | 3.6716 | 0.4740 |
| 3.2036 | 26.0 | 1482 | 3.6239 | 0.4955 |
| 3.1572 | 27.0 | 1539 | 3.6172 | 0.4927 |
| 3.1123 | 28.0 | 1596 | 3.5982 | 0.5034 |
| 3.0804 | 29.0 | 1653 | 3.5672 | 0.5048 |
| 3.0423 | 30.0 | 1710 | 3.5674 | 0.5012 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"aaron eckhart",
"aaron paul",
"adam driver",
"blake lively",
"bob odenkirk",
"bonnie wright",
"boyd holbrook",
"brad pitt",
"bradley cooper",
"brendan fraser",
"brian cox",
"brie larson",
"brittany snow",
"adam lambert",
"bryan cranston",
"bryce dallas howard",
"busy philipps",
"caitriona balfe",
"cameron diaz",
"camila cabello",
"camila mendes",
"cardi b",
"carey mulligan",
"carla gugino",
"adam levine",
"carrie underwood",
"casey affleck",
"cate blanchett",
"catherine keener",
"catherine zeta-jones",
"celine dion",
"chace crawford",
"chadwick boseman",
"channing tatum",
"charlie cox",
"adam sandler",
"charlie day",
"charlie hunnam",
"charlie plummer",
"charlize theron",
"chiara ferragni",
"chiwetel ejiofor",
"chloe bennet",
"chloe grace moretz",
"chloe sevigny",
"chloë grace moretz",
"adam scott",
"chloë sevigny",
"chris cooper",
"chris evans",
"chris hemsworth",
"chris martin",
"chris messina",
"chris noth",
"chris o'dowd",
"chris pine",
"chris pratt",
"adele",
"chris tucker",
"chrissy teigen",
"christian bale",
"christian slater",
"christina aguilera",
"christina applegate",
"christina hendricks",
"christina milian",
"christina ricci",
"christine baranski",
"adrian grenier",
"christoph waltz",
"christopher plummer",
"christopher walken",
"cillian murphy",
"claire foy",
"clive owen",
"clive standen",
"cobie smulders",
"colin farrell",
"colin firth",
"adèle exarchopoulos",
"colin hanks",
"connie britton",
"conor mcgregor",
"constance wu",
"constance zimmer",
"courteney cox",
"cristiano ronaldo",
"daisy ridley",
"dak prescott",
"dakota fanning",
"aidan gillen",
"dakota johnson",
"damian lewis",
"dan stevens",
"danai gurira",
"dane dehaan",
"daniel craig",
"daniel dae kim",
"daniel day-lewis",
"daniel gillies",
"daniel kaluuya",
"aidan turner",
"daniel mays",
"daniel radcliffe",
"danny devito",
"darren criss",
"dave bautista",
"dave franco",
"dave grohl",
"daveed diggs",
"david attenborough",
"david beckham",
"aaron rodgers",
"aishwarya rai",
"david duchovny",
"david harbour",
"david oyelowo",
"david schwimmer",
"david tennant",
"david thewlis",
"dax shepard",
"debra messing",
"demi lovato",
"dennis quaid",
"aja naomi king",
"denzel washington",
"dermot mulroney",
"dev patel",
"diane keaton",
"diane kruger",
"diane lane",
"diego boneta",
"diego luna",
"djimon hounsou",
"dolly parton",
"alden ehrenreich",
"domhnall gleeson",
"dominic cooper",
"dominic monaghan",
"dominic west",
"don cheadle",
"donald glover",
"donald sutherland",
"donald trump",
"dua lipa",
"dwayne \"the rock\" johnson",
"aldis hodge",
"dwayne johnson",
"dylan o'brien",
"ed harris",
"ed helms",
"ed sheeran",
"eddie murphy",
"eddie redmayne",
"edgar ramirez",
"edward norton",
"eiza gonzalez",
"alec baldwin",
"eiza gonzález",
"elijah wood",
"elisabeth moss",
"elisha cuthbert",
"eliza coupe",
"elizabeth banks",
"elizabeth debicki",
"elizabeth lail",
"elizabeth mcgovern",
"elizabeth moss",
"alex morgan",
"elizabeth olsen",
"elle fanning",
"ellen degeneres",
"ellen page",
"ellen pompeo",
"ellie goulding",
"elon musk",
"emile hirsch",
"emilia clarke",
"emilia fox",
"alex pettyfer",
"emily beecham",
"emily blunt",
"emily browning",
"emily deschanel",
"emily hampshire",
"emily mortimer",
"emily ratajkowski",
"emily vancamp",
"emily watson",
"emma bunton",
"alex rodriguez",
"emma chamberlain",
"emma corrin",
"emma mackey",
"emma roberts",
"emma stone",
"emma thompson",
"emma watson",
"emmanuelle chriqui",
"emmy rossum",
"eoin macken",
"alexander skarsgård",
"eric bana",
"ethan hawke",
"eva green",
"eva longoria",
"eva mendes",
"evan peters",
"evan rachel wood",
"evangeline lilly",
"ewan mcgregor",
"ezra miller",
"alexandra daddario",
"felicity huffman",
"felicity jones",
"finn wolfhard",
"florence pugh",
"florence welch",
"forest whitaker",
"freddie highmore",
"freddie prinze jr.",
"freema agyeman",
"freida pinto",
"aaron taylor-johnson",
"alfre woodard",
"freya allan",
"gabrielle union",
"gael garcia bernal",
"gael garcía bernal",
"gal gadot",
"garrett hedlund",
"gary oldman",
"gemma arterton",
"gemma chan",
"gemma whelan",
"alia shawkat",
"george clooney",
"george lucas",
"gerard butler",
"giancarlo esposito",
"giannis antetokounmpo",
"gigi hadid",
"gillian anderson",
"gillian jacobs",
"gina carano",
"gina gershon",
"alice braga",
"gina rodriguez",
"ginnifer goodwin",
"gisele bundchen",
"glenn close",
"grace kelly",
"greg kinnear",
"greta gerwig",
"greta scacchi",
"greta thunberg",
"gugu mbatha-raw",
"alice eve",
"guy ritchie",
"gwen stefani",
"gwendoline christie",
"gwyneth paltrow",
"hafthor bjornsson",
"hailee steinfeld",
"hailey bieber",
"haley joel osment",
"halle berry",
"hannah simone",
"alicia keys",
"harrison ford",
"harry styles",
"harvey weinstein",
"hayden panettiere",
"hayley atwell",
"helen hunt",
"helen mirren",
"helena bonham carter",
"henry cavill",
"henry golding",
"alicia vikander",
"hilary swank",
"himesh patel",
"hozier",
"hugh bonneville",
"hugh dancy",
"hugh grant",
"hugh jackman",
"hugh laurie",
"ian somerhalder",
"idris elba",
"alison brie",
"imelda staunton",
"imogen poots",
"ioan gruffudd",
"isabella rossellini",
"isabelle huppert",
"isla fisher",
"issa rae",
"iwan rheon",
"j.k. rowling",
"j.k. simmons",
"allison janney",
"jack black",
"jack reynor",
"jack whitehall",
"jackie chan",
"jada pinkett smith",
"jaden smith",
"jaimie alexander",
"jake gyllenhaal",
"jake johnson",
"jake t. austin",
"allison williams",
"james cameron",
"james corden",
"james franco",
"james marsden",
"james mcavoy",
"james norton",
"jamie bell",
"jamie chung",
"jamie dornan",
"jamie foxx",
"alyson hannigan",
"jamie lee curtis",
"jamie oliver",
"jane fonda",
"jane krakowski",
"jane levy",
"jane lynch",
"jane seymour",
"janelle monáe",
"january jones",
"jared leto",
"abbi jacobson",
"amanda peet",
"jason bateman",
"jason clarke",
"jason derulo",
"jason isaacs",
"jason momoa",
"jason mraz",
"jason schwartzman",
"jason segel",
"jason statham",
"jason sudeikis",
"amanda seyfried",
"javier bardem",
"jay baruchel",
"jay-z",
"jeff bezos",
"jeff bridges",
"jeff daniels",
"jeff goldblum",
"jeffrey dean morgan",
"jeffrey donovan",
"jeffrey wright",
"amandla stenberg",
"jemima kirke",
"jenna coleman",
"jenna fischer",
"jenna ortega",
"jennifer aniston",
"jennifer connelly",
"jennifer coolidge",
"jennifer esposito",
"jennifer garner",
"jennifer hudson",
"amber heard",
"jennifer lawrence",
"jennifer lopez",
"jennifer love hewitt",
"jenny slate",
"jeremy irons",
"jeremy renner",
"jeremy strong",
"jerry seinfeld",
"jesse eisenberg",
"jesse metcalfe",
"america ferrera",
"jesse plemons",
"jesse tyler ferguson",
"jesse williams",
"jessica alba",
"jessica biel",
"jessica chastain",
"jessica lange",
"jessie buckley",
"jim carrey",
"jim parsons",
"amy adams",
"joan collins",
"joan cusack",
"joanne froggatt",
"joaquin phoenix",
"jodie comer",
"jodie foster",
"joe jonas",
"joe keery",
"joel edgerton",
"joel kinnaman",
"amy poehler",
"joel mchale",
"john boyega",
"john c. reilly",
"john cena",
"john cho",
"john cleese",
"john corbett",
"john david washington",
"john goodman",
"john hawkes",
"amy schumer",
"john krasinski",
"john legend",
"john leguizamo",
"john lithgow",
"john malkovich",
"john mayer",
"john mulaney",
"john oliver",
"john slattery",
"john travolta",
"ana de armas",
"john turturro",
"johnny depp",
"johnny knoxville",
"jon bernthal",
"jon favreau",
"jon hamm",
"jonah hill",
"jonathan groff",
"jonathan majors",
"jonathan pryce",
"andie macdowell",
"jonathan rhys meyers",
"jordan peele",
"jordana brewster",
"joseph fiennes",
"joseph gordon-levitt",
"josh allen",
"josh brolin",
"josh gad",
"josh hartnett",
"josh hutcherson",
"abhishek bachchan",
"andrew garfield",
"josh radnor",
"jude law",
"judy dench",
"judy greer",
"julia garner",
"julia louis-dreyfus",
"julia roberts",
"julia stiles",
"julian casablancas",
"julian mcmahon",
"andrew lincoln",
"julianna margulies",
"julianne hough",
"julianne moore",
"julianne nicholson",
"juliette binoche",
"juliette lewis",
"juno temple",
"jurnee smollett",
"justin bartha",
"justin bieber",
"andrew scott",
"justin hartley",
"justin herbert",
"justin long",
"justin theroux",
"justin timberlake",
"kj apa",
"kaitlyn dever",
"kaley cuoco",
"kanye west",
"karl urban",
"andy garcia",
"kat dennings",
"kate beckinsale",
"kate bosworth",
"kate hudson",
"kate mara",
"kate middleton",
"kate upton",
"kate walsh",
"kate winslet",
"katee sackhoff",
"andy samberg",
"katherine heigl",
"katherine langford",
"katherine waterston",
"kathryn hahn",
"katie holmes",
"katie mcgrath",
"katy perry",
"kaya scodelario",
"keanu reeves",
"keegan-michael key",
"andy serkis",
"keira knightley",
"keke palmer",
"kelly clarkson",
"kelly macdonald",
"kelly marie tran",
"kelly reilly",
"kelly ripa",
"kelvin harrison jr.",
"keri russell",
"kerry washington",
"angela bassett",
"kevin bacon",
"kevin costner",
"kevin hart",
"kevin spacey",
"ki hong lee",
"kiefer sutherland",
"kieran culkin",
"kiernan shipka",
"kim dickens",
"kim kardashian",
"angelina jolie",
"kirsten dunst",
"kit harington",
"kourtney kardashian",
"kristen bell",
"kristen stewart",
"kristen wiig",
"kristin davis",
"krysten ritter",
"kyle chandler",
"kylie jenner",
"anna camp",
"kylie minogue",
"lady gaga",
"lake bell",
"lakeith stanfield",
"lamar jackson",
"lana del rey",
"laura dern",
"laura harrier",
"laura linney",
"laura prepon",
"anna faris",
"laurence fishburne",
"laverne cox",
"lebron james",
"lea michele",
"lea seydoux",
"lee pace",
"leighton meester",
"lena headey",
"leonardo da vinci",
"leonardo dicaprio",
"abigail breslin",
"anna kendrick",
"leslie mann",
"leslie odom jr.",
"lewis hamilton",
"liam hemsworth",
"liam neeson",
"lili reinhart",
"lily aldridge",
"lily allen",
"lily collins",
"lily james",
"anna paquin",
"lily rabe",
"lily tomlin",
"lin-manuel miranda",
"linda cardellini",
"lionel messi",
"lisa bonet",
"lisa kudrow",
"liv tyler",
"lizzo",
"logan lerman",
"annasophia robb",
"lorde",
"lucy boynton",
"lucy hale",
"lucy lawless",
"lucy liu",
"luke evans",
"luke perry",
"luke wilson",
"lupita nyong'o",
"léa seydoux",
"annabelle wallis",
"mackenzie davis",
"madelaine petsch",
"mads mikkelsen",
"mae whitman",
"maggie gyllenhaal",
"maggie q",
"maggie siff",
"maggie smith",
"mahershala ali",
"mahira khan",
"anne hathaway",
"maisie richardson-sellers",
"maisie williams",
"mandy moore",
"mandy patinkin",
"marc anthony",
"margaret qualley",
"margot robbie",
"maria sharapova",
"marion cotillard",
"marisa tomei",
"anne marie",
"mariska hargitay",
"mark hamill",
"mark ruffalo",
"mark strong",
"mark wahlberg",
"mark zuckerberg",
"marlon brando",
"martin freeman",
"martin scorsese",
"mary elizabeth winstead",
"anne-marie",
"mary j. blige",
"mary steenburgen",
"mary-louise parker",
"matt bomer",
"matt damon",
"matt leblanc",
"matt smith",
"matthew fox",
"matthew goode",
"matthew macfadyen",
"ansel elgort",
"matthew mcconaughey",
"matthew perry",
"matthew rhys",
"matthew stafford",
"max minghella",
"maya angelou",
"maya hawke",
"maya rudolph",
"megan fox",
"megan rapinoe",
"anson mount",
"meghan markle",
"mel gibson",
"melanie lynskey",
"melissa benoist",
"melissa mccarthy",
"melonie diaz",
"meryl streep",
"mia wasikowska",
"michael b. jordan",
"michael c. hall",
"anthony hopkins",
"michael caine",
"michael cera",
"michael cudlitz",
"michael douglas",
"michael ealy",
"michael fassbender",
"michael jordan",
"michael keaton",
"michael pena",
"michael peña",
"abigail spencer",
"anthony joshua",
"michael phelps",
"michael shannon",
"michael sheen",
"michael stuhlbarg",
"michelle dockery",
"michelle monaghan",
"michelle obama",
"michelle pfeiffer",
"michelle rodriguez",
"michelle williams",
"anthony mackie",
"michelle yeoh",
"michiel huisman",
"mila kunis",
"miles teller",
"milla jovovich",
"millie bobby brown",
"milo ventimiglia",
"mindy kaling",
"miranda cosgrove",
"miranda kerr",
"antonio banderas",
"mireille enos",
"molly ringwald",
"morgan freeman",
"mélanie laurent",
"naomi campbell",
"naomi harris",
"naomi scott",
"naomi watts",
"naomie harris",
"nas",
"anya taylor-joy",
"natalie dormer",
"natalie imbruglia",
"natalie morales",
"natalie portman",
"nathalie emmanuel",
"nathalie portman",
"nathan fillion",
"naya rivera",
"neil patrick harris",
"neil degrasse tyson",
"ariana grande",
"neve campbell",
"neymar jr.",
"nicholas braun",
"nicholas hoult",
"nick jonas",
"nick kroll",
"nick offerman",
"nick robinson",
"nicole kidman",
"nikolaj coster-waldau",
"armie hammer",
"nina dobrev",
"noah centineo",
"noomi rapace",
"norman reedus",
"novak djokovic",
"octavia spencer",
"odessa young",
"odette annable",
"olivia colman",
"olivia cooke",
"ashley judd",
"olivia holt",
"olivia munn",
"olivia wilde",
"oprah winfrey",
"orlando bloom",
"oscar isaac",
"owen wilson",
"pablo picasso",
"patrick dempsey",
"patrick mahomes",
"ashton kutcher",
"patrick stewart",
"patrick wilson",
"paul bettany",
"paul dano",
"paul giamatti",
"paul mccartney",
"paul rudd",
"paul wesley",
"paula patton",
"pedro almodóvar",
"aubrey plaza",
"pedro pascal",
"penelope cruz",
"penélope cruz",
"pete davidson",
"peter dinklage",
"phoebe dynevor",
"phoebe waller-bridge",
"pierce brosnan",
"portia de rossi",
"priyanka chopra",
"auli'i cravalho",
"quentin tarantino",
"rachel bilson",
"rachel brosnahan",
"rachel mcadams",
"rachel weisz",
"rafe spall",
"rainn wilson",
"ralph fiennes",
"rami malek",
"rashida jones",
"adam brody",
"awkwafina",
"ray liotta",
"ray romano",
"rebecca ferguson",
"rebecca hall",
"reese witherspoon",
"regina hall",
"regina king",
"renee zellweger",
"renée zellweger",
"rhys ifans",
"barack obama",
"ricardo montalban",
"richard armitage",
"richard gere",
"richard jenkins",
"richard madden",
"ricky gervais",
"ricky martin",
"rihanna",
"riley keough",
"rita ora",
"bella hadid",
"river phoenix",
"riz ahmed",
"rob lowe",
"robert carlyle",
"robert de niro",
"robert downey jr.",
"robert pattinson",
"robert sheehan",
"robin tunney",
"robin williams",
"bella thorne",
"roger federer",
"rooney mara",
"rosamund pike",
"rosario dawson",
"rose byrne",
"rose leslie",
"roselyn sanchez",
"ruby rose",
"rupert grint",
"russell brand",
"ben barnes",
"russell crowe",
"russell wilson",
"ruth bader ginsburg",
"ruth wilson",
"ryan eggold",
"ryan gosling",
"ryan murphy",
"ryan phillippe",
"ryan reynolds",
"ryan seacrest",
"ben mendelsohn",
"salma hayek",
"sam claflin",
"sam heughan",
"sam rockwell",
"sam smith",
"samara weaving",
"samuel l. jackson",
"sandra bullock",
"sandra oh",
"saoirse ronan",
"ben stiller",
"sarah gadon",
"sarah hyland",
"sarah jessica parker",
"sarah michelle gellar",
"sarah paulson",
"sarah silverman",
"sarah wayne callies",
"sasha alexander",
"scarlett johansson",
"scott speedman",
"ben whishaw",
"sean bean",
"sebastian stan",
"selena gomez",
"selma blair",
"serena williams",
"seth macfarlane",
"seth meyers",
"seth rogen",
"shailene woodley",
"shakira",
"benedict cumberbatch",
"shania twain",
"sharlto copley",
"shawn mendes",
"shia labeouf",
"shiri appleby",
"shohreh aghdashloo",
"shonda rhimes",
"sienna miller",
"sigourney weaver",
"simon baker",
"benedict wong",
"simon cowell",
"simon pegg",
"simone biles",
"sofia boutella",
"sofia vergara",
"sophie turner",
"sophie wessex",
"stanley tucci",
"stephen amell",
"stephen colbert",
"adam devine",
"benicio del toro",
"stephen curry",
"stephen dorff",
"sterling k. brown",
"sterling knight",
"steve carell",
"steven yeun",
"susan sarandon",
"taika waititi",
"taraji p. henson",
"taron egerton",
"bill gates",
"taylor hill",
"taylor kitsch",
"taylor lautner",
"taylor schilling",
"taylor swift",
"teresa palmer",
"terrence howard",
"tessa thompson",
"thandie newton",
"the weeknd",
"bill hader",
"theo james",
"thomas brodie-sangster",
"thomas jane",
"tiger woods",
"tilda swinton",
"tim burton",
"tim cook",
"timothee chalamet",
"timothy olyphant",
"timothy spall",
"bill murray",
"timothée chalamet",
"tina fey",
"tobey maguire",
"toby jones",
"toby kebbell",
"toby regbo",
"tom brady",
"tom brokaw",
"tom cavanagh",
"tom cruise",
"bill pullman",
"tom ellis",
"tom felton",
"tom hanks",
"tom hardy",
"tom hiddleston",
"tom holland",
"tom hollander",
"tom hopper",
"tom selleck",
"toni collette",
"bill skarsgård",
"tony hale",
"topher grace",
"tracee ellis ross",
"tyra banks",
"tyrese gibson",
"uma thurman",
"usain bolt",
"uzo aduba",
"vanessa hudgens",
"vanessa kirby",
"billie eilish",
"vera farmiga",
"victoria pedretti",
"viggo mortensen",
"vin diesel",
"vince vaughn",
"vincent cassel",
"vincent d'onofrio",
"vincent kartheiser",
"viola davis",
"walton goggins",
"billie lourd",
"wes anderson",
"wes bentley",
"whoopi goldberg",
"will ferrell",
"will poulter",
"willem dafoe",
"william jackson harper",
"william shatner",
"winona ryder",
"woody harrelson",
"billy crudup",
"yara shahidi",
"yvonne strahovski",
"zac efron",
"zach braff",
"zach galifianakis",
"zachary levi",
"zachary quinto",
"zayn malik",
"zazie beetz",
"zendaya",
"billy porter",
"zoe kazan",
"zoe kravitz",
"zoe saldana",
"zoey deutch",
"zooey deschanel",
"zoë kravitz",
"zoë saldana"
] |
flxowens/celebrity-classifier-alpha-2
|
# celebrity-classifier-alpha-2
This model was trained on the [tonyassi/celebrity-1000](https://huggingface.co/datasets/tonyassi/celebrity-1000) dataset, using [flxowens/celebrity-classifier-alpha-1](https://huggingface.co/flxowens/celebrity-classifier-alpha-1) as the base model.
It achieves the following results on the evaluation set:
- Loss: 1.1460
- Accuracy: 0.8155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
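The dataset and base checkpoint are named at the top of this card; a minimal sketch of loading both (the fine-tuning loop itself is omitted):

```python
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Dataset and starting checkpoint as named in this card.
dataset = load_dataset("tonyassi/celebrity-1000")
processor = AutoImageProcessor.from_pretrained("flxowens/celebrity-classifier-alpha-1")
model = AutoModelForImageClassification.from_pretrained(
    "flxowens/celebrity-classifier-alpha-1"
)
# From here, training proceeds with Trainer/TrainingArguments as usual.
```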
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.1688 | 1.0 | 57 | 3.2075 | 0.6368 |
| 3.1011 | 2.0 | 114 | 3.1265 | 0.6516 |
| 2.9635 | 3.0 | 171 | 2.9998 | 0.6582 |
| 2.7566 | 4.0 | 228 | 2.8278 | 0.6483 |
| 2.627 | 5.0 | 285 | 2.6790 | 0.6714 |
| 2.4471 | 6.0 | 342 | 2.5465 | 0.6761 |
| 2.2209 | 7.0 | 399 | 2.3950 | 0.6915 |
| 2.065 | 8.0 | 456 | 2.2217 | 0.7182 |
| 1.8663 | 9.0 | 513 | 2.1201 | 0.7190 |
| 1.7552 | 10.0 | 570 | 2.0301 | 0.7239 |
| 1.6236 | 11.0 | 627 | 1.9579 | 0.7294 |
| 1.4972 | 12.0 | 684 | 1.8372 | 0.7435 |
| 1.4122 | 13.0 | 741 | 1.7470 | 0.7503 |
| 1.3002 | 14.0 | 798 | 1.6856 | 0.7534 |
| 1.2374 | 15.0 | 855 | 1.5974 | 0.7718 |
| 1.1495 | 16.0 | 912 | 1.5241 | 0.7787 |
| 1.103 | 17.0 | 969 | 1.4876 | 0.7721 |
| 1.0296 | 18.0 | 1026 | 1.4428 | 0.7789 |
| 1.0221 | 19.0 | 1083 | 1.3996 | 0.7899 |
| 0.9271 | 20.0 | 1140 | 1.3016 | 0.8084 |
| 0.8718 | 21.0 | 1197 | 1.3076 | 0.7998 |
| 0.8373 | 22.0 | 1254 | 1.3225 | 0.7891 |
| 0.8346 | 23.0 | 1311 | 1.2529 | 0.8007 |
| 0.7973 | 24.0 | 1368 | 1.1711 | 0.8188 |
| 0.794 | 25.0 | 1425 | 1.1997 | 0.8084 |
| 0.7688 | 26.0 | 1482 | 1.1541 | 0.8174 |
| 0.7452 | 27.0 | 1539 | 1.1727 | 0.8133 |
| 0.7457 | 28.0 | 1596 | 1.1591 | 0.8122 |
| 0.7496 | 29.0 | 1653 | 1.1205 | 0.8177 |
| 0.707 | 30.0 | 1710 | 1.1460 | 0.8155 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"aaron eckhart",
"aaron paul",
"adam driver",
"blake lively",
"bob odenkirk",
"bonnie wright",
"boyd holbrook",
"brad pitt",
"bradley cooper",
"brendan fraser",
"brian cox",
"brie larson",
"brittany snow",
"adam lambert",
"bryan cranston",
"bryce dallas howard",
"busy philipps",
"caitriona balfe",
"cameron diaz",
"camila cabello",
"camila mendes",
"cardi b",
"carey mulligan",
"carla gugino",
"adam levine",
"carrie underwood",
"casey affleck",
"cate blanchett",
"catherine keener",
"catherine zeta-jones",
"celine dion",
"chace crawford",
"chadwick boseman",
"channing tatum",
"charlie cox",
"adam sandler",
"charlie day",
"charlie hunnam",
"charlie plummer",
"charlize theron",
"chiara ferragni",
"chiwetel ejiofor",
"chloe bennet",
"chloe grace moretz",
"chloe sevigny",
"chloë grace moretz",
"adam scott",
"chloë sevigny",
"chris cooper",
"chris evans",
"chris hemsworth",
"chris martin",
"chris messina",
"chris noth",
"chris o'dowd",
"chris pine",
"chris pratt",
"adele",
"chris tucker",
"chrissy teigen",
"christian bale",
"christian slater",
"christina aguilera",
"christina applegate",
"christina hendricks",
"christina milian",
"christina ricci",
"christine baranski",
"adrian grenier",
"christoph waltz",
"christopher plummer",
"christopher walken",
"cillian murphy",
"claire foy",
"clive owen",
"clive standen",
"cobie smulders",
"colin farrell",
"colin firth",
"adèle exarchopoulos",
"colin hanks",
"connie britton",
"conor mcgregor",
"constance wu",
"constance zimmer",
"courteney cox",
"cristiano ronaldo",
"daisy ridley",
"dak prescott",
"dakota fanning",
"aidan gillen",
"dakota johnson",
"damian lewis",
"dan stevens",
"danai gurira",
"dane dehaan",
"daniel craig",
"daniel dae kim",
"daniel day-lewis",
"daniel gillies",
"daniel kaluuya",
"aidan turner",
"daniel mays",
"daniel radcliffe",
"danny devito",
"darren criss",
"dave bautista",
"dave franco",
"dave grohl",
"daveed diggs",
"david attenborough",
"david beckham",
"aaron rodgers",
"aishwarya rai",
"david duchovny",
"david harbour",
"david oyelowo",
"david schwimmer",
"david tennant",
"david thewlis",
"dax shepard",
"debra messing",
"demi lovato",
"dennis quaid",
"aja naomi king",
"denzel washington",
"dermot mulroney",
"dev patel",
"diane keaton",
"diane kruger",
"diane lane",
"diego boneta",
"diego luna",
"djimon hounsou",
"dolly parton",
"alden ehrenreich",
"domhnall gleeson",
"dominic cooper",
"dominic monaghan",
"dominic west",
"don cheadle",
"donald glover",
"donald sutherland",
"donald trump",
"dua lipa",
"dwayne \"the rock\" johnson",
"aldis hodge",
"dwayne johnson",
"dylan o'brien",
"ed harris",
"ed helms",
"ed sheeran",
"eddie murphy",
"eddie redmayne",
"edgar ramirez",
"edward norton",
"eiza gonzalez",
"alec baldwin",
"eiza gonzález",
"elijah wood",
"elisabeth moss",
"elisha cuthbert",
"eliza coupe",
"elizabeth banks",
"elizabeth debicki",
"elizabeth lail",
"elizabeth mcgovern",
"elizabeth moss",
"alex morgan",
"elizabeth olsen",
"elle fanning",
"ellen degeneres",
"ellen page",
"ellen pompeo",
"ellie goulding",
"elon musk",
"emile hirsch",
"emilia clarke",
"emilia fox",
"alex pettyfer",
"emily beecham",
"emily blunt",
"emily browning",
"emily deschanel",
"emily hampshire",
"emily mortimer",
"emily ratajkowski",
"emily vancamp",
"emily watson",
"emma bunton",
"alex rodriguez",
"emma chamberlain",
"emma corrin",
"emma mackey",
"emma roberts",
"emma stone",
"emma thompson",
"emma watson",
"emmanuelle chriqui",
"emmy rossum",
"eoin macken",
"alexander skarsgård",
"eric bana",
"ethan hawke",
"eva green",
"eva longoria",
"eva mendes",
"evan peters",
"evan rachel wood",
"evangeline lilly",
"ewan mcgregor",
"ezra miller",
"alexandra daddario",
"felicity huffman",
"felicity jones",
"finn wolfhard",
"florence pugh",
"florence welch",
"forest whitaker",
"freddie highmore",
"freddie prinze jr.",
"freema agyeman",
"freida pinto",
"aaron taylor-johnson",
"alfre woodard",
"freya allan",
"gabrielle union",
"gael garcia bernal",
"gael garcía bernal",
"gal gadot",
"garrett hedlund",
"gary oldman",
"gemma arterton",
"gemma chan",
"gemma whelan",
"alia shawkat",
"george clooney",
"george lucas",
"gerard butler",
"giancarlo esposito",
"giannis antetokounmpo",
"gigi hadid",
"gillian anderson",
"gillian jacobs",
"gina carano",
"gina gershon",
"alice braga",
"gina rodriguez",
"ginnifer goodwin",
"gisele bundchen",
"glenn close",
"grace kelly",
"greg kinnear",
"greta gerwig",
"greta scacchi",
"greta thunberg",
"gugu mbatha-raw",
"alice eve",
"guy ritchie",
"gwen stefani",
"gwendoline christie",
"gwyneth paltrow",
"hafthor bjornsson",
"hailee steinfeld",
"hailey bieber",
"haley joel osment",
"halle berry",
"hannah simone",
"alicia keys",
"harrison ford",
"harry styles",
"harvey weinstein",
"hayden panettiere",
"hayley atwell",
"helen hunt",
"helen mirren",
"helena bonham carter",
"henry cavill",
"henry golding",
"alicia vikander",
"hilary swank",
"himesh patel",
"hozier",
"hugh bonneville",
"hugh dancy",
"hugh grant",
"hugh jackman",
"hugh laurie",
"ian somerhalder",
"idris elba",
"alison brie",
"imelda staunton",
"imogen poots",
"ioan gruffudd",
"isabella rossellini",
"isabelle huppert",
"isla fisher",
"issa rae",
"iwan rheon",
"j.k. rowling",
"j.k. simmons",
"allison janney",
"jack black",
"jack reynor",
"jack whitehall",
"jackie chan",
"jada pinkett smith",
"jaden smith",
"jaimie alexander",
"jake gyllenhaal",
"jake johnson",
"jake t. austin",
"allison williams",
"james cameron",
"james corden",
"james franco",
"james marsden",
"james mcavoy",
"james norton",
"jamie bell",
"jamie chung",
"jamie dornan",
"jamie foxx",
"alyson hannigan",
"jamie lee curtis",
"jamie oliver",
"jane fonda",
"jane krakowski",
"jane levy",
"jane lynch",
"jane seymour",
"janelle monáe",
"january jones",
"jared leto",
"abbi jacobson",
"amanda peet",
"jason bateman",
"jason clarke",
"jason derulo",
"jason isaacs",
"jason momoa",
"jason mraz",
"jason schwartzman",
"jason segel",
"jason statham",
"jason sudeikis",
"amanda seyfried",
"javier bardem",
"jay baruchel",
"jay-z",
"jeff bezos",
"jeff bridges",
"jeff daniels",
"jeff goldblum",
"jeffrey dean morgan",
"jeffrey donovan",
"jeffrey wright",
"amandla stenberg",
"jemima kirke",
"jenna coleman",
"jenna fischer",
"jenna ortega",
"jennifer aniston",
"jennifer connelly",
"jennifer coolidge",
"jennifer esposito",
"jennifer garner",
"jennifer hudson",
"amber heard",
"jennifer lawrence",
"jennifer lopez",
"jennifer love hewitt",
"jenny slate",
"jeremy irons",
"jeremy renner",
"jeremy strong",
"jerry seinfeld",
"jesse eisenberg",
"jesse metcalfe",
"america ferrera",
"jesse plemons",
"jesse tyler ferguson",
"jesse williams",
"jessica alba",
"jessica biel",
"jessica chastain",
"jessica lange",
"jessie buckley",
"jim carrey",
"jim parsons",
"amy adams",
"joan collins",
"joan cusack",
"joanne froggatt",
"joaquin phoenix",
"jodie comer",
"jodie foster",
"joe jonas",
"joe keery",
"joel edgerton",
"joel kinnaman",
"amy poehler",
"joel mchale",
"john boyega",
"john c. reilly",
"john cena",
"john cho",
"john cleese",
"john corbett",
"john david washington",
"john goodman",
"john hawkes",
"amy schumer",
"john krasinski",
"john legend",
"john leguizamo",
"john lithgow",
"john malkovich",
"john mayer",
"john mulaney",
"john oliver",
"john slattery",
"john travolta",
"ana de armas",
"john turturro",
"johnny depp",
"johnny knoxville",
"jon bernthal",
"jon favreau",
"jon hamm",
"jonah hill",
"jonathan groff",
"jonathan majors",
"jonathan pryce",
"andie macdowell",
"jonathan rhys meyers",
"jordan peele",
"jordana brewster",
"joseph fiennes",
"joseph gordon-levitt",
"josh allen",
"josh brolin",
"josh gad",
"josh hartnett",
"josh hutcherson",
"abhishek bachchan",
"andrew garfield",
"josh radnor",
"jude law",
"judy dench",
"judy greer",
"julia garner",
"julia louis-dreyfus",
"julia roberts",
"julia stiles",
"julian casablancas",
"julian mcmahon",
"andrew lincoln",
"julianna margulies",
"julianne hough",
"julianne moore",
"julianne nicholson",
"juliette binoche",
"juliette lewis",
"juno temple",
"jurnee smollett",
"justin bartha",
"justin bieber",
"andrew scott",
"justin hartley",
"justin herbert",
"justin long",
"justin theroux",
"justin timberlake",
"kj apa",
"kaitlyn dever",
"kaley cuoco",
"kanye west",
"karl urban",
"andy garcia",
"kat dennings",
"kate beckinsale",
"kate bosworth",
"kate hudson",
"kate mara",
"kate middleton",
"kate upton",
"kate walsh",
"kate winslet",
"katee sackhoff",
"andy samberg",
"katherine heigl",
"katherine langford",
"katherine waterston",
"kathryn hahn",
"katie holmes",
"katie mcgrath",
"katy perry",
"kaya scodelario",
"keanu reeves",
"keegan-michael key",
"andy serkis",
"keira knightley",
"keke palmer",
"kelly clarkson",
"kelly macdonald",
"kelly marie tran",
"kelly reilly",
"kelly ripa",
"kelvin harrison jr.",
"keri russell",
"kerry washington",
"angela bassett",
"kevin bacon",
"kevin costner",
"kevin hart",
"kevin spacey",
"ki hong lee",
"kiefer sutherland",
"kieran culkin",
"kiernan shipka",
"kim dickens",
"kim kardashian",
"angelina jolie",
"kirsten dunst",
"kit harington",
"kourtney kardashian",
"kristen bell",
"kristen stewart",
"kristen wiig",
"kristin davis",
"krysten ritter",
"kyle chandler",
"kylie jenner",
"anna camp",
"kylie minogue",
"lady gaga",
"lake bell",
"lakeith stanfield",
"lamar jackson",
"lana del rey",
"laura dern",
"laura harrier",
"laura linney",
"laura prepon",
"anna faris",
"laurence fishburne",
"laverne cox",
"lebron james",
"lea michele",
"lea seydoux",
"lee pace",
"leighton meester",
"lena headey",
"leonardo da vinci",
"leonardo dicaprio",
"abigail breslin",
"anna kendrick",
"leslie mann",
"leslie odom jr.",
"lewis hamilton",
"liam hemsworth",
"liam neeson",
"lili reinhart",
"lily aldridge",
"lily allen",
"lily collins",
"lily james",
"anna paquin",
"lily rabe",
"lily tomlin",
"lin-manuel miranda",
"linda cardellini",
"lionel messi",
"lisa bonet",
"lisa kudrow",
"liv tyler",
"lizzo",
"logan lerman",
"annasophia robb",
"lorde",
"lucy boynton",
"lucy hale",
"lucy lawless",
"lucy liu",
"luke evans",
"luke perry",
"luke wilson",
"lupita nyong'o",
"léa seydoux",
"annabelle wallis",
"mackenzie davis",
"madelaine petsch",
"mads mikkelsen",
"mae whitman",
"maggie gyllenhaal",
"maggie q",
"maggie siff",
"maggie smith",
"mahershala ali",
"mahira khan",
"anne hathaway",
"maisie richardson-sellers",
"maisie williams",
"mandy moore",
"mandy patinkin",
"marc anthony",
"margaret qualley",
"margot robbie",
"maria sharapova",
"marion cotillard",
"marisa tomei",
"anne marie",
"mariska hargitay",
"mark hamill",
"mark ruffalo",
"mark strong",
"mark wahlberg",
"mark zuckerberg",
"marlon brando",
"martin freeman",
"martin scorsese",
"mary elizabeth winstead",
"anne-marie",
"mary j. blige",
"mary steenburgen",
"mary-louise parker",
"matt bomer",
"matt damon",
"matt leblanc",
"matt smith",
"matthew fox",
"matthew goode",
"matthew macfadyen",
"ansel elgort",
"matthew mcconaughey",
"matthew perry",
"matthew rhys",
"matthew stafford",
"max minghella",
"maya angelou",
"maya hawke",
"maya rudolph",
"megan fox",
"megan rapinoe",
"anson mount",
"meghan markle",
"mel gibson",
"melanie lynskey",
"melissa benoist",
"melissa mccarthy",
"melonie diaz",
"meryl streep",
"mia wasikowska",
"michael b. jordan",
"michael c. hall",
"anthony hopkins",
"michael caine",
"michael cera",
"michael cudlitz",
"michael douglas",
"michael ealy",
"michael fassbender",
"michael jordan",
"michael keaton",
"michael pena",
"michael peña",
"abigail spencer",
"anthony joshua",
"michael phelps",
"michael shannon",
"michael sheen",
"michael stuhlbarg",
"michelle dockery",
"michelle monaghan",
"michelle obama",
"michelle pfeiffer",
"michelle rodriguez",
"michelle williams",
"anthony mackie",
"michelle yeoh",
"michiel huisman",
"mila kunis",
"miles teller",
"milla jovovich",
"millie bobby brown",
"milo ventimiglia",
"mindy kaling",
"miranda cosgrove",
"miranda kerr",
"antonio banderas",
"mireille enos",
"molly ringwald",
"morgan freeman",
"mélanie laurent",
"naomi campbell",
"naomi harris",
"naomi scott",
"naomi watts",
"naomie harris",
"nas",
"anya taylor-joy",
"natalie dormer",
"natalie imbruglia",
"natalie morales",
"natalie portman",
"nathalie emmanuel",
"nathalie portman",
"nathan fillion",
"naya rivera",
"neil patrick harris",
"neil degrasse tyson",
"ariana grande",
"neve campbell",
"neymar jr.",
"nicholas braun",
"nicholas hoult",
"nick jonas",
"nick kroll",
"nick offerman",
"nick robinson",
"nicole kidman",
"nikolaj coster-waldau",
"armie hammer",
"nina dobrev",
"noah centineo",
"noomi rapace",
"norman reedus",
"novak djokovic",
"octavia spencer",
"odessa young",
"odette annable",
"olivia colman",
"olivia cooke",
"ashley judd",
"olivia holt",
"olivia munn",
"olivia wilde",
"oprah winfrey",
"orlando bloom",
"oscar isaac",
"owen wilson",
"pablo picasso",
"patrick dempsey",
"patrick mahomes",
"ashton kutcher",
"patrick stewart",
"patrick wilson",
"paul bettany",
"paul dano",
"paul giamatti",
"paul mccartney",
"paul rudd",
"paul wesley",
"paula patton",
"pedro almodóvar",
"aubrey plaza",
"pedro pascal",
"penelope cruz",
"penélope cruz",
"pete davidson",
"peter dinklage",
"phoebe dynevor",
"phoebe waller-bridge",
"pierce brosnan",
"portia de rossi",
"priyanka chopra",
"auli'i cravalho",
"quentin tarantino",
"rachel bilson",
"rachel brosnahan",
"rachel mcadams",
"rachel weisz",
"rafe spall",
"rainn wilson",
"ralph fiennes",
"rami malek",
"rashida jones",
"adam brody",
"awkwafina",
"ray liotta",
"ray romano",
"rebecca ferguson",
"rebecca hall",
"reese witherspoon",
"regina hall",
"regina king",
"renee zellweger",
"renée zellweger",
"rhys ifans",
"barack obama",
"ricardo montalban",
"richard armitage",
"richard gere",
"richard jenkins",
"richard madden",
"ricky gervais",
"ricky martin",
"rihanna",
"riley keough",
"rita ora",
"bella hadid",
"river phoenix",
"riz ahmed",
"rob lowe",
"robert carlyle",
"robert de niro",
"robert downey jr.",
"robert pattinson",
"robert sheehan",
"robin tunney",
"robin williams",
"bella thorne",
"roger federer",
"rooney mara",
"rosamund pike",
"rosario dawson",
"rose byrne",
"rose leslie",
"roselyn sanchez",
"ruby rose",
"rupert grint",
"russell brand",
"ben barnes",
"russell crowe",
"russell wilson",
"ruth bader ginsburg",
"ruth wilson",
"ryan eggold",
"ryan gosling",
"ryan murphy",
"ryan phillippe",
"ryan reynolds",
"ryan seacrest",
"ben mendelsohn",
"salma hayek",
"sam claflin",
"sam heughan",
"sam rockwell",
"sam smith",
"samara weaving",
"samuel l. jackson",
"sandra bullock",
"sandra oh",
"saoirse ronan",
"ben stiller",
"sarah gadon",
"sarah hyland",
"sarah jessica parker",
"sarah michelle gellar",
"sarah paulson",
"sarah silverman",
"sarah wayne callies",
"sasha alexander",
"scarlett johansson",
"scott speedman",
"ben whishaw",
"sean bean",
"sebastian stan",
"selena gomez",
"selma blair",
"serena williams",
"seth macfarlane",
"seth meyers",
"seth rogen",
"shailene woodley",
"shakira",
"benedict cumberbatch",
"shania twain",
"sharlto copley",
"shawn mendes",
"shia labeouf",
"shiri appleby",
"shohreh aghdashloo",
"shonda rhimes",
"sienna miller",
"sigourney weaver",
"simon baker",
"benedict wong",
"simon cowell",
"simon pegg",
"simone biles",
"sofia boutella",
"sofia vergara",
"sophie turner",
"sophie wessex",
"stanley tucci",
"stephen amell",
"stephen colbert",
"adam devine",
"benicio del toro",
"stephen curry",
"stephen dorff",
"sterling k. brown",
"sterling knight",
"steve carell",
"steven yeun",
"susan sarandon",
"taika waititi",
"taraji p. henson",
"taron egerton",
"bill gates",
"taylor hill",
"taylor kitsch",
"taylor lautner",
"taylor schilling",
"taylor swift",
"teresa palmer",
"terrence howard",
"tessa thompson",
"thandie newton",
"the weeknd",
"bill hader",
"theo james",
"thomas brodie-sangster",
"thomas jane",
"tiger woods",
"tilda swinton",
"tim burton",
"tim cook",
"timothee chalamet",
"timothy olyphant",
"timothy spall",
"bill murray",
"timothée chalamet",
"tina fey",
"tobey maguire",
"toby jones",
"toby kebbell",
"toby regbo",
"tom brady",
"tom brokaw",
"tom cavanagh",
"tom cruise",
"bill pullman",
"tom ellis",
"tom felton",
"tom hanks",
"tom hardy",
"tom hiddleston",
"tom holland",
"tom hollander",
"tom hopper",
"tom selleck",
"toni collette",
"bill skarsgård",
"tony hale",
"topher grace",
"tracee ellis ross",
"tyra banks",
"tyrese gibson",
"uma thurman",
"usain bolt",
"uzo aduba",
"vanessa hudgens",
"vanessa kirby",
"billie eilish",
"vera farmiga",
"victoria pedretti",
"viggo mortensen",
"vin diesel",
"vince vaughn",
"vincent cassel",
"vincent d'onofrio",
"vincent kartheiser",
"viola davis",
"walton goggins",
"billie lourd",
"wes anderson",
"wes bentley",
"whoopi goldberg",
"will ferrell",
"will poulter",
"willem dafoe",
"william jackson harper",
"william shatner",
"winona ryder",
"woody harrelson",
"billy crudup",
"yara shahidi",
"yvonne strahovski",
"zac efron",
"zach braff",
"zach galifianakis",
"zachary levi",
"zachary quinto",
"zayn malik",
"zazie beetz",
"zendaya",
"billy porter",
"zoe kazan",
"zoe kravitz",
"zoe saldana",
"zoey deutch",
"zooey deschanel",
"zoë kravitz",
"zoë saldana"
] |
alem-147/poison-distill-vit-imagenet-teacher
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poison-distill-vit-imagenet-teacher
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -1.3063
- Accuracy: 0.5038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| -0.1813 | 1.0 | 130 | -0.9692 | 0.5113 |
| -0.3694 | 2.0 | 260 | -0.9944 | 0.5263 |
| -0.3258 | 3.0 | 390 | -1.2729 | 0.3835 |
| -0.4198 | 4.0 | 520 | -1.1187 | 0.5038 |
| -0.5634 | 5.0 | 650 | -1.3899 | 0.5338 |
| -0.8886 | 6.0 | 780 | -0.4891 | 0.4962 |
| -1.0453 | 7.0 | 910 | -1.5857 | 0.4812 |
| -1.5477 | 8.0 | 1040 | -1.5516 | 0.4887 |
| -1.5745 | 9.0 | 1170 | -1.7739 | 0.4737 |
| -1.7883 | 10.0 | 1300 | -1.6647 | 0.4586 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
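The card ships no usage snippet. Assuming the checkpoint loads with the standard 🤗 image-classification pipeline (the label list after this card suggests a three-class bean-leaf task despite the "imagenet" name), a minimal sketch; the image path is a placeholder:
```python
from transformers import pipeline

# Hypothetical usage sketch; "leaf.jpg" is a placeholder image path.
classifier = pipeline(
    "image-classification",
    model="alem-147/poison-distill-vit-imagenet-teacher",
)
print(classifier("leaf.jpg"))
```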
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
alem-147/poison-distill-vit-lowperf-teacher
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poison-distill-vit-lowperf-teacher
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -147.2981
- Accuracy: 0.6692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| -41.2606 | 1.0 | 130 | -56.4288 | 0.4511 |
| -63.2817 | 2.0 | 260 | -71.5028 | 0.5940 |
| -79.4609 | 3.0 | 390 | -89.7701 | 0.5714 |
| -95.0787 | 4.0 | 520 | -104.8132 | 0.6241 |
| -108.1566 | 5.0 | 650 | -113.9035 | 0.6090 |
| -119.6772 | 6.0 | 780 | -127.5839 | 0.6090 |
| -128.6957 | 7.0 | 910 | -135.8344 | 0.5865 |
| -135.677 | 8.0 | 1040 | -141.0722 | 0.5564 |
| -140.7586 | 9.0 | 1170 | -145.1082 | 0.6466 |
| -143.6635 | 10.0 | 1300 | -147.5504 | 0.6617 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
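The consistently negative train/validation losses point to a non-standard (likely distillation) objective, so raw scores should be read with care. A minimal inference sketch, assuming the checkpoint loads through the standard Auto classes; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "alem-147/poison-distill-vit-lowperf-teacher"  # assumed Auto-loadable
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("leaf.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```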
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
omidmns/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Accuracy: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2692 | 1.0 | 190 | 0.1522 | 0.9489 |
| 0.1389 | 2.0 | 380 | 0.1087 | 0.96 |
| 0.1385 | 3.0 | 570 | 0.0713 | 0.9763 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
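The hyperparameter list above maps one-to-one onto 🤗 `TrainingArguments`; a sketch of that mapping (`output_dir` is a placeholder, and any argument not listed keeps its library default):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```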
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
ricardoSLabs/cidaut_version_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cidaut_version_1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0103
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9524 | 5 | 0.5816 | 0.6790 |
| 0.578 | 1.9048 | 10 | 0.2481 | 0.9506 |
| 0.578 | 2.8571 | 15 | 0.0853 | 0.9877 |
| 0.1389 | 4.0 | 21 | 0.0528 | 0.9877 |
| 0.1389 | 4.9524 | 26 | 0.0357 | 0.9877 |
| 0.0891 | 5.9048 | 31 | 0.0606 | 0.9815 |
| 0.0891 | 6.8571 | 36 | 0.0949 | 0.9753 |
| 0.1003 | 7.6190 | 40 | 0.0103 | 1.0 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
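For inference, the label list after this card indicates a binary `editada`/`real` decision; a minimal pipeline sketch, with a placeholder image path:
```python
from transformers import pipeline

detector = pipeline("image-classification", model="ricardoSLabs/cidaut_version_1")
print(detector("scene.jpg"))  # placeholder path; returns "editada" / "real" scores
```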
|
[
"editada",
"real"
] |
Swapnil949/vit-finetuned-cifar100
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
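Pending the authors' own snippet, a minimal sketch under the assumption that the checkpoint works with the standard image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="Swapnil949/vit-finetuned-cifar100")
# The label list after this card uses the numeric strings "0".."99", so
# mapping indices to CIFAR-100 class names is left to the caller.
print(clf("image.png"))  # placeholder path
```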
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10",
"11",
"12",
"13",
"14",
"15",
"16",
"17",
"18",
"19",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"27",
"28",
"29",
"30",
"31",
"32",
"33",
"34",
"35",
"36",
"37",
"38",
"39",
"40",
"41",
"42",
"43",
"44",
"45",
"46",
"47",
"48",
"49",
"50",
"51",
"52",
"53",
"54",
"55",
"56",
"57",
"58",
"59",
"60",
"61",
"62",
"63",
"64",
"65",
"66",
"67",
"68",
"69",
"70",
"71",
"72",
"73",
"74",
"75",
"76",
"77",
"78",
"79",
"80",
"81",
"82",
"83",
"84",
"85",
"86",
"87",
"88",
"89",
"90",
"91",
"92",
"93",
"94",
"95",
"96",
"97",
"98",
"99"
] |
Swapnil949/mambavision-finetuned-cifar100
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
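Pending the authors' own snippet, a hedged sketch: MambaVision backbones usually ship custom modeling code, so `trust_remote_code=True` is an assumption here rather than something the card confirms:
```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "Swapnil949/mambavision-finetuned-cifar100",
    trust_remote_code=True,  # assumption: custom MambaVision code on the Hub
)
```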
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10",
"11",
"12",
"13",
"14",
"15",
"16",
"17",
"18",
"19",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"27",
"28",
"29",
"30",
"31",
"32",
"33",
"34",
"35",
"36",
"37",
"38",
"39",
"40",
"41",
"42",
"43",
"44",
"45",
"46",
"47",
"48",
"49",
"50",
"51",
"52",
"53",
"54",
"55",
"56",
"57",
"58",
"59",
"60",
"61",
"62",
"63",
"64",
"65",
"66",
"67",
"68",
"69",
"70",
"71",
"72",
"73",
"74",
"75",
"76",
"77",
"78",
"79",
"80",
"81",
"82",
"83",
"84",
"85",
"86",
"87",
"88",
"89",
"90",
"91",
"92",
"93",
"94",
"95",
"96",
"97",
"98",
"99"
] |
tbjohnson123/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.20.4
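A minimal inference sketch, assuming the standard pipeline applies (the label list after this card covers five flower classes); the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="tbjohnson123/vit-base-patch16-224-finetuned-flower",
)
print(clf("flower.jpg"))  # "daisy", "dandelion", "roses", "sunflowers", "tulips"
```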
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-001
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-001
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the dungeon-geo-morphs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0863
- Accuracy: 0.9697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3958 | 4.0 | 10 | 0.9619 | 0.8384 |
| 0.4596 | 8.0 | 20 | 0.3876 | 0.9333 |
| 0.1012 | 12.0 | 30 | 0.1928 | 0.9616 |
| 0.022 | 16.0 | 40 | 0.1181 | 0.9636 |
| 0.0066 | 20.0 | 50 | 0.0936 | 0.9677 |
| 0.0036 | 24.0 | 60 | 0.0863 | 0.9697 |
| 0.0028 | 28.0 | 70 | 0.0848 | 0.9697 |
| 0.0025 | 32.0 | 80 | 0.0826 | 0.9697 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
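A top-3 inference sketch, assuming the checkpoint loads through the standard Auto classes; the same pattern applies to the -002/-003/-004 sibling checkpoints below. The image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-001"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("morph.png").convert("RGB")  # placeholder path
with torch.no_grad():
    logits = model(**processor(images=image, return_tensors="pt")).logits
probs = logits.softmax(-1)[0]
for p, i in zip(*probs.topk(3)):
    print(model.config.id2label[i.item()], f"{p.item():.3f}")
```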
|
[
"four",
"one",
"three",
"two",
"zero"
] |
griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-002
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-002
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1107
- Accuracy: 0.9657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4876 | 4.0 | 10 | 1.1611 | 0.6545 |
| 0.6201 | 8.0 | 20 | 0.5442 | 0.9152 |
| 0.1543 | 12.0 | 30 | 0.2724 | 0.9556 |
| 0.0344 | 16.0 | 40 | 0.1593 | 0.9636 |
| 0.0095 | 20.0 | 50 | 0.1314 | 0.9657 |
| 0.0047 | 24.0 | 60 | 0.1091 | 0.9657 |
| 0.0033 | 28.0 | 70 | 0.1139 | 0.9636 |
| 0.0029 | 32.0 | 80 | 0.1107 | 0.9657 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"four",
"one",
"three",
"two",
"zero"
] |
griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-003
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-003
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the dungeon-geo-morphs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0725
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5126 | 4.0 | 10 | 1.3321 | 0.6384 |
| 1.049 | 8.0 | 20 | 0.7613 | 0.8444 |
| 0.5397 | 12.0 | 30 | 0.4086 | 0.9434 |
| 0.2381 | 16.0 | 40 | 0.2025 | 0.9899 |
| 0.1152 | 20.0 | 50 | 0.1160 | 0.9899 |
| 0.058 | 24.0 | 60 | 0.0725 | 1.0 |
| 0.0392 | 28.0 | 70 | 0.0678 | 0.9899 |
| 0.026 | 32.0 | 80 | 0.0444 | 0.9960 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"four",
"one",
"three",
"two",
"zero"
] |
griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-004
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch16-224-in21k-dungeon-geo-morphs-denoised-04Dec24-004
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the dungeon-geo-morphs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0486
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4428 | 4.0 | 10 | 1.0774 | 0.7192 |
| 0.8313 | 8.0 | 20 | 0.5764 | 0.9010 |
| 0.3864 | 12.0 | 30 | 0.2752 | 0.9576 |
| 0.164 | 16.0 | 40 | 0.1299 | 0.9879 |
| 0.08 | 20.0 | 50 | 0.0736 | 0.9960 |
| 0.046 | 24.0 | 60 | 0.0486 | 1.0 |
| 0.0243 | 28.0 | 70 | 0.0369 | 0.9939 |
| 0.0194 | 32.0 | 80 | 0.0254 | 1.0 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"four",
"one",
"three",
"two",
"zero"
] |
CristianR8/resnet-50-cocoa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-cocoa
This model is a fine-tuned version of [google/efficientnet-b0](https://huggingface.co/google/efficientnet-b0) on the SemilleroCV/Cocoa-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2657
- Accuracy: 0.9097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0627 | 1.0 | 196 | 1.5223 | 0.5596 |
| 0.591 | 2.0 | 392 | 0.8975 | 0.8303 |
| 0.6623 | 3.0 | 588 | 0.6564 | 0.8773 |
| 0.4874 | 4.0 | 784 | 0.6842 | 0.8339 |
| 0.4671 | 5.0 | 980 | 0.4894 | 0.8809 |
| 0.5623 | 6.0 | 1176 | 0.4160 | 0.8736 |
| 0.3917 | 7.0 | 1372 | 0.4022 | 0.8845 |
| 0.3153 | 8.0 | 1568 | 0.4939 | 0.8412 |
| 0.5814 | 9.0 | 1764 | 0.3540 | 0.8773 |
| 0.5883 | 10.0 | 1960 | 0.3493 | 0.8953 |
| 0.4616 | 11.0 | 2156 | 0.7928 | 0.7762 |
| 0.499 | 12.0 | 2352 | 2.0659 | 0.2960 |
| 0.2236 | 13.0 | 2548 | 0.4444 | 0.8520 |
| 0.2083 | 14.0 | 2744 | 0.4640 | 0.8736 |
| 0.3408 | 15.0 | 2940 | 0.3775 | 0.8773 |
| 0.3529 | 16.0 | 3136 | 0.3519 | 0.8881 |
| 0.3859 | 17.0 | 3332 | 0.3310 | 0.9061 |
| 0.3557 | 18.0 | 3528 | 0.3475 | 0.8917 |
| 0.4979 | 19.0 | 3724 | 0.3839 | 0.8592 |
| 0.7133 | 20.0 | 3920 | 0.3032 | 0.9134 |
| 0.4489 | 21.0 | 4116 | 0.4246 | 0.8520 |
| 0.2605 | 22.0 | 4312 | 0.2951 | 0.8989 |
| 0.3787 | 23.0 | 4508 | 0.4357 | 0.8520 |
| 0.3015 | 24.0 | 4704 | 0.3990 | 0.8917 |
| 0.1965 | 25.0 | 4900 | 0.3536 | 0.9097 |
| 0.3903 | 26.0 | 5096 | 0.4166 | 0.8592 |
| 0.1902 | 27.0 | 5292 | 0.4354 | 0.8520 |
| 0.2089 | 28.0 | 5488 | 0.4089 | 0.8592 |
| 0.3574 | 29.0 | 5684 | 0.4787 | 0.8231 |
| 0.3532 | 30.0 | 5880 | 0.3165 | 0.9097 |
| 0.2967 | 31.0 | 6076 | 0.3105 | 0.9134 |
| 0.2364 | 32.0 | 6272 | 0.3560 | 0.9061 |
| 0.3136 | 33.0 | 6468 | 0.2657 | 0.9097 |
| 0.4061 | 34.0 | 6664 | 0.2680 | 0.9134 |
| 0.3296 | 35.0 | 6860 | 0.3798 | 0.9061 |
| 0.2905 | 36.0 | 7056 | 0.5098 | 0.8556 |
| 0.2763 | 37.0 | 7252 | 0.4219 | 0.8809 |
| 0.2454 | 38.0 | 7448 | 0.2852 | 0.9134 |
| 0.6077 | 39.0 | 7644 | 0.3603 | 0.8989 |
| 0.1966 | 40.0 | 7840 | 0.3519 | 0.8736 |
| 0.2473 | 41.0 | 8036 | 0.3343 | 0.9025 |
| 0.2795 | 42.0 | 8232 | 0.3384 | 0.9170 |
| 0.1249 | 43.0 | 8428 | 0.4046 | 0.8773 |
| 0.2943 | 44.0 | 8624 | 0.3953 | 0.8917 |
| 0.3002 | 45.0 | 8820 | 0.5003 | 0.8592 |
| 0.1525 | 46.0 | 9016 | 0.3232 | 0.9170 |
| 0.4022 | 47.0 | 9212 | 0.3113 | 0.9170 |
| 0.4994 | 48.0 | 9408 | 0.4494 | 0.8556 |
| 0.6512 | 49.0 | 9604 | 0.3722 | 0.9206 |
| 0.3152 | 50.0 | 9800 | 0.2852 | 0.9097 |
| 0.1165 | 51.0 | 9996 | 0.4138 | 0.8628 |
| 0.216 | 52.0 | 10192 | 0.3413 | 0.8953 |
| 0.1455 | 53.0 | 10388 | 0.3046 | 0.9170 |
| 0.554 | 54.0 | 10584 | 0.2849 | 0.8989 |
| 0.3586 | 55.0 | 10780 | 0.3517 | 0.9134 |
| 0.2239 | 56.0 | 10976 | 0.4538 | 0.9025 |
| 0.1725 | 57.0 | 11172 | 0.4492 | 0.8592 |
| 0.4689 | 58.0 | 11368 | 0.4739 | 0.8628 |
| 0.3565 | 59.0 | 11564 | 0.2831 | 0.9206 |
| 0.2259 | 60.0 | 11760 | 0.3465 | 0.9206 |
| 0.2212 | 61.0 | 11956 | 0.2884 | 0.9314 |
| 0.2648 | 62.0 | 12152 | 0.4875 | 0.8448 |
| 0.3438 | 63.0 | 12348 | 0.3989 | 0.9061 |
| 0.4785 | 64.0 | 12544 | 0.5953 | 0.8520 |
| 0.06 | 65.0 | 12740 | 0.2954 | 0.9278 |
| 0.1965 | 66.0 | 12936 | 0.5033 | 0.8520 |
| 0.3548 | 67.0 | 13132 | 0.4132 | 0.8809 |
| 0.1279 | 68.0 | 13328 | 0.3743 | 0.9170 |
| 0.2879 | 69.0 | 13524 | 0.6423 | 0.7762 |
| 0.1757 | 70.0 | 13720 | 0.5979 | 0.8014 |
| 0.3338 | 71.0 | 13916 | 0.4398 | 0.8989 |
| 0.1604 | 72.0 | 14112 | 0.5634 | 0.8231 |
| 0.1078 | 73.0 | 14308 | 0.6204 | 0.7762 |
| 0.258 | 74.0 | 14504 | 0.3685 | 0.8953 |
| 0.1227 | 75.0 | 14700 | 0.7026 | 0.8159 |
| 0.2257 | 76.0 | 14896 | 0.4048 | 0.9170 |
| 0.1786 | 77.0 | 15092 | 0.4891 | 0.8845 |
| 0.2006 | 78.0 | 15288 | 0.4216 | 0.8773 |
| 0.3144 | 79.0 | 15484 | 0.2721 | 0.8953 |
| 0.1969 | 80.0 | 15680 | 0.4270 | 0.8484 |
| 0.1405 | 81.0 | 15876 | 0.7632 | 0.7834 |
| 0.1427 | 82.0 | 16072 | 0.3249 | 0.9025 |
| 0.2493 | 83.0 | 16268 | 0.3838 | 0.8989 |
| 0.331 | 84.0 | 16464 | 0.3330 | 0.9206 |
| 0.1231 | 85.0 | 16660 | 0.3246 | 0.8700 |
| 0.2781 | 86.0 | 16856 | 0.3710 | 0.8736 |
| 0.7193 | 87.0 | 17052 | 0.3384 | 0.9061 |
| 0.1149 | 88.0 | 17248 | 0.3703 | 0.9097 |
| 0.0269 | 89.0 | 17444 | 0.5013 | 0.8592 |
| 0.0967 | 90.0 | 17640 | 0.3456 | 0.8989 |
| 0.177 | 91.0 | 17836 | 0.3799 | 0.8881 |
| 0.1917 | 92.0 | 18032 | 0.3239 | 0.9061 |
| 0.2082 | 93.0 | 18228 | 0.4861 | 0.8989 |
| 0.3836 | 94.0 | 18424 | 0.4444 | 0.8736 |
| 0.1 | 95.0 | 18620 | 0.3713 | 0.8845 |
| 0.1785 | 96.0 | 18816 | 0.4279 | 0.8303 |
| 0.19 | 97.0 | 19012 | 0.6588 | 0.8412 |
| 0.099 | 98.0 | 19208 | 0.6632 | 0.8267 |
| 0.1467 | 99.0 | 19404 | 0.4642 | 0.8809 |
| 0.2617 | 100.0 | 19600 | 0.3624 | 0.8809 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
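A minimal pipeline sketch for the six cocoa-bean classes listed after this card (Spanish label names); the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="CristianR8/resnet-50-cocoa")
print(clf("cocoa_bean.jpg"))  # e.g. "fermentado", "hongo", "violeta", ...
```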
|
[
"fermentado",
"hongo",
"insecto",
"insufi_fermen",
"pizarroso",
"violeta"
] |
AdityasArsenal/finetuned-for-YogaPoses
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-for-YogaPoses
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2234
- Accuracy: 0.9463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2812 | 1.8868 | 100 | 0.1830 | 0.9329 |
| 0.1828 | 3.7736 | 200 | 0.2234 | 0.9463 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
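A minimal inference sketch for the five pose classes listed after this card, assuming the standard pipeline; the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="AdityasArsenal/finetuned-for-YogaPoses")
print(clf("pose.jpg"))  # "downdog", "goddess", "plank", "tree", "warrior2"
```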
|
[
"downdog",
"goddess",
"plank",
"tree",
"warrior2"
] |
LambrightBrandeis/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.4631 | 1.0 |
| No log | 2.0 | 2 | 0.3182 | 1.0 |
| No log | 3.0 | 3 | 0.2630 | 1.0 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
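Given one optimizer step per epoch at an effective batch size of 128, the dataset is evidently tiny, so the perfect accuracy above should be read accordingly. A minimal sketch, with a placeholder image path:
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="LambrightBrandeis/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(clf("photo.jpg"))  # binary "book" / "dog" labels
```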
|
[
"book",
"dog"
] |
zubairsalman7/xray_vit
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-xray-tumor
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the chest-xray-tumor dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2989
- Accuracy: 0.9574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.5283 | 3.6765 | 125 | 0.2948 | 0.9606 |
| 0.516 | 7.3529 | 250 | 0.2843 | 0.9601 |
| 0.4878 | 11.0294 | 375 | 0.2756 | 0.9601 |
| 0.459 | 14.7059 | 500 | 0.2801 | 0.9601 |
| 0.4462 | 18.3824 | 625 | 0.2761 | 0.9595 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
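A minimal inference sketch for the binary tumor / no-tumor labels, assuming the standard pipeline; the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="zubairsalman7/xray_vit")
print(clf("chest_xray.png"))  # "no tumor" / "tumor" scores
```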
|
[
"no tumor",
"tumor"
] |
Tianmu28/mammals_multiclass_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mammals_multiclass_classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2452
- Accuracy: 0.9496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5501 | 1.0 | 394 | 0.3697 | 0.9207 |
| 0.0757 | 2.0 | 788 | 0.2894 | 0.9311 |
| 0.034 | 3.0 | 1182 | 0.2865 | 0.9304 |
| 0.0043 | 4.0 | 1576 | 0.2610 | 0.9385 |
| 0.0024 | 5.0 | 1970 | 0.2526 | 0.9415 |
| 0.0007 | 6.0 | 2364 | 0.2452 | 0.9496 |
| 0.0006 | 7.0 | 2758 | 0.2432 | 0.9481 |
| 0.0004 | 8.0 | 3152 | 0.2442 | 0.9481 |
| 0.0004 | 9.0 | 3546 | 0.2484 | 0.9496 |
| 0.0003 | 10.0 | 3940 | 0.2545 | 0.9467 |
| 0.0003 | 11.0 | 4334 | 0.2543 | 0.9481 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
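Note that the exported labels are the placeholders `label_0`..`label_44`, so predictions need an external index-to-species mapping. A minimal sketch, assuming the standard pipeline; the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="Tianmu28/mammals_multiclass_classification")
preds = clf("mammal.jpg", top_k=3)
print(preds)  # labels come back as "label_<i>" placeholders
```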
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44"
] |
anonymous-429/osf-swin-base-patch4-window7-cifar10
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
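Pending the authors' snippet, a minimal sketch assuming the standard pipeline; the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="anonymous-429/osf-swin-base-patch4-window7-cifar10",
)
print(clf("cifar_image.png"))  # ten CIFAR-10 labels, listed after this card
```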
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
anonymous-429/osf-swinv2-base-patch4-window7-cifar100
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
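Pending the authors' snippet, a sketch through the Auto classes, which are assumed (not confirmed) to apply; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "anonymous-429/osf-swinv2-base-patch4-window7-cifar100"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("cifar_image.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
print(model.config.id2label[probs.argmax().item()], probs.max().item())
```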
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"apple",
"aquarium_fish",
"baby",
"bear",
"beaver",
"bed",
"bee",
"beetle",
"bicycle",
"bottle",
"bowl",
"boy",
"bridge",
"bus",
"butterfly",
"camel",
"can",
"castle",
"caterpillar",
"cattle",
"chair",
"chimpanzee",
"clock",
"cloud",
"cockroach",
"couch",
"cra",
"crocodile",
"cup",
"dinosaur",
"dolphin",
"elephant",
"flatfish",
"forest",
"fox",
"girl",
"hamster",
"house",
"kangaroo",
"keyboard",
"lamp",
"lawn_mower",
"leopard",
"lion",
"lizard",
"lobster",
"man",
"maple_tree",
"motorcycle",
"mountain",
"mouse",
"mushroom",
"oak_tree",
"orange",
"orchid",
"otter",
"palm_tree",
"pear",
"pickup_truck",
"pine_tree",
"plain",
"plate",
"poppy",
"porcupine",
"possum",
"rabbit",
"raccoon",
"ray",
"road",
"rocket",
"rose",
"sea",
"seal",
"shark",
"shrew",
"skunk",
"skyscraper",
"snail",
"snake",
"spider",
"squirrel",
"streetcar",
"sunflower",
"sweet_pepper",
"table",
"tank",
"telephone",
"television",
"tiger",
"tractor",
"train",
"trout",
"tulip",
"turtle",
"wardrobe",
"whale",
"willow_tree",
"wolf",
"woman",
"worm"
] |
anonymous-429/osf-vit-base-patch16-224-cifar10
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
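Pending the authors' snippet, a minimal sketch assuming the standard pipeline; the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="anonymous-429/osf-vit-base-patch16-224-cifar10")
print(clf("cifar_image.png", top_k=3))  # top-3 of the ten CIFAR-10 classes
```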
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
anonymous-429/osf-vit-base-patch16-224-imagenet
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
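The card leaves this blank. Assuming the checkpoint exposes the standard `AutoModelForImageClassification` interface (the image path is illustrative), a minimal sketch might be:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "anonymous-429/osf-vit-base-patch16-224-imagenet"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label maps the argmax index back to one of the class names listed below.
print(model.config.id2label[logits.argmax(-1).item()])
```
The predicted label should be one of the 1000 ImageNet class names enumerated at the end of this card.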
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane2",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
anonymous-429/osf-vit-base-patch16-224-cifar100
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
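The card leaves this blank. Under the same assumption that the checkpoint follows the standard image-classification interface, a short hedged sketch using the pipeline API:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="anonymous-429/osf-vit-base-patch16-224-cifar100",
)

# top_k=5 returns the five most likely of the 100 CIFAR-100 classes listed below.
for prediction in classifier("example.jpg", top_k=5):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```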
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"apple",
"aquarium_fish",
"baby",
"bear",
"beaver",
"bed",
"bee",
"beetle",
"bicycle",
"bottle",
"bowl",
"boy",
"bridge",
"bus",
"butterfly",
"camel",
"can",
"castle",
"caterpillar",
"cattle",
"chair",
"chimpanzee",
"clock",
"cloud",
"cockroach",
"couch",
"cra",
"crocodile",
"cup",
"dinosaur",
"dolphin",
"elephant",
"flatfish",
"forest",
"fox",
"girl",
"hamster",
"house",
"kangaroo",
"keyboard",
"lamp",
"lawn_mower",
"leopard",
"lion",
"lizard",
"lobster",
"man",
"maple_tree",
"motorcycle",
"mountain",
"mouse",
"mushroom",
"oak_tree",
"orange",
"orchid",
"otter",
"palm_tree",
"pear",
"pickup_truck",
"pine_tree",
"plain",
"plate",
"poppy",
"porcupine",
"possum",
"rabbit",
"raccoon",
"ray",
"road",
"rocket",
"rose",
"sea",
"seal",
"shark",
"shrew",
"skunk",
"skyscraper",
"snail",
"snake",
"spider",
"squirrel",
"streetcar",
"sunflower",
"sweet_pepper",
"table",
"tank",
"telephone",
"television",
"tiger",
"tractor",
"train",
"trout",
"tulip",
"turtle",
"wardrobe",
"whale",
"willow_tree",
"wolf",
"woman",
"worm"
] |
Krishnamsai/vit-base-patch16-224-finetuned-skin
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-skin
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8163
- Accuracy: 0.8138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
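For reference, these settings can be reconstructed as 🤗 `TrainingArguments`; the `output_dir` value is an assumption, not part of the card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-finetuned-skin",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = total train batch size of 128
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```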
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.3407 | 0.9811 | 13 | 1.1039 | 0.7447 |
| 1.0256 | 1.9623 | 26 | 0.8735 | 0.8138 |
| 0.8621 | 2.9434 | 39 | 0.8163 | 0.8138 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"blister_aug",
"carbuncle_aug",
"cold sore_aug",
"contact dermatitis_aug",
"cellulitis_aug",
"eczema_aug",
"flat_warts_aug",
"freckles_aug",
"lupus_aug",
"measles_aug",
"periungal_wart_aug",
"rosacea_aug",
"scabies_aug",
"vitiligo_aug",
"impetigo_aug"
] |
RohitG009/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7298
- Accuracy: 0.7736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7786 | 1.0 | 3077 | 0.8279 | 0.7250 |
| 0.4657 | 2.0 | 6154 | 0.7298 | 0.7736 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"3",
"0",
"1",
"4",
"5",
"6",
"2"
] |
sksatyam/finetuned-websites
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-websites
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the finetuned-websites dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8349
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
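As with the card above, these settings can be sketched as 🤗 `TrainingArguments`; the `output_dir` is assumed, everything else mirrors the list:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-websites",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 8 x 2 = total train batch size of 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
)
```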
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6802 | 4.1667 | 100 | 1.2561 | 0.5735 |
| 0.3727 | 8.3333 | 200 | 0.8349 | 0.75 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"banks",
"lokaly",
"shopping_sites",
"social_media",
"vis",
"vscode",
"youtube"
] |
rohan4s/finetuned-traditional-food-vit
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-traditional-food-vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0747
- Accuracy: 0.9890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0964 | 3.125 | 100 | 0.0747 | 0.9890 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"bakarkhani",
"flattened rice",
"jalebi",
"morobba"
] |
rohan4s/finetuned-indian-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0439
- Accuracy: 0.9890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"bakarkhani",
"flattened rice",
"jalebi",
"morobba"
] |
Newvel/face_age_detection_base_v2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# face_age_detection_base_v2
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0879
- Accuracy: 0.9702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1243 | 0.9968 | 157 | 0.1266 | 0.9556 |
| 0.1147 | 1.9952 | 314 | 0.1105 | 0.9648 |
| 0.0909 | 2.9937 | 471 | 0.1035 | 0.9660 |
| 0.0647 | 3.9921 | 628 | 0.0879 | 0.9702 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"not an adult",
"adult"
] |
omidmns/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0393
- Accuracy: 0.9916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1829 | 1.0 | 2869 | 0.1319 | 0.9686 |
| 0.1706 | 2.0 | 5738 | 0.0846 | 0.9795 |
| 0.0941 | 3.0 | 8607 | 0.0590 | 0.9862 |
| 0.0977 | 4.0 | 11476 | 0.0447 | 0.9906 |
| 0.1617 | 5.0 | 14345 | 0.0393 | 0.9916 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
LaLegumbreArtificial/NEO_MUL_EXP2_0
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEO_MUL_EXP2_0
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0688
- Accuracy: 0.9725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2069 | 0.9886 | 65 | 0.1702 | 0.9383 |
| 0.1063 | 1.9924 | 131 | 0.0850 | 0.9733 |
| 0.0888 | 2.9962 | 197 | 0.0946 | 0.9683 |
| 0.0869 | 4.0 | 263 | 0.0700 | 0.9717 |
| 0.0582 | 4.9430 | 325 | 0.0688 | 0.9725 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"burn through",
"contamination",
"good weld",
"lack of fusion",
"lack of penetration",
"misalignment"
] |
Tianmu28/city_multiclass_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# city_multiclass_classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2000
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6756 | 1.0 | 53 | 1.2548 | 0.8389 |
| 0.3699 | 2.0 | 106 | 0.3864 | 0.9667 |
| 0.0426 | 3.0 | 159 | 0.1737 | 0.9889 |
| 0.0101 | 4.0 | 212 | 0.1243 | 0.9889 |
| 0.0062 | 5.0 | 265 | 0.1115 | 0.9889 |
| 0.0046 | 6.0 | 318 | 0.1028 | 0.9889 |
| 0.0037 | 7.0 | 371 | 0.0979 | 0.9889 |
| 0.0034 | 8.0 | 424 | 0.0928 | 0.9889 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29"
] |
LaLegumbreArtificial/NEO_MUL_EXP2_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEO_MUL_EXP2_1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0441
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1651 | 0.9886 | 65 | 0.2185 | 0.9233 |
| 0.1203 | 1.9924 | 131 | 0.1108 | 0.9583 |
| 0.0871 | 2.9962 | 197 | 0.0879 | 0.9692 |
| 0.0738 | 4.0 | 263 | 0.0665 | 0.9742 |
| 0.0614 | 4.9430 | 325 | 0.0441 | 0.9833 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"burn through",
"contamination",
"good weld",
"lack of fusion",
"lack of penetration",
"misalignment"
] |
LaLegumbreArtificial/NEO_MUL_EXP2_2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEO_MUL_EXP2_2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0441
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1651 | 0.9886 | 65 | 0.2185 | 0.9233 |
| 0.1203 | 1.9924 | 131 | 0.1108 | 0.9583 |
| 0.0871 | 2.9962 | 197 | 0.0879 | 0.9692 |
| 0.0738 | 4.0 | 263 | 0.0665 | 0.9742 |
| 0.0614 | 4.9430 | 325 | 0.0441 | 0.9833 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"burn through",
"contamination",
"good weld",
"lack of fusion",
"lack of penetration",
"misalignment"
] |
LaLegumbreArtificial/NEO_MUL_EXP2_3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEO_MUL_EXP2_3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0653
- Accuracy: 0.9783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2483 | 0.9886 | 65 | 0.2490 | 0.91 |
| 0.1398 | 1.9924 | 131 | 0.1117 | 0.9608 |
| 0.0826 | 2.9962 | 197 | 0.0947 | 0.965 |
| 0.0682 | 4.0 | 263 | 0.0607 | 0.9783 |
| 0.0704 | 4.9430 | 325 | 0.0653 | 0.9783 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"burn through",
"contamination",
"good weld",
"lack of fusion",
"lack of penetration",
"misalignment"
] |
LaLegumbreArtificial/NEO_MUL_EXP2_4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEO_MUL_EXP2_4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0441
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1651 | 0.9886 | 65 | 0.2185 | 0.9233 |
| 0.1203 | 1.9924 | 131 | 0.1108 | 0.9583 |
| 0.0871 | 2.9962 | 197 | 0.0879 | 0.9692 |
| 0.0738 | 4.0 | 263 | 0.0665 | 0.9742 |
| 0.0614 | 4.9430 | 325 | 0.0441 | 0.9833 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"burn through",
"contamination",
"good weld",
"lack of fusion",
"lack of penetration",
"misalignment"
] |
Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da3-colab
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-DMAE-da3-colab
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3992
- Accuracy: 0.3696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:--------:|:----:|:---------------:|:--------:|
| 1.3523 | 0.9778 | 22 | 1.4024 | 0.3261 |
| 1.3805 | 2.0 | 45 | 1.3775 | 0.2609 |
| 1.3221 | 2.9778 | 67 | 1.4419 | 0.3043 |
| 1.297 | 4.0 | 90 | 1.3582 | 0.3261 |
| 1.353 | 4.9778 | 112 | 1.3406 | 0.3478 |
| 1.2627 | 6.0 | 135 | 1.3824 | 0.1522 |
| 1.3006 | 6.9778 | 157 | 1.4008 | 0.1522 |
| 1.2438 | 8.0 | 180 | 1.3769 | 0.3261 |
| 1.222 | 8.9778 | 202 | 1.4212 | 0.3043 |
| 1.2221 | 10.0 | 225 | 1.4223 | 0.2391 |
| 1.2262 | 10.9778 | 247 | 1.4154 | 0.2609 |
| 1.2381 | 12.0 | 270 | 1.3327 | 0.2391 |
| 1.227 | 12.9778 | 292 | 1.2887 | 0.2826 |
| 1.2158 | 14.0 | 315 | 1.3465 | 0.2609 |
| 1.2174 | 14.9778 | 337 | 1.3476 | 0.3043 |
| 1.1767 | 16.0 | 360 | 1.4024 | 0.1957 |
| 1.2067 | 16.9778 | 382 | 1.3664 | 0.1739 |
| 1.2303 | 18.0 | 405 | 1.4260 | 0.2826 |
| 1.222 | 18.9778 | 427 | 1.4807 | 0.1739 |
| 1.2026 | 20.0 | 450 | 1.3851 | 0.1739 |
| 1.2185 | 20.9778 | 472 | 1.3214 | 0.2609 |
| 1.2773 | 22.0 | 495 | 1.4404 | 0.1957 |
| 1.227 | 22.9778 | 517 | 1.4535 | 0.2391 |
| 1.2032 | 24.0 | 540 | 1.3967 | 0.3043 |
| 1.2223 | 24.9778 | 562 | 1.4090 | 0.3261 |
| 1.2527 | 26.0 | 585 | 1.4858 | 0.2609 |
| 1.2203 | 26.9778 | 607 | 1.4366 | 0.1739 |
| 1.1993 | 28.0 | 630 | 1.4056 | 0.2609 |
| 1.2014 | 28.9778 | 652 | 1.3755 | 0.3043 |
| 1.2027 | 30.0 | 675 | 1.4579 | 0.2609 |
| 1.1961 | 30.9778 | 697 | 1.4524 | 0.2609 |
| 1.1939 | 32.0 | 720 | 1.4488 | 0.2391 |
| 1.1889 | 32.9778 | 742 | 1.4568 | 0.1522 |
| 1.1871 | 34.0 | 765 | 1.3814 | 0.3261 |
| 1.1778 | 34.9778 | 787 | 1.4403 | 0.1304 |
| 1.2404 | 36.0 | 810 | 1.4437 | 0.1957 |
| 1.197 | 36.9778 | 832 | 1.4765 | 0.2174 |
| 1.2161 | 38.0 | 855 | 1.3720 | 0.2391 |
| 1.221 | 38.9778 | 877 | 1.3750 | 0.3478 |
| 1.229 | 40.0 | 900 | 1.3405 | 0.2391 |
| 1.2046 | 40.9778 | 922 | 1.4231 | 0.2609 |
| 1.2077 | 42.0 | 945 | 1.4384 | 0.2391 |
| 1.1865 | 42.9778 | 967 | 1.4346 | 0.2609 |
| 1.1882 | 44.0 | 990 | 1.3679 | 0.2826 |
| 1.2528 | 44.9778 | 1012 | 1.3451 | 0.2174 |
| 1.1836 | 46.0 | 1035 | 1.4913 | 0.2391 |
| 1.2009 | 46.9778 | 1057 | 1.4841 | 0.3261 |
| 1.203 | 48.0 | 1080 | 1.4326 | 0.3043 |
| 1.1679 | 48.9778 | 1102 | 1.3935 | 0.3043 |
| 1.179 | 50.0 | 1125 | 1.4185 | 0.1957 |
| 1.1687 | 50.9778 | 1147 | 1.3686 | 0.2826 |
| 1.1779 | 52.0 | 1170 | 1.4319 | 0.1957 |
| 1.1566 | 52.9778 | 1192 | 1.3801 | 0.1957 |
| 1.192 | 54.0 | 1215 | 1.3746 | 0.2174 |
| 1.1803 | 54.9778 | 1237 | 1.4017 | 0.1957 |
| 1.194 | 56.0 | 1260 | 1.4288 | 0.1957 |
| 1.1486 | 56.9778 | 1282 | 1.3920 | 0.3043 |
| 1.1429 | 58.0 | 1305 | 1.4616 | 0.2391 |
| 1.1655 | 58.9778 | 1327 | 1.4119 | 0.2174 |
| 1.1697 | 60.0 | 1350 | 1.3812 | 0.2609 |
| 1.1898 | 60.9778 | 1372 | 1.4009 | 0.2391 |
| 1.1882 | 62.0 | 1395 | 1.4221 | 0.2391 |
| 1.134 | 62.9778 | 1417 | 1.6190 | 0.2609 |
| 1.1748 | 64.0 | 1440 | 1.4336 | 0.2391 |
| 1.1439 | 64.9778 | 1462 | 1.3744 | 0.1957 |
| 1.1585 | 66.0 | 1485 | 1.3992 | 0.3696 |
| 1.1344 | 66.9778 | 1507 | 1.3952 | 0.2391 |
| 1.1374 | 68.0 | 1530 | 1.3666 | 0.2174 |
| 1.1252 | 68.9778 | 1552 | 1.3705 | 0.2826 |
| 1.1339 | 70.0 | 1575 | 1.3983 | 0.2826 |
| 1.1344 | 70.9778 | 1597 | 1.3792 | 0.3043 |
| 1.1343 | 72.0 | 1620 | 1.4467 | 0.2826 |
| 1.1555 | 72.9778 | 1642 | 1.4823 | 0.2174 |
| 1.1329 | 74.0 | 1665 | 1.5136 | 0.1522 |
| 1.1513 | 74.9778 | 1687 | 1.4791 | 0.2391 |
| 1.1278 | 76.0 | 1710 | 1.4527 | 0.2609 |
| 1.0956 | 76.9778 | 1732 | 1.4840 | 0.2391 |
| 1.1131 | 78.0 | 1755 | 1.4900 | 0.2174 |
| 1.1376 | 78.9778 | 1777 | 1.5395 | 0.2174 |
| 1.0883 | 80.0 | 1800 | 1.5038 | 0.1957 |
| 1.1017 | 80.9778 | 1822 | 1.5392 | 0.1957 |
| 1.1608 | 82.0 | 1845 | 1.4875 | 0.2174 |
| 1.1308 | 82.9778 | 1867 | 1.5080 | 0.1957 |
| 1.1382 | 84.0 | 1890 | 1.4835 | 0.1739 |
| 1.1195 | 84.9778 | 1912 | 1.4076 | 0.1957 |
| 1.1149 | 86.0 | 1935 | 1.4840 | 0.1739 |
| 1.1344 | 86.9778 | 1957 | 1.4733 | 0.1957 |
| 1.1268 | 88.0 | 1980 | 1.4446 | 0.2391 |
| 1.1267 | 88.9778 | 2002 | 1.4360 | 0.2174 |
| 1.1034 | 90.0 | 2025 | 1.4329 | 0.1522 |
| 1.1113 | 90.9778 | 2047 | 1.4670 | 0.1739 |
| 1.0957 | 92.0 | 2070 | 1.4802 | 0.2391 |
| 1.1227 | 92.9778 | 2092 | 1.4715 | 0.1739 |
| 1.1083 | 94.0 | 2115 | 1.4813 | 0.1957 |
| 1.0583 | 94.9778 | 2137 | 1.5203 | 0.1957 |
| 1.093 | 96.0 | 2160 | 1.5394 | 0.1739 |
| 1.0809 | 96.9778 | 2182 | 1.4620 | 0.1739 |
| 1.0888 | 98.0 | 2205 | 1.4407 | 0.1739 |
| 1.1292 | 98.9778 | 2227 | 1.4578 | 0.1957 |
| 1.0754 | 100.0 | 2250 | 1.5031 | 0.1739 |
| 1.0817 | 100.9778 | 2272 | 1.4461 | 0.2174 |
| 1.0671 | 102.0 | 2295 | 1.4723 | 0.2391 |
| 1.0815 | 102.9778 | 2317 | 1.4989 | 0.1957 |
| 1.0967 | 104.0 | 2340 | 1.4654 | 0.2174 |
| 1.091 | 104.9778 | 2362 | 1.4559 | 0.2174 |
| 1.0895 | 106.0 | 2385 | 1.4221 | 0.2826 |
| 1.0847 | 106.9778 | 2407 | 1.4293 | 0.2826 |
| 1.102 | 108.0 | 2430 | 1.4582 | 0.2391 |
| 1.0404 | 108.9778 | 2452 | 1.4656 | 0.2174 |
| 1.0488 | 110.0 | 2475 | 1.4890 | 0.2174 |
| 1.0966 | 110.9778 | 2497 | 1.4632 | 0.2174 |
| 1.0901 | 112.0 | 2520 | 1.4495 | 0.2174 |
| 1.1008 | 112.9778 | 2542 | 1.4333 | 0.2174 |
| 1.0884 | 114.0 | 2565 | 1.4406 | 0.2174 |
| 1.0889 | 114.9778 | 2587 | 1.4474 | 0.2174 |
| 1.0729 | 116.0 | 2610 | 1.4561 | 0.2174 |
| 1.0671 | 116.9778 | 2632 | 1.4538 | 0.2174 |
| 1.0937 | 117.3333 | 2640 | 1.4532 | 0.2174 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Atirath/Skin_Cancer_Using_ViT
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
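Pending details from the authors, here is a minimal sketch, assuming the checkpoint is the Hub repo `Atirath/Skin_Cancer_Using_ViT` and loads as a standard `transformers` image-classification model; the image path is a placeholder.
```python
from transformers import pipeline

# Hypothetical usage; repo id inferred from this card's location on the Hub.
classifier = pipeline("image-classification", model="Atirath/Skin_Cancer_Using_ViT")
for pred in classifier("lesion.jpg"):  # placeholder path to a dermoscopic image
    print(f"{pred['label']}: {pred['score']:.3f}")
```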
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"ak",
"bcc",
"bkl",
"df",
"mel",
"nv",
"scc",
"vasc"
] |
NP-NP/pokemon_model
|
# Model Card for Pokémon Type Classification
This model leverages a Vision Transformer (ViT) to classify Pokémon images into 18 different types.
It was developed as part of the CS 310 Final Project and trained on a Pokémon image dataset.
## Model Details
- **Developer:** Xianglu (Steven) Zhu
- **Purpose:** Pokémon type classification
- **Model Type:** Vision Transformer (ViT) for image classification
## Getting Started
Here’s how you can use the model for classification:
```python
import torch
from PIL import Image
import torchvision.transforms as transforms
from transformers import ViTForImageClassification, ViTFeatureExtractor
# Load the pretrained model and feature extractor
hf_model = ViTForImageClassification.from_pretrained("NP-NP/pokemon_model")
hf_feature_extractor = ViTFeatureExtractor.from_pretrained("NP-NP/pokemon_model")
# Define preprocessing transformations
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=hf_feature_extractor.image_mean, std=hf_feature_extractor.image_std)
])
# Mapping of labels to indices and vice versa
labels_dict = {
'Grass': 0, 'Fire': 1, 'Water': 2, 'Bug': 3, 'Normal': 4, 'Poison': 5, 'Electric': 6,
'Ground': 7, 'Fairy': 8, 'Fighting': 9, 'Psychic': 10, 'Rock': 11, 'Ghost': 12,
'Ice': 13, 'Dragon': 14, 'Dark': 15, 'Steel': 16, 'Flying': 17
}
idx_to_label = {v: k for k, v in labels_dict.items()}
# Load and preprocess the image
image_path = "cute-pikachu-flowers-pokemon-desktop-wallpaper.jpg"
image = Image.open(image_path).convert("RGB")
input_tensor = transform(image).unsqueeze(0) # shape: (1, 3, 224, 224)
# Make a prediction
hf_model.eval()
with torch.no_grad():
outputs = hf_model(input_tensor)
logits = outputs.logits
predicted_class_idx = torch.argmax(logits, dim=1).item()
predicted_class = idx_to_label[predicted_class_idx]
print("Predicted Pokémon type:", predicted_class)
```
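Note: `ViTFeatureExtractor` is deprecated in recent `transformers` releases; `ViTImageProcessor.from_pretrained("NP-NP/pokemon_model")` is the drop-in replacement and exposes the same `image_mean`/`image_std` attributes used in the transform above.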
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17"
] |
Atirath/Skin_Cancer_Using_MobileViT
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
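Until the authors add an example, here is a minimal sketch, assuming the checkpoint is the Hub repo `Atirath/Skin_Cancer_Using_MobileViT`; the image path is a placeholder.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical usage; repo id inferred from this card's location on the Hub.
repo = "Atirath/Skin_Cancer_Using_MobileViT"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("lesion.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```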
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"ak",
"bcc",
"bkl",
"df",
"mel",
"nv",
"scc",
"vasc"
] |
NickoSELI/finetuned-indian-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1894
- Accuracy: 0.3454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
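As a rough reconstruction, the configuration above corresponds to the following `TrainingArguments` sketch; `output_dir` is an assumption, and the "Native AMP" line maps to `fp16=True`.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported configuration.
training_args = TrainingArguments(
    output_dir="finetuned-indian-food",  # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    seed=42,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```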
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.9029 | 0.3003 | 100 | 2.9256 | 0.1031 |
| 2.8658 | 0.6006 | 200 | 2.7789 | 0.0967 |
| 2.678 | 0.9009 | 300 | 2.6917 | 0.1838 |
| 2.7905 | 1.2012 | 400 | 2.6983 | 0.1498 |
| 2.5964 | 1.5015 | 500 | 2.4903 | 0.2168 |
| 2.4471 | 1.8018 | 600 | 2.5496 | 0.1987 |
| 2.3428 | 2.1021 | 700 | 2.4333 | 0.2540 |
| 2.3367 | 2.4024 | 800 | 2.3813 | 0.2763 |
| 2.2419 | 2.7027 | 900 | 2.3520 | 0.2965 |
| 2.2023 | 3.0030 | 1000 | 2.2766 | 0.3050 |
| 2.2717 | 3.3033 | 1100 | 2.2615 | 0.3071 |
| 2.2311 | 3.6036 | 1200 | 2.2066 | 0.3284 |
| 2.0541 | 3.9039 | 1300 | 2.1894 | 0.3454 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"indian_pav_bhaji",
"chinese_chai",
"indian_deli_dal_makhani",
"indian_full_course_chole_bhature",
"indian_handheld_kaathi_rolls",
"indian_ice_cream_kulfi",
"indian_pakode",
"indian_pancake_masala_dosa",
"indian_samosa",
"orange_spicy_indian_jalebi",
"pot_of_indian_kadai_paneer",
"side_starter_indian_idli",
"chinese_momos",
"delisiously_advertised_burger",
"delisiously_advertised_pizza",
"fried_rice_with_vegetable",
"indian_balls_paani_puri",
"indian_butter_naan",
"indian_chapati",
"indian_cubical_desert_dhokla"
] |
kiranshivaraju/convnext-xlarge-224-22k-1k-v12-d-v_6_p2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-xlarge-224-22k-1k-v12-d-v_6_p2
This model is a fine-tuned version of [facebook/convnext-xlarge-224-22k-1k](https://huggingface.co/facebook/convnext-xlarge-224-22k-1k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0156
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 9 | 0.1719 | 0.9127 |
| 0.3998 | 2.0 | 18 | 0.0809 | 0.9603 |
| 0.0664 | 3.0 | 27 | 0.0156 | 1.0 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"bad",
"good"
] |
kiranshivaraju/convnext-xlarge-v12-d6-p1
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
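In the meantime, a minimal sketch, assuming the checkpoint is the Hub repo `kiranshivaraju/convnext-xlarge-v12-d6-p1` (a binary good/bad classifier, per the labels below); the image path is a placeholder.
```python
from transformers import pipeline

# Hypothetical usage; repo id inferred from this card's location on the Hub.
inspector = pipeline(
    "image-classification",
    model="kiranshivaraju/convnext-xlarge-v12-d6-p1",
)
top = inspector("part.jpg")[0]  # placeholder image path; [0] is the top prediction
print(top["label"], round(top["score"], 3))  # expected labels: "good" / "bad"
```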
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"bad",
"good"
] |
kiranshivaraju/convnext-xlarge-v13-d6-p1
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
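As a placeholder, a minimal sketch that prints per-class probabilities, assuming the checkpoint is the Hub repo `kiranshivaraju/convnext-xlarge-v13-d6-p1`; the image path is a placeholder.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical usage; repo id inferred from this card's location on the Hub.
repo = "kiranshivaraju/convnext-xlarge-v13-d6-p1"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("part.jpg").convert("RGB"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```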
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"bad",
"good"
] |
Newvel/face_age_detection_base_v3_weighted
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# face_age_detection_base_v3_weighted
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0928
- Accuracy: 0.9691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1216 | 0.9968 | 157 | 0.1257 | 0.9567 |
| 0.1109 | 1.9952 | 314 | 0.1100 | 0.9637 |
| 0.0947 | 2.9937 | 471 | 0.1097 | 0.9640 |
| 0.0745 | 3.9984 | 629 | 0.0928 | 0.9679 |
| 0.0565 | 4.9968 | 786 | 0.0941 | 0.9668 |
| 0.0716 | 5.9889 | 942 | 0.0928 | 0.9691 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"minor",
"adult"
] |
AdityasArsenal/finetuned-for-YogaPoses-v2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-for-YogaPoses-v2
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"downdog",
"goddess",
"plank",
"tree",
"warrior2"
] |
willeiton/platzi-vit-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0418
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1529 | 3.8462 | 500 | 0.0418 | 0.9925 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
khengkok/vit-medical
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
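Pending the authors' example, a minimal sketch, assuming the checkpoint is the Hub repo `khengkok/vit-medical`; the image path is a placeholder.
```python
from transformers import pipeline

# Hypothetical usage; repo id inferred from this card's location on the Hub.
classifier = pipeline("image-classification", model="khengkok/vit-medical")
for pred in classifier("scan.png", top_k=3):  # placeholder image path
    print(f"{pred['label']}: {pred['score']:.3f}")
```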
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"benign",
"malignant",
"normal"
] |
AdityasArsenal/finetuned-for-YogaPosesv4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-for-YogaPosesv4
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the yoga_pose_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0743
- Accuracy: 0.9907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0434 | 0.8772 | 100 | 0.1421 | 0.9595 |
| 0.033 | 1.7544 | 200 | 0.0880 | 0.9875 |
| 0.084 | 2.6316 | 300 | 0.0919 | 0.9844 |
| 0.0655 | 3.5088 | 400 | 0.0932 | 0.9875 |
| 0.0126 | 4.3860 | 500 | 0.0696 | 0.9875 |
| 0.0487 | 5.2632 | 600 | 0.0847 | 0.9720 |
| 0.0114 | 6.1404 | 700 | 0.1103 | 0.9813 |
| 0.0377 | 7.0175 | 800 | 0.0743 | 0.9907 |
| 0.062 | 7.8947 | 900 | 0.1642 | 0.9782 |
| 0.0025 | 8.7719 | 1000 | 0.0598 | 0.9875 |
| 0.0041 | 9.6491 | 1100 | 0.1280 | 0.9813 |
| 0.0305 | 10.5263 | 1200 | 0.0920 | 0.9813 |
| 0.0148 | 11.4035 | 1300 | 0.1209 | 0.9875 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"downdog",
"goddess",
"plank",
"tree",
"warrior2"
] |
parasahuja23/vit-base-patch16-224-in21k-finetuned-final-2.0
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-final-2.0
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0445
- Accuracy: 0.9895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6672 | 0.9950 | 100 | 0.4986 | 0.9233 |
| 0.3891 | 2.0 | 201 | 0.2037 | 0.9671 |
| 0.3164 | 2.9950 | 301 | 0.1414 | 0.9751 |
| 0.2884 | 4.0 | 402 | 0.1310 | 0.9699 |
| 0.2726 | 4.9950 | 502 | 0.1040 | 0.9748 |
| 0.2595 | 6.0 | 603 | 0.1150 | 0.9709 |
| 0.2662 | 6.9950 | 703 | 0.0842 | 0.9783 |
| 0.2195 | 8.0 | 804 | 0.0704 | 0.9842 |
| 0.2326 | 8.9950 | 904 | 0.0605 | 0.9842 |
| 0.2292 | 10.0 | 1005 | 0.0568 | 0.9846 |
| 0.2229 | 10.9950 | 1105 | 0.0445 | 0.9895 |
| 0.239 | 12.0 | 1206 | 0.0539 | 0.9839 |
| 0.2115 | 12.9950 | 1306 | 0.0464 | 0.9874 |
| 0.22 | 14.0 | 1407 | 0.0818 | 0.9734 |
| 0.2138 | 14.9950 | 1507 | 0.0599 | 0.9790 |
| 0.2003 | 15.9204 | 1600 | 0.0554 | 0.9821 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angry",
"happy-frame",
"neutral",
"sad",
"surprised"
] |
stnleyyg/s-modified-microsoft-resnet18
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
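As a stopgap, a minimal sketch, assuming the checkpoint is the Hub repo `stnleyyg/s-modified-microsoft-resnet18`; the image path is a placeholder. Note that the checkpoint's labels are generic (`label_0` through `label_9`), so mapping them to real class names requires the training data.
```python
from transformers import pipeline

# Hypothetical usage; repo id inferred from this card's location on the Hub.
classifier = pipeline("image-classification", model="stnleyyg/s-modified-microsoft-resnet18")
print(classifier("example.jpg"))  # placeholder image path; labels are "label_0".."label_9"
```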
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9"
] |
rohan4s/finetuned-bangladeshi-traditional-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bangladeshi-traditional-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3157
- Accuracy: 0.9529
- Precision: 0.9560
- Recall: 0.9529
- F1: 0.9538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.2056 | 1.0 | 48 | 0.9746 | 0.8560 | 0.8761 | 0.8560 | 0.8530 |
| 0.5285 | 2.0 | 96 | 0.5351 | 0.9188 | 0.9236 | 0.9188 | 0.9196 |
| 0.3189 | 3.0 | 144 | 0.3756 | 0.9372 | 0.9386 | 0.9372 | 0.9370 |
| 0.221 | 4.0 | 192 | 0.3157 | 0.9529 | 0.9560 | 0.9529 | 0.9538 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"amshotto",
"bakhorkhani",
"narikel naru",
"nimki",
"nokul dana",
"shonpapri",
"tiler khaja",
"muri mowa",
"batasha",
"chira",
"jilapi",
"khoi",
"kotkoti",
"misri",
"morobba",
"muri"
] |
AdityasArsenal/finetuned-for-YogaPosesv6
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-for-YogaPosesv6
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the yoga_pose_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0562
- Accuracy: 0.9938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9954 | 0.8772 | 100 | 0.8301 | 0.8505 |
| 0.3505 | 1.7544 | 200 | 0.1881 | 0.9907 |
| 0.1524 | 2.6316 | 300 | 0.0901 | 0.9844 |
| 0.152 | 3.5088 | 400 | 0.1241 | 0.9688 |
| 0.1314 | 4.3860 | 500 | 0.0562 | 0.9938 |
| 0.1187 | 5.2632 | 600 | 0.1232 | 0.9720 |
| 0.0936 | 6.1404 | 700 | 0.0893 | 0.9688 |
| 0.0753 | 7.0175 | 800 | 0.1510 | 0.9626 |
| 0.0155 | 7.8947 | 900 | 0.0536 | 0.9907 |
| 0.0181 | 8.7719 | 1000 | 0.0515 | 0.9907 |
| 0.0037 | 9.6491 | 1100 | 0.0570 | 0.9907 |
| 0.0121 | 10.5263 | 1200 | 0.0570 | 0.9907 |
| 0.0065 | 11.4035 | 1300 | 0.0565 | 0.9907 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"downdog",
"goddess",
"plank",
"tree",
"warrior2"
] |
Sohaibsoussi/ViT-NIH-Chest-X-ray-dataset-small
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT-NIH-Chest-X-ray-dataset-small
This model is a fine-tuned version of [Sohaibsoussi/ViT-NIH-Chest-X-ray-dataset-small](https://huggingface.co/Sohaibsoussi/ViT-NIH-Chest-X-ray-dataset-small) on the Sohaibsoussi/NIH-Chest-X-ray-dataset-small dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6731
- Accuracy: 0.2189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0271 | 0.3690 | 100 | 0.0347 | 0.8584 |
| 0.0334 | 0.7380 | 200 | 0.0291 | 0.8624 |
| 0.0438 | 1.1070 | 300 | 0.0352 | 0.8607 |
| 0.0215 | 1.4760 | 400 | 0.0319 | 0.8746 |
| 0.0267 | 1.8450 | 500 | 0.0277 | 0.8798 |
| 0.0266 | 2.2140 | 600 | 0.0177 | 0.9116 |
| 0.014 | 2.5830 | 700 | 0.0127 | 0.9497 |
| 0.0207 | 2.9520 | 800 | 0.0144 | 0.9410 |
| 0.0115 | 3.3210 | 900 | 0.0097 | 0.9653 |
| 0.0113 | 3.6900 | 1000 | 0.0077 | 0.9711 |
| 0.0054 | 4.0590 | 1100 | 0.0068 | 0.9844 |
| 0.0047 | 4.4280 | 1200 | 0.0046 | 0.9850 |
| 0.0056 | 4.7970 | 1300 | 0.0040 | 0.9902 |
| 0.0026 | 5.1661 | 1400 | 0.0032 | 0.9925 |
| 0.0037 | 5.5351 | 1500 | 0.0027 | 0.9936 |
| 0.0039 | 5.9041 | 1600 | 0.0023 | 0.9977 |
| 0.0019 | 6.2731 | 1700 | 0.0019 | 0.9971 |
| 0.0019 | 6.6421 | 1800 | 0.0017 | 0.9988 |
| 0.0016 | 7.0111 | 1900 | 0.0015 | 1.0 |
| 0.002 | 7.3801 | 2000 | 0.0014 | 1.0 |
| 0.0013 | 7.7491 | 2100 | 0.0014 | 1.0 |
| 0.0015 | 8.1181 | 2200 | 0.0013 | 1.0 |
| 0.0011 | 8.4871 | 2300 | 0.0013 | 1.0 |
| 0.0013 | 8.8561 | 2400 | 0.0013 | 1.0 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"no finding",
"atelectasis",
"edema",
"emphysema",
"fibrosis",
"pleural_thickening",
"hernia",
"cardiomegaly",
"effusion",
"infiltration",
"mass",
"nodule",
"pneumonia",
"pneumothorax",
"consolidation"
] |
luisafrancielle/amns
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amns
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7099
- Accuracy: 0.8871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 31 | 1.3292 | 0.5574 |
| No log | 2.0 | 62 | 0.9371 | 0.8033 |
| No log | 3.0 | 93 | 0.7407 | 0.8852 |
| 1.2134 | 4.0 | 124 | 0.6463 | 0.9016 |
| 1.2134 | 5.0 | 155 | 0.6189 | 0.9016 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"coffee",
"oil",
"rice",
"bread",
"sugar",
"black_beans",
"beans",
"flour",
"milk"
] |
james05park/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0680
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2869 | 1.0 | 130 | 0.2188 | 0.9624 |
| 0.131 | 2.0 | 260 | 0.1310 | 0.9699 |
| 0.1467 | 3.0 | 390 | 0.0974 | 0.9774 |
| 0.0797 | 4.0 | 520 | 0.0680 | 0.9850 |
| 0.1236 | 5.0 | 650 | 0.0829 | 0.9699 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cpu
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
bmedeiros/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4697
- Accuracy: 0.8072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.8889 | 6 | 0.6376 | 0.6899 |
| 0.6757 | 1.9259 | 13 | 0.6053 | 0.6938 |
| 0.5472 | 2.9630 | 20 | 0.5903 | 0.7256 |
| 0.5472 | 4.0 | 27 | 0.5782 | 0.7316 |
| 0.4628 | 4.8889 | 33 | 0.5979 | 0.7455 |
| 0.4181 | 5.9259 | 40 | 0.5735 | 0.7614 |
| 0.4181 | 6.9630 | 47 | 0.5252 | 0.7495 |
| 0.4079 | 8.0 | 54 | 0.5363 | 0.7475 |
| 0.4102 | 8.8889 | 60 | 0.5289 | 0.7495 |
| 0.4102 | 9.9259 | 67 | 0.5227 | 0.7535 |
| 0.373 | 10.9630 | 74 | 0.4677 | 0.7773 |
| 0.3639 | 12.0 | 81 | 0.4978 | 0.7813 |
| 0.3639 | 12.8889 | 87 | 0.4651 | 0.7992 |
| 0.3779 | 13.9259 | 94 | 0.4738 | 0.7913 |
| 0.3476 | 14.9630 | 101 | 0.4697 | 0.8072 |
| 0.3476 | 16.0 | 108 | 0.4719 | 0.7952 |
| 0.3467 | 16.8889 | 114 | 0.4552 | 0.7893 |
| 0.3496 | 17.9259 | 121 | 0.5186 | 0.7714 |
| 0.3496 | 18.9630 | 128 | 0.4575 | 0.7952 |
| 0.3657 | 20.0 | 135 | 0.4764 | 0.7793 |
| 0.3888 | 20.8889 | 141 | 0.5009 | 0.7714 |
| 0.3888 | 21.9259 | 148 | 0.4673 | 0.7813 |
| 0.3236 | 22.9630 | 155 | 0.4931 | 0.7753 |
| 0.3179 | 24.0 | 162 | 0.4837 | 0.7654 |
| 0.3179 | 24.8889 | 168 | 0.4652 | 0.7694 |
| 0.327 | 25.9259 | 175 | 0.5108 | 0.7495 |
| 0.3253 | 26.9630 | 182 | 0.4424 | 0.7833 |
| 0.3253 | 28.0 | 189 | 0.5622 | 0.7336 |
| 0.3382 | 28.8889 | 195 | 0.5068 | 0.7694 |
| 0.331 | 29.9259 | 202 | 0.4530 | 0.7694 |
| 0.331 | 30.9630 | 209 | 0.5205 | 0.7316 |
| 0.3302 | 32.0 | 216 | 0.4386 | 0.7853 |
| 0.2972 | 32.8889 | 222 | 0.5031 | 0.7773 |
| 0.2972 | 33.9259 | 229 | 0.4909 | 0.7575 |
| 0.3121 | 34.9630 | 236 | 0.4766 | 0.7793 |
| 0.2956 | 36.0 | 243 | 0.5262 | 0.7416 |
| 0.2956 | 36.8889 | 249 | 0.5374 | 0.7316 |
| 0.2947 | 37.9259 | 256 | 0.4888 | 0.7674 |
| 0.2662 | 38.9630 | 263 | 0.4881 | 0.7694 |
| 0.2826 | 40.0 | 270 | 0.4669 | 0.7893 |
| 0.2826 | 40.8889 | 276 | 0.4591 | 0.7972 |
| 0.2768 | 41.9259 | 283 | 0.5090 | 0.7575 |
| 0.2836 | 42.9630 | 290 | 0.5250 | 0.7495 |
| 0.2836 | 44.0 | 297 | 0.4748 | 0.7654 |
| 0.2724 | 44.8889 | 303 | 0.4429 | 0.7833 |
| 0.2498 | 45.9259 | 310 | 0.4460 | 0.7893 |
| 0.2498 | 46.9630 | 317 | 0.4722 | 0.7793 |
| 0.2893 | 48.0 | 324 | 0.4799 | 0.7714 |
| 0.2618 | 48.8889 | 330 | 0.4850 | 0.7714 |
| 0.2618 | 49.9259 | 337 | 0.5152 | 0.7495 |
| 0.2664 | 50.9630 | 344 | 0.5347 | 0.7396 |
| 0.27 | 52.0 | 351 | 0.5343 | 0.7416 |
| 0.27 | 52.8889 | 357 | 0.5330 | 0.7416 |
| 0.2539 | 53.3333 | 360 | 0.5320 | 0.7396 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
[
"invalid",
"valid"
] |
CristianR8/efficientnet-b0-cocoa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# efficientnet-b0-cocoa
This model is a fine-tuned version of [google/efficientnet-b0](https://huggingface.co/google/efficientnet-b0) on the SemilleroCV/Cocoa-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2657
- Accuracy: 0.9097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0627 | 1.0 | 196 | 1.5223 | 0.5596 |
| 0.591 | 2.0 | 392 | 0.8975 | 0.8303 |
| 0.6623 | 3.0 | 588 | 0.6564 | 0.8773 |
| 0.4874 | 4.0 | 784 | 0.6842 | 0.8339 |
| 0.4671 | 5.0 | 980 | 0.4894 | 0.8809 |
| 0.5623 | 6.0 | 1176 | 0.4160 | 0.8736 |
| 0.3917 | 7.0 | 1372 | 0.4022 | 0.8845 |
| 0.3153 | 8.0 | 1568 | 0.4939 | 0.8412 |
| 0.5814 | 9.0 | 1764 | 0.3540 | 0.8773 |
| 0.5883 | 10.0 | 1960 | 0.3493 | 0.8953 |
| 0.4616 | 11.0 | 2156 | 0.7928 | 0.7762 |
| 0.499 | 12.0 | 2352 | 2.0659 | 0.2960 |
| 0.2236 | 13.0 | 2548 | 0.4444 | 0.8520 |
| 0.2083 | 14.0 | 2744 | 0.4640 | 0.8736 |
| 0.3408 | 15.0 | 2940 | 0.3775 | 0.8773 |
| 0.3529 | 16.0 | 3136 | 0.3519 | 0.8881 |
| 0.3859 | 17.0 | 3332 | 0.3310 | 0.9061 |
| 0.3557 | 18.0 | 3528 | 0.3475 | 0.8917 |
| 0.4979 | 19.0 | 3724 | 0.3839 | 0.8592 |
| 0.7133 | 20.0 | 3920 | 0.3032 | 0.9134 |
| 0.4489 | 21.0 | 4116 | 0.4246 | 0.8520 |
| 0.2605 | 22.0 | 4312 | 0.2951 | 0.8989 |
| 0.3787 | 23.0 | 4508 | 0.4357 | 0.8520 |
| 0.3015 | 24.0 | 4704 | 0.3990 | 0.8917 |
| 0.1965 | 25.0 | 4900 | 0.3536 | 0.9097 |
| 0.3903 | 26.0 | 5096 | 0.4166 | 0.8592 |
| 0.1902 | 27.0 | 5292 | 0.4354 | 0.8520 |
| 0.2089 | 28.0 | 5488 | 0.4089 | 0.8592 |
| 0.3574 | 29.0 | 5684 | 0.4787 | 0.8231 |
| 0.3532 | 30.0 | 5880 | 0.3165 | 0.9097 |
| 0.2967 | 31.0 | 6076 | 0.3105 | 0.9134 |
| 0.2364 | 32.0 | 6272 | 0.3560 | 0.9061 |
| 0.3136 | 33.0 | 6468 | 0.2657 | 0.9097 |
| 0.4061 | 34.0 | 6664 | 0.2680 | 0.9134 |
| 0.3296 | 35.0 | 6860 | 0.3798 | 0.9061 |
| 0.2905 | 36.0 | 7056 | 0.5098 | 0.8556 |
| 0.2763 | 37.0 | 7252 | 0.4219 | 0.8809 |
| 0.2454 | 38.0 | 7448 | 0.2852 | 0.9134 |
| 0.6077 | 39.0 | 7644 | 0.3603 | 0.8989 |
| 0.1966 | 40.0 | 7840 | 0.3519 | 0.8736 |
| 0.2473 | 41.0 | 8036 | 0.3343 | 0.9025 |
| 0.2795 | 42.0 | 8232 | 0.3384 | 0.9170 |
| 0.1249 | 43.0 | 8428 | 0.4046 | 0.8773 |
| 0.2943 | 44.0 | 8624 | 0.3953 | 0.8917 |
| 0.3002 | 45.0 | 8820 | 0.5003 | 0.8592 |
| 0.1525 | 46.0 | 9016 | 0.3232 | 0.9170 |
| 0.4022 | 47.0 | 9212 | 0.3113 | 0.9170 |
| 0.4994 | 48.0 | 9408 | 0.4494 | 0.8556 |
| 0.6512 | 49.0 | 9604 | 0.3722 | 0.9206 |
| 0.3152 | 50.0 | 9800 | 0.2852 | 0.9097 |
| 0.1165 | 51.0 | 9996 | 0.4138 | 0.8628 |
| 0.216 | 52.0 | 10192 | 0.3413 | 0.8953 |
| 0.1455 | 53.0 | 10388 | 0.3046 | 0.9170 |
| 0.554 | 54.0 | 10584 | 0.2849 | 0.8989 |
| 0.3586 | 55.0 | 10780 | 0.3517 | 0.9134 |
| 0.2239 | 56.0 | 10976 | 0.4538 | 0.9025 |
| 0.1725 | 57.0 | 11172 | 0.4492 | 0.8592 |
| 0.4689 | 58.0 | 11368 | 0.4739 | 0.8628 |
| 0.3565 | 59.0 | 11564 | 0.2831 | 0.9206 |
| 0.2259 | 60.0 | 11760 | 0.3465 | 0.9206 |
| 0.2212 | 61.0 | 11956 | 0.2884 | 0.9314 |
| 0.2648 | 62.0 | 12152 | 0.4875 | 0.8448 |
| 0.3438 | 63.0 | 12348 | 0.3989 | 0.9061 |
| 0.4785 | 64.0 | 12544 | 0.5953 | 0.8520 |
| 0.06 | 65.0 | 12740 | 0.2954 | 0.9278 |
| 0.1965 | 66.0 | 12936 | 0.5033 | 0.8520 |
| 0.3548 | 67.0 | 13132 | 0.4132 | 0.8809 |
| 0.1279 | 68.0 | 13328 | 0.3743 | 0.9170 |
| 0.2879 | 69.0 | 13524 | 0.6423 | 0.7762 |
| 0.1757 | 70.0 | 13720 | 0.5979 | 0.8014 |
| 0.3338 | 71.0 | 13916 | 0.4398 | 0.8989 |
| 0.1604 | 72.0 | 14112 | 0.5634 | 0.8231 |
| 0.1078 | 73.0 | 14308 | 0.6204 | 0.7762 |
| 0.258 | 74.0 | 14504 | 0.3685 | 0.8953 |
| 0.1227 | 75.0 | 14700 | 0.7026 | 0.8159 |
| 0.2257 | 76.0 | 14896 | 0.4048 | 0.9170 |
| 0.1786 | 77.0 | 15092 | 0.4891 | 0.8845 |
| 0.2006 | 78.0 | 15288 | 0.4216 | 0.8773 |
| 0.3144 | 79.0 | 15484 | 0.2721 | 0.8953 |
| 0.1969 | 80.0 | 15680 | 0.4270 | 0.8484 |
| 0.1405 | 81.0 | 15876 | 0.7632 | 0.7834 |
| 0.1427 | 82.0 | 16072 | 0.3249 | 0.9025 |
| 0.2493 | 83.0 | 16268 | 0.3838 | 0.8989 |
| 0.331 | 84.0 | 16464 | 0.3330 | 0.9206 |
| 0.1231 | 85.0 | 16660 | 0.3246 | 0.8700 |
| 0.2781 | 86.0 | 16856 | 0.3710 | 0.8736 |
| 0.7193 | 87.0 | 17052 | 0.3384 | 0.9061 |
| 0.1149 | 88.0 | 17248 | 0.3703 | 0.9097 |
| 0.0269 | 89.0 | 17444 | 0.5013 | 0.8592 |
| 0.0967 | 90.0 | 17640 | 0.3456 | 0.8989 |
| 0.177 | 91.0 | 17836 | 0.3799 | 0.8881 |
| 0.1917 | 92.0 | 18032 | 0.3239 | 0.9061 |
| 0.2082 | 93.0 | 18228 | 0.4861 | 0.8989 |
| 0.3836 | 94.0 | 18424 | 0.4444 | 0.8736 |
| 0.1 | 95.0 | 18620 | 0.3713 | 0.8845 |
| 0.1785 | 96.0 | 18816 | 0.4279 | 0.8303 |
| 0.19 | 97.0 | 19012 | 0.6588 | 0.8412 |
| 0.099 | 98.0 | 19208 | 0.6632 | 0.8267 |
| 0.1467 | 99.0 | 19404 | 0.4642 | 0.8809 |
| 0.2617 | 100.0 | 19600 | 0.3624 | 0.8809 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"fermentado",
"hongo",
"insecto",
"insufi_fermen",
"pizarroso",
"violeta"
] |
Augusto777/swinv2-tiny-patch4-window8-256-RD-da-colab
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-RD-da-colab
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2420
- Accuracy: 0.5036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1347 | 1.0 | 96 | 10.0386 | 0.4982 |
| 0.1556 | 2.0 | 192 | 9.5018 | 0.5 |
| 0.1051 | 3.0 | 288 | 9.9516 | 0.4982 |
| 0.1154 | 4.0 | 384 | 10.8351 | 0.4945 |
| 0.0909 | 5.0 | 480 | 11.6091 | 0.4945 |
| 0.0923 | 6.0 | 576 | 9.0530 | 0.5 |
| 0.1089 | 7.0 | 672 | 11.6765 | 0.4927 |
| 0.0959 | 8.0 | 768 | 11.5132 | 0.4982 |
| 0.1266 | 9.0 | 864 | 10.2420 | 0.5036 |
| 0.106 | 10.0 | 960 | 11.1262 | 0.4945 |
| 0.0831 | 11.0 | 1056 | 11.5815 | 0.4964 |
| 0.0819 | 12.0 | 1152 | 11.6394 | 0.4964 |
| 0.0862 | 13.0 | 1248 | 10.9660 | 0.4982 |
| 0.0754 | 14.0 | 1344 | 9.5463 | 0.4982 |
| 0.06 | 15.0 | 1440 | 10.2678 | 0.4964 |
| 0.0828 | 16.0 | 1536 | 11.4973 | 0.4927 |
| 0.0675 | 17.0 | 1632 | 10.5019 | 0.4964 |
| 0.0687 | 18.0 | 1728 | 10.6483 | 0.4982 |
| 0.0548 | 19.0 | 1824 | 11.2166 | 0.4964 |
| 0.0658 | 20.0 | 1920 | 11.5459 | 0.4945 |
| 0.0565 | 21.0 | 2016 | 11.5899 | 0.4945 |
| 0.0807 | 22.0 | 2112 | 10.7066 | 0.5 |
| 0.0289 | 23.0 | 2208 | 10.6253 | 0.4982 |
| 0.0755 | 24.0 | 2304 | 10.4856 | 0.5018 |
| 0.0483 | 25.0 | 2400 | 11.3838 | 0.4964 |
| 0.0732 | 26.0 | 2496 | 11.1971 | 0.4927 |
| 0.1424 | 27.0 | 2592 | 11.4581 | 0.4945 |
| 0.0814 | 28.0 | 2688 | 11.3341 | 0.4945 |
| 0.101 | 29.0 | 2784 | 11.5705 | 0.4927 |
| 0.0894 | 30.0 | 2880 | 11.5259 | 0.4927 |
| 0.0707 | 31.0 | 2976 | 11.1753 | 0.4945 |
| 0.1289 | 32.0 | 3072 | 10.5668 | 0.4964 |
| 0.0991 | 33.0 | 3168 | 11.1013 | 0.4945 |
| 0.0615 | 34.0 | 3264 | 11.0973 | 0.4945 |
| 0.0784 | 35.0 | 3360 | 11.0716 | 0.4945 |
| 0.0792 | 36.0 | 3456 | 11.0241 | 0.4945 |
| 0.1032 | 37.0 | 3552 | 11.2338 | 0.4945 |
| 0.0837 | 38.0 | 3648 | 11.4256 | 0.4927 |
| 0.0722 | 39.0 | 3744 | 11.3971 | 0.4945 |
| 0.079 | 40.0 | 3840 | 11.3523 | 0.4945 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"mild",
"moderate",
"no_dr",
"proliferate_dr",
"severe"
] |
Augusto777/swiftformer-xs-RD-da-colab
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swiftformer-xs-RD-da-colab
This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5780
- Accuracy: 0.5018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5414 | 1.0 | 96 | 9.3183 | 0.4945 |
| 0.533 | 2.0 | 192 | 7.8263 | 0.5 |
| 0.3449 | 3.0 | 288 | 8.6048 | 0.4945 |
| 0.3157 | 4.0 | 384 | 8.4227 | 0.4945 |
| 0.2686 | 5.0 | 480 | 4.3190 | 0.5473 |
| 0.2987 | 6.0 | 576 | 5.0817 | 0.5164 |
| 0.3415 | 7.0 | 672 | 5.2399 | 0.5127 |
| 0.2396 | 8.0 | 768 | 6.7857 | 0.5018 |
| 0.2618 | 9.0 | 864 | 6.5777 | 0.5055 |
| 0.297 | 10.0 | 960 | 6.6086 | 0.5036 |
| 0.2413 | 11.0 | 1056 | 3.6891 | 0.5236 |
| 0.2074 | 12.0 | 1152 | 6.8991 | 0.5 |
| 0.2029 | 13.0 | 1248 | 5.8597 | 0.5018 |
| 0.2353 | 14.0 | 1344 | 7.3848 | 0.5036 |
| 0.1748 | 15.0 | 1440 | 4.9503 | 0.5109 |
| 0.1885 | 16.0 | 1536 | 7.2151 | 0.4982 |
| 0.1967 | 17.0 | 1632 | 7.9847 | 0.4982 |
| 0.1881 | 18.0 | 1728 | 4.5008 | 0.5109 |
| 0.172 | 19.0 | 1824 | 4.7565 | 0.5273 |
| 0.2222 | 20.0 | 1920 | 6.2814 | 0.4964 |
| 0.1673 | 21.0 | 2016 | 8.1814 | 0.4964 |
| 0.1831 | 22.0 | 2112 | 4.4184 | 0.5164 |
| 0.1121 | 23.0 | 2208 | 6.0737 | 0.4982 |
| 0.1464 | 24.0 | 2304 | 5.3006 | 0.5018 |
| 0.1343 | 25.0 | 2400 | 5.6166 | 0.5036 |
| 0.1385 | 26.0 | 2496 | 6.1437 | 0.5018 |
| 0.1153 | 27.0 | 2592 | 6.3232 | 0.5018 |
| 0.1175 | 28.0 | 2688 | 6.2047 | 0.5036 |
| 0.1107 | 29.0 | 2784 | 7.5461 | 0.4982 |
| 0.0914 | 30.0 | 2880 | 7.4573 | 0.4982 |
| 0.1123 | 31.0 | 2976 | 6.2770 | 0.4982 |
| 0.1268 | 32.0 | 3072 | 5.1979 | 0.5073 |
| 0.1074 | 33.0 | 3168 | 4.9253 | 0.5036 |
| 0.0712 | 34.0 | 3264 | 5.0555 | 0.5018 |
| 0.0792 | 35.0 | 3360 | 6.1480 | 0.4982 |
| 0.1097 | 36.0 | 3456 | 6.5916 | 0.4982 |
| 0.1035 | 37.0 | 3552 | 7.4887 | 0.4982 |
| 0.1066 | 38.0 | 3648 | 6.1041 | 0.5 |
| 0.0887 | 39.0 | 3744 | 6.7739 | 0.4982 |
| 0.0889 | 40.0 | 3840 | 5.5780 | 0.5018 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"mild",
"moderate",
"no_dr",
"proliferate_dr",
"severe"
] |
cristian-rivera/cr-platzi-vit-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cr-platzi-vit-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0463
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1097 | 3.8462 | 500 | 0.0463 | 0.9925 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cpu
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
Tianmu28/vehicle_multiclass_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vehicle_multiclass_classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0151
- Accuracy: 0.9952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0389 | 1.0 | 245 | 0.0542 | 0.9857 |
| 0.0006 | 2.0 | 490 | 0.0453 | 0.9905 |
| 0.0003 | 3.0 | 735 | 0.0525 | 0.9845 |
| 0.0002 | 4.0 | 980 | 0.0519 | 0.9857 |
| 0.0001 | 5.0 | 1225 | 0.0523 | 0.9857 |
| 0.0001 | 6.0 | 1470 | 0.0529 | 0.9857 |
| 0.0001 | 7.0 | 1715 | 0.0534 | 0.9857 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6"
] |
bmedeiros/vit-msn-small-finetuned-lf-invalidation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-finetuned-lf-invalidation
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2442
- Accuracy: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
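Note that the effective batch size is the per-device size times the accumulation steps (32 × 4 = 128, the `total_train_batch_size` above), and `warmup_ratio=0.1` warms the learning rate up over the first 10% of training steps. A minimal sketch with a placeholder `output_dir`:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-msn-small-lf-invalidation",  # placeholder
    per_device_train_batch_size=32,
    gradient_accumulation_steps=4,   # 32 * 4 = 128 effective batch size
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```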
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 0.7163 | 0.4 |
| 0.7 | 1.92 | 12 | 0.4886 | 0.8234 |
| 0.7 | 2.88 | 18 | 0.4683 | 0.7596 |
| 0.3793 | 4.0 | 25 | 0.3421 | 0.8447 |
| 0.383 | 4.96 | 31 | 0.2535 | 0.9191 |
| 0.383 | 5.92 | 37 | 0.2442 | 0.9234 |
| 0.3658 | 6.88 | 43 | 0.3795 | 0.8404 |
| 0.2601 | 8.0 | 50 | 0.3831 | 0.8383 |
| 0.2601 | 8.96 | 56 | 0.3993 | 0.8191 |
| 0.26 | 9.92 | 62 | 0.2265 | 0.8979 |
| 0.26 | 10.88 | 68 | 0.5355 | 0.7319 |
| 0.3376 | 12.0 | 75 | 0.3881 | 0.8043 |
| 0.248 | 12.96 | 81 | 0.2618 | 0.8979 |
| 0.248 | 13.92 | 87 | 0.5545 | 0.7362 |
| 0.2133 | 14.88 | 93 | 0.9307 | 0.5489 |
| 0.2576 | 16.0 | 100 | 0.4236 | 0.8149 |
| 0.2576 | 16.96 | 106 | 0.4333 | 0.8106 |
| 0.2466 | 17.92 | 112 | 0.8464 | 0.6128 |
| 0.2466 | 18.88 | 118 | 0.7970 | 0.6489 |
| 0.228 | 20.0 | 125 | 0.3522 | 0.8660 |
| 0.2542 | 20.96 | 131 | 0.5095 | 0.7872 |
| 0.2542 | 21.92 | 137 | 0.4808 | 0.8021 |
| 0.2032 | 22.88 | 143 | 0.5805 | 0.7340 |
| 0.1998 | 24.0 | 150 | 0.3987 | 0.8319 |
| 0.1998 | 24.96 | 156 | 0.4889 | 0.7894 |
| 0.1565 | 25.92 | 162 | 0.8003 | 0.6468 |
| 0.1565 | 26.88 | 168 | 0.4740 | 0.7936 |
| 0.1934 | 28.0 | 175 | 0.4442 | 0.8319 |
| 0.1878 | 28.96 | 181 | 0.7115 | 0.7021 |
| 0.1878 | 29.92 | 187 | 0.4234 | 0.8277 |
| 0.1848 | 30.88 | 193 | 0.6975 | 0.6957 |
| 0.1705 | 32.0 | 200 | 0.2965 | 0.8894 |
| 0.1705 | 32.96 | 206 | 0.8020 | 0.6766 |
| 0.1744 | 33.92 | 212 | 0.7330 | 0.6979 |
| 0.1744 | 34.88 | 218 | 1.0977 | 0.5723 |
| 0.1707 | 36.0 | 225 | 1.0648 | 0.5894 |
| 0.1719 | 36.96 | 231 | 0.8495 | 0.6404 |
| 0.1719 | 37.92 | 237 | 0.3177 | 0.8787 |
| 0.1839 | 38.88 | 243 | 0.4839 | 0.7894 |
| 0.1544 | 40.0 | 250 | 0.4100 | 0.8362 |
| 0.1544 | 40.96 | 256 | 0.6012 | 0.7553 |
| 0.135 | 41.92 | 262 | 0.6832 | 0.7213 |
| 0.135 | 42.88 | 268 | 0.6663 | 0.7170 |
| 0.14 | 44.0 | 275 | 0.6219 | 0.7383 |
| 0.151 | 44.96 | 281 | 0.9176 | 0.6149 |
| 0.151 | 45.92 | 287 | 0.8830 | 0.6404 |
| 0.1284 | 46.88 | 293 | 0.7249 | 0.7043 |
| 0.1586 | 48.0 | 300 | 0.7146 | 0.7043 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"invalid",
"valid"
] |
CristianR8/mobilenetv2-cocoa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenetv2-cocoa
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the SemilleroCV/Cocoa-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3226
- Accuracy: 0.8953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.569 | 1.0 | 196 | 0.5072 | 0.8628 |
| 0.3973 | 2.0 | 392 | 0.4278 | 0.8700 |
| 0.5873 | 3.0 | 588 | 0.4138 | 0.8773 |
| 0.4781 | 4.0 | 784 | 0.4718 | 0.8736 |
| 0.4483 | 5.0 | 980 | 0.4506 | 0.8773 |
| 0.655 | 6.0 | 1176 | 0.3685 | 0.8953 |
| 0.3441 | 7.0 | 1372 | 0.4751 | 0.8773 |
| 0.3166 | 8.0 | 1568 | 0.3796 | 0.8809 |
| 0.5114 | 9.0 | 1764 | 0.4087 | 0.8917 |
| 0.6452 | 10.0 | 1960 | 0.3760 | 0.8989 |
| 0.4747 | 11.0 | 2156 | 0.4223 | 0.8773 |
| 0.5145 | 12.0 | 2352 | 1.1704 | 0.5957 |
| 0.1991 | 13.0 | 2548 | 0.3454 | 0.9097 |
| 0.2396 | 14.0 | 2744 | 0.3913 | 0.8700 |
| 0.3259 | 15.0 | 2940 | 0.3689 | 0.8881 |
| 0.3434 | 16.0 | 3136 | 0.3743 | 0.8736 |
| 0.389 | 17.0 | 3332 | 0.3657 | 0.9025 |
| 0.302 | 18.0 | 3528 | 0.4218 | 0.8917 |
| 0.4693 | 19.0 | 3724 | 0.3226 | 0.8953 |
| 0.6346 | 20.0 | 3920 | 0.3277 | 0.8881 |
| 0.481 | 21.0 | 4116 | 0.3484 | 0.8700 |
| 0.2628 | 22.0 | 4312 | 0.3942 | 0.9025 |
| 0.3653 | 23.0 | 4508 | 0.3537 | 0.8989 |
| 0.344 | 24.0 | 4704 | 0.4758 | 0.8809 |
| 0.2819 | 25.0 | 4900 | 0.4318 | 0.8989 |
| 0.513 | 26.0 | 5096 | 0.4277 | 0.8412 |
| 0.201 | 27.0 | 5292 | 0.3915 | 0.8953 |
| 0.2696 | 28.0 | 5488 | 0.4401 | 0.8809 |
| 0.4204 | 29.0 | 5684 | 0.3856 | 0.8953 |
| 0.316 | 30.0 | 5880 | 0.3576 | 0.8845 |
| 0.3102 | 31.0 | 6076 | 0.4155 | 0.8809 |
| 0.1489 | 32.0 | 6272 | 0.4147 | 0.8953 |
| 0.3302 | 33.0 | 6468 | 0.4217 | 0.8953 |
| 0.3271 | 34.0 | 6664 | 0.3321 | 0.9097 |
| 0.3481 | 35.0 | 6860 | 0.3828 | 0.8809 |
| 0.3329 | 36.0 | 7056 | 0.4045 | 0.8700 |
| 0.2471 | 37.0 | 7252 | 0.5536 | 0.8664 |
| 0.2007 | 38.0 | 7448 | 0.3503 | 0.8881 |
| 0.7535 | 39.0 | 7644 | 0.4819 | 0.8809 |
| 0.1851 | 40.0 | 7840 | 0.3762 | 0.8773 |
| 0.2329 | 41.0 | 8036 | 0.4465 | 0.8845 |
| 0.2889 | 42.0 | 8232 | 0.4696 | 0.9061 |
| 0.1409 | 43.0 | 8428 | 0.4876 | 0.8809 |
| 0.2683 | 44.0 | 8624 | 0.6134 | 0.8809 |
| 0.3535 | 45.0 | 8820 | 0.4364 | 0.8809 |
| 0.1683 | 46.0 | 9016 | 0.4059 | 0.8881 |
| 0.43 | 47.0 | 9212 | 0.3955 | 0.8881 |
| 0.5702 | 48.0 | 9408 | 0.3898 | 0.8809 |
| 0.8043 | 49.0 | 9604 | 0.5963 | 0.8953 |
| 0.3742 | 50.0 | 9800 | 0.5273 | 0.8989 |
| 0.1026 | 51.0 | 9996 | 0.3999 | 0.8989 |
| 0.2357 | 52.0 | 10192 | 0.4724 | 0.8592 |
| 0.2612 | 53.0 | 10388 | 0.4169 | 0.8845 |
| 0.4747 | 54.0 | 10584 | 0.3973 | 0.8917 |
| 0.4943 | 55.0 | 10780 | 0.5156 | 0.9061 |
| 0.2296 | 56.0 | 10976 | 0.6397 | 0.8917 |
| 0.1789 | 57.0 | 11172 | 0.5098 | 0.8267 |
| 0.4355 | 58.0 | 11368 | 0.5032 | 0.8917 |
| 0.3957 | 59.0 | 11564 | 0.4205 | 0.9025 |
| 0.4806 | 60.0 | 11760 | 0.7011 | 0.8917 |
| 0.2356 | 61.0 | 11956 | 0.7832 | 0.8881 |
| 0.3865 | 62.0 | 12152 | 0.4622 | 0.8917 |
| 0.3504 | 63.0 | 12348 | 0.5889 | 0.8773 |
| 0.3766 | 64.0 | 12544 | 0.5246 | 0.8592 |
| 0.1336 | 65.0 | 12740 | 0.6462 | 0.8773 |
| 0.3275 | 66.0 | 12936 | 0.5013 | 0.8628 |
| 0.3765 | 67.0 | 13132 | 0.4857 | 0.8953 |
| 0.1622 | 68.0 | 13328 | 0.4918 | 0.8845 |
| 0.2291 | 69.0 | 13524 | 0.5734 | 0.8736 |
| 0.1786 | 70.0 | 13720 | 0.6691 | 0.8231 |
| 0.3451 | 71.0 | 13916 | 0.7318 | 0.8773 |
| 0.2313 | 72.0 | 14112 | 0.5041 | 0.8700 |
| 0.1984 | 73.0 | 14308 | 0.6518 | 0.7690 |
| 0.2345 | 74.0 | 14504 | 0.5280 | 0.8845 |
| 0.0851 | 75.0 | 14700 | 0.6302 | 0.8917 |
| 0.2234 | 76.0 | 14896 | 0.4843 | 0.8809 |
| 0.2266 | 77.0 | 15092 | 0.4900 | 0.8628 |
| 0.2735 | 78.0 | 15288 | 0.5249 | 0.8736 |
| 0.2442 | 79.0 | 15484 | 0.5061 | 0.8917 |
| 0.2246 | 80.0 | 15680 | 0.4810 | 0.8664 |
| 0.3557 | 81.0 | 15876 | 0.6420 | 0.8123 |
| 0.2017 | 82.0 | 16072 | 0.5158 | 0.8845 |
| 0.249 | 83.0 | 16268 | 0.4364 | 0.9025 |
| 0.2566 | 84.0 | 16464 | 0.5507 | 0.8736 |
| 0.1012 | 85.0 | 16660 | 0.4728 | 0.8845 |
| 0.1972 | 86.0 | 16856 | 0.5746 | 0.8809 |
| 0.7922 | 87.0 | 17052 | 0.5262 | 0.8628 |
| 0.1229 | 88.0 | 17248 | 0.6293 | 0.8845 |
| 0.0248 | 89.0 | 17444 | 0.6193 | 0.8881 |
| 0.0925 | 90.0 | 17640 | 0.4755 | 0.8700 |
| 0.1968 | 91.0 | 17836 | 0.5528 | 0.8700 |
| 0.1694 | 92.0 | 18032 | 0.4338 | 0.8953 |
| 0.2083 | 93.0 | 18228 | 1.1286 | 0.8809 |
| 0.3666 | 94.0 | 18424 | 0.6879 | 0.8267 |
| 0.1358 | 95.0 | 18620 | 0.5071 | 0.8881 |
| 0.2247 | 96.0 | 18816 | 0.5941 | 0.8520 |
| 0.2682 | 97.0 | 19012 | 0.5219 | 0.8592 |
| 0.1762 | 98.0 | 19208 | 0.6929 | 0.8520 |
| 0.2368 | 99.0 | 19404 | 0.5324 | 0.8845 |
| 0.1268 | 100.0 | 19600 | 0.6160 | 0.8881 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"fermentado",
"hongo",
"insecto",
"insufi_fermen",
"pizarroso",
"violeta"
] |
CristianR8/vit-base-cocoa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cocoa
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the SemilleroCV/Cocoa-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2061
- Accuracy: 0.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.3733 | 1.0 | 196 | 0.9025 | 0.3558 |
| 0.3727 | 2.0 | 392 | 0.8989 | 0.4098 |
| 0.3901 | 3.0 | 588 | 0.8989 | 0.2668 |
| 0.3421 | 4.0 | 784 | 0.9170 | 0.2612 |
| 0.2703 | 5.0 | 980 | 0.9278 | 0.2061 |
| 0.1734 | 6.0 | 1176 | 0.9278 | 0.2568 |
| 0.1385 | 7.0 | 1372 | 0.9206 | 0.3242 |
| 0.3237 | 8.0 | 1568 | 0.9386 | 0.2922 |
| 0.236 | 9.0 | 1764 | 0.9386 | 0.3044 |
| 0.2124 | 10.0 | 1960 | 0.9061 | 0.3848 |
| 0.0454 | 11.0 | 2156 | 0.9350 | 0.3527 |
| 0.0756 | 12.0 | 2352 | 0.9350 | 0.2844 |
| 0.0605 | 13.0 | 2548 | 0.9314 | 0.3077 |
| 0.0214 | 14.0 | 2744 | 0.9025 | 0.6295 |
| 0.1816 | 15.0 | 2940 | 0.9386 | 0.2996 |
| 0.0338 | 16.0 | 3136 | 0.9278 | 0.3597 |
| 0.2136 | 17.0 | 3332 | 0.9314 | 0.4070 |
| 0.188 | 18.0 | 3528 | 0.9458 | 0.3532 |
| 0.0539 | 19.0 | 3724 | 0.9386 | 0.3843 |
| 0.0992 | 20.0 | 3920 | 0.9422 | 0.3904 |
| 0.0019 | 21.0 | 4116 | 0.9458 | 0.3732 |
| 0.0348 | 22.0 | 4312 | 0.9386 | 0.4021 |
| 0.0823 | 23.0 | 4508 | 0.9350 | 0.4217 |
| 0.1125 | 24.0 | 4704 | 0.9097 | 0.4704 |
| 0.0173 | 25.0 | 4900 | 0.9350 | 0.3700 |
| 0.0442 | 26.0 | 5096 | 0.9314 | 0.3725 |
| 0.0009 | 27.0 | 5292 | 0.9278 | 0.4819 |
| 0.0087 | 28.0 | 5488 | 0.9170 | 0.6492 |
| 0.0021 | 29.0 | 5684 | 0.9242 | 0.5297 |
| 0.2552 | 30.0 | 5880 | 0.9314 | 0.4482 |
| 0.0154 | 31.0 | 6076 | 0.9242 | 0.6075 |
| 0.0009 | 32.0 | 6272 | 0.9350 | 0.4101 |
| 0.1626 | 33.0 | 6468 | 0.9350 | 0.4653 |
| 0.0276 | 34.0 | 6664 | 0.9386 | 0.4174 |
| 0.0139 | 35.0 | 6860 | 0.9422 | 0.3992 |
| 0.0023 | 36.0 | 7056 | 0.9170 | 0.6972 |
| 0.1264 | 37.0 | 7252 | 0.9314 | 0.4980 |
| 0.0113 | 38.0 | 7448 | 0.9170 | 0.7154 |
| 0.0694 | 39.0 | 7644 | 0.9242 | 0.5443 |
| 0.0976 | 40.0 | 7840 | 0.9350 | 0.3852 |
| 0.1191 | 41.0 | 8036 | 0.9242 | 0.5398 |
| 0.1249 | 42.0 | 8232 | 0.9170 | 0.6197 |
| 0.0002 | 43.0 | 8428 | 0.9134 | 0.6967 |
| 0.1163 | 44.0 | 8624 | 0.9242 | 0.5697 |
| 0.0201 | 45.0 | 8820 | 0.9134 | 0.7221 |
| 0.0003 | 46.0 | 9016 | 0.9314 | 0.5253 |
| 0.0224 | 47.0 | 9212 | 0.9495 | 0.3817 |
| 0.0183 | 48.0 | 9408 | 0.9242 | 0.4966 |
| 0.0077 | 49.0 | 9604 | 0.9458 | 0.4349 |
| 0.0083 | 50.0 | 9800 | 0.9242 | 0.5191 |
| 0.0571 | 51.0 | 9996 | 0.9206 | 0.5826 |
| 0.0583 | 52.0 | 10192 | 0.9170 | 0.5335 |
| 0.0019 | 53.0 | 10388 | 0.9206 | 0.5843 |
| 0.0044 | 54.0 | 10584 | 0.9206 | 0.5895 |
| 0.0065 | 55.0 | 10780 | 0.9350 | 0.4487 |
| 0.0126 | 56.0 | 10976 | 0.9314 | 0.6221 |
| 0.0093 | 57.0 | 11172 | 0.9314 | 0.5138 |
| 0.0004 | 58.0 | 11368 | 0.9314 | 0.5162 |
| 0.0002 | 59.0 | 11564 | 0.9350 | 0.4514 |
| 0.1463 | 60.0 | 11760 | 0.9386 | 0.4744 |
| 0.0001 | 61.0 | 11956 | 0.9314 | 0.5338 |
| 0.0006 | 62.0 | 12152 | 0.9278 | 0.5788 |
| 0.0269 | 63.0 | 12348 | 0.9278 | 0.5500 |
| 0.1 | 64.0 | 12544 | 0.9206 | 0.6467 |
| 0.0004 | 65.0 | 12740 | 0.9242 | 0.5828 |
| 0.0001 | 66.0 | 12936 | 0.9314 | 0.5283 |
| 0.0001 | 67.0 | 13132 | 0.9206 | 0.6212 |
| 0.0002 | 68.0 | 13328 | 0.9242 | 0.4973 |
| 0.0058 | 69.0 | 13524 | 0.9278 | 0.5021 |
| 0.0605 | 70.0 | 13720 | 0.9170 | 0.6982 |
| 0.0006 | 71.0 | 13916 | 0.9350 | 0.4602 |
| 0.0021 | 72.0 | 14112 | 0.9314 | 0.5595 |
| 0.0004 | 73.0 | 14308 | 0.9386 | 0.4366 |
| 0.0124 | 74.0 | 14504 | 0.9134 | 0.7612 |
| 0.0284 | 75.0 | 14700 | 0.9206 | 0.6054 |
| 0.0001 | 76.0 | 14896 | 0.9242 | 0.5922 |
| 0.0119 | 77.0 | 15092 | 0.9242 | 0.5496 |
| 0.0006 | 78.0 | 15288 | 0.9206 | 0.6327 |
| 0.0711 | 79.0 | 15484 | 0.9386 | 0.5177 |
| 0.0001 | 80.0 | 15680 | 0.9134 | 0.7391 |
| 0.0985 | 81.0 | 15876 | 0.9242 | 0.5683 |
| 0.0001 | 82.0 | 16072 | 0.9206 | 0.6106 |
| 0.0 | 83.0 | 16268 | 0.9242 | 0.6235 |
| 0.0006 | 84.0 | 16464 | 0.9061 | 0.7914 |
| 0.0001 | 85.0 | 16660 | 0.9314 | 0.5649 |
| 0.0 | 86.0 | 16856 | 0.9350 | 0.5512 |
| 0.066 | 87.0 | 17052 | 0.9350 | 0.5473 |
| 0.0189 | 88.0 | 17248 | 0.9386 | 0.4866 |
| 0.0 | 89.0 | 17444 | 0.9386 | 0.5136 |
| 0.0001 | 90.0 | 17640 | 0.9350 | 0.5246 |
| 0.0001 | 91.0 | 17836 | 0.9314 | 0.5626 |
| 0.0037 | 92.0 | 18032 | 0.9350 | 0.5335 |
| 0.0999 | 93.0 | 18228 | 0.9242 | 0.6357 |
| 0.1124 | 94.0 | 18424 | 0.9278 | 0.5905 |
| 0.0175 | 95.0 | 18620 | 0.9206 | 0.6618 |
| 0.0001 | 96.0 | 18816 | 0.9386 | 0.5588 |
| 0.0259 | 97.0 | 19012 | 0.9350 | 0.5549 |
| 0.0001 | 98.0 | 19208 | 0.9350 | 0.5599 |
| 0.0285 | 99.0 | 19404 | 0.9350 | 0.5517 |
| 0.003 | 100.0 | 19600 | 0.9350 | 0.5570 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"fermentado",
"hongo",
"insecto",
"insufi_fermen",
"pizarroso",
"violeta"
] |
CristianR8/resnet50-cocoa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet50-cocoa
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the SemilleroCV/Cocoa-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3381
- Accuracy: 0.8917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3793 | 1.0 | 196 | 1.4452 | 0.8628 |
| 0.9417 | 2.0 | 392 | 1.0832 | 0.8628 |
| 0.8546 | 3.0 | 588 | 0.7324 | 0.8628 |
| 0.6067 | 4.0 | 784 | 0.5761 | 0.8628 |
| 0.5583 | 5.0 | 980 | 0.5221 | 0.8628 |
| 0.6819 | 6.0 | 1176 | 0.4618 | 0.8628 |
| 0.4154 | 7.0 | 1372 | 0.4545 | 0.8628 |
| 0.4997 | 8.0 | 1568 | 0.4556 | 0.8628 |
| 0.6623 | 9.0 | 1764 | 0.4483 | 0.8628 |
| 0.8141 | 10.0 | 1960 | 0.4494 | 0.8628 |
| 0.5514 | 11.0 | 2156 | 0.4437 | 0.8628 |
| 0.6831 | 12.0 | 2352 | 0.4407 | 0.8664 |
| 0.2799 | 13.0 | 2548 | 0.4459 | 0.8700 |
| 0.451 | 14.0 | 2744 | 0.4313 | 0.8809 |
| 0.3901 | 15.0 | 2940 | 0.4340 | 0.8845 |
| 0.4778 | 16.0 | 3136 | 0.4219 | 0.8845 |
| 0.5531 | 17.0 | 3332 | 0.4304 | 0.8845 |
| 0.4904 | 18.0 | 3528 | 0.4429 | 0.8845 |
| 0.5398 | 19.0 | 3724 | 0.4144 | 0.8917 |
| 0.8024 | 20.0 | 3920 | 0.4253 | 0.8881 |
| 0.7022 | 21.0 | 4116 | 0.4232 | 0.8917 |
| 0.3868 | 22.0 | 4312 | 0.4167 | 0.8917 |
| 0.4075 | 23.0 | 4508 | 0.3917 | 0.8917 |
| 0.3873 | 24.0 | 4704 | 0.4269 | 0.8881 |
| 0.2382 | 25.0 | 4900 | 0.3913 | 0.8845 |
| 0.6525 | 26.0 | 5096 | 0.3949 | 0.8881 |
| 0.3207 | 27.0 | 5292 | 0.3967 | 0.8881 |
| 0.4569 | 28.0 | 5488 | 0.3901 | 0.8845 |
| 0.6184 | 29.0 | 5684 | 0.4114 | 0.8917 |
| 0.6055 | 30.0 | 5880 | 0.4342 | 0.8881 |
| 0.47 | 31.0 | 6076 | 0.4071 | 0.8917 |
| 0.3507 | 32.0 | 6272 | 0.3838 | 0.8881 |
| 0.4888 | 33.0 | 6468 | 0.4006 | 0.8881 |
| 0.4276 | 34.0 | 6664 | 0.3909 | 0.8881 |
| 0.5371 | 35.0 | 6860 | 0.4238 | 0.8917 |
| 0.4826 | 36.0 | 7056 | 0.3843 | 0.8917 |
| 0.5119 | 37.0 | 7252 | 0.3747 | 0.8845 |
| 0.4192 | 38.0 | 7448 | 0.4232 | 0.8881 |
| 1.1545 | 39.0 | 7644 | 0.4415 | 0.8881 |
| 0.3206 | 40.0 | 7840 | 0.3937 | 0.8881 |
| 0.3464 | 41.0 | 8036 | 0.3678 | 0.8881 |
| 0.4016 | 42.0 | 8232 | 0.3849 | 0.8881 |
| 0.2037 | 43.0 | 8428 | 0.3487 | 0.8881 |
| 0.3795 | 44.0 | 8624 | 0.4298 | 0.8881 |
| 0.403 | 45.0 | 8820 | 0.3966 | 0.8881 |
| 0.2754 | 46.0 | 9016 | 0.3785 | 0.8845 |
| 0.5228 | 47.0 | 9212 | 0.4117 | 0.8881 |
| 0.7263 | 48.0 | 9408 | 0.3726 | 0.8845 |
| 0.8995 | 49.0 | 9604 | 0.4559 | 0.8917 |
| 0.6844 | 50.0 | 9800 | 0.4164 | 0.8881 |
| 0.2734 | 51.0 | 9996 | 0.3862 | 0.8881 |
| 0.4179 | 52.0 | 10192 | 0.4386 | 0.8917 |
| 0.3354 | 53.0 | 10388 | 0.3949 | 0.8881 |
| 0.7031 | 54.0 | 10584 | 0.3910 | 0.8881 |
| 0.586 | 55.0 | 10780 | 0.4216 | 0.8881 |
| 0.3601 | 56.0 | 10976 | 0.4545 | 0.8881 |
| 0.362 | 57.0 | 11172 | 0.3760 | 0.8845 |
| 0.6132 | 58.0 | 11368 | 0.4258 | 0.8881 |
| 0.5605 | 59.0 | 11564 | 0.3972 | 0.8881 |
| 0.5071 | 60.0 | 11760 | 0.3873 | 0.8917 |
| 0.458 | 61.0 | 11956 | 0.4098 | 0.8881 |
| 0.4401 | 62.0 | 12152 | 0.3859 | 0.8845 |
| 0.5439 | 63.0 | 12348 | 0.4142 | 0.8917 |
| 0.6099 | 64.0 | 12544 | 0.3970 | 0.8881 |
| 0.2749 | 65.0 | 12740 | 0.3656 | 0.8809 |
| 0.581 | 66.0 | 12936 | 0.4203 | 0.8881 |
| 0.6009 | 67.0 | 13132 | 0.4074 | 0.8917 |
| 0.2388 | 68.0 | 13328 | 0.3594 | 0.8845 |
| 0.6006 | 69.0 | 13524 | 0.4045 | 0.8845 |
| 0.388 | 70.0 | 13720 | 0.3717 | 0.8881 |
| 0.552 | 71.0 | 13916 | 0.4239 | 0.8881 |
| 0.3875 | 72.0 | 14112 | 0.3731 | 0.8881 |
| 0.3105 | 73.0 | 14308 | 0.3434 | 0.8845 |
| 0.4627 | 74.0 | 14504 | 0.3946 | 0.8881 |
| 0.2931 | 75.0 | 14700 | 0.3950 | 0.8845 |
| 0.4639 | 76.0 | 14896 | 0.3875 | 0.8881 |
| 0.3534 | 77.0 | 15092 | 0.4009 | 0.8881 |
| 0.3175 | 78.0 | 15288 | 0.4109 | 0.8881 |
| 0.5334 | 79.0 | 15484 | 0.3918 | 0.8881 |
| 0.4827 | 80.0 | 15680 | 0.3807 | 0.8881 |
| 0.5162 | 81.0 | 15876 | 0.3624 | 0.8845 |
| 0.4377 | 82.0 | 16072 | 0.3729 | 0.8881 |
| 0.4487 | 83.0 | 16268 | 0.3981 | 0.8917 |
| 0.5057 | 84.0 | 16464 | 0.3995 | 0.8917 |
| 0.3421 | 85.0 | 16660 | 0.3554 | 0.8881 |
| 0.4083 | 86.0 | 16856 | 0.3634 | 0.8845 |
| 0.7634 | 87.0 | 17052 | 0.3970 | 0.8881 |
| 0.2588 | 88.0 | 17248 | 0.4121 | 0.8917 |
| 0.1584 | 89.0 | 17444 | 0.3711 | 0.8881 |
| 0.2643 | 90.0 | 17640 | 0.3743 | 0.8881 |
| 0.2771 | 91.0 | 17836 | 0.3726 | 0.8881 |
| 0.336 | 92.0 | 18032 | 0.3758 | 0.8845 |
| 0.3283 | 93.0 | 18228 | 0.4397 | 0.8917 |
| 0.7224 | 94.0 | 18424 | 0.3869 | 0.8917 |
| 0.1575 | 95.0 | 18620 | 0.3381 | 0.8917 |
| 0.4062 | 96.0 | 18816 | 0.3684 | 0.8845 |
| 0.3849 | 97.0 | 19012 | 0.3887 | 0.8881 |
| 0.2755 | 98.0 | 19208 | 0.3725 | 0.8881 |
| 0.4952 | 99.0 | 19404 | 0.4137 | 0.8917 |
| 0.3807 | 100.0 | 19600 | 0.3923 | 0.8881 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"fermentado",
"hongo",
"insecto",
"insufi_fermen",
"pizarroso",
"violeta"
] |
nadahh/APTOS2019DetectionMultiLabelNumericalviaLVMConvNextV2
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
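Pending an official snippet, the sketch below is one plausible way to query this checkpoint, assuming it exposes a standard image-classification head. The image path is a placeholder, and since the repository name mentions multi-label training, per-class sigmoid scores may be more appropriate than the softmax shown here:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "nadahh/APTOS2019DetectionMultiLabelNumericalviaLVMConvNextV2"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("fundus.jpg")  # placeholder path to a retinal image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax assumes single-label grading over the classes 0-4;
# use torch.sigmoid(logits) instead if the head was trained multi-label.
probs = torch.softmax(logits, dim=-1)[0]
pred = probs.argmax().item()
print(model.config.id2label[pred], probs[pred].item())
```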
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"0",
"1",
"2",
"3",
"4"
] |
Towen/vit-base-patch16-224-in21k-finetuned-earlystop
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-earlystop
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1972
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5989 | 0.9816 | 40 | 0.6929 | 0.5 |
| 0.3542 | 1.9877 | 81 | 0.5951 | 0.6875 |
| 0.2495 | 2.9939 | 122 | 0.5182 | 0.75 |
| 0.1553 | 4.0 | 163 | 0.7023 | 0.625 |
| 0.1806 | 4.9816 | 203 | 0.3825 | 0.8125 |
| 0.1509 | 5.9877 | 244 | 0.1972 | 0.9375 |
| 0.1771 | 6.9939 | 285 | 0.6752 | 0.625 |
| 0.1372 | 8.0 | 326 | 0.4901 | 0.6875 |
| 0.1698 | 8.9816 | 366 | 0.2187 | 0.875 |
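Training stopped after epoch ~9 of the configured 20, consistent with early stopping on the validation metric once it plateaued after the epoch-6 best (0.1972 loss, 0.9375 accuracy). A minimal sketch of how such a run is typically wired up, with placeholder names and a patience value that is an assumption, not taken from this card:

```python
from transformers import EarlyStoppingCallback, TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-finetuned-earlystop",  # placeholder
    num_train_epochs=20,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,           # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
early_stop = EarlyStoppingCallback(early_stopping_patience=3)  # assumed patience
# Pass `callbacks=[early_stop]` to the Trainer along with the model and datasets.
```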
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"normal",
"pneumonia"
] |
nttwt1597/ViT_Blood_test_ckpt_3582
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-cifar-10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9675
- Accuracy: 0.1471
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| No log | 1.0 | 398 | 0.1078 | 2.4878 |
| 2.6367 | 2.0 | 796 | 0.1225 | 2.2750 |
| 2.0748 | 3.0 | 1194 | 0.1471 | 2.1435 |
| 1.9035 | 4.0 | 1592 | 0.1225 | 2.0770 |
| 1.9035 | 5.0 | 1990 | 0.1422 | 2.0976 |
| 1.8217 | 6.0 | 2388 | 0.1618 | 1.9768 |
| 1.7998 | 7.0 | 2786 | 0.1275 | 2.0803 |
| 1.7268 | 8.0 | 3184 | 0.1569 | 1.9141 |
| 1.6826 | 9.0 | 3582 | 0.2010 | 1.7059 |
| 1.6826 | 10.0 | 3980 | 0.1127 | 2.0650 |
| 1.6642 | 11.0 | 4378 | 0.1520 | 1.9643 |
| 1.6267 | 12.0 | 4776 | 0.1471 | 1.9675 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"6.5",
"3.5",
"2",
"21",
"14.5",
"21.5",
"8",
"17.5",
"4.5",
"23.5",
"16.5",
"17",
"1",
"22.5",
"24.5",
"11.5",
"23",
"10.5",
"19",
"18.5",
"12.5",
"20.5",
"15.5",
"11",
"8.5",
"25",
"22",
"clot",
"16",
"12",
"10",
"15",
"6",
"20",
"0.5",
"9",
"7.5",
"5.5",
"3",
"5",
"7",
"13.5",
"1.5",
"24",
"18",
"2.5",
"19.5",
"13",
"4",
"14",
"9.5"
] |
priyamarwaha/vit-base-v1-eval-epoch-maxgrad-decay-cosine
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-v1-eval-epoch-maxgrad-decay-cosine
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2420
- Accuracy: 0.7032
## Model description
Classifies images of the 14 highest mountains in the world.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0001 | 0.9903 | 51 | 1.0182 | 0.7898 |
| 0.0027 | 2.0 | 103 | 1.4837 | 0.6688 |
| 0.0076 | 2.9903 | 154 | 1.2528 | 0.7420 |
| 0.0001 | 4.0 | 206 | 1.2986 | 0.7325 |
| 0.0007 | 4.9903 | 257 | 1.2049 | 0.7261 |
| 0.0001 | 6.0 | 309 | 1.1404 | 0.7707 |
| 0.0 | 6.9903 | 360 | 1.1531 | 0.7675 |
| 0.0 | 8.0 | 412 | 1.1605 | 0.7643 |
| 0.0 | 8.9903 | 463 | 1.1647 | 0.7643 |
| 0.0 | 10.0 | 515 | 1.1668 | 0.7675 |
| 0.0 | 10.9903 | 566 | 1.1690 | 0.7707 |
| 0.0 | 12.0 | 618 | 1.1702 | 0.7739 |
| 0.0 | 12.9903 | 669 | 1.1707 | 0.7739 |
| 0.0 | 14.0 | 721 | 1.1711 | 0.7739 |
| 0.0 | 14.8544 | 765 | 1.1710 | 0.7739 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"annapurna",
"broad peak",
"manaslu",
"mount everest",
"nanga parbat",
"shishapangma",
"cho oyu",
"dhaulagiri",
"gasherbrum i",
"gasherbrum ii",
"k2",
"kangchenjunga",
"lhotse",
"makalu"
] |
LNTTushar/vit-Facial-Expression-Recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2650
- eval_accuracy: 0.9142
- eval_runtime: 2876.3885
- eval_samples_per_second: 4.277
- eval_steps_per_second: 0.134
- epoch: 0
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
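The cosine schedule with 1,000 warmup steps can be reproduced with `get_cosine_schedule_with_warmup`; the sketch below uses stand-in parameters and a placeholder total step count:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in model parameters
optimizer = torch.optim.AdamW(params, lr=3e-5, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1000,
    num_training_steps=10_000,  # placeholder: total optimizer steps for 3 epochs
)
```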
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cpu
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angry",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
KhunPop/GardeningImage
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
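In the meantime, a minimal sketch, assuming this is a standard 🤗 image-classification checkpoint; `"garden.jpg"` is a placeholder path:

```python
from transformers import pipeline

# Minimal sketch; "garden.jpg" is a placeholder for an input image.
classifier = pipeline("image-classification", model="KhunPop/GardeningImage")
print(classifier("garden.jpg"))
```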
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
nadahh/APTOS2019DetectionMultiLabelNumericalviaLVMcConvnextV2
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
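In the meantime, a minimal sketch, assuming a standard 🤗 image-classification checkpoint. The sigmoid readout is only a guess from the "MultiLabel" repository name, and `"fundus.png"` is a placeholder path:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "nadahh/APTOS2019DetectionMultiLabelNumericalviaLVMcConvnextV2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("fundus.png")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Sigmoid readout is an assumption based on the "MultiLabel" repository name;
# use a softmax instead if the head was trained single-label.
probs = torch.sigmoid(logits)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```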
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"0",
"1",
"2",
"3",
"4"
] |
DigitalPath/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0764
- Accuracy: 0.9781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2503 | 1.0 | 190 | 0.1109 | 0.9630 |
| 0.1804 | 2.0 | 380 | 0.0764 | 0.9781 |
| 0.1352 | 3.0 | 570 | 0.0684 | 0.9774 |
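As a back-of-the-envelope check (our inference from the numbers above, not a figure stated by the authors), the step counts imply a training set of roughly 24k images:

```python
# Derived from the hyperparameters and the table above; approximate, since the
# final accumulation step of an epoch may be partial.
effective_batch_size = 32 * 4   # train_batch_size x gradient_accumulation_steps
steps_per_epoch = 190           # steps at epoch 1.0 in the table
print(steps_per_epoch * effective_batch_size)  # ~24,320 training images
```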
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.11.0
- Tokenizers 0.19.1
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
FA24-CS462-Group-26/cnn_model
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
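In the meantime, a minimal sketch, assuming this checkpoint loads as a standard 🤗 image-classification model; `"example.jpg"` is a placeholder path:

```python
from transformers import pipeline

# Minimal sketch; "example.jpg" is a placeholder for an input image.
classifier = pipeline("image-classification", model="FA24-CS462-Group-26/cnn_model")
print(classifier("example.jpg", top_k=5))
```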
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
ArtiSikhwal/train_dir
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_dir
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2398
- Accuracy: 0.9085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
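For reference, a minimal sketch of how the hyperparameters above map onto 🤗 `TrainingArguments`; the `output_dir` is a placeholder and evaluation settings are omitted, since neither is specified by this card:
```python
from transformers import TrainingArguments

# Sketch only: output_dir is a placeholder, not taken from this card.
training_args = TrainingArguments(
    output_dir="train_dir",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```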
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9980 | 246 | 0.2860 | 0.8900 |
| No log | 2.0 | 493 | 0.2773 | 0.8893 |
| 0.2997 | 2.9980 | 739 | 0.2486 | 0.9049 |
| 0.2997 | 3.9919 | 984 | 0.2398 | 0.9085 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"damage",
"no-damage"
] |
mango77/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3400
- Accuracy: 0.9337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 47 | 1.1302 | 0.8525 |
| No log | 2.0 | 94 | 0.5316 | 0.9093 |
| 1.4238 | 3.0 | 141 | 0.4060 | 0.9229 |
| 1.4238 | 4.0 | 188 | 0.3677 | 0.9215 |
| 0.3791 | 5.0 | 235 | 0.3565 | 0.9256 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.0
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
bmedeiros/vit-msn-base-finetuned-lf-invalidation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-base-finetuned-lf-invalidation
This model is a fine-tuned version of [facebook/vit-msn-base](https://huggingface.co/facebook/vit-msn-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2414
- Accuracy: 0.9234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 0.6512 | 0.6957 |
| 0.7053 | 1.92 | 12 | 0.6311 | 0.6809 |
| 0.7053 | 2.88 | 18 | 0.5361 | 0.7277 |
| 0.5163 | 4.0 | 25 | 0.3341 | 0.8681 |
| 0.3242 | 4.96 | 31 | 0.3167 | 0.8809 |
| 0.3242 | 5.92 | 37 | 0.3960 | 0.8191 |
| 0.2779 | 6.88 | 43 | 0.3818 | 0.8255 |
| 0.2348 | 8.0 | 50 | 0.5019 | 0.7362 |
| 0.2348 | 8.96 | 56 | 0.2944 | 0.8851 |
| 0.26 | 9.92 | 62 | 0.2414 | 0.9234 |
| 0.26 | 10.88 | 68 | 0.3664 | 0.8298 |
| 0.2778 | 12.0 | 75 | 0.2505 | 0.9043 |
| 0.2271 | 12.96 | 81 | 0.6277 | 0.6298 |
| 0.2271 | 13.92 | 87 | 0.2753 | 0.8745 |
| 0.2488 | 14.88 | 93 | 0.6249 | 0.6957 |
| 0.2729 | 16.0 | 100 | 0.5195 | 0.7149 |
| 0.2729 | 16.96 | 106 | 0.7984 | 0.5745 |
| 0.3261 | 17.92 | 112 | 0.4631 | 0.7723 |
| 0.3261 | 18.88 | 118 | 1.1010 | 0.5149 |
| 0.2212 | 20.0 | 125 | 0.2337 | 0.9170 |
| 0.2802 | 20.96 | 131 | 0.4638 | 0.7574 |
| 0.2802 | 21.92 | 137 | 0.3859 | 0.8362 |
| 0.2112 | 22.88 | 143 | 0.6708 | 0.6894 |
| 0.2231 | 24.0 | 150 | 0.3387 | 0.8681 |
| 0.2231 | 24.96 | 156 | 0.7045 | 0.6553 |
| 0.2037 | 25.92 | 162 | 0.3958 | 0.8277 |
| 0.2037 | 26.88 | 168 | 0.5082 | 0.7702 |
| 0.1845 | 28.0 | 175 | 0.5991 | 0.7234 |
| 0.1898 | 28.96 | 181 | 0.5108 | 0.7617 |
| 0.1898 | 29.92 | 187 | 0.2720 | 0.9085 |
| 0.2118 | 30.88 | 193 | 0.4936 | 0.7851 |
| 0.2097 | 32.0 | 200 | 0.3748 | 0.8404 |
| 0.2097 | 32.96 | 206 | 0.5048 | 0.7766 |
| 0.1704 | 33.92 | 212 | 0.4368 | 0.7957 |
| 0.1704 | 34.88 | 218 | 0.6959 | 0.6830 |
| 0.1962 | 36.0 | 225 | 1.0097 | 0.5957 |
| 0.1686 | 36.96 | 231 | 0.4992 | 0.7915 |
| 0.1686 | 37.92 | 237 | 0.5374 | 0.7574 |
| 0.1855 | 38.88 | 243 | 0.3710 | 0.8340 |
| 0.1528 | 40.0 | 250 | 0.3631 | 0.8447 |
| 0.1528 | 40.96 | 256 | 0.5589 | 0.7681 |
| 0.1523 | 41.92 | 262 | 0.5147 | 0.7809 |
| 0.1523 | 42.88 | 268 | 0.5299 | 0.7638 |
| 0.1709 | 44.0 | 275 | 0.5937 | 0.7447 |
| 0.1527 | 44.96 | 281 | 0.5969 | 0.7383 |
| 0.1527 | 45.92 | 287 | 0.6439 | 0.7255 |
| 0.1397 | 46.88 | 293 | 0.7721 | 0.6723 |
| 0.1538 | 48.0 | 300 | 0.5768 | 0.7702 |
| 0.1538 | 48.96 | 306 | 0.5801 | 0.7596 |
| 0.1466 | 49.92 | 312 | 0.5673 | 0.7574 |
| 0.1466 | 50.88 | 318 | 0.6469 | 0.7085 |
| 0.1302 | 52.0 | 325 | 0.7276 | 0.6957 |
| 0.1565 | 52.96 | 331 | 0.8247 | 0.6723 |
| 0.1565 | 53.92 | 337 | 0.4811 | 0.7979 |
| 0.1267 | 54.88 | 343 | 0.6373 | 0.7021 |
| 0.1424 | 56.0 | 350 | 0.7252 | 0.6723 |
| 0.1424 | 56.96 | 356 | 0.5697 | 0.7489 |
| 0.1053 | 57.92 | 362 | 0.7067 | 0.6957 |
| 0.1053 | 58.88 | 368 | 0.6577 | 0.7064 |
| 0.1301 | 60.0 | 375 | 0.5326 | 0.7745 |
| 0.0906 | 60.96 | 381 | 0.5468 | 0.7851 |
| 0.0906 | 61.92 | 387 | 0.4413 | 0.8277 |
| 0.0974 | 62.88 | 393 | 0.5479 | 0.7660 |
| 0.1133 | 64.0 | 400 | 0.7109 | 0.7043 |
| 0.1133 | 64.96 | 406 | 0.5735 | 0.7617 |
| 0.1189 | 65.92 | 412 | 0.4084 | 0.8298 |
| 0.1189 | 66.88 | 418 | 0.5716 | 0.7489 |
| 0.1064 | 68.0 | 425 | 0.5537 | 0.7553 |
| 0.1084 | 68.96 | 431 | 0.4569 | 0.8021 |
| 0.1084 | 69.92 | 437 | 0.5227 | 0.7617 |
| 0.1054 | 70.88 | 443 | 0.5995 | 0.7277 |
| 0.1005 | 72.0 | 450 | 0.5560 | 0.7638 |
| 0.1005 | 72.96 | 456 | 0.4550 | 0.8064 |
| 0.1028 | 73.92 | 462 | 0.4404 | 0.8234 |
| 0.1028 | 74.88 | 468 | 0.4761 | 0.7957 |
| 0.0917 | 76.0 | 475 | 0.5278 | 0.7681 |
| 0.1009 | 76.8 | 480 | 0.5346 | 0.7617 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"invalid",
"valid"
] |
Melo1512/vit-msn-small-wbc-blur-detector
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-blur-detector
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2045
- Accuracy: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3471 | 1.0 | 208 | 0.2960 | 0.8940 |
| 0.3113 | 2.0 | 416 | 0.2551 | 0.9088 |
| 0.3104 | 3.0 | 624 | 0.2106 | 0.9212 |
| 0.2855 | 4.0 | 832 | 0.2101 | 0.9221 |
| 0.2497 | 5.0 | 1040 | 0.2045 | 0.9251 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
bmedeiros/swinv2-tiny-patch4-window8-256-finetuned-lf-invalidation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-lf-invalidation
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7236
- Accuracy: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.5608 | 0.9796 | 12 | 0.6779 | 0.5532 |
| 0.5249 | 1.9592 | 24 | 0.5234 | 0.7617 |
| 0.4404 | 2.9388 | 36 | 0.5121 | 0.7766 |
| 0.3893 | 4.0 | 49 | 0.3981 | 0.8128 |
| 0.4083 | 4.9796 | 61 | 0.5870 | 0.6447 |
| 0.3725 | 5.9592 | 73 | 0.4991 | 0.7553 |
| 0.3909 | 6.9388 | 85 | 0.4062 | 0.8426 |
| 0.3799 | 8.0 | 98 | 0.5115 | 0.7574 |
| 0.3332 | 8.9796 | 110 | 0.4470 | 0.8277 |
| 0.3108 | 9.9592 | 122 | 0.3451 | 0.8681 |
| 0.308 | 10.9388 | 134 | 0.5822 | 0.7511 |
| 0.3699 | 12.0 | 147 | 0.4653 | 0.8106 |
| 0.2945 | 12.9796 | 159 | 0.4171 | 0.8426 |
| 0.2934 | 13.9592 | 171 | 0.4366 | 0.8234 |
| 0.2719 | 14.9388 | 183 | 0.5905 | 0.7638 |
| 0.3287 | 16.0 | 196 | 0.6654 | 0.7234 |
| 0.271 | 16.9796 | 208 | 0.6328 | 0.7447 |
| 0.3018 | 17.9592 | 220 | 0.4671 | 0.8255 |
| 0.2763 | 18.9388 | 232 | 0.6032 | 0.7468 |
| 0.2834 | 20.0 | 245 | 0.7016 | 0.7 |
| 0.2653 | 20.9796 | 257 | 0.4089 | 0.8468 |
| 0.2666 | 21.9592 | 269 | 0.7905 | 0.6447 |
| 0.2941 | 22.9388 | 281 | 0.6064 | 0.7553 |
| 0.2792 | 24.0 | 294 | 0.7444 | 0.7085 |
| 0.2019 | 24.9796 | 306 | 0.7595 | 0.7170 |
| 0.2552 | 25.9592 | 318 | 1.0296 | 0.5660 |
| 0.2451 | 26.9388 | 330 | 0.5999 | 0.7340 |
| 0.2126 | 28.0 | 343 | 0.5730 | 0.7660 |
| 0.2214 | 28.9796 | 355 | 0.9756 | 0.5809 |
| 0.2633 | 29.9592 | 367 | 0.4134 | 0.8404 |
| 0.2427 | 30.9388 | 379 | 0.8228 | 0.6362 |
| 0.2405 | 32.0 | 392 | 0.5279 | 0.7723 |
| 0.2078 | 32.9796 | 404 | 0.6581 | 0.6979 |
| 0.2201 | 33.9592 | 416 | 0.9132 | 0.5745 |
| 0.2481 | 34.9388 | 428 | 0.9526 | 0.5617 |
| 0.248 | 36.0 | 441 | 0.8979 | 0.5553 |
| 0.2209 | 36.9796 | 453 | 0.8351 | 0.5915 |
| 0.2253 | 37.9592 | 465 | 0.6744 | 0.6851 |
| 0.2447 | 38.9388 | 477 | 0.7794 | 0.6404 |
| 0.2049 | 40.0 | 490 | 0.6136 | 0.7468 |
| 0.1965 | 40.9796 | 502 | 0.6582 | 0.7340 |
| 0.2638 | 41.9592 | 514 | 0.7487 | 0.6766 |
| 0.2297 | 42.9388 | 526 | 0.7282 | 0.6702 |
| 0.2163 | 44.0 | 539 | 0.5713 | 0.7511 |
| 0.2016 | 44.9796 | 551 | 0.5994 | 0.7319 |
| 0.1739 | 45.9592 | 563 | 0.6865 | 0.6915 |
| 0.2497 | 46.9388 | 575 | 0.6901 | 0.6957 |
| 0.2293 | 48.0 | 588 | 0.7150 | 0.6851 |
| 0.2237 | 48.9796 | 600 | 0.7236 | 0.6766 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"invalid",
"valid"
] |
parasahuja23/vit-base-patch16-224-in21k-ISL-final-1.0
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-ISL-final-1.0
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0518
- Accuracy: 0.9938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1101 | 1.0 | 193 | 0.7079 | 0.9557 |
| 0.5682 | 2.0 | 386 | 0.2368 | 0.9787 |
| 0.4623 | 3.0 | 579 | 0.1313 | 0.9891 |
| 0.4103 | 4.0 | 772 | 0.1107 | 0.9840 |
| 0.3254 | 5.0 | 965 | 0.0713 | 0.9916 |
| 0.2949 | 6.0 | 1158 | 0.0570 | 0.9918 |
| 0.2638 | 7.0 | 1351 | 0.0518 | 0.9938 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"dance",
"hospital",
"i",
"india",
"love",
"university",
"who",
"you",
"happy",
"see",
"where"
] |
bmedeiros/vit-base-patch16-224-in21k-finetuned-lf-invalidation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-lf-invalidation
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1798
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6773 | 0.9796 | 12 | 0.6550 | 0.5681 |
| 0.5982 | 1.9592 | 24 | 0.5839 | 0.6362 |
| 0.479 | 2.9388 | 36 | 0.4356 | 0.8894 |
| 0.3862 | 4.0 | 49 | 0.2807 | 0.9362 |
| 0.2498 | 4.9796 | 61 | 0.2599 | 0.9128 |
| 0.2836 | 5.9592 | 73 | 0.5015 | 0.7745 |
| 0.2641 | 6.9388 | 85 | 0.5500 | 0.7340 |
| 0.2716 | 8.0 | 98 | 0.3083 | 0.8787 |
| 0.2382 | 8.9796 | 110 | 0.2885 | 0.8936 |
| 0.1985 | 9.9592 | 122 | 0.1798 | 0.9511 |
| 0.2174 | 10.9388 | 134 | 0.3060 | 0.8766 |
| 0.2372 | 12.0 | 147 | 0.3084 | 0.8702 |
| 0.2164 | 12.9796 | 159 | 0.2667 | 0.9021 |
| 0.2106 | 13.9592 | 171 | 0.3747 | 0.8447 |
| 0.1956 | 14.9388 | 183 | 0.5105 | 0.7851 |
| 0.2154 | 16.0 | 196 | 0.5683 | 0.7787 |
| 0.179 | 16.9796 | 208 | 0.4279 | 0.8340 |
| 0.2548 | 17.9592 | 220 | 0.6493 | 0.7404 |
| 0.236 | 18.9388 | 232 | 0.3860 | 0.8340 |
| 0.2121 | 20.0 | 245 | 0.5826 | 0.7766 |
| 0.1691 | 20.9796 | 257 | 0.3195 | 0.8638 |
| 0.1824 | 21.9592 | 269 | 0.3772 | 0.8404 |
| 0.1733 | 22.9388 | 281 | 0.5182 | 0.7936 |
| 0.1837 | 24.0 | 294 | 0.4924 | 0.8149 |
| 0.1274 | 24.9796 | 306 | 0.3895 | 0.8447 |
| 0.1415 | 25.9592 | 318 | 0.3662 | 0.8532 |
| 0.186 | 26.9388 | 330 | 0.4347 | 0.8447 |
| 0.1403 | 28.0 | 343 | 0.4490 | 0.8383 |
| 0.1635 | 28.9796 | 355 | 0.7771 | 0.7085 |
| 0.2135 | 29.9592 | 367 | 0.3503 | 0.8702 |
| 0.1456 | 30.9388 | 379 | 0.3815 | 0.8617 |
| 0.1634 | 32.0 | 392 | 0.2810 | 0.9 |
| 0.1308 | 32.9796 | 404 | 0.4643 | 0.8383 |
| 0.163 | 33.9592 | 416 | 0.3337 | 0.8787 |
| 0.1736 | 34.9388 | 428 | 0.4070 | 0.8553 |
| 0.1638 | 36.0 | 441 | 0.4142 | 0.8574 |
| 0.1488 | 36.9796 | 453 | 0.5039 | 0.8170 |
| 0.148 | 37.9592 | 465 | 0.5767 | 0.7745 |
| 0.1741 | 38.9388 | 477 | 0.4842 | 0.8255 |
| 0.1338 | 40.0 | 490 | 0.7236 | 0.7234 |
| 0.1302 | 40.9796 | 502 | 0.5295 | 0.8043 |
| 0.141 | 41.9592 | 514 | 0.5294 | 0.8085 |
| 0.1461 | 42.9388 | 526 | 0.5485 | 0.7979 |
| 0.1006 | 44.0 | 539 | 0.5453 | 0.7915 |
| 0.1317 | 44.9796 | 551 | 0.5930 | 0.7681 |
| 0.1069 | 45.9592 | 563 | 0.4976 | 0.8170 |
| 0.1531 | 46.9388 | 575 | 0.5105 | 0.8064 |
| 0.155 | 48.0 | 588 | 0.6128 | 0.7638 |
| 0.1237 | 48.9796 | 600 | 0.6180 | 0.7617 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"invalid",
"valid"
] |
Melo1512/vit-msn-small-wbc-classifier-100
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-100
This model is a fine-tuned version of [Melo1512/vit-msn-small-wbc-blur-detector](https://huggingface.co/Melo1512/vit-msn-small-wbc-blur-detector) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2005
- Accuracy: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2356 | 1.0 | 208 | 0.2005 | 0.9272 |
| 0.2305 | 2.0 | 416 | 0.2259 | 0.9195 |
| 0.246 | 3.0 | 624 | 0.2097 | 0.9210 |
| 0.2585 | 4.0 | 832 | 0.2184 | 0.9180 |
| 0.2593 | 5.0 | 1040 | 0.2331 | 0.9171 |
| 0.2483 | 6.0 | 1248 | 0.2170 | 0.9198 |
| 0.268 | 7.0 | 1456 | 0.2228 | 0.9181 |
| 0.3112 | 8.0 | 1664 | 0.2361 | 0.9171 |
| 0.2679 | 9.0 | 1872 | 0.2273 | 0.9185 |
| 0.3099 | 10.0 | 2080 | 0.2303 | 0.9144 |
| 0.2749 | 11.0 | 2288 | 0.2658 | 0.9125 |
| 0.2475 | 12.0 | 2496 | 0.2247 | 0.9179 |
| 0.2338 | 13.0 | 2704 | 0.2333 | 0.9139 |
| 0.2731 | 14.0 | 2912 | 0.2295 | 0.9153 |
| 0.229 | 15.0 | 3120 | 0.2505 | 0.9138 |
| 0.2462 | 16.0 | 3328 | 0.2534 | 0.9137 |
| 0.2274 | 17.0 | 3536 | 0.2652 | 0.9079 |
| 0.2339 | 18.0 | 3744 | 0.2550 | 0.9153 |
| 0.2053 | 19.0 | 3952 | 0.2819 | 0.9106 |
| 0.2063 | 20.0 | 4160 | 0.2747 | 0.9129 |
| 0.1964 | 21.0 | 4368 | 0.2975 | 0.9118 |
| 0.1953 | 22.0 | 4576 | 0.2799 | 0.9145 |
| 0.1938 | 23.0 | 4784 | 0.3197 | 0.9100 |
| 0.1851 | 24.0 | 4992 | 0.3143 | 0.9138 |
| 0.1931 | 25.0 | 5200 | 0.3331 | 0.9125 |
| 0.1877 | 26.0 | 5408 | 0.3044 | 0.9110 |
| 0.177 | 27.0 | 5616 | 0.3271 | 0.9109 |
| 0.1529 | 28.0 | 5824 | 0.3382 | 0.9094 |
| 0.1684 | 29.0 | 6032 | 0.3415 | 0.9128 |
| 0.176 | 30.0 | 6240 | 0.3463 | 0.9095 |
| 0.1496 | 31.0 | 6448 | 0.3952 | 0.9136 |
| 0.1509 | 32.0 | 6656 | 0.3690 | 0.9121 |
| 0.1463 | 33.0 | 6864 | 0.3999 | 0.9094 |
| 0.1354 | 34.0 | 7072 | 0.3996 | 0.9135 |
| 0.1546 | 35.0 | 7280 | 0.3810 | 0.9116 |
| 0.1513 | 36.0 | 7488 | 0.3992 | 0.9121 |
| 0.115 | 37.0 | 7696 | 0.4295 | 0.9132 |
| 0.1479 | 38.0 | 7904 | 0.4363 | 0.9123 |
| 0.1455 | 39.0 | 8112 | 0.4220 | 0.9140 |
| 0.1353 | 40.0 | 8320 | 0.4112 | 0.9127 |
| 0.141 | 41.0 | 8528 | 0.4322 | 0.9139 |
| 0.1272 | 42.0 | 8736 | 0.4176 | 0.9119 |
| 0.1402 | 43.0 | 8944 | 0.4041 | 0.9108 |
| 0.1236 | 44.0 | 9152 | 0.4478 | 0.9095 |
| 0.1349 | 45.0 | 9360 | 0.4211 | 0.9112 |
| 0.1472 | 46.0 | 9568 | 0.4510 | 0.9113 |
| 0.1115 | 47.0 | 9776 | 0.4373 | 0.9119 |
| 0.1122 | 48.0 | 9984 | 0.4689 | 0.9129 |
| 0.1297 | 49.0 | 10192 | 0.4569 | 0.9140 |
| 0.1337 | 50.0 | 10400 | 0.4622 | 0.9111 |
| 0.1194 | 51.0 | 10608 | 0.4579 | 0.9151 |
| 0.1322 | 52.0 | 10816 | 0.4728 | 0.9104 |
| 0.1179 | 53.0 | 11024 | 0.4729 | 0.9125 |
| 0.1216 | 54.0 | 11232 | 0.5199 | 0.9114 |
| 0.1234 | 55.0 | 11440 | 0.4769 | 0.9135 |
| 0.1125 | 56.0 | 11648 | 0.4871 | 0.9118 |
| 0.1234 | 57.0 | 11856 | 0.4667 | 0.9146 |
| 0.1103 | 58.0 | 12064 | 0.4741 | 0.9119 |
| 0.1103 | 59.0 | 12272 | 0.4864 | 0.9129 |
| 0.1222 | 60.0 | 12480 | 0.4550 | 0.9143 |
| 0.127 | 61.0 | 12688 | 0.4919 | 0.9135 |
| 0.1117 | 62.0 | 12896 | 0.4946 | 0.9139 |
| 0.1078 | 63.0 | 13104 | 0.5040 | 0.9133 |
| 0.1127 | 64.0 | 13312 | 0.4804 | 0.9126 |
| 0.1122 | 65.0 | 13520 | 0.4997 | 0.9136 |
| 0.1089 | 66.0 | 13728 | 0.5134 | 0.9139 |
| 0.1179 | 67.0 | 13936 | 0.5246 | 0.9155 |
| 0.0934 | 68.0 | 14144 | 0.5158 | 0.9126 |
| 0.1011 | 69.0 | 14352 | 0.5361 | 0.9140 |
| 0.1063 | 70.0 | 14560 | 0.5326 | 0.9135 |
| 0.1021 | 71.0 | 14768 | 0.5151 | 0.9143 |
| 0.1007 | 72.0 | 14976 | 0.5390 | 0.9143 |
| 0.0946 | 73.0 | 15184 | 0.5256 | 0.9114 |
| 0.097 | 74.0 | 15392 | 0.5247 | 0.9135 |
| 0.0967 | 75.0 | 15600 | 0.5154 | 0.9144 |
| 0.0985 | 76.0 | 15808 | 0.5412 | 0.9154 |
| 0.0856 | 77.0 | 16016 | 0.5335 | 0.9148 |
| 0.103 | 78.0 | 16224 | 0.5210 | 0.9162 |
| 0.1033 | 79.0 | 16432 | 0.5165 | 0.9156 |
| 0.109 | 80.0 | 16640 | 0.5303 | 0.9150 |
| 0.0999 | 81.0 | 16848 | 0.5299 | 0.9158 |
| 0.0966 | 82.0 | 17056 | 0.5324 | 0.9167 |
| 0.0952 | 83.0 | 17264 | 0.5229 | 0.9168 |
| 0.1071 | 84.0 | 17472 | 0.5303 | 0.9176 |
| 0.0899 | 85.0 | 17680 | 0.5228 | 0.9160 |
| 0.0868 | 86.0 | 17888 | 0.5297 | 0.9149 |
| 0.1011 | 87.0 | 18096 | 0.5370 | 0.9156 |
| 0.0867 | 88.0 | 18304 | 0.5430 | 0.9158 |
| 0.0936 | 89.0 | 18512 | 0.5346 | 0.9165 |
| 0.0929 | 90.0 | 18720 | 0.5387 | 0.9163 |
| 0.0792 | 91.0 | 18928 | 0.5459 | 0.9150 |
| 0.0918 | 92.0 | 19136 | 0.5257 | 0.9165 |
| 0.0853 | 93.0 | 19344 | 0.5426 | 0.9155 |
| 0.0908 | 94.0 | 19552 | 0.5429 | 0.9153 |
| 0.0981 | 95.0 | 19760 | 0.5394 | 0.9155 |
| 0.0825 | 96.0 | 19968 | 0.5345 | 0.9168 |
| 0.0849 | 97.0 | 20176 | 0.5388 | 0.9164 |
| 0.0992 | 98.0 | 20384 | 0.5357 | 0.9168 |
| 0.0909 | 99.0 | 20592 | 0.5375 | 0.9167 |
| 0.0861 | 100.0 | 20800 | 0.5372 | 0.9166 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
ArtiSikhwal/headlight_11_12_2024_google_vit-base-patch16-224-in21k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# headlight_11_12_2024_google_vit-base-patch16-224-in21k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2520
- Accuracy: 0.9039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2762 | 1.0 | 1969 | 0.2746 | 0.8977 |
| 0.2598 | 2.0 | 3938 | 0.2588 | 0.9005 |
| 0.2462 | 3.0 | 5907 | 0.2547 | 0.9019 |
| 0.2371 | 4.0 | 7876 | 0.2520 | 0.9039 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"damage",
"no-damage"
] |
jnmrr/ds3-img-classification
|
# Document Classification Model
## Overview
This model is trained for document classification using the Document Image Transformer (DiT), a vision transformer pre-trained on document images.
## Model Details
* Architecture: Vision Transformer (Document Image Transformer, DiT)
* Tasks: Document Classification
* Training Framework: 🤗 Transformers
* Base Model: microsoft/dit-large
* Training Dataset Size: 32786
## Training Parameters
* Batch Size: 256
* Learning Rate: 0.001
* Number of Epochs: 90
* Mixed Precision: BF16
* Gradient Accumulation Steps: 2
* Weight Decay: 0.01
* Learning Rate Schedule: cosine_with_restarts (see the scheduler sketch after this list)
* Warmup Ratio: 0.1
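For readers unfamiliar with `cosine_with_restarts`, a minimal standalone sketch of the schedule using the 🤗 helper; the optimizer, step counts, and cycle count below are illustrative placeholders, not values from this training run:
```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

# A toy parameter stands in for the DiT model; step counts are illustrative.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-3, weight_decay=0.01)

num_training_steps = 5_760  # placeholder total number of update steps
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # warmup ratio 0.1
    num_training_steps=num_training_steps,
    num_cycles=1,  # number of restart cycles; not specified by this card
)
```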
## Training and Evaluation Metrics
### Training Metrics
* Loss: 0.1915
* Grad Norm: 1.3002
* Learning Rate: 0.0009
* Epoch: 26.4186
* Step: 1704.0000
### Evaluation Metrics
* Loss: 0.9457
* Accuracy: 0.7757
* Weighted F1: 0.7689
* Micro F1: 0.7757
* Macro F1: 0.7518 (the three F1 averaging modes are illustrated after this list)
* Weighted Recall: 0.7757
* Micro Recall: 0.7757
* Macro Recall: 0.7603
* Weighted Precision: 0.8023
* Micro Precision: 0.7757
* Macro Precision: 0.7941
* Runtime: 8.4106
* Samples Per Second: 433.1450
* Steps Per Second: 3.4480
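The F1 variants above differ only in how per-class scores are aggregated; a toy scikit-learn sketch (the labels are illustrative, not drawn from this evaluation):
```python
from sklearn.metrics import f1_score

# Toy labels to illustrate the three averaging modes reported above.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]
print(f1_score(y_true, y_pred, average="weighted"))  # weighted by class support
print(f1_score(y_true, y_pred, average="micro"))     # pooled TP/FP/FN counts
print(f1_score(y_true, y_pred, average="macro"))     # unweighted mean over classes
```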
## Usage
```python
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

# Load model and processor
processor = AutoImageProcessor.from_pretrained("jnmrr/ds3-img-classification")
model = AutoModelForImageClassification.from_pretrained("jnmrr/ds3-img-classification")
model.eval()

# Process an image (convert to RGB so single-channel scans also work)
image = Image.open("document.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Make prediction (no gradients needed at inference time)
with torch.no_grad():
    outputs = model(**inputs)
predicted_label = outputs.logits.argmax(-1).item()
```
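The snippet above returns an integer class index; to recover a human-readable document class and a confidence score, the index can be looked up in the model config. A short continuation of the same snippet:
```python
# Continuation of the snippet above (`model` and `outputs` already defined).
probs = outputs.logits.softmax(-1)
confidence, predicted_id = probs.max(-1)
print(model.config.id2label[predicted_id.item()], f"({confidence.item():.2%})")
```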
|
[
"informe_social",
"cargo_ingreso_mpe",
"acta_registro_personal",
"citacion_pnp",
"acta_lectura_derechos_ley30364",
"acta_intervencion_minpu",
"denuncia_policial",
"providencia_fiscal",
"consulta_medidas_proteccion",
"oficio_medicina_legal",
"oficio_evaluacion_psicologica",
"resolucion_judicial_audiencia",
"certificado_medico_legal",
"ficha_datos_sidpol",
"consulta_sucamec",
"informe_mimp",
"ficha_valoracion_riesgo_manual",
"oficio_atencion_integral",
"escrito_minpu",
"informe_psicologico",
"ficha_valoracion_riesgo_digital",
"informe_medico",
"croquis_domicilio",
"ficha_datos_reniec",
"escrito_pj",
"evidencia_chat",
"declaracion_minpu",
"notificacion_resolucion_judicial_audiencia",
"consulta_antecedentes_penales",
"notificacion_detencion_pnp",
"consulta_personas_sidpol",
"escrito_pnp",
"constancia_buen_trato",
"declaracion_pnp",
"escrito_mimp",
"acta_intervencion_pnp",
"constancia_notificacion",
"cargo_ingreso_sij",
"anexo_dni",
"hoja_blanco",
"consulta_requisitorias"
] |
gsandle92/vit-base-beans-demo-v5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0984
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1192 | 0.1764 | 100 | 0.1158 | 0.9692 |
| 0.0734 | 0.3527 | 200 | 0.1268 | 0.9702 |
| 0.0701 | 0.5291 | 300 | 0.1057 | 0.9673 |
| 0.1107 | 0.7055 | 400 | 0.1081 | 0.9722 |
| 0.0413 | 0.8818 | 500 | 0.0984 | 0.9762 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.4.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"negative",
"positive"
] |
hyelijah2/resnet-18
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
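Until the authors add an official snippet, a minimal hypothetical sketch, assuming the checkpoint loads with the standard image-classification auto classes and that `example.jpg` is a local image:
```python
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

# Hypothetical usage; the authors have not confirmed this inference recipe.
processor = AutoImageProcessor.from_pretrained("hyelijah2/resnet-18")
model = AutoModelForImageClassification.from_pretrained("hyelijah2/resnet-18")

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```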
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
platzi/platzi-vit-model-Yomin-Jaramillo
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-Yomin-Jaramillo
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0303
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
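As a hedged illustration of one intended use, the sketch below runs this checkpoint through the `transformers` image-classification pipeline. The repo id is taken from this card; `leaf.jpg` is a hypothetical local image, and the checkpoint is assumed to be available on the Hub.
```python
from transformers import pipeline

# Repo id from this card; "leaf.jpg" is a hypothetical image path (assumption).
classifier = pipeline(
    "image-classification",
    model="platzi/platzi-vit-model-Yomin-Jaramillo",
)
# Returns a list of {"label", "score"} dicts over the bean-disease labels below.
predictions = classifier("leaf.jpg")
print(predictions[0])
```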
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1424 | 3.8462 | 500 | 0.0303 | 0.9925 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
platzi/osvaldo_platzi_course-osvaldotrejo
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# osvaldo_platzi_course-osvaldotrejo
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0423
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a minimal `TrainingArguments` sketch follows the list:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 4
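A minimal sketch of how the values above might map onto `TrainingArguments`, assuming the standard `Trainer` API was used; `output_dir` is a placeholder, not taken from this card.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="osvaldo_platzi_course-osvaldotrejo",  # placeholder (assumption)
    learning_rate=2e-4,              # matches 0.0002 above
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```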
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1501 | 3.8462 | 500 | 0.0423 | 0.9925 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
minizhu/aesthetic-anime-v2
|
Aesthetic Shadow V2 is a 1.1B-parameter vision transformer designed to evaluate the quality of anime images. It accepts high-resolution 1024x1024 images as input and outputs a prediction score that quantifies the aesthetic appeal of the artwork, attending to fine details, proportions, and overall visual coherence in anime illustrations.
This is an improved version of the original shadowlilac/aesthetic-shadow model.
## Disclosure
This model is not intended to disparage any artist, and it may not output an accurate label for a given image. A potential use case is filtering low-quality images out of image datasets.
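As a hedged sketch of that filtering use case, the code below scores a single image with the image-classification pipeline and keeps it only if the `hq` probability clears a threshold. The 0.5 cutoff and the `frame.png` path are assumptions, not values recommended by the model authors.
```python
from transformers import pipeline

scorer = pipeline("image-classification", model="minizhu/aesthetic-anime-v2")

# "frame.png" is a hypothetical input; 0.5 is an assumed, tunable threshold.
scores = {pred["label"]: pred["score"] for pred in scorer("frame.png")}
keep = scores.get("hq", 0.0) >= 0.5
print("keep" if keep else "filter out")
```
In practice the threshold would be tuned against a held-out sample of the target dataset rather than fixed at 0.5.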
|
[
"hq",
"lq"
] |
vinaybabu/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1883
- Accuracy: 0.9418
## Model description
More information needed
## Intended uses & limitations
More information needed
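A minimal lower-level inference sketch, assuming the checkpoint is published on the Hub under this card's repo id; `pet.jpg` is a hypothetical image path.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "vinaybabu/vit-base-oxford-iiit-pets"  # from this card (Hub availability assumed)
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("pet.jpg").convert("RGB")  # hypothetical image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Highest-scoring class, mapped back to a breed name via the saved config.
print(model.config.id2label[logits.argmax(-1).item()])
```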
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3905 | 1.0 | 370 | 0.3049 | 0.9215 |
| 0.2057 | 2.0 | 740 | 0.2411 | 0.9296 |
| 0.165 | 3.0 | 1110 | 0.2202 | 0.9269 |
| 0.1345 | 4.0 | 1480 | 0.2145 | 0.9296 |
| 0.1364 | 5.0 | 1850 | 0.2141 | 0.9283 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
Pointer0111/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1700
- Accuracy: 0.9418
## Model description
More information needed
## Intended uses & limitations
More information needed
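One hedged way to inspect this checkpoint without downloading its weights is to load only the config, assuming the model is published on the Hub under this card's repo id.
```python
from transformers import AutoConfig

# Repo id from this card; Hub availability is an assumption.
config = AutoConfig.from_pretrained("Pointer0111/vit-base-oxford-iiit-pets")
print(config.num_labels)   # expected: 37 pet-breed classes (listed below)
print(config.id2label[0])  # label names as stored in the fine-tuned config
```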
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3646 | 1.0 | 370 | 0.2851 | 0.9378 |
| 0.225 | 2.0 | 740 | 0.2206 | 0.9432 |
| 0.1619 | 3.0 | 1110 | 0.1992 | 0.9459 |
| 0.1482 | 4.0 | 1480 | 0.1939 | 0.9445 |
| 0.1409 | 5.0 | 1850 | 0.1905 | 0.9459 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |