| model_id (string, lengths 7–105) | model_card (string, lengths 1–130k) | model_labels (list, lengths 2–80k) |
|---|---|---|
vision7111/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2031
- Accuracy: 0.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
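As a reference sketch only (the training script itself is not included here), these hyperparameters map onto the standard 🤗 `TrainingArguments` roughly as follows; the output directory is a placeholder:
```python
from transformers import TrainingArguments

# Sketch: values mirror the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="vit-base-oxford-iiit-pets",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```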
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3727 | 1.0 | 370 | 0.2756 | 0.9337 |
| 0.2145 | 2.0 | 740 | 0.2168 | 0.9378 |
| 0.1835 | 3.0 | 1110 | 0.1918 | 0.9459 |
| 0.147 | 4.0 | 1480 | 0.1857 | 0.9472 |
| 0.1315 | 5.0 | 1850 | 0.1818 | 0.9472 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
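For a quick usage illustration, the checkpoint can be queried with the standard image-classification pipeline; the image path below is a placeholder:
```python
from transformers import pipeline

# Minimal inference sketch; "my_pet.jpg" is a placeholder image path.
classifier = pipeline("image-classification", model="vision7111/vit-base-oxford-iiit-pets")
print(classifier("my_pet.jpg"))  # top predicted breeds with scores
```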
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
Melo1512/vit-msn-small-wbc-classifier-mono-V-all
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-mono-V-all
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1140
- Accuracy: 0.9585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
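Here the total train batch size comes from gradient accumulation rather than a larger per-device batch; a one-line sketch of the relationship, assuming a single device:
```python
# Effective (total) train batch size with gradient accumulation, single-device case.
train_batch_size = 64
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 256
```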
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1974 | 1.0 | 208 | 0.1642 | 0.9372 |
| 0.1589 | 2.0 | 416 | 0.1334 | 0.9508 |
| 0.134 | 3.0 | 624 | 0.1466 | 0.9431 |
| 0.1488 | 4.0 | 832 | 0.1155 | 0.9566 |
| 0.1169 | 5.0 | 1040 | 0.1140 | 0.9585 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"monocytes",
"others"
] |
ArtiSikhwal/headlight_12_12_2024_google_vit-base-patch16-224-in21k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# headlight_12_12_2024_google_vit-base-patch16-224-in21k
This model is a fine-tuned version of [ArtiSikhwal/headlight_11_12_2024_google_vit-base-patch16-224-in21k](https://huggingface.co/ArtiSikhwal/headlight_11_12_2024_google_vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2587
- Accuracy: 0.9015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9995 | 492 | 0.2682 | 0.8973 |
| 0.1998 | 1.9990 | 984 | 0.2701 | 0.8982 |
| 0.1988 | 2.9985 | 1476 | 0.2708 | 0.8974 |
| 0.1976 | 4.0 | 1969 | 0.2609 | 0.9013 |
| 0.2131 | 4.9995 | 2461 | 0.2584 | 0.9011 |
| 0.2169 | 5.9970 | 2952 | 0.2587 | 0.9015 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"damage",
"no-damage"
] |
Melo1512/vit-msn-small-wbc-classifier-lowlr-500
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-lowlr-500
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2021
- Accuracy: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.5458 | 1.0 | 208 | 1.6083 | 0.3335 |
| 1.5553 | 2.0 | 416 | 1.5834 | 0.3335 |
| 1.5144 | 3.0 | 624 | 1.5428 | 0.3335 |
| 1.4408 | 4.0 | 832 | 1.4865 | 0.3335 |
| 1.3689 | 5.0 | 1040 | 1.4168 | 0.3335 |
| 1.3224 | 6.0 | 1248 | 1.3352 | 0.3343 |
| 1.228 | 7.0 | 1456 | 1.2446 | 0.3462 |
| 1.1529 | 8.0 | 1664 | 1.1526 | 0.4247 |
| 1.0773 | 9.0 | 1872 | 1.0670 | 0.5535 |
| 1.0057 | 10.0 | 2080 | 1.0025 | 0.6327 |
| 0.9576 | 11.0 | 2288 | 0.9550 | 0.6800 |
| 0.9039 | 12.0 | 2496 | 0.9114 | 0.7124 |
| 0.8684 | 13.0 | 2704 | 0.8636 | 0.7392 |
| 0.8352 | 14.0 | 2912 | 0.8112 | 0.7623 |
| 0.782 | 15.0 | 3120 | 0.7614 | 0.7764 |
| 0.7359 | 16.0 | 3328 | 0.7145 | 0.7867 |
| 0.7044 | 17.0 | 3536 | 0.6702 | 0.7948 |
| 0.6568 | 18.0 | 3744 | 0.6311 | 0.8020 |
| 0.6296 | 19.0 | 3952 | 0.5987 | 0.8069 |
| 0.6315 | 20.0 | 4160 | 0.5693 | 0.8136 |
| 0.6146 | 21.0 | 4368 | 0.5437 | 0.8181 |
| 0.5632 | 22.0 | 4576 | 0.5216 | 0.8247 |
| 0.5705 | 23.0 | 4784 | 0.5022 | 0.8309 |
| 0.5503 | 24.0 | 4992 | 0.4832 | 0.8363 |
| 0.5541 | 25.0 | 5200 | 0.4666 | 0.8425 |
| 0.5263 | 26.0 | 5408 | 0.4504 | 0.8468 |
| 0.4947 | 27.0 | 5616 | 0.4357 | 0.8514 |
| 0.5 | 28.0 | 5824 | 0.4245 | 0.8547 |
| 0.4959 | 29.0 | 6032 | 0.4112 | 0.8582 |
| 0.4929 | 30.0 | 6240 | 0.4005 | 0.8609 |
| 0.4578 | 31.0 | 6448 | 0.3910 | 0.8628 |
| 0.4565 | 32.0 | 6656 | 0.3816 | 0.8654 |
| 0.4492 | 33.0 | 6864 | 0.3737 | 0.8684 |
| 0.4319 | 34.0 | 7072 | 0.3654 | 0.8705 |
| 0.4395 | 35.0 | 7280 | 0.3587 | 0.8722 |
| 0.4436 | 36.0 | 7488 | 0.3498 | 0.8751 |
| 0.4196 | 37.0 | 7696 | 0.3440 | 0.8769 |
| 0.4138 | 38.0 | 7904 | 0.3381 | 0.8789 |
| 0.4414 | 39.0 | 8112 | 0.3336 | 0.8805 |
| 0.424 | 40.0 | 8320 | 0.3303 | 0.8813 |
| 0.4058 | 41.0 | 8528 | 0.3241 | 0.8832 |
| 0.3714 | 42.0 | 8736 | 0.3210 | 0.8844 |
| 0.384 | 43.0 | 8944 | 0.3196 | 0.8848 |
| 0.3795 | 44.0 | 9152 | 0.3163 | 0.8861 |
| 0.4052 | 45.0 | 9360 | 0.3080 | 0.8895 |
| 0.3864 | 46.0 | 9568 | 0.3097 | 0.8865 |
| 0.3881 | 47.0 | 9776 | 0.3012 | 0.8910 |
| 0.3606 | 48.0 | 9984 | 0.3056 | 0.8877 |
| 0.3765 | 49.0 | 10192 | 0.2981 | 0.8907 |
| 0.3762 | 50.0 | 10400 | 0.2946 | 0.8922 |
| 0.3535 | 51.0 | 10608 | 0.3004 | 0.8899 |
| 0.3667 | 52.0 | 10816 | 0.2918 | 0.8924 |
| 0.3663 | 53.0 | 11024 | 0.2921 | 0.8923 |
| 0.3735 | 54.0 | 11232 | 0.2874 | 0.8942 |
| 0.3649 | 55.0 | 11440 | 0.2867 | 0.8947 |
| 0.3531 | 56.0 | 11648 | 0.2828 | 0.8967 |
| 0.3608 | 57.0 | 11856 | 0.2825 | 0.8963 |
| 0.3483 | 58.0 | 12064 | 0.2811 | 0.8970 |
| 0.3544 | 59.0 | 12272 | 0.2785 | 0.8982 |
| 0.3768 | 60.0 | 12480 | 0.2730 | 0.9002 |
| 0.3659 | 61.0 | 12688 | 0.2735 | 0.8995 |
| 0.3547 | 62.0 | 12896 | 0.2732 | 0.8999 |
| 0.3403 | 63.0 | 13104 | 0.2696 | 0.9003 |
| 0.3482 | 64.0 | 13312 | 0.2687 | 0.9014 |
| 0.3503 | 65.0 | 13520 | 0.2673 | 0.9011 |
| 0.3249 | 66.0 | 13728 | 0.2678 | 0.9012 |
| 0.3334 | 67.0 | 13936 | 0.2679 | 0.9014 |
| 0.3308 | 68.0 | 14144 | 0.2646 | 0.9018 |
| 0.3297 | 69.0 | 14352 | 0.2676 | 0.9007 |
| 0.3566 | 70.0 | 14560 | 0.2609 | 0.9037 |
| 0.3256 | 71.0 | 14768 | 0.2635 | 0.9027 |
| 0.3439 | 72.0 | 14976 | 0.2606 | 0.9036 |
| 0.3324 | 73.0 | 15184 | 0.2578 | 0.9051 |
| 0.3316 | 74.0 | 15392 | 0.2552 | 0.9058 |
| 0.3224 | 75.0 | 15600 | 0.2555 | 0.9060 |
| 0.3294 | 76.0 | 15808 | 0.2543 | 0.9061 |
| 0.3211 | 77.0 | 16016 | 0.2538 | 0.9063 |
| 0.3326 | 78.0 | 16224 | 0.2561 | 0.9047 |
| 0.3261 | 79.0 | 16432 | 0.2547 | 0.9059 |
| 0.3082 | 80.0 | 16640 | 0.2554 | 0.9048 |
| 0.3341 | 81.0 | 16848 | 0.2538 | 0.9057 |
| 0.3355 | 82.0 | 17056 | 0.2508 | 0.9073 |
| 0.3311 | 83.0 | 17264 | 0.2486 | 0.9081 |
| 0.3511 | 84.0 | 17472 | 0.2478 | 0.9085 |
| 0.3467 | 85.0 | 17680 | 0.2503 | 0.9070 |
| 0.312 | 86.0 | 17888 | 0.2467 | 0.9086 |
| 0.336 | 87.0 | 18096 | 0.2455 | 0.9090 |
| 0.3065 | 88.0 | 18304 | 0.2484 | 0.9080 |
| 0.3365 | 89.0 | 18512 | 0.2453 | 0.9094 |
| 0.3106 | 90.0 | 18720 | 0.2454 | 0.9092 |
| 0.2984 | 91.0 | 18928 | 0.2489 | 0.9089 |
| 0.309 | 92.0 | 19136 | 0.2440 | 0.9102 |
| 0.31 | 93.0 | 19344 | 0.2407 | 0.9108 |
| 0.3294 | 94.0 | 19552 | 0.2416 | 0.9105 |
| 0.309 | 95.0 | 19760 | 0.2423 | 0.9106 |
| 0.3141 | 96.0 | 19968 | 0.2442 | 0.9096 |
| 0.3098 | 97.0 | 20176 | 0.2404 | 0.9112 |
| 0.3206 | 98.0 | 20384 | 0.2414 | 0.9111 |
| 0.3258 | 99.0 | 20592 | 0.2393 | 0.9112 |
| 0.319 | 100.0 | 20800 | 0.2383 | 0.9116 |
| 0.3035 | 101.0 | 21008 | 0.2396 | 0.9114 |
| 0.2899 | 102.0 | 21216 | 0.2410 | 0.9113 |
| 0.3058 | 103.0 | 21424 | 0.2355 | 0.9124 |
| 0.3028 | 104.0 | 21632 | 0.2373 | 0.9121 |
| 0.3021 | 105.0 | 21840 | 0.2386 | 0.9116 |
| 0.3012 | 106.0 | 22048 | 0.2379 | 0.9124 |
| 0.2955 | 107.0 | 22256 | 0.2356 | 0.9125 |
| 0.2948 | 108.0 | 22464 | 0.2342 | 0.9128 |
| 0.309 | 109.0 | 22672 | 0.2321 | 0.9134 |
| 0.3321 | 110.0 | 22880 | 0.2328 | 0.9133 |
| 0.289 | 111.0 | 23088 | 0.2331 | 0.9132 |
| 0.3103 | 112.0 | 23296 | 0.2343 | 0.9131 |
| 0.3124 | 113.0 | 23504 | 0.2330 | 0.9136 |
| 0.3305 | 114.0 | 23712 | 0.2320 | 0.9138 |
| 0.2994 | 115.0 | 23920 | 0.2323 | 0.9135 |
| 0.3011 | 116.0 | 24128 | 0.2316 | 0.9136 |
| 0.2999 | 117.0 | 24336 | 0.2318 | 0.9143 |
| 0.3082 | 118.0 | 24544 | 0.2322 | 0.9144 |
| 0.2923 | 119.0 | 24752 | 0.2302 | 0.9142 |
| 0.315 | 120.0 | 24960 | 0.2309 | 0.9142 |
| 0.3058 | 121.0 | 25168 | 0.2292 | 0.9143 |
| 0.3044 | 122.0 | 25376 | 0.2270 | 0.9160 |
| 0.2679 | 123.0 | 25584 | 0.2301 | 0.9146 |
| 0.3091 | 124.0 | 25792 | 0.2289 | 0.9152 |
| 0.2926 | 125.0 | 26000 | 0.2273 | 0.9156 |
| 0.2961 | 126.0 | 26208 | 0.2266 | 0.9156 |
| 0.2738 | 127.0 | 26416 | 0.2301 | 0.9146 |
| 0.3074 | 128.0 | 26624 | 0.2277 | 0.9153 |
| 0.303 | 129.0 | 26832 | 0.2271 | 0.9153 |
| 0.3062 | 130.0 | 27040 | 0.2268 | 0.9154 |
| 0.3262 | 131.0 | 27248 | 0.2276 | 0.9158 |
| 0.2775 | 132.0 | 27456 | 0.2287 | 0.9154 |
| 0.2845 | 133.0 | 27664 | 0.2260 | 0.9157 |
| 0.2846 | 134.0 | 27872 | 0.2272 | 0.9155 |
| 0.2878 | 135.0 | 28080 | 0.2224 | 0.9168 |
| 0.2933 | 136.0 | 28288 | 0.2256 | 0.9159 |
| 0.3003 | 137.0 | 28496 | 0.2237 | 0.9164 |
| 0.3087 | 138.0 | 28704 | 0.2256 | 0.9166 |
| 0.3039 | 139.0 | 28912 | 0.2218 | 0.9174 |
| 0.2938 | 140.0 | 29120 | 0.2218 | 0.9168 |
| 0.2836 | 141.0 | 29328 | 0.2237 | 0.9168 |
| 0.2978 | 142.0 | 29536 | 0.2237 | 0.9169 |
| 0.2968 | 143.0 | 29744 | 0.2219 | 0.9176 |
| 0.2889 | 144.0 | 29952 | 0.2219 | 0.9177 |
| 0.2781 | 145.0 | 30160 | 0.2251 | 0.9163 |
| 0.2977 | 146.0 | 30368 | 0.2234 | 0.9173 |
| 0.318 | 147.0 | 30576 | 0.2217 | 0.9181 |
| 0.3054 | 148.0 | 30784 | 0.2222 | 0.9178 |
| 0.2889 | 149.0 | 30992 | 0.2223 | 0.9176 |
| 0.2739 | 150.0 | 31200 | 0.2195 | 0.9187 |
| 0.2746 | 151.0 | 31408 | 0.2223 | 0.9175 |
| 0.3057 | 152.0 | 31616 | 0.2202 | 0.9182 |
| 0.2913 | 153.0 | 31824 | 0.2187 | 0.9193 |
| 0.3006 | 154.0 | 32032 | 0.2186 | 0.9192 |
| 0.3062 | 155.0 | 32240 | 0.2180 | 0.9190 |
| 0.2891 | 156.0 | 32448 | 0.2202 | 0.9185 |
| 0.3066 | 157.0 | 32656 | 0.2190 | 0.9182 |
| 0.2989 | 158.0 | 32864 | 0.2185 | 0.9191 |
| 0.2885 | 159.0 | 33072 | 0.2204 | 0.9184 |
| 0.2936 | 160.0 | 33280 | 0.2173 | 0.9194 |
| 0.2872 | 161.0 | 33488 | 0.2209 | 0.9179 |
| 0.3034 | 162.0 | 33696 | 0.2185 | 0.9191 |
| 0.2966 | 163.0 | 33904 | 0.2188 | 0.9187 |
| 0.3209 | 164.0 | 34112 | 0.2179 | 0.9191 |
| 0.2718 | 165.0 | 34320 | 0.2188 | 0.9190 |
| 0.299 | 166.0 | 34528 | 0.2199 | 0.9185 |
| 0.2725 | 167.0 | 34736 | 0.2180 | 0.9194 |
| 0.3013 | 168.0 | 34944 | 0.2166 | 0.9198 |
| 0.3017 | 169.0 | 35152 | 0.2162 | 0.9193 |
| 0.3002 | 170.0 | 35360 | 0.2186 | 0.9191 |
| 0.2901 | 171.0 | 35568 | 0.2189 | 0.9184 |
| 0.2687 | 172.0 | 35776 | 0.2159 | 0.9200 |
| 0.2929 | 173.0 | 35984 | 0.2165 | 0.9196 |
| 0.2932 | 174.0 | 36192 | 0.2169 | 0.9195 |
| 0.2915 | 175.0 | 36400 | 0.2165 | 0.9201 |
| 0.2853 | 176.0 | 36608 | 0.2166 | 0.9199 |
| 0.287 | 177.0 | 36816 | 0.2162 | 0.9196 |
| 0.2843 | 178.0 | 37024 | 0.2157 | 0.9193 |
| 0.2882 | 179.0 | 37232 | 0.2176 | 0.9194 |
| 0.3024 | 180.0 | 37440 | 0.2150 | 0.9205 |
| 0.2963 | 181.0 | 37648 | 0.2161 | 0.9194 |
| 0.2707 | 182.0 | 37856 | 0.2157 | 0.9201 |
| 0.2589 | 183.0 | 38064 | 0.2147 | 0.9203 |
| 0.2898 | 184.0 | 38272 | 0.2128 | 0.9205 |
| 0.3106 | 185.0 | 38480 | 0.2141 | 0.9209 |
| 0.297 | 186.0 | 38688 | 0.2131 | 0.9212 |
| 0.2949 | 187.0 | 38896 | 0.2158 | 0.9198 |
| 0.28 | 188.0 | 39104 | 0.2124 | 0.9209 |
| 0.2713 | 189.0 | 39312 | 0.2131 | 0.9203 |
| 0.2969 | 190.0 | 39520 | 0.2156 | 0.9197 |
| 0.2944 | 191.0 | 39728 | 0.2140 | 0.9201 |
| 0.2872 | 192.0 | 39936 | 0.2126 | 0.9209 |
| 0.2891 | 193.0 | 40144 | 0.2145 | 0.9203 |
| 0.3068 | 194.0 | 40352 | 0.2116 | 0.9217 |
| 0.2745 | 195.0 | 40560 | 0.2144 | 0.9206 |
| 0.2958 | 196.0 | 40768 | 0.2142 | 0.9208 |
| 0.3056 | 197.0 | 40976 | 0.2122 | 0.9206 |
| 0.3078 | 198.0 | 41184 | 0.2154 | 0.9199 |
| 0.2847 | 199.0 | 41392 | 0.2124 | 0.9210 |
| 0.2914 | 200.0 | 41600 | 0.2145 | 0.9203 |
| 0.2753 | 201.0 | 41808 | 0.2138 | 0.9203 |
| 0.2694 | 202.0 | 42016 | 0.2121 | 0.9211 |
| 0.2918 | 203.0 | 42224 | 0.2113 | 0.9212 |
| 0.2839 | 204.0 | 42432 | 0.2142 | 0.9200 |
| 0.2802 | 205.0 | 42640 | 0.2117 | 0.9211 |
| 0.2809 | 206.0 | 42848 | 0.2127 | 0.9210 |
| 0.302 | 207.0 | 43056 | 0.2121 | 0.9208 |
| 0.278 | 208.0 | 43264 | 0.2122 | 0.9208 |
| 0.2817 | 209.0 | 43472 | 0.2118 | 0.9212 |
| 0.2786 | 210.0 | 43680 | 0.2130 | 0.9207 |
| 0.2766 | 211.0 | 43888 | 0.2140 | 0.9207 |
| 0.2768 | 212.0 | 44096 | 0.2120 | 0.9215 |
| 0.2946 | 213.0 | 44304 | 0.2101 | 0.9218 |
| 0.2882 | 214.0 | 44512 | 0.2106 | 0.9213 |
| 0.2786 | 215.0 | 44720 | 0.2117 | 0.9213 |
| 0.2775 | 216.0 | 44928 | 0.2117 | 0.9210 |
| 0.2904 | 217.0 | 45136 | 0.2097 | 0.9213 |
| 0.2791 | 218.0 | 45344 | 0.2103 | 0.9216 |
| 0.2776 | 219.0 | 45552 | 0.2101 | 0.9215 |
| 0.2786 | 220.0 | 45760 | 0.2092 | 0.9217 |
| 0.2867 | 221.0 | 45968 | 0.2092 | 0.9219 |
| 0.2858 | 222.0 | 46176 | 0.2086 | 0.9218 |
| 0.2833 | 223.0 | 46384 | 0.2096 | 0.9218 |
| 0.2873 | 224.0 | 46592 | 0.2105 | 0.9216 |
| 0.2816 | 225.0 | 46800 | 0.2085 | 0.9220 |
| 0.2639 | 226.0 | 47008 | 0.2101 | 0.9213 |
| 0.2841 | 227.0 | 47216 | 0.2103 | 0.9214 |
| 0.2686 | 228.0 | 47424 | 0.2100 | 0.9210 |
| 0.2766 | 229.0 | 47632 | 0.2092 | 0.9215 |
| 0.2817 | 230.0 | 47840 | 0.2098 | 0.9214 |
| 0.2556 | 231.0 | 48048 | 0.2096 | 0.9219 |
| 0.2806 | 232.0 | 48256 | 0.2090 | 0.9214 |
| 0.2661 | 233.0 | 48464 | 0.2105 | 0.9216 |
| 0.3061 | 234.0 | 48672 | 0.2088 | 0.9216 |
| 0.2879 | 235.0 | 48880 | 0.2087 | 0.9222 |
| 0.2808 | 236.0 | 49088 | 0.2093 | 0.9218 |
| 0.281 | 237.0 | 49296 | 0.2103 | 0.9216 |
| 0.2884 | 238.0 | 49504 | 0.2083 | 0.9223 |
| 0.2664 | 239.0 | 49712 | 0.2090 | 0.9223 |
| 0.2873 | 240.0 | 49920 | 0.2080 | 0.9229 |
| 0.3102 | 241.0 | 50128 | 0.2100 | 0.9218 |
| 0.269 | 242.0 | 50336 | 0.2097 | 0.9220 |
| 0.2735 | 243.0 | 50544 | 0.2084 | 0.9220 |
| 0.2562 | 244.0 | 50752 | 0.2074 | 0.9221 |
| 0.2835 | 245.0 | 50960 | 0.2089 | 0.9217 |
| 0.2906 | 246.0 | 51168 | 0.2086 | 0.9219 |
| 0.2747 | 247.0 | 51376 | 0.2082 | 0.9221 |
| 0.2738 | 248.0 | 51584 | 0.2074 | 0.9228 |
| 0.2888 | 249.0 | 51792 | 0.2080 | 0.9224 |
| 0.2908 | 250.0 | 52000 | 0.2080 | 0.9222 |
| 0.2685 | 251.0 | 52208 | 0.2078 | 0.9223 |
| 0.2771 | 252.0 | 52416 | 0.2089 | 0.9224 |
| 0.2773 | 253.0 | 52624 | 0.2089 | 0.9225 |
| 0.2764 | 254.0 | 52832 | 0.2102 | 0.9221 |
| 0.2686 | 255.0 | 53040 | 0.2075 | 0.9225 |
| 0.2897 | 256.0 | 53248 | 0.2074 | 0.9223 |
| 0.2919 | 257.0 | 53456 | 0.2083 | 0.9229 |
| 0.2787 | 258.0 | 53664 | 0.2067 | 0.9230 |
| 0.2931 | 259.0 | 53872 | 0.2093 | 0.9227 |
| 0.2774 | 260.0 | 54080 | 0.2067 | 0.9232 |
| 0.2822 | 261.0 | 54288 | 0.2077 | 0.9229 |
| 0.2836 | 262.0 | 54496 | 0.2070 | 0.9230 |
| 0.2837 | 263.0 | 54704 | 0.2070 | 0.9228 |
| 0.2791 | 264.0 | 54912 | 0.2084 | 0.9224 |
| 0.2585 | 265.0 | 55120 | 0.2066 | 0.9232 |
| 0.282 | 266.0 | 55328 | 0.2076 | 0.9227 |
| 0.2534 | 267.0 | 55536 | 0.2098 | 0.9223 |
| 0.2724 | 268.0 | 55744 | 0.2079 | 0.9228 |
| 0.2799 | 269.0 | 55952 | 0.2068 | 0.9233 |
| 0.2671 | 270.0 | 56160 | 0.2073 | 0.9232 |
| 0.2793 | 271.0 | 56368 | 0.2054 | 0.9231 |
| 0.2792 | 272.0 | 56576 | 0.2066 | 0.9233 |
| 0.2718 | 273.0 | 56784 | 0.2057 | 0.9231 |
| 0.2777 | 274.0 | 56992 | 0.2062 | 0.9233 |
| 0.2826 | 275.0 | 57200 | 0.2074 | 0.9230 |
| 0.2887 | 276.0 | 57408 | 0.2076 | 0.9228 |
| 0.2751 | 277.0 | 57616 | 0.2053 | 0.9235 |
| 0.2892 | 278.0 | 57824 | 0.2059 | 0.9232 |
| 0.2681 | 279.0 | 58032 | 0.2065 | 0.9234 |
| 0.2845 | 280.0 | 58240 | 0.2062 | 0.9233 |
| 0.2568 | 281.0 | 58448 | 0.2056 | 0.9237 |
| 0.297 | 282.0 | 58656 | 0.2059 | 0.9235 |
| 0.2665 | 283.0 | 58864 | 0.2049 | 0.9233 |
| 0.2694 | 284.0 | 59072 | 0.2073 | 0.9228 |
| 0.2616 | 285.0 | 59280 | 0.2066 | 0.9234 |
| 0.2644 | 286.0 | 59488 | 0.2066 | 0.9232 |
| 0.2733 | 287.0 | 59696 | 0.2061 | 0.9235 |
| 0.2772 | 288.0 | 59904 | 0.2067 | 0.9230 |
| 0.2699 | 289.0 | 60112 | 0.2050 | 0.9239 |
| 0.2778 | 290.0 | 60320 | 0.2046 | 0.9238 |
| 0.2738 | 291.0 | 60528 | 0.2058 | 0.9235 |
| 0.2466 | 292.0 | 60736 | 0.2061 | 0.9236 |
| 0.2711 | 293.0 | 60944 | 0.2046 | 0.9236 |
| 0.2759 | 294.0 | 61152 | 0.2055 | 0.9234 |
| 0.2819 | 295.0 | 61360 | 0.2044 | 0.9235 |
| 0.2572 | 296.0 | 61568 | 0.2057 | 0.9236 |
| 0.2801 | 297.0 | 61776 | 0.2047 | 0.9235 |
| 0.2974 | 298.0 | 61984 | 0.2055 | 0.9235 |
| 0.2688 | 299.0 | 62192 | 0.2060 | 0.9232 |
| 0.2581 | 300.0 | 62400 | 0.2048 | 0.9237 |
| 0.2443 | 301.0 | 62608 | 0.2048 | 0.9236 |
| 0.2646 | 302.0 | 62816 | 0.2065 | 0.9230 |
| 0.277 | 303.0 | 63024 | 0.2050 | 0.9240 |
| 0.2617 | 304.0 | 63232 | 0.2061 | 0.9239 |
| 0.2602 | 305.0 | 63440 | 0.2054 | 0.9238 |
| 0.3001 | 306.0 | 63648 | 0.2055 | 0.9236 |
| 0.2729 | 307.0 | 63856 | 0.2039 | 0.9240 |
| 0.2725 | 308.0 | 64064 | 0.2063 | 0.9235 |
| 0.2785 | 309.0 | 64272 | 0.2063 | 0.9236 |
| 0.2886 | 310.0 | 64480 | 0.2054 | 0.9238 |
| 0.2784 | 311.0 | 64688 | 0.2062 | 0.9237 |
| 0.2771 | 312.0 | 64896 | 0.2040 | 0.9240 |
| 0.2707 | 313.0 | 65104 | 0.2052 | 0.9238 |
| 0.2684 | 314.0 | 65312 | 0.2048 | 0.9241 |
| 0.2789 | 315.0 | 65520 | 0.2041 | 0.9239 |
| 0.2439 | 316.0 | 65728 | 0.2051 | 0.9241 |
| 0.272 | 317.0 | 65936 | 0.2045 | 0.9242 |
| 0.2668 | 318.0 | 66144 | 0.2037 | 0.9240 |
| 0.2657 | 319.0 | 66352 | 0.2042 | 0.9245 |
| 0.2845 | 320.0 | 66560 | 0.2042 | 0.9244 |
| 0.272 | 321.0 | 66768 | 0.2039 | 0.9241 |
| 0.2883 | 322.0 | 66976 | 0.2067 | 0.9235 |
| 0.2751 | 323.0 | 67184 | 0.2048 | 0.9244 |
| 0.311 | 324.0 | 67392 | 0.2037 | 0.9243 |
| 0.2746 | 325.0 | 67600 | 0.2068 | 0.9237 |
| 0.2625 | 326.0 | 67808 | 0.2037 | 0.9240 |
| 0.27 | 327.0 | 68016 | 0.2034 | 0.9239 |
| 0.2549 | 328.0 | 68224 | 0.2044 | 0.9245 |
| 0.2624 | 329.0 | 68432 | 0.2035 | 0.9245 |
| 0.2751 | 330.0 | 68640 | 0.2045 | 0.9242 |
| 0.2672 | 331.0 | 68848 | 0.2032 | 0.9243 |
| 0.277 | 332.0 | 69056 | 0.2038 | 0.9246 |
| 0.2806 | 333.0 | 69264 | 0.2041 | 0.9243 |
| 0.2896 | 334.0 | 69472 | 0.2038 | 0.9244 |
| 0.2967 | 335.0 | 69680 | 0.2039 | 0.9243 |
| 0.2538 | 336.0 | 69888 | 0.2048 | 0.9238 |
| 0.2787 | 337.0 | 70096 | 0.2042 | 0.9240 |
| 0.2687 | 338.0 | 70304 | 0.2031 | 0.9250 |
| 0.2823 | 339.0 | 70512 | 0.2028 | 0.9247 |
| 0.2511 | 340.0 | 70720 | 0.2032 | 0.9248 |
| 0.2753 | 341.0 | 70928 | 0.2036 | 0.9243 |
| 0.2714 | 342.0 | 71136 | 0.2035 | 0.9242 |
| 0.2426 | 343.0 | 71344 | 0.2047 | 0.9240 |
| 0.261 | 344.0 | 71552 | 0.2049 | 0.9242 |
| 0.2765 | 345.0 | 71760 | 0.2040 | 0.9248 |
| 0.292 | 346.0 | 71968 | 0.2041 | 0.9240 |
| 0.2762 | 347.0 | 72176 | 0.2032 | 0.9245 |
| 0.263 | 348.0 | 72384 | 0.2030 | 0.9247 |
| 0.2718 | 349.0 | 72592 | 0.2038 | 0.9243 |
| 0.2721 | 350.0 | 72800 | 0.2043 | 0.9240 |
| 0.2677 | 351.0 | 73008 | 0.2034 | 0.9247 |
| 0.2677 | 352.0 | 73216 | 0.2028 | 0.9248 |
| 0.2617 | 353.0 | 73424 | 0.2038 | 0.9241 |
| 0.29 | 354.0 | 73632 | 0.2039 | 0.9243 |
| 0.2682 | 355.0 | 73840 | 0.2035 | 0.9245 |
| 0.2775 | 356.0 | 74048 | 0.2042 | 0.9240 |
| 0.2515 | 357.0 | 74256 | 0.2030 | 0.9249 |
| 0.2775 | 358.0 | 74464 | 0.2029 | 0.9248 |
| 0.2699 | 359.0 | 74672 | 0.2033 | 0.9242 |
| 0.2719 | 360.0 | 74880 | 0.2025 | 0.9242 |
| 0.2631 | 361.0 | 75088 | 0.2028 | 0.9244 |
| 0.2694 | 362.0 | 75296 | 0.2029 | 0.9242 |
| 0.2643 | 363.0 | 75504 | 0.2043 | 0.9239 |
| 0.2737 | 364.0 | 75712 | 0.2042 | 0.9243 |
| 0.261 | 365.0 | 75920 | 0.2033 | 0.9245 |
| 0.2564 | 366.0 | 76128 | 0.2031 | 0.9243 |
| 0.2931 | 367.0 | 76336 | 0.2032 | 0.9242 |
| 0.2688 | 368.0 | 76544 | 0.2035 | 0.9246 |
| 0.249 | 369.0 | 76752 | 0.2042 | 0.9238 |
| 0.2859 | 370.0 | 76960 | 0.2026 | 0.9245 |
| 0.2632 | 371.0 | 77168 | 0.2028 | 0.9243 |
| 0.2572 | 372.0 | 77376 | 0.2031 | 0.9246 |
| 0.2604 | 373.0 | 77584 | 0.2026 | 0.9243 |
| 0.2643 | 374.0 | 77792 | 0.2036 | 0.9246 |
| 0.2668 | 375.0 | 78000 | 0.2037 | 0.9249 |
| 0.2739 | 376.0 | 78208 | 0.2028 | 0.9242 |
| 0.272 | 377.0 | 78416 | 0.2039 | 0.9238 |
| 0.2757 | 378.0 | 78624 | 0.2032 | 0.9242 |
| 0.251 | 379.0 | 78832 | 0.2033 | 0.9250 |
| 0.26 | 380.0 | 79040 | 0.2035 | 0.9245 |
| 0.2734 | 381.0 | 79248 | 0.2035 | 0.9241 |
| 0.2742 | 382.0 | 79456 | 0.2026 | 0.9247 |
| 0.2552 | 383.0 | 79664 | 0.2034 | 0.9242 |
| 0.2709 | 384.0 | 79872 | 0.2028 | 0.9244 |
| 0.28 | 385.0 | 80080 | 0.2029 | 0.9244 |
| 0.2587 | 386.0 | 80288 | 0.2037 | 0.9243 |
| 0.2706 | 387.0 | 80496 | 0.2032 | 0.9246 |
| 0.2774 | 388.0 | 80704 | 0.2036 | 0.9239 |
| 0.2755 | 389.0 | 80912 | 0.2025 | 0.9244 |
| 0.2586 | 390.0 | 81120 | 0.2034 | 0.9241 |
| 0.2715 | 391.0 | 81328 | 0.2024 | 0.9246 |
| 0.2844 | 392.0 | 81536 | 0.2030 | 0.9241 |
| 0.2626 | 393.0 | 81744 | 0.2035 | 0.9243 |
| 0.2567 | 394.0 | 81952 | 0.2027 | 0.9247 |
| 0.2789 | 395.0 | 82160 | 0.2024 | 0.9242 |
| 0.2695 | 396.0 | 82368 | 0.2019 | 0.9247 |
| 0.2829 | 397.0 | 82576 | 0.2016 | 0.9247 |
| 0.2784 | 398.0 | 82784 | 0.2025 | 0.9243 |
| 0.2765 | 399.0 | 82992 | 0.2033 | 0.9243 |
| 0.2673 | 400.0 | 83200 | 0.2024 | 0.9248 |
| 0.275 | 401.0 | 83408 | 0.2041 | 0.9240 |
| 0.2499 | 402.0 | 83616 | 0.2028 | 0.9241 |
| 0.2702 | 403.0 | 83824 | 0.2028 | 0.9246 |
| 0.285 | 404.0 | 84032 | 0.2028 | 0.9243 |
| 0.2615 | 405.0 | 84240 | 0.2038 | 0.9244 |
| 0.2849 | 406.0 | 84448 | 0.2021 | 0.9247 |
| 0.2414 | 407.0 | 84656 | 0.2023 | 0.9245 |
| 0.2777 | 408.0 | 84864 | 0.2020 | 0.9247 |
| 0.2601 | 409.0 | 85072 | 0.2024 | 0.9242 |
| 0.2873 | 410.0 | 85280 | 0.2026 | 0.9243 |
| 0.2616 | 411.0 | 85488 | 0.2032 | 0.9246 |
| 0.2794 | 412.0 | 85696 | 0.2024 | 0.9243 |
| 0.2596 | 413.0 | 85904 | 0.2026 | 0.9246 |
| 0.2585 | 414.0 | 86112 | 0.2027 | 0.9245 |
| 0.2588 | 415.0 | 86320 | 0.2029 | 0.9245 |
| 0.2685 | 416.0 | 86528 | 0.2025 | 0.9243 |
| 0.2923 | 417.0 | 86736 | 0.2026 | 0.9245 |
| 0.2641 | 418.0 | 86944 | 0.2030 | 0.9246 |
| 0.2821 | 419.0 | 87152 | 0.2021 | 0.9249 |
| 0.2674 | 420.0 | 87360 | 0.2021 | 0.9250 |
| 0.2745 | 421.0 | 87568 | 0.2023 | 0.9247 |
| 0.2703 | 422.0 | 87776 | 0.2022 | 0.9248 |
| 0.2653 | 423.0 | 87984 | 0.2032 | 0.9248 |
| 0.2763 | 424.0 | 88192 | 0.2022 | 0.9245 |
| 0.2572 | 425.0 | 88400 | 0.2019 | 0.9247 |
| 0.2577 | 426.0 | 88608 | 0.2028 | 0.9245 |
| 0.2966 | 427.0 | 88816 | 0.2023 | 0.9246 |
| 0.2667 | 428.0 | 89024 | 0.2025 | 0.9249 |
| 0.2388 | 429.0 | 89232 | 0.2028 | 0.9247 |
| 0.2856 | 430.0 | 89440 | 0.2019 | 0.9247 |
| 0.2842 | 431.0 | 89648 | 0.2020 | 0.9248 |
| 0.2806 | 432.0 | 89856 | 0.2025 | 0.9248 |
| 0.2627 | 433.0 | 90064 | 0.2027 | 0.9245 |
| 0.2582 | 434.0 | 90272 | 0.2025 | 0.9247 |
| 0.2594 | 435.0 | 90480 | 0.2032 | 0.9243 |
| 0.2557 | 436.0 | 90688 | 0.2029 | 0.9244 |
| 0.266 | 437.0 | 90896 | 0.2026 | 0.9245 |
| 0.2718 | 438.0 | 91104 | 0.2030 | 0.9242 |
| 0.2577 | 439.0 | 91312 | 0.2024 | 0.9246 |
| 0.2996 | 440.0 | 91520 | 0.2016 | 0.9250 |
| 0.2613 | 441.0 | 91728 | 0.2021 | 0.9247 |
| 0.2669 | 442.0 | 91936 | 0.2022 | 0.9246 |
| 0.2695 | 443.0 | 92144 | 0.2023 | 0.9246 |
| 0.267 | 444.0 | 92352 | 0.2017 | 0.9247 |
| 0.2704 | 445.0 | 92560 | 0.2020 | 0.9246 |
| 0.2529 | 446.0 | 92768 | 0.2018 | 0.9248 |
| 0.2743 | 447.0 | 92976 | 0.2014 | 0.9248 |
| 0.2664 | 448.0 | 93184 | 0.2025 | 0.9246 |
| 0.272 | 449.0 | 93392 | 0.2015 | 0.9248 |
| 0.2761 | 450.0 | 93600 | 0.2019 | 0.9248 |
| 0.2751 | 451.0 | 93808 | 0.2019 | 0.9247 |
| 0.2698 | 452.0 | 94016 | 0.2024 | 0.9246 |
| 0.2678 | 453.0 | 94224 | 0.2018 | 0.9249 |
| 0.2691 | 454.0 | 94432 | 0.2018 | 0.9250 |
| 0.2635 | 455.0 | 94640 | 0.2022 | 0.9248 |
| 0.2711 | 456.0 | 94848 | 0.2024 | 0.9247 |
| 0.2767 | 457.0 | 95056 | 0.2025 | 0.9248 |
| 0.2781 | 458.0 | 95264 | 0.2023 | 0.9247 |
| 0.2756 | 459.0 | 95472 | 0.2018 | 0.9249 |
| 0.2948 | 460.0 | 95680 | 0.2023 | 0.9248 |
| 0.267 | 461.0 | 95888 | 0.2017 | 0.9250 |
| 0.2626 | 462.0 | 96096 | 0.2018 | 0.9249 |
| 0.2559 | 463.0 | 96304 | 0.2022 | 0.9245 |
| 0.275 | 464.0 | 96512 | 0.2023 | 0.9248 |
| 0.2326 | 465.0 | 96720 | 0.2019 | 0.9248 |
| 0.2492 | 466.0 | 96928 | 0.2015 | 0.9248 |
| 0.2686 | 467.0 | 97136 | 0.2017 | 0.9248 |
| 0.2778 | 468.0 | 97344 | 0.2021 | 0.9245 |
| 0.2946 | 469.0 | 97552 | 0.2021 | 0.9248 |
| 0.2567 | 470.0 | 97760 | 0.2021 | 0.9247 |
| 0.2505 | 471.0 | 97968 | 0.2021 | 0.9248 |
| 0.2659 | 472.0 | 98176 | 0.2020 | 0.9247 |
| 0.2659 | 473.0 | 98384 | 0.2018 | 0.9248 |
| 0.2766 | 474.0 | 98592 | 0.2022 | 0.9247 |
| 0.2687 | 475.0 | 98800 | 0.2020 | 0.9248 |
| 0.2568 | 476.0 | 99008 | 0.2020 | 0.9247 |
| 0.2644 | 477.0 | 99216 | 0.2024 | 0.9246 |
| 0.2657 | 478.0 | 99424 | 0.2018 | 0.9248 |
| 0.263 | 479.0 | 99632 | 0.2020 | 0.9247 |
| 0.2499 | 480.0 | 99840 | 0.2019 | 0.9248 |
| 0.2963 | 481.0 | 100048 | 0.2019 | 0.9249 |
| 0.2778 | 482.0 | 100256 | 0.2019 | 0.9248 |
| 0.2593 | 483.0 | 100464 | 0.2021 | 0.9247 |
| 0.2644 | 484.0 | 100672 | 0.2022 | 0.9247 |
| 0.2849 | 485.0 | 100880 | 0.2020 | 0.9248 |
| 0.2727 | 486.0 | 101088 | 0.2019 | 0.9248 |
| 0.2668 | 487.0 | 101296 | 0.2020 | 0.9247 |
| 0.2624 | 488.0 | 101504 | 0.2018 | 0.9249 |
| 0.2445 | 489.0 | 101712 | 0.2020 | 0.9248 |
| 0.2541 | 490.0 | 101920 | 0.2018 | 0.9250 |
| 0.2646 | 491.0 | 102128 | 0.2019 | 0.9248 |
| 0.2658 | 492.0 | 102336 | 0.2019 | 0.9248 |
| 0.2677 | 493.0 | 102544 | 0.2020 | 0.9248 |
| 0.2505 | 494.0 | 102752 | 0.2020 | 0.9248 |
| 0.2593 | 495.0 | 102960 | 0.2019 | 0.9248 |
| 0.2411 | 496.0 | 103168 | 0.2019 | 0.9248 |
| 0.2682 | 497.0 | 103376 | 0.2019 | 0.9248 |
| 0.2693 | 498.0 | 103584 | 0.2019 | 0.9248 |
| 0.2568 | 499.0 | 103792 | 0.2019 | 0.9248 |
| 0.2589 | 500.0 | 104000 | 0.2019 | 0.9248 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
bikekowal/models_diff
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models_diff
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 500.0
### Training results
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"d",
"n"
] |
thien-nguyen/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1978
- Accuracy: 0.9418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3745 | 1.0 | 370 | 0.2968 | 0.9229 |
| 0.2178 | 2.0 | 740 | 0.2262 | 0.9405 |
| 0.159 | 3.0 | 1110 | 0.2067 | 0.9364 |
| 0.1545 | 4.0 | 1480 | 0.1974 | 0.9350 |
| 0.1217 | 5.0 | 1850 | 0.1944 | 0.9337 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
qubvel-hf/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [timm/mobilenetv4_conv_small.e2400_r224_in1k](https://huggingface.co/timm/mobilenetv4_conv_small.e2400_r224_in1k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7294
- Accuracy: 0.818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
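With a linear schedule and a warm-up ratio of 0.1, the learning rate rises for roughly the first 10% of optimization steps and then decays linearly to zero; a back-of-the-envelope sketch (the step count is taken from the results table below, and the exact rounding inside the Trainer may differ):
```python
# Rough arithmetic sketch; 186 total optimization steps is the last step in the
# training results table below.
total_steps = 186
warmup_ratio = 0.1
approx_warmup_steps = round(total_steps * warmup_ratio)
print(approx_warmup_steps)  # ~19 warm-up steps before linear decay
```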
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.8009 | 1.0 | 63 | 0.9982 | 0.791 |
| 3.8041 | 2.0 | 126 | 0.7726 | 0.82 |
| 3.5834 | 2.96 | 186 | 0.7294 | 0.818 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
till-onethousand/beans_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beans_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0264
- Model Preparation Time: 0.0048
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
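"Native AMP" refers to PyTorch automatic mixed precision as driven by the 🤗 Trainer; a hedged sketch of the flag that typically produces this setting (the output path and any omitted arguments are assumptions):
```python
from transformers import TrainingArguments

# Sketch only: fp16=True enables native automatic mixed precision (AMP) on CUDA;
# the other values mirror the hyperparameter list above.
args = TrainingArguments(
    output_dir="beans_model",   # hypothetical output path
    learning_rate=2e-4,
    num_train_epochs=4,
    fp16=True,                  # reported as "mixed_precision_training: Native AMP"
)
```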
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:----------------------:|:--------:|
| 0.1068 | 1.5385 | 100 | 0.0307 | 0.0048 | 1.0 |
| 0.0316 | 3.0769 | 200 | 0.0264 | 0.0048 | 0.9925 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
osvaldotr07/beans-classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
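As an illustrative sketch only, assuming this checkpoint exposes the standard 🤗 transformers image-classification head ("leaf.jpg" is a placeholder image path):
```python
from transformers import pipeline

# Hedged sketch; the image path is a placeholder.
classifier = pipeline("image-classification", model="osvaldotr07/beans-classification")
print(classifier("leaf.jpg"))  # e.g. angular_leaf_spot / bean_rust / healthy
```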
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
till-onethousand/hurricane_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hurricane_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the jonathan-roberts1/Satellite-Images-of-Hurricane-Damage dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0224
- Model Preparation Time: 0.0051
- Accuracy: 0.9948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:----------------------:|:--------:|
| 0.1118 | 0.3195 | 100 | 0.1486 | 0.0051 | 0.9476 |
| 0.1112 | 0.6390 | 200 | 0.0701 | 0.0051 | 0.9752 |
| 0.0694 | 0.9585 | 300 | 0.0608 | 0.0051 | 0.9808 |
| 0.0048 | 1.2780 | 400 | 0.0917 | 0.0051 | 0.9744 |
| 0.036 | 1.5974 | 500 | 0.0552 | 0.0051 | 0.9836 |
| 0.0594 | 1.9169 | 600 | 0.0547 | 0.0051 | 0.9808 |
| 0.0115 | 2.2364 | 700 | 0.0627 | 0.0051 | 0.9844 |
| 0.0016 | 2.5559 | 800 | 0.0296 | 0.0051 | 0.9936 |
| 0.004 | 2.8754 | 900 | 0.0325 | 0.0051 | 0.9916 |
| 0.0009 | 3.1949 | 1000 | 0.0224 | 0.0051 | 0.9948 |
| 0.0008 | 3.5144 | 1100 | 0.0270 | 0.0051 | 0.9936 |
| 0.0008 | 3.8339 | 1200 | 0.0256 | 0.0051 | 0.994 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"flooded or damaged buildings",
"undamaged buildings"
] |
fernandabufon/ft_stable_diffusion
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_stable_diffusion
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on a dataset of images generated by Stable Diffusion.
It achieves the following results on the evaluation set:
- Loss: 0.3650
- Accuracy: 0.9194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 70 | 0.9239 | 0.7705 |
| 1.1759 | 2.0 | 140 | 0.5778 | 0.8852 |
| 0.5081 | 3.0 | 210 | 0.4438 | 0.9180 |
| 0.5081 | 4.0 | 280 | 0.3857 | 0.9344 |
| 0.3442 | 5.0 | 350 | 0.3700 | 0.9344 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"coffee",
"oil",
"rice",
"bread",
"sugar",
"black_beans",
"beans",
"flour",
"milk"
] |
Shk4/vit_ana_0.89
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3768
- Accuracy: 0.8989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 3 | 1.8889 | 0.2472 |
| No log | 1.8462 | 6 | 1.7625 | 0.3820 |
| No log | 2.7692 | 9 | 1.5603 | 0.4494 |
| 1.7854 | 4.0 | 13 | 1.3005 | 0.5281 |
| 1.7854 | 4.9231 | 16 | 1.0408 | 0.6292 |
| 1.7854 | 5.8462 | 19 | 0.8925 | 0.6854 |
| 1.1431 | 6.7692 | 22 | 0.7614 | 0.7303 |
| 1.1431 | 8.0 | 26 | 0.6343 | 0.7753 |
| 1.1431 | 8.9231 | 29 | 0.5810 | 0.7978 |
| 0.7715 | 9.8462 | 32 | 0.5551 | 0.8427 |
| 0.7715 | 10.7692 | 35 | 0.5209 | 0.8539 |
| 0.7715 | 12.0 | 39 | 0.5690 | 0.8202 |
| 0.5645 | 12.9231 | 42 | 0.4431 | 0.8876 |
| 0.5645 | 13.8462 | 45 | 0.4922 | 0.8202 |
| 0.5645 | 14.7692 | 48 | 0.4914 | 0.8315 |
| 0.4999 | 16.0 | 52 | 0.3768 | 0.8989 |
| 0.4999 | 16.9231 | 55 | 0.4292 | 0.8539 |
| 0.4999 | 17.8462 | 58 | 0.3846 | 0.8652 |
| 0.4555 | 18.7692 | 61 | 0.3498 | 0.8876 |
| 0.4555 | 20.0 | 65 | 0.3523 | 0.8652 |
| 0.4555 | 20.9231 | 68 | 0.3541 | 0.8876 |
| 0.3941 | 21.8462 | 71 | 0.3240 | 0.8989 |
| 0.3941 | 22.7692 | 74 | 0.3169 | 0.8989 |
| 0.3941 | 24.0 | 78 | 0.3317 | 0.8764 |
| 0.361 | 24.9231 | 81 | 0.3251 | 0.8876 |
| 0.361 | 25.8462 | 84 | 0.3198 | 0.8764 |
| 0.361 | 26.7692 | 87 | 0.3117 | 0.8764 |
| 0.3485 | 27.6923 | 90 | 0.3101 | 0.8764 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"speckled",
"centromere",
"dfs",
"nucleolar",
"nuclear envelope",
"nuclear dots",
"homogeneous"
] |
dan-lara/Garbage-Classifier-Resnet-50-Finetuning
|
# Garbage Classification Model (Fine-tuned ResNet-50)
This model is a fine-tuned version of ResNet-50 for classifying waste images into 8 categories, using the [Garbage Dataset](https://www.kaggle.com/datasets/danielferreiralara/normalized-garbage-dataset-for-resnet). It is designed for environmental applications such as automated waste sorting and recycling awareness.
## Base model
This model is based on [ResNet-50 v1.5](https://huggingface.co/microsoft/resnet-50), which is pre-trained on [ImageNet-1k](https://huggingface.co/datasets/ILSVRC/imagenet-1k). ResNet is a convolutional neural network architecture that introduced residual learning and skip connections, enabling the training of much deeper models.
ResNet-50 v1.5 improves the bottleneck blocks by using a stride of 2 in the 3x3 convolution, which makes it slightly more accurate than v1 (∼0.5% top-1).
## Model description
### Target classes
The model classifies images into the following 8 categories:
- 🔋 Battery
- 📦 Cardboard
- 🔗 Metal
- 🍓 Organic
- 🗳️ Paper
- 🧳 Plastic
- 🫙 Glass
- 👖 Clothing
### Preprocessing
The dataset images were normalized and resized to 224x224, matching the input resolution of ResNet-50.
### Performance
The model achieves an **overall accuracy of 94%** on the dataset's test split. Performance varies slightly across classes, depending on image diversity and on visual similarities between certain categories.
A simulator ([EcoMind AI](https://ecomind-ai.streamlit.app/)) compares our model with the base ResNet and with other approaches such as YOLO and LLMs (Llama 3.2).
## Intended uses & limitations
### Use cases
- Automating waste sorting for recycling.
- Building educational and interactive applications about waste management.
- Computer-vision research applied to the environment.
### Limitations
This model was trained on a dataset limited to 8 categories. Scenarios involving very specific waste types, or categories outside those listed, may require retraining or extending the dataset.
## How to use this model
Here is a code example for classifying an image with this model:
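A minimal sketch, assuming the standard 🤗 `transformers` image-classification API; the image path (`waste.jpg`) is a placeholder:
```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "dan-lara/Garbage-Classifier-Resnet-50-Finetuning"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# "waste.jpg" is a placeholder path to the image you want to classify.
image = Image.open("waste.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])  # e.g. "plastique"
```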
## Citations and references
If you use this model, please cite both the ResNet-50 base model and the dataset:
### Base model:
```bibtex
@inproceedings{he2016deep,
title={Deep residual learning for image recognition},
author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={770--778},
year={2016}
}
```
### Waste classification dataset:
```bibtex
@misc{garbageDatasetResNet24,
author = {Ferreira et al.},
title = {8 classes Garbage Dataset for ResNet},
year = {2024},
publisher = {Kaggle},
howpublished = {\url{https://www.kaggle.com/datasets/danielferreiralara/normalized-garbage-dataset-for-resnet}}
}
```
## Contact
For any question or suggestion, feel free to contact me at [[email protected]](mailto:[email protected]).
|
[
"batterie",
"carton",
"metal",
"organique",
"papier",
"plastique",
"verre",
"vetements"
] |
hoanbklucky/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3611
- Accuracy: 0.9022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5267 | 1.0 | 26 | 0.7457 | 0.8288 |
| 0.6287 | 2.0 | 52 | 0.4085 | 0.8967 |
| 0.5212 | 3.0 | 78 | 0.3611 | 0.9022 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"abyssinian",
"american_bulldog",
"american_pit_bull_terrier",
"basset_hound",
"beagle",
"bengal",
"birman",
"bombay",
"boxer",
"british_shorthair",
"chihuahua",
"egyptian_mau",
"english_cocker_spaniel",
"english_setter",
"german_shorthaired",
"great_pyrenees",
"havanese",
"japanese_chin",
"keeshond",
"leonberger",
"maine_coon",
"miniature_pinscher",
"newfoundland",
"persian",
"pomeranian",
"pug",
"ragdoll",
"russian_blue",
"saint_bernard",
"samoyed",
"scottish_terrier",
"shiba_inu",
"siamese",
"sphynx",
"staffordshire_bull_terrier",
"wheaten_terrier",
"yorkshire_terrier"
] |
hoanbklucky/dinov2-small-imagenet1k-1-layer-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-small-imagenet1k-1-layer-finetuned-eurosat
This model is a fine-tuned version of [facebook/dinov2-small-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-small-imagenet1k-1-layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2994
- Accuracy: 0.9212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9886 | 1.0 | 26 | 0.8828 | 0.7717 |
| 0.645 | 2.0 | 52 | 0.4112 | 0.8859 |
| 0.4834 | 3.0 | 78 | 0.2994 | 0.9212 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"abyssinian",
"american_bulldog",
"american_pit_bull_terrier",
"basset_hound",
"beagle",
"bengal",
"birman",
"bombay",
"boxer",
"british_shorthair",
"chihuahua",
"egyptian_mau",
"english_cocker_spaniel",
"english_setter",
"german_shorthaired",
"great_pyrenees",
"havanese",
"japanese_chin",
"keeshond",
"leonberger",
"maine_coon",
"miniature_pinscher",
"newfoundland",
"persian",
"pomeranian",
"pug",
"ragdoll",
"russian_blue",
"saint_bernard",
"samoyed",
"scottish_terrier",
"shiba_inu",
"siamese",
"sphynx",
"staffordshire_bull_terrier",
"wheaten_terrier",
"yorkshire_terrier"
] |
hoanbklucky/dinov2-small-imagenet1k-1-layer-finetuned-noh
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-small-imagenet1k-1-layer-finetuned-noh
This model is a fine-tuned version of [facebook/dinov2-small-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-small-imagenet1k-1-layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3366
- Accuracy: 0.8982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4924 | 1.0 | 23 | 0.5212 | 0.8325 |
| 0.5732 | 2.0 | 46 | 0.3366 | 0.8982 |
| 0.5639 | 3.0 | 69 | 0.3907 | 0.8489 |
| 0.4759 | 4.0 | 92 | 0.3482 | 0.8818 |
| 0.3757 | 5.0 | 115 | 0.3921 | 0.8276 |
| 0.3356 | 6.0 | 138 | 0.3184 | 0.8966 |
| 0.2521 | 7.0 | 161 | 0.3992 | 0.8571 |
| 0.2981 | 8.0 | 184 | 0.3904 | 0.8703 |
| 0.2302 | 9.0 | 207 | 0.3987 | 0.8719 |
| 0.1979 | 9.5778 | 220 | 0.4129 | 0.8604 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.21.0
|
[
"normal",
"cancer"
] |
hoanbklucky/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1773
- Accuracy: 0.9432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3928 | 1.0 | 370 | 0.2696 | 0.9323 |
| 0.206 | 2.0 | 740 | 0.2022 | 0.9405 |
| 0.1689 | 3.0 | 1110 | 0.1863 | 0.9405 |
| 0.1298 | 4.0 | 1480 | 0.1801 | 0.9472 |
| 0.1358 | 5.0 | 1850 | 0.1783 | 0.9418 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
cz6879/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1848
- Accuracy: 0.9486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3895 | 1.0 | 370 | 0.2942 | 0.9229 |
| 0.2122 | 2.0 | 740 | 0.2150 | 0.9418 |
| 0.1657 | 3.0 | 1110 | 0.1969 | 0.9445 |
| 0.1393 | 4.0 | 1480 | 0.1901 | 0.9459 |
| 0.1364 | 5.0 | 1850 | 0.1877 | 0.9486 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
kaanyvvz/ky-finetuned-skindiseaseicthuawei32
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ky-finetuned-skindiseaseicthuawei32
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1058
- Accuracy: 0.9623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3894 | 1.0 | 300 | 0.6160 | 0.8061 |
| 0.6543 | 2.0 | 600 | 0.4378 | 0.8635 |
| 0.471 | 3.0 | 900 | 0.2566 | 0.9161 |
| 0.3853 | 4.0 | 1200 | 0.2498 | 0.9135 |
| 0.3225 | 5.0 | 1500 | 0.2157 | 0.9290 |
| 0.2769 | 6.0 | 1800 | 0.1747 | 0.9407 |
| 0.2364 | 7.0 | 2100 | 0.1502 | 0.9487 |
| 0.2005 | 8.0 | 2400 | 0.1282 | 0.9547 |
| 0.1737 | 9.0 | 2700 | 0.1129 | 0.9597 |
| 0.1468 | 10.0 | 3000 | 0.1058 | 0.9623 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"basal cell carcinoma",
"darier_s disease",
"epidermolysis bullosa pruriginosa",
"hailey-hailey disease",
"herpes simplex",
"impetigo",
"larva migrans",
"leprosy borderline",
"leprosy lepromatous",
"leprosy tuberculoid",
"lichen planus",
"lupus erythematosus chronicus discoides",
"melanoma",
"molluscum contagiosum",
"mycosis fungoides",
"neurofibromatosis",
"papilomatosis confluentes and reticulate",
"pediculosis capitis",
"pityriasis rosea",
"porokeratosis actinic",
"psoriasis",
"tinea corporis",
"tinea nigra",
"tungiasis",
"unknown",
"vitiligo",
"actinic keratosis",
"dermatofibroma",
"nevus",
"seborrheic keratosis",
"squamous cell carcinoma",
"vascular lesion"
] |
CooperAharon/white-blood-cell-classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
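A minimal sketch, assuming the checkpoint works with the standard 🤗 image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Assumption: the checkpoint is compatible with the image-classification pipeline.
classifier = pipeline("image-classification", model="CooperAharon/white-blood-cell-classifier")

# "blood_smear.jpg" is a placeholder path to a single white blood cell image.
predictions = classifier("blood_smear.jpg")
print(predictions)  # e.g. [{"label": "neutrophil", "score": ...}, ...]
```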
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"basophil",
"eosinophil",
"lymphocyte",
"monocyte",
"neutrophil"
] |
Melo1512/vit-msn-small-wbc-classifier-0316-cleandataset-10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-0316-cleandataset-10
This model is a fine-tuned version of [Melo1512/vit-msn-small-wbc-classifier-0316-cleandataset-10](https://huggingface.co/Melo1512/vit-msn-small-wbc-classifier-0316-cleandataset-10) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3943
- Accuracy: 0.8599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.3785 | 0.9730 | 18 | 0.3985 | 0.8569 |
| 0.3432 | 2.0 | 37 | 0.3996 | 0.8557 |
| 0.3454 | 2.9730 | 55 | 0.4011 | 0.8553 |
| 0.3639 | 4.0 | 74 | 0.4034 | 0.8538 |
| 0.3544 | 4.9730 | 92 | 0.4049 | 0.8546 |
| 0.3607 | 6.0 | 111 | 0.4057 | 0.8538 |
| 0.3652 | 6.9730 | 129 | 0.4046 | 0.8561 |
| 0.3639 | 8.0 | 148 | 0.4046 | 0.8553 |
| 0.3472 | 8.9730 | 166 | 0.4048 | 0.8561 |
| 0.3704 | 10.0 | 185 | 0.4033 | 0.8546 |
| 0.3954 | 10.9730 | 203 | 0.4009 | 0.8565 |
| 0.372 | 12.0 | 222 | 0.4022 | 0.8546 |
| 0.3599 | 12.9730 | 240 | 0.4005 | 0.8561 |
| 0.3689 | 14.0 | 259 | 0.4018 | 0.8550 |
| 0.3687 | 14.9730 | 277 | 0.4016 | 0.8553 |
| 0.3521 | 16.0 | 296 | 0.4000 | 0.8561 |
| 0.3817 | 16.9730 | 314 | 0.4001 | 0.8553 |
| 0.3768 | 18.0 | 333 | 0.3994 | 0.8550 |
| 0.3835 | 18.9730 | 351 | 0.4041 | 0.8546 |
| 0.3833 | 20.0 | 370 | 0.4042 | 0.8553 |
| 0.36 | 20.9730 | 388 | 0.4012 | 0.8561 |
| 0.3729 | 22.0 | 407 | 0.4023 | 0.8565 |
| 0.3647 | 22.9730 | 425 | 0.4029 | 0.8546 |
| 0.3811 | 24.0 | 444 | 0.4011 | 0.8561 |
| 0.38 | 24.9730 | 462 | 0.3999 | 0.8569 |
| 0.3588 | 26.0 | 481 | 0.3994 | 0.8557 |
| 0.3554 | 26.9730 | 499 | 0.3991 | 0.8561 |
| 0.354 | 28.0 | 518 | 0.3995 | 0.8561 |
| 0.3577 | 28.9730 | 536 | 0.3986 | 0.8557 |
| 0.3723 | 30.0 | 555 | 0.3998 | 0.8561 |
| 0.3763 | 30.9730 | 573 | 0.3994 | 0.8561 |
| 0.3701 | 32.0 | 592 | 0.3994 | 0.8569 |
| 0.3728 | 32.9730 | 610 | 0.3980 | 0.8553 |
| 0.3649 | 34.0 | 629 | 0.3964 | 0.8565 |
| 0.3551 | 34.9730 | 647 | 0.3982 | 0.8569 |
| 0.3832 | 36.0 | 666 | 0.3977 | 0.8576 |
| 0.3459 | 36.9730 | 684 | 0.3968 | 0.8561 |
| 0.3613 | 38.0 | 703 | 0.3966 | 0.8561 |
| 0.3588 | 38.9730 | 721 | 0.3968 | 0.8565 |
| 0.3483 | 40.0 | 740 | 0.3958 | 0.8573 |
| 0.3693 | 40.9730 | 758 | 0.3967 | 0.8576 |
| 0.3544 | 42.0 | 777 | 0.3988 | 0.8576 |
| 0.3701 | 42.9730 | 795 | 0.3976 | 0.8573 |
| 0.3649 | 44.0 | 814 | 0.3984 | 0.8565 |
| 0.3621 | 44.9730 | 832 | 0.3966 | 0.8573 |
| 0.3494 | 46.0 | 851 | 0.3989 | 0.8573 |
| 0.373 | 46.9730 | 869 | 0.3993 | 0.8573 |
| 0.3911 | 48.0 | 888 | 0.3978 | 0.8576 |
| 0.3716 | 48.9730 | 906 | 0.3967 | 0.8576 |
| 0.3685 | 50.0 | 925 | 0.3968 | 0.8576 |
| 0.3879 | 50.9730 | 943 | 0.3950 | 0.8573 |
| 0.3774 | 52.0 | 962 | 0.3951 | 0.8580 |
| 0.3588 | 52.9730 | 980 | 0.3950 | 0.8584 |
| 0.3746 | 54.0 | 999 | 0.3959 | 0.8584 |
| 0.3677 | 54.9730 | 1017 | 0.3960 | 0.8584 |
| 0.3608 | 56.0 | 1036 | 0.3965 | 0.8588 |
| 0.3518 | 56.9730 | 1054 | 0.3963 | 0.8580 |
| 0.3554 | 58.0 | 1073 | 0.3957 | 0.8588 |
| 0.3584 | 58.9730 | 1091 | 0.3957 | 0.8584 |
| 0.3776 | 60.0 | 1110 | 0.3948 | 0.8592 |
| 0.364 | 60.9730 | 1128 | 0.3942 | 0.8588 |
| 0.3647 | 62.0 | 1147 | 0.3942 | 0.8584 |
| 0.3613 | 62.9730 | 1165 | 0.3949 | 0.8588 |
| 0.3509 | 64.0 | 1184 | 0.3961 | 0.8584 |
| 0.3816 | 64.9730 | 1202 | 0.3967 | 0.8584 |
| 0.3552 | 66.0 | 1221 | 0.3957 | 0.8588 |
| 0.3461 | 66.9730 | 1239 | 0.3946 | 0.8588 |
| 0.364 | 68.0 | 1258 | 0.3940 | 0.8588 |
| 0.372 | 68.9730 | 1276 | 0.3943 | 0.8599 |
| 0.347 | 70.0 | 1295 | 0.3939 | 0.8592 |
| 0.3537 | 70.9730 | 1313 | 0.3943 | 0.8599 |
| 0.3537 | 72.0 | 1332 | 0.3950 | 0.8595 |
| 0.3823 | 72.9730 | 1350 | 0.3951 | 0.8592 |
| 0.3454 | 74.0 | 1369 | 0.3947 | 0.8592 |
| 0.3667 | 74.9730 | 1387 | 0.3949 | 0.8592 |
| 0.3585 | 76.0 | 1406 | 0.3945 | 0.8592 |
| 0.356 | 76.9730 | 1424 | 0.3947 | 0.8592 |
| 0.337 | 78.0 | 1443 | 0.3949 | 0.8592 |
| 0.3588 | 78.9730 | 1461 | 0.3944 | 0.8592 |
| 0.3591 | 80.0 | 1480 | 0.3941 | 0.8592 |
| 0.3638 | 80.9730 | 1498 | 0.3943 | 0.8592 |
| 0.367 | 82.0 | 1517 | 0.3941 | 0.8592 |
| 0.3694 | 82.9730 | 1535 | 0.3943 | 0.8592 |
| 0.3779 | 84.0 | 1554 | 0.3941 | 0.8592 |
| 0.344 | 84.9730 | 1572 | 0.3939 | 0.8595 |
| 0.3619 | 86.0 | 1591 | 0.3935 | 0.8592 |
| 0.342 | 86.9730 | 1609 | 0.3934 | 0.8595 |
| 0.3686 | 88.0 | 1628 | 0.3931 | 0.8595 |
| 0.3407 | 88.9730 | 1646 | 0.3931 | 0.8595 |
| 0.3553 | 90.0 | 1665 | 0.3933 | 0.8599 |
| 0.367 | 90.9730 | 1683 | 0.3934 | 0.8595 |
| 0.3665 | 92.0 | 1702 | 0.3932 | 0.8599 |
| 0.3684 | 92.9730 | 1720 | 0.3932 | 0.8599 |
| 0.3685 | 94.0 | 1739 | 0.3934 | 0.8595 |
| 0.375 | 94.9730 | 1757 | 0.3934 | 0.8592 |
| 0.3564 | 96.0 | 1776 | 0.3934 | 0.8592 |
| 0.362 | 96.9730 | 1794 | 0.3934 | 0.8592 |
| 0.3688 | 97.2973 | 1800 | 0.3934 | 0.8592 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
Melo1512/vit-msn-small-wbc-classifier-0316-cleaned-dataset-10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-0316-cleaned-dataset-10
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3538
- Accuracy: 0.8935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5193 | 1.0 | 16 | 0.6823 | 0.7945 |
| 0.5339 | 2.0 | 32 | 0.4553 | 0.8438 |
| 0.4778 | 3.0 | 48 | 0.4525 | 0.8478 |
| 0.4253 | 4.0 | 64 | 0.4077 | 0.8473 |
| 0.4086 | 5.0 | 80 | 0.4218 | 0.8575 |
| 0.3673 | 6.0 | 96 | 0.4002 | 0.8693 |
| 0.3275 | 7.0 | 112 | 0.3302 | 0.8773 |
| 0.3231 | 8.0 | 128 | 0.3672 | 0.8803 |
| 0.302 | 9.0 | 144 | 0.3363 | 0.8900 |
| 0.3122 | 10.0 | 160 | 0.3284 | 0.8843 |
| 0.2686 | 11.0 | 176 | 0.3317 | 0.8874 |
| 0.2786 | 12.0 | 192 | 0.3660 | 0.8883 |
| 0.2338 | 13.0 | 208 | 0.3520 | 0.8834 |
| 0.2466 | 14.0 | 224 | 0.3414 | 0.8896 |
| 0.2296 | 15.0 | 240 | 0.3531 | 0.8874 |
| 0.1961 | 16.0 | 256 | 0.3844 | 0.8847 |
| 0.2056 | 17.0 | 272 | 0.3705 | 0.8900 |
| 0.197 | 18.0 | 288 | 0.3538 | 0.8935 |
| 0.1748 | 19.0 | 304 | 0.3717 | 0.8887 |
| 0.1807 | 20.0 | 320 | 0.4075 | 0.8843 |
| 0.177 | 21.0 | 336 | 0.3881 | 0.8830 |
| 0.1433 | 22.0 | 352 | 0.4014 | 0.8856 |
| 0.1522 | 23.0 | 368 | 0.3918 | 0.8874 |
| 0.1322 | 24.0 | 384 | 0.4199 | 0.8905 |
| 0.1396 | 25.0 | 400 | 0.4142 | 0.8896 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
Melo1512/vit-msn-small-wbc-classifier-0316-cropped-cleaned-dataset-10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-0316-cropped-cleaned-dataset-10
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Accuracy: 0.8855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3709 | 1.0 | 17 | 0.6977 | 0.8050 |
| 0.5673 | 2.0 | 34 | 0.5949 | 0.8099 |
| 0.5227 | 3.0 | 51 | 0.6152 | 0.7931 |
| 0.4958 | 4.0 | 68 | 0.4351 | 0.8436 |
| 0.4402 | 5.0 | 85 | 0.3777 | 0.8580 |
| 0.3878 | 6.0 | 102 | 0.3970 | 0.8699 |
| 0.3646 | 7.0 | 119 | 0.3793 | 0.8641 |
| 0.3452 | 8.0 | 136 | 0.3550 | 0.8805 |
| 0.344 | 9.0 | 153 | 0.4003 | 0.8736 |
| 0.3365 | 10.0 | 170 | 0.3654 | 0.8830 |
| 0.3223 | 11.0 | 187 | 0.3571 | 0.8764 |
| 0.2819 | 12.0 | 204 | 0.3665 | 0.8789 |
| 0.2998 | 13.0 | 221 | 0.3609 | 0.8838 |
| 0.2959 | 14.0 | 238 | 0.4335 | 0.8719 |
| 0.2662 | 15.0 | 255 | 0.4245 | 0.8785 |
| 0.2668 | 16.0 | 272 | 0.3760 | 0.8846 |
| 0.2576 | 17.0 | 289 | 0.3728 | 0.8830 |
| 0.2398 | 18.0 | 306 | 0.4192 | 0.8814 |
| 0.2278 | 19.0 | 323 | 0.4156 | 0.8805 |
| 0.2033 | 20.0 | 340 | 0.4159 | 0.8851 |
| 0.2037 | 21.0 | 357 | 0.3986 | 0.8855 |
| 0.1934 | 22.0 | 374 | 0.4220 | 0.8822 |
| 0.1983 | 23.0 | 391 | 0.4159 | 0.8855 |
| 0.1746 | 24.0 | 408 | 0.4179 | 0.8855 |
| 0.1776 | 25.0 | 425 | 0.4247 | 0.8834 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
WillyIde545/dog_classifier
|
# Model Card for Model ID
The model classifies a picture of a dog into one of 120 different breeds.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model. This model takes in a picture of a dog, resizes it, and then classifies the dog as one of 120 dog breeds.
- **Developed by:** [Wilson Ide]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
This model was trained only on the Stanford Dogs dataset, which covers a limited range of images. Additionally, it is only about 86% accurate.
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
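A minimal sketch, assuming the checkpoint follows the standard 🤗 image-classification interface (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumption: the checkpoint is a standard 🤗 image-classification model.
model_id = "WillyIde545/dog_classifier"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# "dog.jpg" is a placeholder path to an input image.
image = Image.open("dog.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])  # e.g. "label_42"
```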
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90",
"label_91",
"label_92",
"label_93",
"label_94",
"label_95",
"label_96",
"label_97",
"label_98",
"label_99",
"label_100",
"label_101",
"label_102",
"label_103",
"label_104",
"label_105",
"label_106",
"label_107",
"label_108",
"label_109",
"label_110",
"label_111",
"label_112",
"label_113",
"label_114",
"label_115",
"label_116",
"label_117",
"label_118",
"label_119"
] |
shoni/comic-sans-detector
|
# Comic Sans Detector
This repository contains a fine-tuned ResNet-18 model, specifically trained to detect whether an image contains Comic Sans font. It is a fine-tuning of a previously fine-tuned font classification model, based on the ResNet-18 foundation model.
## Features
- Distinguishes between Comic Sans and non-Comic Sans images.
- Built using a custom dataset with two classes: `comic` and `not-comic`.
## Usage
To use this model with the Hugging Face Inference API:
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="shoni/comic-sans-detector")
result = classifier("path/to/image.jpg")
print(result)
```
## Repository Contents
- **`comic-detector.ipynb`**: A notebook that demonstrates the training and evaluation process for the Comic Sans detector using the fine-tuned ResNet-18 model.
- **`image-format-generalizer.ipynb`**: A utility notebook for preparing and normalizing image datasets, ensuring consistent formatting across `/data` folders.
## Dataset Structure (Not Included)
The dataset used for training and evaluation should follow this structure:
```
/data
├── comic/
│ ├── image1.jpg
│ ├── image2.png
│ └── ...
├── not-comic/
│ ├── image1.jpg
│ ├── image2.png
│ └── ...
```
- **`comic/`**: Contains images labeled as featuring Comic Sans font.
- **`not-comic/`**: Contains images labeled as not featuring Comic Sans font.
⚠️ The dataset itself is not included in this repository. You must prepare and structure your dataset as described.
## How to Use
### 1. Clone the Repository
```bash
git clone https://huggingface.co/shoni/comic-sans-detector
cd comic-sans-detector
```
### 2. Prepare the Dataset
Ensure your dataset is properly structured under a `/data` directory with `comic/` and `not-comic/` folders.
### 3. Run the Training Notebook
Open `comic-detector.ipynb` in Jupyter Notebook or an equivalent environment to retrain the model or evaluate it.
### 4. Format Images (Optional)
If your dataset images are not in a consistent format, use `image-format-generalizer.ipynb` to preprocess them.
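The notebook defines the actual preprocessing; as a rough sketch, assuming the goal is simply to convert every image under `/data` to RGB JPEG, it might look like:
```python
from pathlib import Path
from PIL import Image

# Hypothetical normalization pass: convert every image under /data to RGB JPEG.
# The actual image-format-generalizer.ipynb notebook may apply different steps.
for path in Path("data").rglob("*"):
    if path.suffix.lower() in {".png", ".jpg", ".jpeg", ".bmp", ".webp"}:
        with Image.open(path) as img:
            img.convert("RGB").save(path.with_suffix(".jpg"), format="JPEG")
```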
## Model Usage
The fine-tuned model can be deployed directly via the Hugging Face Inference API. Once uploaded, the model can be used to classify whether an image contains Comic Sans font.
Example API usage (replace `shoni/comic-sans-detector` with your repository name):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="shoni/comic-sans-detector")
result = classifier("path/to/image.jpg")
print(result)
```
## Fine-Tuning Process
This model was fine-tuned from a previously fine-tuned font classification model, which itself was based on the ResNet-18 foundation model. Fine-tuning was performed on a custom dataset with two classes: `comic` and `not-comic`.
## Acknowledgments
This project is based on the original font identifier repository by [gaborcselle](https://huggingface.co/gaborcselle/font-identifier).
## License
Include your preferred license here (e.g., MIT, Apache 2.0, etc.).
|
[
"comic",
"not-comic"
] |
rbenrejeb/vit-Facial-Expression-Recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3720
- Accuracy: 0.8732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5752 | 0.2164 | 100 | 0.3737 | 0.8740 |
| 0.568 | 0.4328 | 200 | 0.3759 | 0.8720 |
| 0.551 | 0.6492 | 300 | 0.3722 | 0.8734 |
| 0.5604 | 0.8656 | 400 | 0.3747 | 0.8733 |
| 0.5391 | 1.0820 | 500 | 0.3720 | 0.8732 |
| 0.5751 | 1.2984 | 600 | 0.3761 | 0.8718 |
| 0.5678 | 1.5147 | 700 | 0.3824 | 0.8691 |
| 0.5493 | 1.7311 | 800 | 0.3870 | 0.8672 |
| 0.5766 | 1.9475 | 900 | 0.3942 | 0.8629 |
| 0.5301 | 2.1639 | 1000 | 0.3947 | 0.8639 |
| 0.5092 | 2.3803 | 1100 | 0.3896 | 0.8656 |
| 0.5164 | 2.5967 | 1200 | 0.3778 | 0.8703 |
| 0.4971 | 2.8131 | 1300 | 0.3731 | 0.8730 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"angry",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
MHTrXz/fire_classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
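A minimal sketch, assuming the checkpoint works with the image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Assumption: the checkpoint is usable with the image-classification pipeline.
classifier = pipeline("image-classification", model="MHTrXz/fire_classification")

# "scene.jpg" is a placeholder path; top_k=2 returns scores for both classes.
for prediction in classifier("scene.jpg", top_k=2):
    print(prediction["label"], round(prediction["score"], 3))
```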
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fire_images",
"non_fire_images"
] |
kaleemullah0005/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5130
- Accuracy: 0.9678
- F1 Macro: 0.3279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.9582 | 1.0 | 326 | 0.6083 | 0.9570 | 0.3720 |
| 0.9225 | 2.0 | 652 | 0.5519 | 0.9455 | 0.3759 |
| 0.8664 | 3.0 | 978 | 0.4927 | 0.9677 | 0.3454 |
| 0.6536 | 4.0 | 1304 | 0.5522 | 0.8848 | 0.3702 |
| 0.6793 | 5.0 | 1630 | 0.4951 | 0.9455 | 0.3830 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"0",
"1",
"2"
] |
Melo1512/vit-msn-small-wbc-classifier-cells-separated-dataset-agregates-25
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-cells-separated-dataset-agregates-25
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1790
- Accuracy: 0.9401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.351 | 0.9937 | 119 | 0.2523 | 0.9151 |
| 0.3364 | 1.9958 | 239 | 0.2355 | 0.9195 |
| 0.2999 | 2.9979 | 359 | 0.2384 | 0.9169 |
| 0.2861 | 4.0 | 479 | 0.1902 | 0.9341 |
| 0.3014 | 4.9937 | 598 | 0.2154 | 0.9290 |
| 0.292 | 5.9958 | 718 | 0.1764 | 0.9383 |
| 0.2441 | 6.9979 | 838 | 0.1894 | 0.9348 |
| 0.2416 | 8.0 | 958 | 0.1913 | 0.9349 |
| 0.2642 | 8.9937 | 1077 | 0.1738 | 0.9385 |
| 0.2482 | 9.9958 | 1197 | 0.1911 | 0.9371 |
| 0.2279 | 10.9979 | 1317 | 0.1867 | 0.9381 |
| 0.2331 | 12.0 | 1437 | 0.1814 | 0.9389 |
| 0.2208 | 12.9937 | 1556 | 0.1790 | 0.9401 |
| 0.2326 | 13.9958 | 1676 | 0.1926 | 0.9366 |
| 0.1899 | 14.9979 | 1796 | 0.1975 | 0.9372 |
| 0.1822 | 16.0 | 1916 | 0.2052 | 0.9352 |
| 0.1837 | 16.9937 | 2035 | 0.2078 | 0.9364 |
| 0.1712 | 17.9958 | 2155 | 0.2345 | 0.9288 |
| 0.1715 | 18.9979 | 2275 | 0.2156 | 0.9368 |
| 0.1516 | 20.0 | 2395 | 0.2279 | 0.9368 |
| 0.1504 | 20.9937 | 2514 | 0.2213 | 0.9382 |
| 0.139 | 21.9958 | 2634 | 0.2247 | 0.9370 |
| 0.1264 | 22.9979 | 2754 | 0.2357 | 0.9384 |
| 0.1266 | 24.0 | 2874 | 0.2360 | 0.9381 |
| 0.1144 | 24.8434 | 2975 | 0.2370 | 0.9375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"agregados",
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
Melo1512/vit-msn-small-wbc-classifier-cells-separated-dataset-no-agregates-10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-wbc-classifier-cells-separated-dataset-no-agregates-10
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1504
- Accuracy: 0.9463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3353 | 0.9937 | 118 | 0.2299 | 0.9214 |
| 0.2884 | 1.9958 | 237 | 0.2378 | 0.9172 |
| 0.2487 | 2.9979 | 356 | 0.1871 | 0.9360 |
| 0.2347 | 4.0 | 475 | 0.1920 | 0.9328 |
| 0.2343 | 4.9937 | 593 | 0.1674 | 0.9405 |
| 0.2285 | 5.9958 | 712 | 0.1642 | 0.9426 |
| 0.2079 | 6.9979 | 831 | 0.1836 | 0.9344 |
| 0.2155 | 8.0 | 950 | 0.1661 | 0.9442 |
| 0.1954 | 8.9937 | 1068 | 0.1504 | 0.9463 |
| 0.1763 | 9.9368 | 1180 | 0.1588 | 0.9450 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"eosinophils",
"lymphocytes",
"monocytes",
"neutrophils"
] |
Audi24/OptoAI
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Audi24/OptoAI
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1247
- Validation Loss: 1.0296
- Train Accuracy: 0.6167
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
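The serialized config above describes an AdamWeightDecay optimizer with a linear PolynomialDecay schedule; a minimal sketch of recreating an equivalent optimizer with `transformers.create_optimizer` (how the original optimizer was actually constructed is an assumption) could look like:
```python
from transformers import create_optimizer

# Matches the serialized config above: 3e-5 initial LR decayed linearly to 0 over
# 2400 steps, AdamWeightDecay with weight_decay_rate=0.01.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=2400,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```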
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.2671 | 1.1720 | 0.5 | 0 |
| 1.1998 | 1.0899 | 0.5417 | 1 |
| 1.1785 | 1.0827 | 0.6167 | 2 |
| 1.1651 | 1.0569 | 0.5917 | 3 |
| 1.1247 | 1.0296 | 0.6167 | 4 |
### Framework versions
- Transformers 4.47.0
- TensorFlow 2.17.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"normal",
"cataract",
"glaucoma",
"retinal disease"
] |
Audi24/OptoAI2.0
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Audi24/OptoAI2.0
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2258
- Validation Loss: 1.1361
- Train Accuracy: 0.4875
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3733 | 1.3181 | 0.425 | 0 |
| 1.3350 | 1.2704 | 0.4375 | 1 |
| 1.3132 | 1.2019 | 0.5125 | 2 |
| 1.2711 | 1.2010 | 0.5 | 3 |
| 1.2258 | 1.1361 | 0.4875 | 4 |
### Framework versions
- Transformers 4.47.0
- TensorFlow 2.17.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"normal",
"cataract",
"glaucoma",
"retinal disease"
] |
Audi24/Opto_AI
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Audi24/Opto_AI
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3912
- Validation Loss: 0.3749
- Train Accuracy: 0.8619
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 16885, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.8641 | 0.5357 | 0.8012 | 0 |
| 0.5990 | 0.4117 | 0.8702 | 1 |
| 0.4826 | 0.3584 | 0.8857 | 2 |
| 0.4381 | 0.3717 | 0.8655 | 3 |
| 0.3912 | 0.3749 | 0.8619 | 4 |
### Framework versions
- Transformers 4.47.0
- TensorFlow 2.17.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"cataract",
"diabetic retinopathy",
"glaucoma",
"normal"
] |
Emiel/cub-200-bird-classifier-swin
|
# Model Card for Model ID

### Model Description
This model was finetuned for the "Feather in Focus!" Kaggle competition of the Information Studies Master's Applied Machine Learning course at the University of Amsterdam.
The goal of the competition was to apply novel approaches to achieve the highest possible accuracy on a bird classification task with 200 classes.
We were given a labeled dataset of 3926 images and an unlabeled dataset of 4000 test images.
Out of 32 groups and 1083 submissions, we achieved the #1 accuracy on the test set with a score of 0.87950.
### Training Details
The model we finetuned, microsoft/swin-large-patch4-window12-384-in22k, was pre-trained on ImageNet-21k; see https://huggingface.co/microsoft/swin-large-patch4-window12-384-in22k.
#### Preprocessing
Data augmentation was applied to the training data in a custom Torch dataset class. Because the dataset is small, images were not replaced but duplicated and augmented.
The only augmentations applied were HorizontalFlips and Rotations (10 degrees), chosen to suit the relatively homogeneous dataset.
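The exact transform code is not included in this card; a minimal torchvision sketch of the augmentations described above (the resize and normalization values are assumptions based on the usual Swin 384×384 preprocessing) could look like this:
```python
from torchvision import transforms

# Augmentations described above: horizontal flips and rotations of up to 10 degrees.
# Resize and normalization assume the usual 384x384 / ImageNet-statistics preprocessing
# of the Swin checkpoint; the original custom dataset class may differ.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.Resize((384, 384)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```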
### Finetuning
Finetuning was done on some 50 different models, including various ViTs and CNNs. All models were trained for 10 epochs, with the best model (based on evaluation accuracy) saved every epoch.
### Finetuning Data
The finetuning data is a subset of the cub-200-2011 dataset, http://www.vision.caltech.edu/datasets/cub_200_2011/.
We finetuned the model on 3533 samples of the labeled dataset we were given, stratified on the label (7066 including augmented images).
#### Finetuning Hyperparameters
| Hyperparameter | Value |
|-----------------------|----------------------------|
| Optimizer | AdamW |
| Learning Rate | 1e-4 |
| Batch Size | 32 |
| Epochs | 2 |
| Weight Decay | * |
| Class Weight | * |
| Label Smoothing | * |
| Scheduler | * |
| Mixed Precision | Torch AMP |
*parameters were intentionally not set because of poor results
### Evaluation Data
The evaluation data is a subset of the cub-200-2011 dataset, http://www.vision.caltech.edu/datasets/cub_200_2011/.
We evaluated the model on 393 samples of the labeled dataset we were given, stratified on the label.
#### Testing Data
The testing data is an unlabeled subset of the cub-200-2011 dataset, http://www.vision.caltech.edu/datasets/cub_200_2011/, containing 4000 images. After finetuning,
the best model (selected on the evaluation data) was loaded and used to predict labels for the unlabeled test set.
These predicted labels were submitted to the Kaggle competition as a CSV, which returned the test accuracy.
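The prediction and submission code is not part of this card; a minimal sketch of producing such a CSV (the image directory and column names are placeholders, not the competition's actual format) might look like:
```python
import csv
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Emiel/cub-200-bird-classifier-swin"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).eval()

# "test_images/" and the CSV column names are placeholders.
rows = []
for path in sorted(Path("test_images").glob("*.jpg")):
    inputs = processor(images=Image.open(path).convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        label_id = model(**inputs).logits.argmax(-1).item()
    rows.append({"image": path.name, "label": label_id})

with open("submission.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image", "label"])
    writer.writeheader()
    writer.writerows(rows)
```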
### Poster

*novel approaches were not applied when finetuning the final model as they did not improve accuracy.
|
[
"black_footed_albatross",
"laysan_albatross",
"sooty_albatross",
"groove_billed_ani",
"crested_auklet",
"least_auklet",
"parakeet_auklet",
"rhinoceros_auklet",
"brewer_blackbird",
"red_winged_blackbird",
"rusty_blackbird",
"yellow_headed_blackbird",
"bobolink",
"indigo_bunting",
"lazuli_bunting",
"painted_bunting",
"cardinal",
"spotted_catbird",
"gray_catbird",
"yellow_breasted_chat",
"eastern_towhee",
"chuck_will_widow",
"brandt_cormorant",
"red_faced_cormorant",
"pelagic_cormorant",
"bronzed_cowbird",
"shiny_cowbird",
"brown_creeper",
"american_crow",
"fish_crow",
"black_billed_cuckoo",
"mangrove_cuckoo",
"yellow_billed_cuckoo",
"gray_crowned_rosy_finch",
"purple_finch",
"northern_flicker",
"acadian_flycatcher",
"great_crested_flycatcher",
"least_flycatcher",
"olive_sided_flycatcher",
"scissor_tailed_flycatcher",
"vermilion_flycatcher",
"yellow_bellied_flycatcher",
"frigatebird",
"northern_fulmar",
"gadwall",
"american_goldfinch",
"european_goldfinch",
"boat_tailed_grackle",
"eared_grebe",
"horned_grebe",
"pied_billed_grebe",
"western_grebe",
"blue_grosbeak",
"evening_grosbeak",
"pine_grosbeak",
"rose_breasted_grosbeak",
"pigeon_guillemot",
"california_gull",
"glaucous_winged_gull",
"heermann_gull",
"herring_gull",
"ivory_gull",
"ring_billed_gull",
"slaty_backed_gull",
"western_gull",
"anna_hummingbird",
"ruby_throated_hummingbird",
"rufous_hummingbird",
"green_violetear",
"long_tailed_jaeger",
"pomarine_jaeger",
"blue_jay",
"florida_jay",
"green_jay",
"dark_eyed_junco",
"tropical_kingbird",
"gray_kingbird",
"belted_kingfisher",
"green_kingfisher",
"pied_kingfisher",
"ringed_kingfisher",
"white_breasted_kingfisher",
"red_legged_kittiwake",
"horned_lark",
"pacific_loon",
"mallard",
"western_meadowlark",
"hooded_merganser",
"red_breasted_merganser",
"mockingbird",
"nighthawk",
"clark_nutcracker",
"white_breasted_nuthatch",
"baltimore_oriole",
"hooded_oriole",
"orchard_oriole",
"scott_oriole",
"ovenbird",
"brown_pelican",
"white_pelican",
"western_wood_pewee",
"sayornis",
"american_pipit",
"whip_poor_will",
"horned_puffin",
"common_raven",
"white_necked_raven",
"american_redstart",
"geococcyx",
"loggerhead_shrike",
"great_grey_shrike",
"baird_sparrow",
"black_throated_sparrow",
"brewer_sparrow",
"chipping_sparrow",
"clay_colored_sparrow",
"house_sparrow",
"field_sparrow",
"fox_sparrow",
"grasshopper_sparrow",
"harris_sparrow",
"henslow_sparrow",
"le_conte_sparrow",
"lincoln_sparrow",
"nelson_sharp_tailed_sparrow",
"savannah_sparrow",
"seaside_sparrow",
"song_sparrow",
"tree_sparrow",
"vesper_sparrow",
"white_crowned_sparrow",
"white_throated_sparrow",
"cape_glossy_starling",
"bank_swallow",
"barn_swallow",
"cliff_swallow",
"tree_swallow",
"scarlet_tanager",
"summer_tanager",
"artic_tern",
"black_tern",
"caspian_tern",
"common_tern",
"elegant_tern",
"forsters_tern",
"least_tern",
"green_tailed_towhee",
"brown_thrasher",
"sage_thrasher",
"black_capped_vireo",
"blue_headed_vireo",
"philadelphia_vireo",
"red_eyed_vireo",
"warbling_vireo",
"white_eyed_vireo",
"yellow_throated_vireo",
"bay_breasted_warbler",
"black_and_white_warbler",
"black_throated_blue_warbler",
"blue_winged_warbler",
"canada_warbler",
"cape_may_warbler",
"cerulean_warbler",
"chestnut_sided_warbler",
"golden_winged_warbler",
"hooded_warbler",
"kentucky_warbler",
"magnolia_warbler",
"mourning_warbler",
"myrtle_warbler",
"nashville_warbler",
"orange_crowned_warbler",
"palm_warbler",
"pine_warbler",
"prairie_warbler",
"prothonotary_warbler",
"swainson_warbler",
"tennessee_warbler",
"wilson_warbler",
"worm_eating_warbler",
"yellow_warbler",
"northern_waterthrush",
"louisiana_waterthrush",
"bohemian_waxwing",
"cedar_waxwing",
"american_three_toed_woodpecker",
"pileated_woodpecker",
"red_bellied_woodpecker",
"red_cockaded_woodpecker",
"red_headed_woodpecker",
"downy_woodpecker",
"bewick_wren",
"cactus_wren",
"carolina_wren",
"house_wren",
"marsh_wren",
"rock_wren",
"winter_wren",
"common_yellowthroat"
] |
thainq107/flowers-vit-base-patch16-224-in21k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flowers-vit-base-patch16-224-in21k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2048
- Model Preparation Time: 0.0068
- Accuracy: 0.9673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
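For reference, a minimal sketch of how the settings above translate into `TrainingArguments` (the output directory is a placeholder and this is not the exact training script):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="flowers-vit-base-patch16-224-in21k",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```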
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:--------:|
| No log | 1.0 | 92 | 0.6178 | 0.0068 | 0.9700 |
| No log | 2.0 | 184 | 0.3102 | 0.0068 | 0.9646 |
| No log | 3.0 | 276 | 0.2315 | 0.0068 | 0.9700 |
| No log | 4.0 | 368 | 0.2097 | 0.0068 | 0.9673 |
| No log | 5.0 | 460 | 0.2048 | 0.0068 | 0.9673 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Tokenizers 0.21.0
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
wellCh4n/tomato-leaf-disease-classification-vit
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tomato-leaf-disease-classification-vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the wellCh4n/tomato-leaf-disease-image dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0170
- Accuracy: 0.9967
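A minimal usage sketch via the `image-classification` pipeline; `leaf.jpg` is a placeholder path for a tomato-leaf photo:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and classify a single leaf image.
classifier = pipeline("image-classification", model="wellCh4n/tomato-leaf-disease-classification-vit")
print(classifier("leaf.jpg")[0])  # e.g. {'label': 'a healthy tomato leaf', 'score': ...}
```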
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1879 | 1.0 | 1930 | 0.0915 | 0.9842 |
| 0.1685 | 2.0 | 3860 | 0.0688 | 0.9838 |
| 0.0118 | 3.0 | 5790 | 0.0271 | 0.9952 |
| 0.1 | 4.0 | 7720 | 0.0244 | 0.9952 |
| 0.0629 | 5.0 | 9650 | 0.0170 | 0.9967 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"a healthy tomato leaf",
"a tomato leaf with leaf mold",
"a tomato leaf with target spot",
"a tomato leaf with late blight",
"a tomato leaf with early blight",
"a tomato leaf with bacterial spot",
"a tomato leaf with septoria leaf spot",
"a tomato leaf with tomato mosaic virus",
"a tomato leaf with tomato yellow leaf curl virus",
"a tomato leaf with spider mites two-spotted spider mite"
] |
wellCh4n/tomato-leaf-disease-classification-resnet50
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tomato-leaf-disease-classification-resnet50
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the wellCh4n/tomato-leaf-disease-image dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0197
- Accuracy: 0.9956
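A minimal usage sketch with the lower-level API, assuming a local image at the placeholder path `leaf.jpg`:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "wellCh4n/tomato-leaf-disease-classification-resnet50"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("leaf.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map the highest-scoring index back to its disease label.
print(model.config.id2label[logits.argmax(-1).item()])
```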
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6891 | 1.0 | 965 | 1.6572 | 0.3488 |
| 1.1351 | 2.0 | 1930 | 1.1593 | 0.7126 |
| 0.7767 | 3.0 | 2895 | 0.6135 | 0.8168 |
| 0.7963 | 4.0 | 3860 | 0.3818 | 0.8796 |
| 0.547 | 5.0 | 4825 | 0.2581 | 0.9302 |
| 0.5104 | 6.0 | 5790 | 0.2106 | 0.9438 |
| 0.3997 | 7.0 | 6755 | 0.1579 | 0.9563 |
| 0.2527 | 8.0 | 7720 | 0.1292 | 0.9604 |
| 0.3268 | 9.0 | 8685 | 0.1154 | 0.9659 |
| 0.2595 | 10.0 | 9650 | 0.1018 | 0.9699 |
| 0.2269 | 11.0 | 10615 | 0.0869 | 0.9743 |
| 0.2515 | 12.0 | 11580 | 0.0783 | 0.9747 |
| 0.2604 | 13.0 | 12545 | 0.0710 | 0.9794 |
| 0.2583 | 14.0 | 13510 | 0.0704 | 0.9783 |
| 0.2004 | 15.0 | 14475 | 0.0603 | 0.9824 |
| 0.2552 | 16.0 | 15440 | 0.0565 | 0.9835 |
| 0.2192 | 17.0 | 16405 | 0.0553 | 0.9846 |
| 0.3443 | 18.0 | 17370 | 0.0508 | 0.9831 |
| 0.1954 | 19.0 | 18335 | 0.0530 | 0.9846 |
| 0.2685 | 20.0 | 19300 | 0.0430 | 0.9864 |
| 0.1277 | 21.0 | 20265 | 0.0406 | 0.9864 |
| 0.1388 | 22.0 | 21230 | 0.0404 | 0.9872 |
| 0.2379 | 23.0 | 22195 | 0.0399 | 0.9875 |
| 0.1018 | 24.0 | 23160 | 0.0441 | 0.9879 |
| 0.2155 | 25.0 | 24125 | 0.0364 | 0.9905 |
| 0.1699 | 26.0 | 25090 | 0.0398 | 0.9875 |
| 0.2772 | 27.0 | 26055 | 0.0364 | 0.9872 |
| 0.1669 | 28.0 | 27020 | 0.0369 | 0.9894 |
| 0.0867 | 29.0 | 27985 | 0.0339 | 0.9901 |
| 0.1314 | 30.0 | 28950 | 0.0322 | 0.9905 |
| 0.082 | 31.0 | 29915 | 0.0362 | 0.9879 |
| 0.0393 | 32.0 | 30880 | 0.0332 | 0.9908 |
| 0.0812 | 33.0 | 31845 | 0.0329 | 0.9905 |
| 0.2634 | 34.0 | 32810 | 0.0333 | 0.9897 |
| 0.1581 | 35.0 | 33775 | 0.0337 | 0.9901 |
| 0.168 | 36.0 | 34740 | 0.0298 | 0.9890 |
| 0.0653 | 37.0 | 35705 | 0.0311 | 0.9905 |
| 0.0998 | 38.0 | 36670 | 0.0326 | 0.9901 |
| 0.0947 | 39.0 | 37635 | 0.0288 | 0.9919 |
| 0.1126 | 40.0 | 38600 | 0.0272 | 0.9916 |
| 0.1319 | 41.0 | 39565 | 0.0272 | 0.9919 |
| 0.0446 | 42.0 | 40530 | 0.0283 | 0.9916 |
| 0.2453 | 43.0 | 41495 | 0.0281 | 0.9919 |
| 0.0708 | 44.0 | 42460 | 0.0263 | 0.9923 |
| 0.0441 | 45.0 | 43425 | 0.0262 | 0.9916 |
| 0.0936 | 46.0 | 44390 | 0.0252 | 0.9919 |
| 0.1565 | 47.0 | 45355 | 0.0284 | 0.9923 |
| 0.0404 | 48.0 | 46320 | 0.0263 | 0.9930 |
| 0.0357 | 49.0 | 47285 | 0.0240 | 0.9930 |
| 0.0971 | 50.0 | 48250 | 0.0285 | 0.9916 |
| 0.0582 | 51.0 | 49215 | 0.0251 | 0.9923 |
| 0.048 | 52.0 | 50180 | 0.0257 | 0.9919 |
| 0.1218 | 53.0 | 51145 | 0.0252 | 0.9930 |
| 0.0576 | 54.0 | 52110 | 0.0227 | 0.9930 |
| 0.0723 | 55.0 | 53075 | 0.0227 | 0.9930 |
| 0.1347 | 56.0 | 54040 | 0.0242 | 0.9941 |
| 0.1684 | 57.0 | 55005 | 0.0255 | 0.9927 |
| 0.0525 | 58.0 | 55970 | 0.0250 | 0.9938 |
| 0.1031 | 59.0 | 56935 | 0.0265 | 0.9923 |
| 0.0768 | 60.0 | 57900 | 0.0244 | 0.9941 |
| 0.0416 | 61.0 | 58865 | 0.0207 | 0.9934 |
| 0.1783 | 62.0 | 59830 | 0.0237 | 0.9941 |
| 0.1253 | 63.0 | 60795 | 0.0269 | 0.9912 |
| 0.0448 | 64.0 | 61760 | 0.0236 | 0.9941 |
| 0.0967 | 65.0 | 62725 | 0.0230 | 0.9934 |
| 0.0486 | 66.0 | 63690 | 0.0229 | 0.9941 |
| 0.0442 | 67.0 | 64655 | 0.0256 | 0.9934 |
| 0.0526 | 68.0 | 65620 | 0.0210 | 0.9945 |
| 0.0949 | 69.0 | 66585 | 0.0250 | 0.9938 |
| 0.0674 | 70.0 | 67550 | 0.0228 | 0.9938 |
| 0.1554 | 71.0 | 68515 | 0.0240 | 0.9941 |
| 0.0598 | 72.0 | 69480 | 0.0233 | 0.9945 |
| 0.0632 | 73.0 | 70445 | 0.0218 | 0.9949 |
| 0.0951 | 74.0 | 71410 | 0.0234 | 0.9945 |
| 0.1634 | 75.0 | 72375 | 0.0245 | 0.9945 |
| 0.2039 | 76.0 | 73340 | 0.0222 | 0.9938 |
| 0.0741 | 77.0 | 74305 | 0.0226 | 0.9949 |
| 0.0923 | 78.0 | 75270 | 0.0218 | 0.9949 |
| 0.0351 | 79.0 | 76235 | 0.0230 | 0.9945 |
| 0.1234 | 80.0 | 77200 | 0.0244 | 0.9934 |
| 0.0659 | 81.0 | 78165 | 0.0232 | 0.9945 |
| 0.0393 | 82.0 | 79130 | 0.0210 | 0.9949 |
| 0.053 | 83.0 | 80095 | 0.0205 | 0.9945 |
| 0.0575 | 84.0 | 81060 | 0.0210 | 0.9945 |
| 0.0651 | 85.0 | 82025 | 0.0198 | 0.9949 |
| 0.0875 | 86.0 | 82990 | 0.0210 | 0.9945 |
| 0.1006 | 87.0 | 83955 | 0.0214 | 0.9949 |
| 0.0466 | 88.0 | 84920 | 0.0211 | 0.9941 |
| 0.088 | 89.0 | 85885 | 0.0233 | 0.9923 |
| 0.1162 | 90.0 | 86850 | 0.0197 | 0.9956 |
| 0.0641 | 91.0 | 87815 | 0.0213 | 0.9949 |
| 0.0867 | 92.0 | 88780 | 0.0203 | 0.9952 |
| 0.0305 | 93.0 | 89745 | 0.0212 | 0.9941 |
| 0.1009 | 94.0 | 90710 | 0.0200 | 0.9956 |
| 0.084 | 95.0 | 91675 | 0.0200 | 0.9960 |
| 0.0409 | 96.0 | 92640 | 0.0213 | 0.9949 |
| 0.107 | 97.0 | 93605 | 0.0210 | 0.9934 |
| 0.0558 | 98.0 | 94570 | 0.0206 | 0.9952 |
| 0.0644 | 99.0 | 95535 | 0.0219 | 0.9949 |
| 0.0617 | 100.0 | 96500 | 0.0205 | 0.9941 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"a healthy tomato leaf",
"a tomato leaf with leaf mold",
"a tomato leaf with target spot",
"a tomato leaf with late blight",
"a tomato leaf with early blight",
"a tomato leaf with bacterial spot",
"a tomato leaf with septoria leaf spot",
"a tomato leaf with tomato mosaic virus",
"a tomato leaf with tomato yellow leaf curl virus",
"a tomato leaf with spider mites two-spotted spider mite"
] |
rostcherno/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rostcherno/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3671
- Validation Loss: 0.3437
- Train Accuracy: 0.912
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
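A minimal sketch of rebuilding this optimizer with `transformers.create_optimizer` in a TensorFlow environment; the warmup step count is an assumption, since the card does not list one:
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear PolynomialDecay (power=1.0) from 3e-5 to 0
# over 20000 steps and weight decay rate 0.01, as described above.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=20000,
    num_warmup_steps=0,  # assumption: no warmup is listed in the card
    weight_decay_rate=0.01,
)
```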
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7715 | 1.6111 | 0.823 | 0 |
| 1.1907 | 0.8209 | 0.889 | 1 |
| 0.6760 | 0.5247 | 0.905 | 2 |
| 0.4748 | 0.4012 | 0.903 | 3 |
| 0.3671 | 0.3437 | 0.912 | 4 |
### Framework versions
- Transformers 4.47.1
- TensorFlow 2.17.1
- Tokenizers 0.21.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
heloula/vit-Facial-Expression-Recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3652
- Accuracy: 0.8771
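A minimal usage sketch via the `image-classification` pipeline; `face.jpg` is a placeholder for a cropped face image:
```python
from transformers import pipeline

# Classify a face image into one of the seven emotion labels.
classifier = pipeline("image-classification", model="heloula/vit-Facial-Expression-Recognition")
print(classifier("face.jpg"))  # e.g. [{'label': 'happy', 'score': ...}, ...]
```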
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5697 | 0.2164 | 100 | 0.3684 | 0.8756 |
| 0.5697 | 0.4328 | 200 | 0.3664 | 0.8763 |
| 0.5708 | 0.6492 | 300 | 0.3643 | 0.8763 |
| 0.5829 | 0.8656 | 400 | 0.3661 | 0.8754 |
| 0.5458 | 1.0820 | 500 | 0.3652 | 0.8771 |
| 0.5635 | 1.2984 | 600 | 0.3702 | 0.8748 |
| 0.5495 | 1.5147 | 700 | 0.3767 | 0.8694 |
| 0.5633 | 1.7311 | 800 | 0.3848 | 0.8659 |
| 0.5666 | 1.9475 | 900 | 0.3882 | 0.8655 |
| 0.5284 | 2.1639 | 1000 | 0.3914 | 0.8640 |
| 0.5135 | 2.3803 | 1100 | 0.3824 | 0.8679 |
| 0.5036 | 2.5967 | 1200 | 0.3726 | 0.8722 |
| 0.4927 | 2.8131 | 1300 | 0.3664 | 0.8739 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"angry",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
rostcherno/ai-and-human-art-classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rostcherno/ai-and-human-art-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1332
- Validation Loss: 0.1122
- Train Accuracy: 0.9628
- Epoch: 4
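A minimal usage sketch, assuming the repository ships TensorFlow weights (the model was trained with Keras) and that `artwork.png` is a local image:
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

model_id = "rostcherno/ai-and-human-art-classifier"
processor = AutoImageProcessor.from_pretrained(model_id)
model = TFAutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("artwork.png").convert("RGB")
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
# Highest-scoring class: "ai_generated" or "non_ai_generated".
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```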
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 6325, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5634 | 0.3687 | 0.8862 | 0 |
| 0.2924 | 0.2816 | 0.8917 | 1 |
| 0.2152 | 0.1730 | 0.9423 | 2 |
| 0.1681 | 0.1308 | 0.9502 | 3 |
| 0.1332 | 0.1122 | 0.9628 | 4 |
### Framework versions
- Transformers 4.47.1
- TensorFlow 2.17.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"ai_generated",
"non_ai_generated"
] |
anh-dangminh/resnet-50-finetuned-oxfordflowers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-oxfordflowers
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the oxford102_flower_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6561
- Accuracy: 0.8330
- Precision: 0.8531
- Recall: 0.8330
- F1: 0.8319
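For reference, a sketch of how such metrics can be computed from predictions with scikit-learn; the `weighted` averaging mode and the toy label arrays are assumptions, not the card's actual evaluation code:
```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder ground-truth and predicted class indices.
y_true = [0, 1, 2, 2]
y_pred = [0, 1, 1, 2]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print(accuracy, precision, recall, f1)
```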
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 4.4813 | 1.0 | 32 | 4.1934 | 0.3176 | 0.3522 | 0.3176 | 0.2599 |
| 2.6507 | 2.0 | 64 | 1.8716 | 0.5382 | 0.5792 | 0.5382 | 0.4930 |
| 1.257 | 3.0 | 96 | 1.0998 | 0.7216 | 0.7663 | 0.7216 | 0.7085 |
| 0.5333 | 4.0 | 128 | 0.9724 | 0.7422 | 0.7875 | 0.7422 | 0.7296 |
| 0.2506 | 5.0 | 160 | 0.8243 | 0.7627 | 0.7975 | 0.7627 | 0.7566 |
| 0.0689 | 6.0 | 192 | 0.7067 | 0.8147 | 0.8482 | 0.8147 | 0.8111 |
| 0.0325 | 7.0 | 224 | 0.6370 | 0.8206 | 0.8428 | 0.8206 | 0.8175 |
| 0.0132 | 8.0 | 256 | 0.5774 | 0.8412 | 0.8617 | 0.8412 | 0.8389 |
| 0.0117 | 9.0 | 288 | 0.5469 | 0.8559 | 0.8726 | 0.8559 | 0.8542 |
| 0.0066 | 10.0 | 320 | 0.5384 | 0.8608 | 0.8722 | 0.8608 | 0.8575 |
| 0.0072 | 11.0 | 352 | 0.5246 | 0.8686 | 0.8783 | 0.8686 | 0.8650 |
| 0.0068 | 12.0 | 384 | 0.5130 | 0.8716 | 0.8790 | 0.8716 | 0.8679 |
| 0.0045 | 13.0 | 416 | 0.5038 | 0.8716 | 0.8814 | 0.8716 | 0.8691 |
| 0.0025 | 14.0 | 448 | 0.5486 | 0.85 | 0.8627 | 0.85 | 0.8448 |
| 0.0029 | 15.0 | 480 | 0.4992 | 0.8637 | 0.8736 | 0.8637 | 0.8619 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"pink primrose",
"hard-leaved pocket orchid",
"canterbury bells",
"sweet pea",
"english marigold",
"tiger lily",
"moon orchid",
"bird of paradise",
"monkshood",
"globe thistle",
"snapdragon",
"colt's foot",
"king protea",
"spear thistle",
"yellow iris",
"globe-flower",
"purple coneflower",
"peruvian lily",
"balloon flower",
"giant white arum lily",
"fire lily",
"pincushion flower",
"fritillary",
"red ginger",
"grape hyacinth",
"corn poppy",
"prince of wales feathers",
"stemless gentian",
"artichoke",
"sweet william",
"carnation",
"garden phlox",
"love in the mist",
"mexican aster",
"alpine sea holly",
"ruby-lipped cattleya",
"cape flower",
"great masterwort",
"siam tulip",
"lenten rose",
"barbeton daisy",
"daffodil",
"sword lily",
"poinsettia",
"bolero deep blue",
"wallflower",
"marigold",
"buttercup",
"oxeye daisy",
"common dandelion",
"petunia",
"wild pansy",
"primula",
"sunflower",
"pelargonium",
"bishop of llandaff",
"gaura",
"geranium",
"orange dahlia",
"pink-yellow dahlia?",
"cautleya spicata",
"japanese anemone",
"black-eyed susan",
"silverbush",
"californian poppy",
"osteospermum",
"spring crocus",
"bearded iris",
"windflower",
"tree poppy",
"gazania",
"azalea",
"water lily",
"rose",
"thorn apple",
"morning glory",
"passion flower",
"lotus",
"toad lily",
"anthurium",
"frangipani",
"clematis",
"hibiscus",
"columbine",
"desert-rose",
"tree mallow",
"magnolia",
"cyclamen",
"watercress",
"canna lily",
"hippeastrum",
"bee balm",
"ball moss",
"foxglove",
"bougainvillea",
"camellia",
"mallow",
"mexican petunia",
"bromelia",
"blanket flower",
"trumpet creeper",
"blackberry lily"
] |
facebook/dinov2-with-registers-small-imagenet1k-1-layer
|
# Vision Transformer (small-sized model) trained using DINOv2, with registers
Vision Transformer (ViT) model introduced in the paper [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Darcet et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 with registers did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) [originally introduced](https://arxiv.org/abs/2010.11929) to do supervised image classification on ImageNet.
Next, people figured out ways to make ViT work really well on self-supervised image feature extraction (i.e. learning meaningful features, also called embeddings) on
images without requiring any labels. Some example papers here include [DINOv2](https://huggingface.co/papers/2304.07193) and [MAE](https://arxiv.org/abs/2111.06377).
The authors of DINOv2 noticed that ViTs have artifacts in attention maps. It’s due to the model using some image patches as “registers”. The authors propose a fix: just add some new tokens (called "register" tokens), which you only use during pre-training (and throw away afterwards). This results in:
- no artifacts
- interpretable attention maps
- and improved performance.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png"
alt="drawing" width="600"/>
<small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small>
Note that this checkpoint does include a fine-tuned head: a single linear classification layer, trained on ImageNet-1k, on top of the self-supervised backbone.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
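A minimal sketch of that recipe (an illustration, not the training setup of this checkpoint); it assumes a head-less register backbone such as `facebook/dinov2-with-registers-small` is available and trains only a small linear head on the [CLS] embedding:
```python
import torch
from transformers import AutoModel

# Assumed head-less backbone checkpoint; this classification checkpoint itself
# already ships an ImageNet-1k head, so the backbone name here is illustrative.
backbone = AutoModel.from_pretrained("facebook/dinov2-with-registers-small")

# Linear classifier on top of the [CLS] token, e.g. for a 10-class dataset.
head = torch.nn.Linear(backbone.config.hidden_size, 10)

pixel_values = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    cls_embedding = backbone(pixel_values=pixel_values).last_hidden_state[:, 0]
logits = head(cls_embedding)  # only `head` is trained on the labeled data
```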
## Intended uses & limitations
You can use the raw model to classify an image into one of the 1000 possible ImageNet classes. See the [model hub](https://huggingface.co/models?other=dinov2_with_registers) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-with-registers-small-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-with-registers-small-imagenet1k-1-layer')
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
# Index of the highest-scoring ImageNet-1k class
class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[class_idx])
```
### BibTeX entry and citation info
```bibtex
@misc{darcet2024visiontransformersneedregisters,
title={Vision Transformers Need Registers},
author={Timothée Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
year={2024},
eprint={2309.16588},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2309.16588},
}
```
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
facebook/dinov2-with-registers-base-imagenet1k-1-layer
|
# Vision Transformer (base-sized model) trained using DINOv2, with registers
Vision Transformer (ViT) model introduced in the paper [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Darcet et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 with registers did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) [originally introduced](https://arxiv.org/abs/2010.11929) to do supervised image classification on ImageNet.
Next, people figured out ways to make ViT work really well on self-supervised image feature extraction (i.e. learning meaningful features, also called embeddings) on
images without requiring any labels. Some example papers here include [DINOv2](https://huggingface.co/papers/2304.07193) and [MAE](https://arxiv.org/abs/2111.06377).
The authors of DINOv2 noticed that ViTs have artifacts in attention maps. It’s due to the model using some image patches as “registers”. The authors propose a fix: just add some new tokens (called "register" tokens), which you only use during pre-training (and throw away afterwards). This results in:
- no artifacts
- interpretable attention maps
- and improved performance.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png"
alt="drawing" width="600"/>
<small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small>
Note that this checkpoint does include a fine-tuned head: a single linear classification layer, trained on ImageNet-1k, on top of the self-supervised backbone.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model to classify an image into one of the 1000 possible ImageNet classes. See the [model hub](https://huggingface.co/models?other=dinov2_with_registers) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-with-registers-base-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-with-registers-base-imagenet1k-1-layer')
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
# Index of the highest-scoring ImageNet-1k class
class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[class_idx])
```
### BibTeX entry and citation info
```bibtex
@misc{darcet2024visiontransformersneedregisters,
title={Vision Transformers Need Registers},
author={Timothée Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
year={2024},
eprint={2309.16588},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2309.16588},
}
```
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
facebook/dinov2-with-registers-large-imagenet1k-1-layer
|
# Vision Transformer (large-sized model) trained using DINOv2, with registers
Vision Transformer (ViT) model introduced in the paper [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Darcet et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 with registers did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) that was [originally introduced](https://arxiv.org/abs/2010.11929) for supervised image classification on ImageNet.
Subsequent work showed that ViTs can also be trained in a self-supervised way to extract image features (i.e. to learn meaningful embeddings) without requiring any labels; examples include [DINOv2](https://huggingface.co/papers/2304.07193) and [MAE](https://arxiv.org/abs/2111.06377).
The authors of DINOv2 noticed that ViTs exhibit artifacts in their attention maps, caused by the model repurposing a few low-information image patches as internal “registers” (scratch space for global computations). The proposed fix is simple: append a handful of extra learnable tokens (called "register" tokens) to the input sequence and discard their outputs at the end, so the model no longer needs to hijack patch tokens for this role. This results in:
- no artifacts
- interpretable attention maps
- and improved performance.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png"
alt="drawing" width="600"/>
<small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small>
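As a side note (a minimal sketch, not part of the original card, assuming the checkpoint config exposes the number of register tokens under `num_register_tokens`), you can check how many register tokens this model adds alongside the patch tokens:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('facebook/dinov2-with-registers-large-imagenet1k-1-layer')
# Falls back gracefully if the attribute name differs in your Transformers version
print(getattr(config, "num_register_tokens", "attribute not found"))
```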
Note that this checkpoint includes a linear classification head (a single layer on top of the backbone) trained on ImageNet-1k, which is what enables the out-of-the-box image classification shown below.
Through pre-training, the model learns an inner representation of images that can then be used to extract features for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. The linear layer is typically placed on top of the [CLS] token, as the last hidden state of that token can be seen as a representation of the entire image; a minimal sketch of this setup follows.
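The sketch below is illustrative and not part of the original card: the frozen backbone is loaded with `AutoModel` (the classification head is not used), and `num_labels` is a placeholder for your own dataset.
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-with-registers-large-imagenet1k-1-layer')
backbone = AutoModel.from_pretrained('facebook/dinov2-with-registers-large-imagenet1k-1-layer')  # encoder only

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    cls_embedding = backbone(**inputs).last_hidden_state[:, 0]  # hidden state of the [CLS] token

num_labels = 10  # placeholder: the number of classes in your own dataset
classifier = torch.nn.Linear(cls_embedding.shape[-1], num_labels)
logits = classifier(cls_embedding)  # train only this linear layer on your labeled images
print(logits.shape)
```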
## Intended uses & limitations
You can use the raw model to classify an image into one of the 1000 possible ImageNet classes. See the [model hub](https://huggingface.co/models?other=dinov2_with_registers) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image
import requests

# Load an example image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-with-registers-large-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-with-registers-large-imagenet1k-1-layer')

# Preprocess the image and run inference without tracking gradients
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class is the index with the highest logit
class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[class_idx])
```
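If you want more than the single top prediction, the logits from the snippet above can be turned into probabilities (a small follow-up sketch, not part of the original card):
```python
# Convert the logits to probabilities and print the 5 most likely ImageNet classes
probs = logits.softmax(-1)[0]
top5 = probs.topk(5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```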
### BibTeX entry and citation info
```bibtex
@misc{darcet2024visiontransformersneedregisters,
title={Vision Transformers Need Registers},
author={Timothée Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
year={2024},
eprint={2309.16588},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2309.16588},
}
```
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
facebook/dinov2-with-registers-giant-imagenet1k-1-layer
|
# Vision Transformer (giant-sized model) trained using DINOv2, with registers
Vision Transformer (ViT) model introduced in the paper [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Darcet et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 with registers did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) that was [originally introduced](https://arxiv.org/abs/2010.11929) for supervised image classification on ImageNet.
Subsequent work showed that ViTs can also be trained in a self-supervised way to extract image features (i.e. to learn meaningful embeddings) without requiring any labels; examples include [DINOv2](https://huggingface.co/papers/2304.07193) and [MAE](https://arxiv.org/abs/2111.06377).
The authors of DINOv2 noticed that ViTs exhibit artifacts in their attention maps, caused by the model repurposing a few low-information image patches as internal “registers” (scratch space for global computations). The proposed fix is simple: append a handful of extra learnable tokens (called "register" tokens) to the input sequence and discard their outputs at the end, so the model no longer needs to hijack patch tokens for this role. This results in:
- no artifacts
- interpretable attention maps
- and improved performance.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png"
alt="drawing" width="600"/>
<small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small>
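As a side note (a minimal sketch, not part of the original card, assuming the checkpoint config exposes the number of register tokens under `num_register_tokens`), you can check how many register tokens this model adds alongside the patch tokens:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('facebook/dinov2-with-registers-giant-imagenet1k-1-layer')
# Falls back gracefully if the attribute name differs in your Transformers version
print(getattr(config, "num_register_tokens", "attribute not found"))
```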
Note that this checkpoint includes a linear classification head (a single layer on top of the backbone) trained on ImageNet-1k, which is what enables the out-of-the-box image classification shown below.
Through pre-training, the model learns an inner representation of images that can then be used to extract features for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. The linear layer is typically placed on top of the [CLS] token, as the last hidden state of that token can be seen as a representation of the entire image; a minimal sketch of this setup follows.
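The sketch below is illustrative and not part of the original card: the frozen backbone is loaded with `AutoModel` (the classification head is not used), and `num_labels` is a placeholder for your own dataset.
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-with-registers-giant-imagenet1k-1-layer')
backbone = AutoModel.from_pretrained('facebook/dinov2-with-registers-giant-imagenet1k-1-layer')  # encoder only

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    cls_embedding = backbone(**inputs).last_hidden_state[:, 0]  # hidden state of the [CLS] token

num_labels = 10  # placeholder: the number of classes in your own dataset
classifier = torch.nn.Linear(cls_embedding.shape[-1], num_labels)
logits = classifier(cls_embedding)  # train only this linear layer on your labeled images
print(logits.shape)
```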
## Intended uses & limitations
You can use the raw model to classify an image into one of the 1000 possible ImageNet classes. See the [model hub](https://huggingface.co/models?other=dinov2_with_registers) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image
import requests

# Load an example image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-with-registers-giant-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-with-registers-giant-imagenet1k-1-layer')

# Preprocess the image and run inference without tracking gradients
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class is the index with the highest logit
class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[class_idx])
```
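If you want more than the single top prediction, the logits from the snippet above can be turned into probabilities (a small follow-up sketch, not part of the original card):
```python
# Convert the logits to probabilities and print the 5 most likely ImageNet classes
probs = logits.softmax(-1)[0]
top5 = probs.topk(5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```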
### BibTeX entry and citation info
```bibtex
@misc{darcet2024visiontransformersneedregisters,
title={Vision Transformers Need Registers},
author={Timothée Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
year={2024},
eprint={2309.16588},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2309.16588},
}
```
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
tribber93/my-trash-classification
|
# Trash Image Classification using Vision Transformer (ViT)
This repository contains an image classification model based on a pre-trained Vision Transformer (ViT) from Hugging Face. The model is fine-tuned to classify images into six categories: cardboard, glass, metal, paper, plastic, and trash.
## Dataset
The dataset consists of images in six categories, taken from [`garythung/trashnet`](https://huggingface.co/datasets/garythung/trashnet), with the following distribution:
- Cardboard: 806 images
- Glass: 1002 images
- Metal: 820 images
- Paper: 1188 images
- Plastic: 964 images
- Trash: 274 images
## Model
We use the pre-trained Vision Transformer model [`google/vit-base-patch16-224-in21k`](https://huggingface.co/google/vit-base-patch16-224-in21k) from Hugging Face for image classification and fine-tune it on the dataset above.
The trained model is accessible on Hugging Face Hub at: [tribber93/my-trash-classification](https://huggingface.co/tribber93/my-trash-classification)
## Usage
To use the model for inference, follow these steps:
```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

# Download an example image to classify
url = 'https://cdn.grid.id/crop/0x0:0x0/700x465/photo/grid/original/127308_kaleng-bekas.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load the fine-tuned model and its image processor from the Hugging Face Hub
model_name = "tribber93/my-trash-classification"
model = AutoModelForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Preprocess the image and run a forward pass
inputs = processor(image, return_tensors="pt")
outputs = model(**inputs)

# Take the class with the highest logit and map it to its label
predictions = torch.argmax(outputs.logits, dim=-1)
print("Predicted class:", model.config.id2label[predictions.item()])
```
## Results
After training, the model achieved the following performance:
| Epoch | Training Loss | Validation Loss | Accuracy |
|-------|---------------|-----------------|----------|
| 1 | 3.3200 | 0.7011 | 86.25% |
| 2 | 1.6611 | 0.4298 | 91.49% |
| 3 | 1.4353 | 0.3563 | 94.26% |
|
[
"cardboard",
"glass",
"metal",
"paper",
"plastic",
"trash"
] |
maxsop/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# maxsop/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3005
- Validation Loss: 0.2724
- Train Accuracy: 0.928
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
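For reference, here is a minimal sketch (an assumption, not the original training script) of how the optimizer configuration above could be reconstructed using the `AdamWeightDecay` optimizer from `transformers` and a Keras `PolynomialDecay` schedule:
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay of the learning rate from 3e-05 to 0 over 20,000 steps
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=20000,
    end_learning_rate=0.0,
    power=1.0,
)

# AdamW-style optimizer with the betas, epsilon, and weight decay rate listed above
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    weight_decay_rate=0.01,
)
```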
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1637 | 0.7682 | 0.897 | 0 |
| 0.6543 | 0.5160 | 0.907 | 1 |
| 0.4626 | 0.4016 | 0.907 | 2 |
| 0.3701 | 0.3274 | 0.918 | 3 |
| 0.3005 | 0.2724 | 0.928 | 4 |
### Framework versions
- Transformers 4.47.1
- TensorFlow 2.18.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
YunsangJoo/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1843
- Accuracy: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
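A minimal inference sketch (an assumption based on the standard `transformers` pipeline API, not part of the original card):
```python
from transformers import pipeline

# Load the fine-tuned pet-breed classifier from the Hugging Face Hub
classifier = pipeline("image-classification", model="YunsangJoo/vit-base-oxford-iiit-pets")

# "my_pet.jpg" is a hypothetical local image path
print(classifier("my_pet.jpg"))
```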
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4007 | 1.0 | 370 | 0.2966 | 0.9229 |
| 0.2175 | 2.0 | 740 | 0.2327 | 0.9269 |
| 0.1569 | 3.0 | 1110 | 0.2143 | 0.9378 |
| 0.1353 | 4.0 | 1480 | 0.2093 | 0.9323 |
| 0.1428 | 5.0 | 1850 | 0.2062 | 0.9350 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.1.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
dantepalacio/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6589
- Accuracy: 0.7124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
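As a reference, a minimal sketch (an assumption, not the original training script) of `TrainingArguments` matching the hyperparameters listed above:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_food_model",  # hypothetical output directory
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,       # effective train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```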
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.2193 | 1.0 | 76 | 0.7354 | 0.6446 |
| 2.7694 | 2.0 | 152 | 0.6906 | 0.6909 |
| 2.9082 | 2.9637 | 225 | 0.6589 | 0.7124 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"0",
"1",
"2",
"moderate_condition",
"modern_renovation",
"needs_repair"
] |
dima806/pokemons_1000_types_image_detection
|
Given an image, returns the pokemon name (from a list of 1,000 pokemon types) with about 94.1% accuracy.
See https://www.kaggle.com/code/dima806/pokemons-1000-types-image-detection-vit for details.
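A minimal inference sketch (an assumption based on the standard `transformers` pipeline API; the original training and evaluation code is in the linked Kaggle notebook):
```python
from transformers import pipeline

# Load the pokemon classifier from the Hugging Face Hub
classifier = pipeline("image-classification", model="dima806/pokemons_1000_types_image_detection")

# "pokemon.jpg" is a hypothetical local image path; prints the top predicted names with scores
print(classifier("pokemon.jpg"))
```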
```
Accuracy: 0.9413
F1 Score: 0.9389
Classification report:
precision recall f1-score support
abomasnow 1.0000 1.0000 1.0000 16
abra 0.5000 0.9375 0.6522 16
absol 1.0000 1.0000 1.0000 16
accelgor 1.0000 1.0000 1.0000 16
aegislash-shield 1.0000 1.0000 1.0000 16
aerodactyl 0.8182 0.5625 0.6667 16
aggron 1.0000 1.0000 1.0000 16
aipom 0.8421 1.0000 0.9143 16
alakazam 0.4444 0.7500 0.5581 16
alcremie 1.0000 1.0000 1.0000 16
alomomola 0.9412 1.0000 0.9697 16
altaria 1.0000 0.9375 0.9677 16
amaura 1.0000 1.0000 1.0000 16
ambipom 1.0000 1.0000 1.0000 16
amoonguss 1.0000 1.0000 1.0000 16
ampharos 1.0000 0.6875 0.8148 16
annihilape 1.0000 1.0000 1.0000 16
anorith 1.0000 0.9375 0.9677 16
appletun 1.0000 1.0000 1.0000 16
applin 0.9412 1.0000 0.9697 16
araquanid 1.0000 1.0000 1.0000 16
arbok 1.0000 0.4375 0.6087 16
arboliva 1.0000 1.0000 1.0000 16
arcanine 0.3721 1.0000 0.5424 16
arceus 1.0000 1.0000 1.0000 16
archen 1.0000 1.0000 1.0000 16
archeops 1.0000 1.0000 1.0000 16
arctibax 1.0000 1.0000 1.0000 16
arctovish 1.0000 1.0000 1.0000 16
arctozolt 1.0000 1.0000 1.0000 16
ariados 1.0000 1.0000 1.0000 16
armaldo 1.0000 1.0000 1.0000 16
armarouge 1.0000 1.0000 1.0000 16
aromatisse 1.0000 1.0000 1.0000 16
aron 0.9412 1.0000 0.9697 16
arrokuda 1.0000 1.0000 1.0000 16
articuno 0.5357 0.9375 0.6818 16
audino 1.0000 1.0000 1.0000 16
aurorus 1.0000 1.0000 1.0000 16
avalugg 1.0000 1.0000 1.0000 16
axew 0.9412 1.0000 0.9697 16
azelf 1.0000 1.0000 1.0000 16
azumarill 1.0000 1.0000 1.0000 16
azurill 0.8889 1.0000 0.9412 16
bagon 1.0000 1.0000 1.0000 16
baltoy 1.0000 1.0000 1.0000 16
banette 1.0000 0.9375 0.9677 16
barbaracle 1.0000 1.0000 1.0000 16
barboach 1.0000 1.0000 1.0000 16
barraskewda 1.0000 1.0000 1.0000 16
basculegion-male 1.0000 1.0000 1.0000 16
basculin-red-striped 1.0000 1.0000 1.0000 16
bastiodon 1.0000 1.0000 1.0000 16
baxcalibur 1.0000 1.0000 1.0000 16
bayleef 1.0000 0.8125 0.8966 16
beartic 1.0000 1.0000 1.0000 16
beautifly 1.0000 1.0000 1.0000 16
beedrill 1.0000 0.6875 0.8148 16
beheeyem 1.0000 1.0000 1.0000 16
beldum 0.9375 0.9375 0.9375 16
bellibolt 1.0000 1.0000 1.0000 16
bellossom 1.0000 0.9375 0.9677 16
bellsprout 1.0000 0.8125 0.8966 16
bergmite 1.0000 1.0000 1.0000 16
bewear 1.0000 1.0000 1.0000 16
bibarel 1.0000 0.8750 0.9333 16
bidoof 0.8889 1.0000 0.9412 16
binacle 1.0000 1.0000 1.0000 16
bisharp 1.0000 1.0000 1.0000 16
blacephalon 1.0000 1.0000 1.0000 16
blastoise 0.7143 0.6250 0.6667 16
blaziken 1.0000 1.0000 1.0000 16
blipbug 1.0000 1.0000 1.0000 16
blissey 0.4324 1.0000 0.6038 16
blitzle 1.0000 1.0000 1.0000 16
boldore 1.0000 1.0000 1.0000 16
boltund 0.9412 1.0000 0.9697 16
bombirdier 1.0000 1.0000 1.0000 16
bonsly 1.0000 1.0000 1.0000 16
bouffalant 1.0000 1.0000 1.0000 16
bounsweet 1.0000 1.0000 1.0000 16
braixen 0.9412 1.0000 0.9697 16
brambleghast 1.0000 1.0000 1.0000 16
bramblin 1.0000 1.0000 1.0000 16
braviary 1.0000 1.0000 1.0000 16
breloom 1.0000 1.0000 1.0000 16
brionne 1.0000 1.0000 1.0000 16
bronzong 1.0000 1.0000 1.0000 16
bronzor 1.0000 1.0000 1.0000 16
brute-bonnet 1.0000 1.0000 1.0000 16
bruxish 1.0000 1.0000 1.0000 16
budew 0.6957 1.0000 0.8205 16
buizel 0.9412 1.0000 0.9697 16
bulbasaur 0.7857 0.6875 0.7333 16
buneary 0.9412 1.0000 0.9697 16
bunnelby 1.0000 1.0000 1.0000 16
burmy 1.0000 1.0000 1.0000 16
butterfree 0.9333 0.8750 0.9032 16
buzzwole 0.9412 1.0000 0.9697 16
cacnea 1.0000 1.0000 1.0000 16
cacturne 0.9412 1.0000 0.9697 16
calyrex 1.0000 1.0000 1.0000 16
camerupt 1.0000 1.0000 1.0000 16
capsakid 1.0000 1.0000 1.0000 16
carbink 1.0000 1.0000 1.0000 16
carkol 1.0000 1.0000 1.0000 16
carnivine 0.9412 1.0000 0.9697 16
carracosta 1.0000 0.9375 0.9677 16
carvanha 1.0000 1.0000 1.0000 16
cascoon 1.0000 0.8750 0.9333 16
castform 1.0000 1.0000 1.0000 16
caterpie 0.9286 0.8125 0.8667 16
celebi 1.0000 0.8750 0.9333 16
celesteela 1.0000 1.0000 1.0000 16
centiskorch 1.0000 1.0000 1.0000 16
ceruledge 1.0000 1.0000 1.0000 16
cetitan 1.0000 1.0000 1.0000 16
cetoddle 1.0000 1.0000 1.0000 16
chandelure 1.0000 1.0000 1.0000 16
chansey 0.8182 0.5625 0.6667 16
charcadet 1.0000 1.0000 1.0000 16
charizard 1.0000 0.1250 0.2222 16
charjabug 1.0000 1.0000 1.0000 16
charmander 0.5500 0.6875 0.6111 16
charmeleon 0.9091 0.6250 0.7407 16
chatot 1.0000 1.0000 1.0000 16
cherrim 0.9412 1.0000 0.9697 16
cherubi 1.0000 1.0000 1.0000 16
chesnaught 1.0000 1.0000 1.0000 16
chespin 1.0000 1.0000 1.0000 16
chewtle 1.0000 1.0000 1.0000 16
chikorita 1.0000 1.0000 1.0000 16
chimchar 0.9412 1.0000 0.9697 16
chimecho 0.9412 1.0000 0.9697 16
chinchou 1.0000 1.0000 1.0000 16
chingling 1.0000 1.0000 1.0000 16
cinccino 1.0000 1.0000 1.0000 16
cinderace 1.0000 1.0000 1.0000 16
clamperl 1.0000 0.8750 0.9333 16
clauncher 1.0000 1.0000 1.0000 16
clawitzer 1.0000 1.0000 1.0000 16
claydol 1.0000 1.0000 1.0000 16
clefable 0.9091 0.6250 0.7407 16
clefairy 0.5500 0.6875 0.6111 16
cleffa 0.8333 0.9375 0.8824 16
clobbopus 1.0000 1.0000 1.0000 16
clodsire 1.0000 1.0000 1.0000 16
cloyster 0.7222 0.8125 0.7647 16
coalossal 1.0000 1.0000 1.0000 16
cobalion 1.0000 1.0000 1.0000 16
cofagrigus 1.0000 1.0000 1.0000 16
combee 0.8889 1.0000 0.9412 16
combusken 1.0000 1.0000 1.0000 16
comfey 1.0000 1.0000 1.0000 16
conkeldurr 1.0000 1.0000 1.0000 16
copperajah 1.0000 1.0000 1.0000 16
corphish 1.0000 1.0000 1.0000 16
corsola 1.0000 1.0000 1.0000 16
corviknight 0.8889 1.0000 0.9412 16
corvisquire 1.0000 0.8750 0.9333 16
cosmoem 1.0000 1.0000 1.0000 16
cosmog 1.0000 1.0000 1.0000 16
cottonee 1.0000 1.0000 1.0000 16
crabominable 1.0000 1.0000 1.0000 16
crabrawler 1.0000 1.0000 1.0000 16
cradily 1.0000 1.0000 1.0000 16
cramorant 1.0000 1.0000 1.0000 16
cranidos 1.0000 1.0000 1.0000 16
crawdaunt 1.0000 1.0000 1.0000 16
cresselia 1.0000 1.0000 1.0000 16
croagunk 0.9412 1.0000 0.9697 16
crobat 0.5161 1.0000 0.6809 16
crocalor 1.0000 1.0000 1.0000 16
croconaw 0.9167 0.6875 0.7857 16
crustle 0.8889 1.0000 0.9412 16
cryogonal 1.0000 1.0000 1.0000 16
cubchoo 1.0000 1.0000 1.0000 16
cubone 1.0000 0.4375 0.6087 16
cufant 1.0000 1.0000 1.0000 16
cursola 1.0000 1.0000 1.0000 16
cutiefly 1.0000 1.0000 1.0000 16
cyclizar 1.0000 1.0000 1.0000 16
cyndaquil 0.8889 1.0000 0.9412 16
dachsbun 1.0000 1.0000 1.0000 16
darkrai 1.0000 1.0000 1.0000 16
darmanitan-standard 1.0000 1.0000 1.0000 16
dartrix 1.0000 0.8750 0.9333 16
darumaka 1.0000 1.0000 1.0000 16
decidueye 1.0000 1.0000 1.0000 16
dedenne 1.0000 1.0000 1.0000 16
deerling 1.0000 1.0000 1.0000 16
deino 1.0000 1.0000 1.0000 16
delcatty 1.0000 1.0000 1.0000 16
delibird 0.9412 1.0000 0.9697 16
delphox 1.0000 1.0000 1.0000 16
deoxys-normal 0.9412 1.0000 0.9697 16
dewgong 0.3721 1.0000 0.5424 16
dewott 0.9412 1.0000 0.9697 16
dewpider 1.0000 1.0000 1.0000 16
dhelmise 1.0000 1.0000 1.0000 16
dialga 1.0000 1.0000 1.0000 16
diancie 1.0000 1.0000 1.0000 16
diggersby 0.9412 1.0000 0.9697 16
diglett 0.6316 0.7500 0.6857 16
ditto 0.7895 0.9375 0.8571 16
dodrio 1.0000 0.6875 0.8148 16
doduo 0.8235 0.8750 0.8485 16
dolliv 1.0000 1.0000 1.0000 16
dondozo 1.0000 1.0000 1.0000 16
donphan 1.0000 0.9375 0.9677 16
dottler 1.0000 1.0000 1.0000 16
doublade 1.0000 1.0000 1.0000 16
dracovish 1.0000 1.0000 1.0000 16
dracozolt 1.0000 1.0000 1.0000 16
dragalge 1.0000 1.0000 1.0000 16
dragapult 1.0000 1.0000 1.0000 16
dragonair 0.5263 0.6250 0.5714 16
dragonite 1.0000 0.5000 0.6667 16
drakloak 1.0000 1.0000 1.0000 16
drampa 1.0000 1.0000 1.0000 16
drapion 0.9412 1.0000 0.9697 16
dratini 0.7143 0.6250 0.6667 16
drednaw 1.0000 1.0000 1.0000 16
dreepy 1.0000 1.0000 1.0000 16
drifblim 1.0000 1.0000 1.0000 16
drifloon 1.0000 1.0000 1.0000 16
drilbur 1.0000 1.0000 1.0000 16
drizzile 1.0000 1.0000 1.0000 16
drowzee 1.0000 0.3125 0.4762 16
druddigon 1.0000 1.0000 1.0000 16
dubwool 1.0000 1.0000 1.0000 16
ducklett 0.9412 1.0000 0.9697 16
dudunsparce-two-segment 0.7273 1.0000 0.8421 16
dugtrio 0.5909 0.8125 0.6842 16
dunsparce 1.0000 0.6250 0.7692 16
duosion 0.9375 0.9375 0.9375 16
duraludon 1.0000 1.0000 1.0000 16
durant 1.0000 1.0000 1.0000 16
dusclops 1.0000 1.0000 1.0000 16
dusknoir 0.8421 1.0000 0.9143 16
duskull 0.9412 1.0000 0.9697 16
dustox 1.0000 1.0000 1.0000 16
dwebble 0.9412 1.0000 0.9697 16
eelektrik 1.0000 1.0000 1.0000 16
eelektross 0.9412 1.0000 0.9697 16
eevee 0.7647 0.8125 0.7879 16
eiscue-ice 1.0000 1.0000 1.0000 16
ekans 0.9091 0.6250 0.7407 16
eldegoss 1.0000 1.0000 1.0000 16
electabuzz 0.8571 0.7500 0.8000 16
electivire 1.0000 1.0000 1.0000 16
electrike 1.0000 1.0000 1.0000 16
electrode 0.7619 1.0000 0.8649 16
elekid 0.6667 1.0000 0.8000 16
elgyem 1.0000 1.0000 1.0000 16
emboar 1.0000 0.9375 0.9677 16
emolga 1.0000 1.0000 1.0000 16
empoleon 1.0000 1.0000 1.0000 16
enamorus-incarnate 1.0000 1.0000 1.0000 16
entei 0.9286 0.8125 0.8667 16
escavalier 1.0000 1.0000 1.0000 16
espathra 0.9412 1.0000 0.9697 16
espeon 0.8125 0.8125 0.8125 16
espurr 1.0000 1.0000 1.0000 16
eternatus 1.0000 1.0000 1.0000 16
excadrill 1.0000 1.0000 1.0000 16
exeggcute 1.0000 0.8750 0.9333 16
exeggutor 0.7778 0.8750 0.8235 16
exploud 0.9412 1.0000 0.9697 16
falinks 1.0000 1.0000 1.0000 16
farfetchd 1.0000 0.6250 0.7692 16
farigiraf 1.0000 1.0000 1.0000 16
fearow 0.5833 0.8750 0.7000 16
feebas 1.0000 1.0000 1.0000 16
fennekin 1.0000 1.0000 1.0000 16
feraligatr 0.8889 1.0000 0.9412 16
ferroseed 1.0000 0.8750 0.9333 16
ferrothorn 1.0000 1.0000 1.0000 16
fidough 1.0000 1.0000 1.0000 16
finizen 0.7143 0.3125 0.4348 16
finneon 0.8889 1.0000 0.9412 16
flaaffy 0.9412 1.0000 0.9697 16
flabebe 1.0000 1.0000 1.0000 16
flamigo 0.9412 1.0000 0.9697 16
flapple 0.9412 1.0000 0.9697 16
flareon 0.6667 0.8750 0.7568 16
fletchinder 1.0000 0.8125 0.8966 16
fletchling 0.8421 1.0000 0.9143 16
flittle 1.0000 1.0000 1.0000 16
floatzel 1.0000 1.0000 1.0000 16
floette 1.0000 1.0000 1.0000 16
floragato 1.0000 1.0000 1.0000 16
florges 1.0000 1.0000 1.0000 16
flutter-mane 0.8889 1.0000 0.9412 16
flygon 1.0000 0.9375 0.9677 16
fomantis 0.9412 1.0000 0.9697 16
foongus 0.9412 1.0000 0.9697 16
forretress 0.9286 0.8125 0.8667 16
fraxure 0.9412 1.0000 0.9697 16
frigibax 1.0000 1.0000 1.0000 16
frillish 1.0000 1.0000 1.0000 16
froakie 1.0000 1.0000 1.0000 16
frogadier 0.9412 1.0000 0.9697 16
froslass 1.0000 1.0000 1.0000 16
frosmoth 0.9412 1.0000 0.9697 16
fuecoco 1.0000 1.0000 1.0000 16
furfrou 1.0000 1.0000 1.0000 16
furret 1.0000 1.0000 1.0000 16
gabite 1.0000 1.0000 1.0000 16
gallade 1.0000 1.0000 1.0000 16
galvantula 1.0000 1.0000 1.0000 16
garbodor 1.0000 1.0000 1.0000 16
garchomp 1.0000 1.0000 1.0000 16
gardevoir 1.0000 1.0000 1.0000 16
garganacl 0.8889 1.0000 0.9412 16
gastly 1.0000 1.0000 1.0000 16
gastrodon 1.0000 1.0000 1.0000 16
genesect 1.0000 1.0000 1.0000 16
gengar 0.7500 0.7500 0.7500 16
geodude 0.7692 0.6250 0.6897 16
gholdengo 1.0000 1.0000 1.0000 16
gible 0.8889 1.0000 0.9412 16
gigalith 1.0000 1.0000 1.0000 16
gimmighoul 1.0000 1.0000 1.0000 16
girafarig 1.0000 1.0000 1.0000 16
giratina-altered 1.0000 1.0000 1.0000 16
glaceon 1.0000 1.0000 1.0000 16
glalie 1.0000 1.0000 1.0000 16
glameow 1.0000 1.0000 1.0000 16
glastrier 1.0000 1.0000 1.0000 16
gligar 1.0000 1.0000 1.0000 16
glimmet 1.0000 1.0000 1.0000 16
glimmora 1.0000 1.0000 1.0000 16
gliscor 1.0000 1.0000 1.0000 16
gloom 0.9412 1.0000 0.9697 16
gogoat 1.0000 0.8750 0.9333 16
golbat 1.0000 0.5000 0.6667 16
goldeen 0.9167 0.6875 0.7857 16
golduck 0.8235 0.8750 0.8485 16
golem 0.5909 0.8125 0.6842 16
golett 1.0000 1.0000 1.0000 16
golisopod 1.0000 1.0000 1.0000 16
golurk 1.0000 1.0000 1.0000 16
goodra 0.8889 1.0000 0.9412 16
goomy 1.0000 1.0000 1.0000 16
gorebyss 0.8889 1.0000 0.9412 16
gossifleur 1.0000 1.0000 1.0000 16
gothita 1.0000 1.0000 1.0000 16
gothitelle 1.0000 1.0000 1.0000 16
gothorita 0.8889 1.0000 0.9412 16
gourgeist-average 1.0000 1.0000 1.0000 16
grafaiai 1.0000 1.0000 1.0000 16
granbull 0.9000 0.5625 0.6923 16
grapploct 1.0000 1.0000 1.0000 16
graveler 0.7500 0.9375 0.8333 16
great-tusk 1.0000 1.0000 1.0000 16
greavard 1.0000 1.0000 1.0000 16
greedent 1.0000 1.0000 1.0000 16
greninja 1.0000 1.0000 1.0000 16
grimer 0.7500 0.3750 0.5000 16
grimmsnarl 1.0000 1.0000 1.0000 16
grookey 1.0000 1.0000 1.0000 16
grotle 1.0000 1.0000 1.0000 16
groudon 0.9412 1.0000 0.9697 16
grovyle 1.0000 0.9375 0.9677 16
growlithe 0.7500 0.3750 0.5000 16
grubbin 1.0000 1.0000 1.0000 16
grumpig 1.0000 1.0000 1.0000 16
gulpin 1.0000 1.0000 1.0000 16
gumshoos 0.9333 0.8750 0.9032 16
gurdurr 1.0000 1.0000 1.0000 16
guzzlord 1.0000 0.8125 0.8966 16
gyarados 1.0000 0.5625 0.7200 16
hakamo-o 1.0000 1.0000 1.0000 16
happiny 0.8421 1.0000 0.9143 16
hariyama 1.0000 1.0000 1.0000 16
hatenna 0.8889 1.0000 0.9412 16
hatterene 1.0000 1.0000 1.0000 16
hattrem 1.0000 1.0000 1.0000 16
haunter 0.8235 0.8750 0.8485 16
hawlucha 1.0000 1.0000 1.0000 16
haxorus 1.0000 1.0000 1.0000 16
heatmor 1.0000 1.0000 1.0000 16
heatran 0.9375 0.9375 0.9375 16
heliolisk 1.0000 1.0000 1.0000 16
helioptile 1.0000 1.0000 1.0000 16
heracross 0.8889 1.0000 0.9412 16
herdier 1.0000 1.0000 1.0000 16
hippopotas 1.0000 1.0000 1.0000 16
hippowdon 1.0000 1.0000 1.0000 16
hitmonchan 1.0000 0.4375 0.6087 16
hitmonlee 1.0000 0.6250 0.7692 16
hitmontop 1.0000 0.9375 0.9677 16
ho-oh 0.8889 1.0000 0.9412 16
honchkrow 1.0000 1.0000 1.0000 16
honedge 1.0000 1.0000 1.0000 16
hoopa 1.0000 1.0000 1.0000 16
hoothoot 1.0000 1.0000 1.0000 16
hoppip 0.9412 1.0000 0.9697 16
horsea 0.7857 0.6875 0.7333 16
houndoom 0.9412 1.0000 0.9697 16
houndour 1.0000 0.5000 0.6667 16
houndstone 1.0000 1.0000 1.0000 16
huntail 1.0000 0.9375 0.9677 16
hydreigon 0.9412 1.0000 0.9697 16
hypno 1.0000 0.1875 0.3158 16
igglybuff 0.7619 1.0000 0.8649 16
illumise 0.9412 1.0000 0.9697 16
impidimp 1.0000 0.9375 0.9677 16
incineroar 1.0000 1.0000 1.0000 16
indeedee-male 1.0000 1.0000 1.0000 16
infernape 0.8421 1.0000 0.9143 16
inkay 1.0000 1.0000 1.0000 16
inteleon 1.0000 1.0000 1.0000 16
iron-bundle 1.0000 1.0000 1.0000 16
iron-hands 1.0000 1.0000 1.0000 16
iron-jugulis 1.0000 1.0000 1.0000 16
iron-moth 1.0000 1.0000 1.0000 16
iron-thorns 1.0000 1.0000 1.0000 16
iron-treads 1.0000 1.0000 1.0000 16
ivysaur 0.7692 0.6250 0.6897 16
jangmo-o 0.9412 1.0000 0.9697 16
jellicent 0.9412 1.0000 0.9697 16
jigglypuff 0.8182 0.5625 0.6667 16
jirachi 1.0000 1.0000 1.0000 16
jolteon 0.9286 0.8125 0.8667 16
joltik 0.8889 1.0000 0.9412 16
jumpluff 0.9412 1.0000 0.9697 16
jynx 1.0000 0.7500 0.8571 16
kabuto 0.9286 0.8125 0.8667 16
kabutops 0.5161 1.0000 0.6809 16
kadabra 0.6154 0.5000 0.5517 16
kakuna 0.8571 0.7500 0.8000 16
kangaskhan 0.4333 0.8125 0.5652 16
karrablast 1.0000 1.0000 1.0000 16
kartana 1.0000 1.0000 1.0000 16
kecleon 1.0000 1.0000 1.0000 16
keldeo-ordinary 1.0000 1.0000 1.0000 16
kilowattrel 1.0000 1.0000 1.0000 16
kingambit 1.0000 1.0000 1.0000 16
kingdra 1.0000 1.0000 1.0000 16
kingler 0.4500 0.5625 0.5000 16
kirlia 1.0000 1.0000 1.0000 16
klang 0.7273 1.0000 0.8421 16
klawf 1.0000 1.0000 1.0000 16
kleavor 1.0000 1.0000 1.0000 16
klefki 1.0000 1.0000 1.0000 16
klink 1.0000 1.0000 1.0000 16
klinklang 1.0000 0.6250 0.7692 16
koffing 0.5333 1.0000 0.6957 16
komala 0.9412 1.0000 0.9697 16
kommo-o 1.0000 1.0000 1.0000 16
krabby 0.6923 0.5625 0.6207 16
kricketot 1.0000 1.0000 1.0000 16
kricketune 0.9412 1.0000 0.9697 16
krokorok 0.8421 1.0000 0.9143 16
krookodile 1.0000 0.8750 0.9333 16
kubfu 0.9412 1.0000 0.9697 16
kyogre 1.0000 1.0000 1.0000 16
kyurem 1.0000 1.0000 1.0000 16
lairon 1.0000 1.0000 1.0000 16
lampent 1.0000 1.0000 1.0000 16
landorus-incarnate 1.0000 1.0000 1.0000 16
lanturn 1.0000 1.0000 1.0000 16
lapras 1.0000 0.5000 0.6667 16
larvesta 1.0000 1.0000 1.0000 16
larvitar 0.9412 1.0000 0.9697 16
latias 1.0000 1.0000 1.0000 16
latios 1.0000 1.0000 1.0000 16
leafeon 1.0000 1.0000 1.0000 16
leavanny 1.0000 1.0000 1.0000 16
lechonk 1.0000 1.0000 1.0000 16
ledian 0.8421 1.0000 0.9143 16
ledyba 0.8889 1.0000 0.9412 16
lickilicky 0.9412 1.0000 0.9697 16
lickitung 0.8667 0.8125 0.8387 16
liepard 1.0000 1.0000 1.0000 16
lileep 1.0000 1.0000 1.0000 16
lilligant 1.0000 1.0000 1.0000 16
lillipup 1.0000 1.0000 1.0000 16
linoone 0.9412 1.0000 0.9697 16
litleo 1.0000 1.0000 1.0000 16
litten 1.0000 1.0000 1.0000 16
litwick 1.0000 1.0000 1.0000 16
lokix 1.0000 1.0000 1.0000 16
lombre 1.0000 1.0000 1.0000 16
lopunny 1.0000 1.0000 1.0000 16
lotad 1.0000 0.9375 0.9677 16
loudred 1.0000 1.0000 1.0000 16
lucario 1.0000 1.0000 1.0000 16
ludicolo 0.9412 1.0000 0.9697 16
lugia 0.9333 0.8750 0.9032 16
lumineon 1.0000 1.0000 1.0000 16
lunala 1.0000 1.0000 1.0000 16
lunatone 0.8889 1.0000 0.9412 16
lurantis 1.0000 1.0000 1.0000 16
luvdisc 0.8889 1.0000 0.9412 16
luxio 1.0000 1.0000 1.0000 16
luxray 0.9412 1.0000 0.9697 16
lycanroc-midday 0.9412 1.0000 0.9697 16
mabosstiff 1.0000 1.0000 1.0000 16
machamp 0.6000 0.7500 0.6667 16
machoke 0.7895 0.9375 0.8571 16
machop 0.5652 0.8125 0.6667 16
magby 0.8421 1.0000 0.9143 16
magcargo 1.0000 1.0000 1.0000 16
magearna 1.0000 1.0000 1.0000 16
magikarp 0.7778 0.4375 0.5600 16
magmar 0.7143 0.3125 0.4348 16
magmortar 0.8889 1.0000 0.9412 16
magnemite 1.0000 0.4375 0.6087 16
magneton 0.7273 1.0000 0.8421 16
magnezone 1.0000 1.0000 1.0000 16
makuhita 1.0000 1.0000 1.0000 16
malamar 1.0000 1.0000 1.0000 16
mamoswine 1.0000 1.0000 1.0000 16
manaphy 0.8750 0.8750 0.8750 16
mandibuzz 1.0000 1.0000 1.0000 16
manectric 1.0000 1.0000 1.0000 16
mankey 0.5455 0.7500 0.6316 16
mantine 0.9286 0.8125 0.8667 16
mantyke 0.9412 1.0000 0.9697 16
maractus 0.9412 1.0000 0.9697 16
mareanie 1.0000 1.0000 1.0000 16
mareep 0.9286 0.8125 0.8667 16
marill 0.9375 0.9375 0.9375 16
marowak 0.5200 0.8125 0.6341 16
marshadow 1.0000 1.0000 1.0000 16
marshtomp 0.9375 0.9375 0.9375 16
maschiff 1.0000 1.0000 1.0000 16
masquerain 1.0000 1.0000 1.0000 16
maushold-family-of-four 1.0000 1.0000 1.0000 16
mawile 0.9412 1.0000 0.9697 16
medicham 1.0000 1.0000 1.0000 16
meditite 1.0000 1.0000 1.0000 16
meganium 0.9412 1.0000 0.9697 16
melmetal 1.0000 1.0000 1.0000 16
meloetta-aria 1.0000 1.0000 1.0000 16
meltan 1.0000 1.0000 1.0000 16
meowscarada 1.0000 1.0000 1.0000 16
meowstic-male 1.0000 0.9375 0.9677 16
meowth 1.0000 0.7500 0.8571 16
mesprit 1.0000 1.0000 1.0000 16
metagross 1.0000 1.0000 1.0000 16
metang 1.0000 1.0000 1.0000 16
metapod 0.6667 0.8750 0.7568 16
mew 0.6000 0.9375 0.7317 16
mewtwo 0.8750 0.4375 0.5833 16
mienfoo 1.0000 1.0000 1.0000 16
mienshao 1.0000 1.0000 1.0000 16
mightyena 0.9412 1.0000 0.9697 16
milcery 0.8421 1.0000 0.9143 16
milotic 1.0000 1.0000 1.0000 16
miltank 1.0000 1.0000 1.0000 16
mime-jr 1.0000 1.0000 1.0000 16
mimikyu-disguised 1.0000 1.0000 1.0000 16
minccino 0.8421 1.0000 0.9143 16
minior-red-meteor 1.0000 1.0000 1.0000 16
minun 1.0000 1.0000 1.0000 16
misdreavus 1.0000 0.7500 0.8571 16
mismagius 1.0000 1.0000 1.0000 16
moltres 1.0000 0.4375 0.6087 16
monferno 1.0000 0.8125 0.8966 16
morelull 1.0000 1.0000 1.0000 16
morgrem 1.0000 1.0000 1.0000 16
morpeko-full-belly 1.0000 1.0000 1.0000 16
mothim 1.0000 1.0000 1.0000 16
mr-mime 1.0000 0.8750 0.9333 16
mr-rime 1.0000 1.0000 1.0000 16
mudbray 1.0000 1.0000 1.0000 16
mudkip 0.9286 0.8125 0.8667 16
mudsdale 1.0000 1.0000 1.0000 16
muk 0.4706 0.5000 0.4848 16
munchlax 0.8421 1.0000 0.9143 16
munna 1.0000 1.0000 1.0000 16
murkrow 1.0000 1.0000 1.0000 16
musharna 1.0000 1.0000 1.0000 16
nacli 1.0000 1.0000 1.0000 16
naclstack 1.0000 1.0000 1.0000 16
naganadel 1.0000 1.0000 1.0000 16
natu 0.8889 1.0000 0.9412 16
necrozma 1.0000 1.0000 1.0000 16
nickit 1.0000 1.0000 1.0000 16
nidoking 0.6522 0.9375 0.7692 16
nidoqueen 1.0000 0.3125 0.4762 16
nidoran-f 0.8182 0.5625 0.6667 16
nidoran-m 0.8667 0.8125 0.8387 16
nidorina 0.7857 0.6875 0.7333 16
nidorino 0.8889 0.5000 0.6400 16
nihilego 1.0000 1.0000 1.0000 16
nincada 1.0000 0.8750 0.9333 16
ninetales 0.5417 0.8125 0.6500 16
ninjask 0.8889 1.0000 0.9412 16
noctowl 1.0000 0.8125 0.8966 16
noibat 1.0000 1.0000 1.0000 16
noivern 1.0000 1.0000 1.0000 16
nosepass 1.0000 1.0000 1.0000 16
numel 1.0000 1.0000 1.0000 16
nuzleaf 1.0000 1.0000 1.0000 16
nymble 0.9412 1.0000 0.9697 16
obstagoon 1.0000 1.0000 1.0000 16
octillery 0.9412 1.0000 0.9697 16
oddish 0.9412 1.0000 0.9697 16
oinkologne-male 1.0000 1.0000 1.0000 16
omanyte 0.6667 1.0000 0.8000 16
omastar 0.6923 0.5625 0.6207 16
onix 0.8571 0.7500 0.8000 16
oranguru 1.0000 1.0000 1.0000 16
orbeetle 1.0000 1.0000 1.0000 16
oricorio-baile 1.0000 1.0000 1.0000 16
orthworm 1.0000 1.0000 1.0000 16
oshawott 1.0000 1.0000 1.0000 16
overqwil 1.0000 1.0000 1.0000 16
pachirisu 1.0000 1.0000 1.0000 16
palafin-zero 0.5600 0.8750 0.6829 16
palkia 1.0000 1.0000 1.0000 16
palossand 1.0000 1.0000 1.0000 16
palpitoad 1.0000 1.0000 1.0000 16
pancham 1.0000 1.0000 1.0000 16
pangoro 1.0000 1.0000 1.0000 16
panpour 1.0000 1.0000 1.0000 16
pansage 1.0000 1.0000 1.0000 16
pansear 0.9412 1.0000 0.9697 16
paras 0.7857 0.6875 0.7333 16
parasect 0.9167 0.6875 0.7857 16
passimian 1.0000 1.0000 1.0000 16
patrat 1.0000 1.0000 1.0000 16
pawmi 1.0000 1.0000 1.0000 16
pawmo 1.0000 1.0000 1.0000 16
pawmot 0.8889 1.0000 0.9412 16
pawniard 1.0000 1.0000 1.0000 16
pelipper 0.9412 1.0000 0.9697 16
perrserker 1.0000 1.0000 1.0000 16
persian 0.8000 0.7500 0.7742 16
petilil 1.0000 1.0000 1.0000 16
phanpy 1.0000 1.0000 1.0000 16
phantump 1.0000 1.0000 1.0000 16
pheromosa 1.0000 1.0000 1.0000 16
phione 0.8000 1.0000 0.8889 16
pichu 1.0000 0.8750 0.9333 16
pidgeot 0.7143 0.6250 0.6667 16
pidgeotto 0.7273 0.5000 0.5926 16
pidgey 0.8333 0.3125 0.4545 16
pidove 1.0000 1.0000 1.0000 16
pignite 0.8889 1.0000 0.9412 16
pikachu 0.8333 0.9375 0.8824 16
pikipek 1.0000 1.0000 1.0000 16
piloswine 1.0000 1.0000 1.0000 16
pincurchin 1.0000 1.0000 1.0000 16
pineco 1.0000 0.7500 0.8571 16
pinsir 0.8571 0.7500 0.8000 16
piplup 0.8421 1.0000 0.9143 16
plusle 0.9412 1.0000 0.9697 16
poipole 1.0000 1.0000 1.0000 16
politoed 0.9231 0.7500 0.8276 16
poliwag 1.0000 0.6875 0.8148 16
poliwhirl 0.5357 0.9375 0.6818 16
poliwrath 0.8889 0.5000 0.6400 16
polteageist 1.0000 1.0000 1.0000 16
ponyta 0.6667 0.6250 0.6452 16
poochyena 0.8889 1.0000 0.9412 16
popplio 1.0000 1.0000 1.0000 16
porygon 0.9231 0.7500 0.8276 16
porygon-z 0.8889 1.0000 0.9412 16
porygon2 0.9375 0.9375 0.9375 16
primarina 1.0000 1.0000 1.0000 16
primeape 0.8333 0.9375 0.8824 16
prinplup 0.9412 1.0000 0.9697 16
probopass 0.8889 1.0000 0.9412 16
psyduck 0.6875 0.6875 0.6875 16
pumpkaboo-average 1.0000 1.0000 1.0000 16
pupitar 0.8421 1.0000 0.9143 16
purrloin 1.0000 1.0000 1.0000 16
purugly 1.0000 1.0000 1.0000 16
pyroar 1.0000 1.0000 1.0000 16
pyukumuku 1.0000 1.0000 1.0000 16
quagsire 0.8889 1.0000 0.9412 16
quaquaval 1.0000 1.0000 1.0000 16
quaxly 1.0000 1.0000 1.0000 16
quaxwell 1.0000 1.0000 1.0000 16
quilava 0.9412 1.0000 0.9697 16
quilladin 1.0000 1.0000 1.0000 16
qwilfish 1.0000 0.5625 0.7200 16
raboot 1.0000 1.0000 1.0000 16
rabsca 1.0000 1.0000 1.0000 16
raichu 1.0000 0.6250 0.7692 16
raikou 1.0000 0.6875 0.8148 16
ralts 1.0000 1.0000 1.0000 16
rampardos 1.0000 1.0000 1.0000 16
rapidash 0.7000 0.4375 0.5385 16
raticate 1.0000 0.8125 0.8966 16
rattata 1.0000 0.3125 0.4762 16
rayquaza 1.0000 1.0000 1.0000 16
regice 0.8421 1.0000 0.9143 16
regidrago 1.0000 1.0000 1.0000 16
regieleki 1.0000 1.0000 1.0000 16
regigigas 1.0000 1.0000 1.0000 16
regirock 0.9412 1.0000 0.9697 16
registeel 1.0000 1.0000 1.0000 16
relicanth 1.0000 1.0000 1.0000 16
rellor 1.0000 1.0000 1.0000 16
remoraid 1.0000 0.9375 0.9677 16
reshiram 0.9412 1.0000 0.9697 16
reuniclus 1.0000 1.0000 1.0000 16
revavroom 1.0000 1.0000 1.0000 16
rhydon 1.0000 0.6875 0.8148 16
rhyhorn 0.5200 0.8125 0.6341 16
rhyperior 1.0000 1.0000 1.0000 16
ribombee 1.0000 1.0000 1.0000 16
rillaboom 1.0000 1.0000 1.0000 16
riolu 1.0000 0.9375 0.9677 16
rockruff 1.0000 1.0000 1.0000 16
roggenrola 1.0000 1.0000 1.0000 16
rolycoly 1.0000 1.0000 1.0000 16
rookidee 1.0000 1.0000 1.0000 16
roselia 0.9412 1.0000 0.9697 16
roserade 1.0000 1.0000 1.0000 16
rotom 0.8421 1.0000 0.9143 16
rowlet 1.0000 1.0000 1.0000 16
rufflet 1.0000 1.0000 1.0000 16
runerigus 1.0000 1.0000 1.0000 16
sableye 1.0000 1.0000 1.0000 16
salamence 1.0000 1.0000 1.0000 16
salandit 1.0000 1.0000 1.0000 16
salazzle 0.9412 1.0000 0.9697 16
samurott 1.0000 1.0000 1.0000 16
sandaconda 1.0000 1.0000 1.0000 16
sandile 1.0000 1.0000 1.0000 16
sandshrew 0.9167 0.6875 0.7857 16
sandslash 0.5652 0.8125 0.6667 16
sandy-shocks 1.0000 1.0000 1.0000 16
sandygast 1.0000 1.0000 1.0000 16
sawk 1.0000 1.0000 1.0000 16
sawsbuck 1.0000 1.0000 1.0000 16
scatterbug 0.9412 1.0000 0.9697 16
sceptile 1.0000 1.0000 1.0000 16
scizor 1.0000 0.9375 0.9677 16
scolipede 1.0000 1.0000 1.0000 16
scorbunny 1.0000 1.0000 1.0000 16
scovillain 1.0000 1.0000 1.0000 16
scrafty 1.0000 1.0000 1.0000 16
scraggy 1.0000 1.0000 1.0000 16
scream-tail 0.8889 1.0000 0.9412 16
scyther 0.9375 0.9375 0.9375 16
seadra 0.8824 0.9375 0.9091 16
seaking 1.0000 0.9375 0.9677 16
sealeo 1.0000 1.0000 1.0000 16
seedot 1.0000 1.0000 1.0000 16
seel 0.5000 0.6875 0.5789 16
seismitoad 0.9412 1.0000 0.9697 16
sentret 0.8235 0.8750 0.8485 16
serperior 1.0000 1.0000 1.0000 16
servine 1.0000 1.0000 1.0000 16
seviper 1.0000 1.0000 1.0000 16
sewaddle 0.9412 1.0000 0.9697 16
sharpedo 1.0000 1.0000 1.0000 16
shaymin-land 0.9412 1.0000 0.9697 16
shedinja 1.0000 1.0000 1.0000 16
shelgon 1.0000 1.0000 1.0000 16
shellder 0.8000 0.2500 0.3810 16
shellos 1.0000 1.0000 1.0000 16
shelmet 0.9412 1.0000 0.9697 16
shieldon 1.0000 1.0000 1.0000 16
shiftry 1.0000 0.8750 0.9333 16
shiinotic 0.9412 1.0000 0.9697 16
shinx 1.0000 1.0000 1.0000 16
shroodle 1.0000 1.0000 1.0000 16
shroomish 1.0000 1.0000 1.0000 16
shuckle 1.0000 1.0000 1.0000 16
shuppet 1.0000 1.0000 1.0000 16
sigilyph 1.0000 1.0000 1.0000 16
silcoon 1.0000 1.0000 1.0000 16
silicobra 0.9412 1.0000 0.9697 16
silvally 1.0000 1.0000 1.0000 16
simipour 0.9412 1.0000 0.9697 16
simisage 1.0000 1.0000 1.0000 16
simisear 1.0000 0.9375 0.9677 16
sinistea 1.0000 1.0000 1.0000 16
sirfetchd 1.0000 1.0000 1.0000 16
sizzlipede 1.0000 1.0000 1.0000 16
skarmory 1.0000 0.8750 0.9333 16
skeledirge 1.0000 1.0000 1.0000 16
skiddo 0.8889 1.0000 0.9412 16
skiploom 1.0000 0.8750 0.9333 16
skitty 1.0000 1.0000 1.0000 16
skorupi 0.9412 1.0000 0.9697 16
skrelp 1.0000 1.0000 1.0000 16
skuntank 1.0000 1.0000 1.0000 16
skwovet 1.0000 1.0000 1.0000 16
slaking 1.0000 1.0000 1.0000 16
slakoth 1.0000 0.8125 0.8966 16
sliggoo 1.0000 1.0000 1.0000 16
slither-wing 1.0000 1.0000 1.0000 16
slowbro 0.9091 0.6250 0.7407 16
slowking 0.9375 0.9375 0.9375 16
slowpoke 1.0000 0.3125 0.4762 16
slugma 0.9091 0.6250 0.7407 16
slurpuff 1.0000 1.0000 1.0000 16
smeargle 1.0000 0.6250 0.7692 16
smoliv 1.0000 1.0000 1.0000 16
smoochum 0.9412 1.0000 0.9697 16
sneasel 1.0000 0.8750 0.9333 16
sneasler 1.0000 1.0000 1.0000 16
snivy 0.9412 1.0000 0.9697 16
snom 1.0000 1.0000 1.0000 16
snorlax 1.0000 0.5625 0.7200 16
snorunt 1.0000 1.0000 1.0000 16
snover 1.0000 1.0000 1.0000 16
snubbull 1.0000 0.9375 0.9677 16
sobble 0.9412 1.0000 0.9697 16
solgaleo 1.0000 1.0000 1.0000 16
solosis 0.8421 1.0000 0.9143 16
solrock 1.0000 1.0000 1.0000 16
spearow 1.0000 0.3125 0.4762 16
spectrier 1.0000 1.0000 1.0000 16
spewpa 1.0000 1.0000 1.0000 16
spheal 0.8889 1.0000 0.9412 16
spidops 1.0000 1.0000 1.0000 16
spinarak 1.0000 0.9375 0.9677 16
spinda 1.0000 1.0000 1.0000 16
spiritomb 1.0000 1.0000 1.0000 16
spoink 1.0000 1.0000 1.0000 16
sprigatito 1.0000 1.0000 1.0000 16
spritzee 1.0000 1.0000 1.0000 16
squawkabilly-green-plumage 0.8421 1.0000 0.9143 16
squirtle 0.9231 0.7500 0.8276 16
stakataka 1.0000 1.0000 1.0000 16
stantler 0.7778 0.8750 0.8235 16
staraptor 0.9412 1.0000 0.9697 16
staravia 1.0000 1.0000 1.0000 16
starly 1.0000 1.0000 1.0000 16
starmie 0.8667 0.8125 0.8387 16
staryu 0.7222 0.8125 0.7647 16
steelix 1.0000 0.9375 0.9677 16
steenee 1.0000 1.0000 1.0000 16
stonjourner 1.0000 1.0000 1.0000 16
stoutland 1.0000 1.0000 1.0000 16
stufful 1.0000 1.0000 1.0000 16
stunfisk 1.0000 0.8125 0.8966 16
stunky 1.0000 1.0000 1.0000 16
sudowoodo 1.0000 1.0000 1.0000 16
suicune 0.9412 1.0000 0.9697 16
sunflora 1.0000 1.0000 1.0000 16
sunkern 0.8889 1.0000 0.9412 16
surskit 1.0000 1.0000 1.0000 16
swablu 0.9412 1.0000 0.9697 16
swadloon 0.9412 1.0000 0.9697 16
swalot 0.9412 1.0000 0.9697 16
swampert 1.0000 1.0000 1.0000 16
swanna 1.0000 1.0000 1.0000 16
swellow 1.0000 0.9375 0.9677 16
swinub 1.0000 1.0000 1.0000 16
swirlix 1.0000 1.0000 1.0000 16
swoobat 1.0000 1.0000 1.0000 16
sylveon 1.0000 1.0000 1.0000 16
tadbulb 1.0000 1.0000 1.0000 16
taillow 0.9412 1.0000 0.9697 16
talonflame 1.0000 1.0000 1.0000 16
tandemaus 1.0000 1.0000 1.0000 16
tangela 0.9375 0.9375 0.9375 16
tangrowth 0.9412 1.0000 0.9697 16
tapu-bulu 1.0000 1.0000 1.0000 16
tapu-fini 1.0000 1.0000 1.0000 16
tapu-koko 1.0000 1.0000 1.0000 16
tapu-lele 1.0000 1.0000 1.0000 16
tarountula 1.0000 1.0000 1.0000 16
tatsugiri-curly 0.9412 1.0000 0.9697 16
tauros 1.0000 0.3125 0.4762 16
teddiursa 1.0000 0.6250 0.7692 16
tentacool 0.8667 0.8125 0.8387 16
tentacruel 1.0000 0.8125 0.8966 16
tepig 0.9412 1.0000 0.9697 16
terrakion 1.0000 1.0000 1.0000 16
thievul 1.0000 1.0000 1.0000 16
throh 1.0000 1.0000 1.0000 16
thundurus-incarnate 1.0000 1.0000 1.0000 16
thwackey 1.0000 1.0000 1.0000 16
timburr 1.0000 1.0000 1.0000 16
tinkatink 0.9412 1.0000 0.9697 16
tinkaton 1.0000 1.0000 1.0000 16
tinkatuff 1.0000 1.0000 1.0000 16
tirtouga 0.9412 1.0000 0.9697 16
toedscool 1.0000 1.0000 1.0000 16
toedscruel 1.0000 1.0000 1.0000 16
togedemaru 1.0000 1.0000 1.0000 16
togekiss 1.0000 1.0000 1.0000 16
togepi 1.0000 1.0000 1.0000 16
togetic 0.8889 1.0000 0.9412 16
torchic 1.0000 1.0000 1.0000 16
torkoal 1.0000 1.0000 1.0000 16
tornadus-incarnate 1.0000 1.0000 1.0000 16
torracat 0.9412 1.0000 0.9697 16
torterra 1.0000 1.0000 1.0000 16
totodile 0.8824 0.9375 0.9091 16
toucannon 1.0000 1.0000 1.0000 16
toxapex 1.0000 1.0000 1.0000 16
toxel 1.0000 1.0000 1.0000 16
toxicroak 1.0000 1.0000 1.0000 16
toxtricity-amped 1.0000 1.0000 1.0000 16
tranquill 1.0000 1.0000 1.0000 16
trapinch 1.0000 1.0000 1.0000 16
treecko 1.0000 1.0000 1.0000 16
trevenant 1.0000 1.0000 1.0000 16
tropius 1.0000 1.0000 1.0000 16
trubbish 1.0000 1.0000 1.0000 16
trumbeak 1.0000 1.0000 1.0000 16
tsareena 0.9412 1.0000 0.9697 16
turtonator 0.9412 1.0000 0.9697 16
turtwig 0.9412 1.0000 0.9697 16
tympole 0.8421 1.0000 0.9143 16
tynamo 0.8889 1.0000 0.9412 16
type-null 0.9412 1.0000 0.9697 16
typhlosion 1.0000 0.8750 0.9333 16
tyranitar 1.0000 1.0000 1.0000 16
tyrantrum 0.8889 1.0000 0.9412 16
tyrogue 1.0000 0.9375 0.9677 16
tyrunt 0.9412 1.0000 0.9697 16
umbreon 1.0000 1.0000 1.0000 16
unfezant 1.0000 1.0000 1.0000 16
unown 0.9412 1.0000 0.9697 16
ursaluna 1.0000 1.0000 1.0000 16
ursaring 0.8889 1.0000 0.9412 16
urshifu-single-strike 1.0000 1.0000 1.0000 16
uxie 1.0000 1.0000 1.0000 16
vanillish 1.0000 1.0000 1.0000 16
vanillite 1.0000 1.0000 1.0000 16
vanilluxe 1.0000 1.0000 1.0000 16
vaporeon 0.6667 0.2500 0.3636 16
varoom 1.0000 1.0000 1.0000 16
veluza 1.0000 1.0000 1.0000 16
venipede 1.0000 1.0000 1.0000 16
venomoth 1.0000 1.0000 1.0000 16
venonat 0.9286 0.8125 0.8667 16
venusaur 0.9286 0.8125 0.8667 16
vespiquen 1.0000 1.0000 1.0000 16
vibrava 1.0000 0.9375 0.9677 16
victini 1.0000 1.0000 1.0000 16
victreebel 1.0000 0.8750 0.9333 16
vigoroth 1.0000 1.0000 1.0000 16
vikavolt 0.9412 1.0000 0.9697 16
vileplume 0.9375 0.9375 0.9375 16
virizion 1.0000 1.0000 1.0000 16
vivillon 1.0000 1.0000 1.0000 16
volbeat 1.0000 0.9375 0.9677 16
volcanion 1.0000 1.0000 1.0000 16
volcarona 1.0000 1.0000 1.0000 16
voltorb 1.0000 0.8125 0.8966 16
vullaby 1.0000 1.0000 1.0000 16
vulpix 0.9167 0.6875 0.7857 16
wailmer 1.0000 0.9375 0.9677 16
wailord 0.8889 1.0000 0.9412 16
walrein 0.9412 1.0000 0.9697 16
wartortle 0.5714 1.0000 0.7273 16
watchog 1.0000 1.0000 1.0000 16
wattrel 1.0000 1.0000 1.0000 16
weavile 0.9412 1.0000 0.9697 16
weedle 0.7333 0.6875 0.7097 16
weepinbell 0.7333 0.6875 0.7097 16
weezing 0.3333 0.0625 0.1053 16
whimsicott 1.0000 1.0000 1.0000 16
whirlipede 1.0000 1.0000 1.0000 16
whiscash 1.0000 1.0000 1.0000 16
whismur 0.9412 1.0000 0.9697 16
wigglytuff 0.8667 0.8125 0.8387 16
wiglett 1.0000 1.0000 1.0000 16
wimpod 1.0000 1.0000 1.0000 16
wingull 1.0000 1.0000 1.0000 16
wishiwashi-solo 1.0000 1.0000 1.0000 16
wobbuffet 0.9333 0.8750 0.9032 16
woobat 1.0000 1.0000 1.0000 16
wooloo 1.0000 1.0000 1.0000 16
wooper 1.0000 1.0000 1.0000 16
wormadam-plant 1.0000 1.0000 1.0000 16
wugtrio 1.0000 1.0000 1.0000 16
wurmple 1.0000 1.0000 1.0000 16
wynaut 0.9333 0.8750 0.9032 16
wyrdeer 0.8889 1.0000 0.9412 16
xatu 1.0000 1.0000 1.0000 16
xerneas 1.0000 1.0000 1.0000 16
xurkitree 1.0000 1.0000 1.0000 16
yamask 0.9412 1.0000 0.9697 16
yamper 1.0000 1.0000 1.0000 16
yanma 1.0000 1.0000 1.0000 16
yanmega 1.0000 0.9375 0.9677 16
yungoos 0.8000 1.0000 0.8889 16
yveltal 1.0000 1.0000 1.0000 16
zacian 0.9412 1.0000 0.9697 16
zamazenta 1.0000 1.0000 1.0000 16
zangoose 1.0000 1.0000 1.0000 16
zapdos 0.8889 1.0000 0.9412 16
zarude 1.0000 1.0000 1.0000 16
zebstrika 1.0000 1.0000 1.0000 16
zekrom 0.9412 1.0000 0.9697 16
zeraora 1.0000 1.0000 1.0000 16
zigzagoon 1.0000 1.0000 1.0000 16
zoroark 1.0000 1.0000 1.0000 16
zorua 0.8889 1.0000 0.9412 16
zubat 1.0000 0.3125 0.4762 16
zweilous 1.0000 1.0000 1.0000 16
zygarde-50 1.0000 1.0000 1.0000 16
accuracy 0.9413 16000
macro avg 0.9509 0.9413 0.9389 16000
weighted avg 0.9509 0.9413 0.9389 16000
```
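For reference, a per-class report of this form can be produced with scikit-learn's `classification_report`; a minimal sketch with placeholder predictions (the variable contents below are illustrative, not taken from the actual evaluation run):

```python
from sklearn.metrics import classification_report

# Illustrative placeholders: in practice y_true / y_pred come from the evaluation loop.
y_true = ["pikachu", "pikachu", "zubat", "zubat"]
y_pred = ["pikachu", "zubat", "zubat", "zubat"]

print(classification_report(y_true, y_pred, digits=4))
```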
|
[
"abomasnow",
"abra",
"absol",
"accelgor",
"aegislash-shield",
"aerodactyl",
"aggron",
"aipom",
"alakazam",
"alcremie",
"alomomola",
"altaria",
"amaura",
"ambipom",
"amoonguss",
"ampharos",
"annihilape",
"anorith",
"appletun",
"applin",
"araquanid",
"arbok",
"arboliva",
"arcanine",
"arceus",
"archen",
"archeops",
"arctibax",
"arctovish",
"arctozolt",
"ariados",
"armaldo",
"armarouge",
"aromatisse",
"aron",
"arrokuda",
"articuno",
"audino",
"aurorus",
"avalugg",
"axew",
"azelf",
"azumarill",
"azurill",
"bagon",
"baltoy",
"banette",
"barbaracle",
"barboach",
"barraskewda",
"basculegion-male",
"basculin-red-striped",
"bastiodon",
"baxcalibur",
"bayleef",
"beartic",
"beautifly",
"beedrill",
"beheeyem",
"beldum",
"bellibolt",
"bellossom",
"bellsprout",
"bergmite",
"bewear",
"bibarel",
"bidoof",
"binacle",
"bisharp",
"blacephalon",
"blastoise",
"blaziken",
"blipbug",
"blissey",
"blitzle",
"boldore",
"boltund",
"bombirdier",
"bonsly",
"bouffalant",
"bounsweet",
"braixen",
"brambleghast",
"bramblin",
"braviary",
"breloom",
"brionne",
"bronzong",
"bronzor",
"brute-bonnet",
"bruxish",
"budew",
"buizel",
"bulbasaur",
"buneary",
"bunnelby",
"burmy",
"butterfree",
"buzzwole",
"cacnea",
"cacturne",
"calyrex",
"camerupt",
"capsakid",
"carbink",
"carkol",
"carnivine",
"carracosta",
"carvanha",
"cascoon",
"castform",
"caterpie",
"celebi",
"celesteela",
"centiskorch",
"ceruledge",
"cetitan",
"cetoddle",
"chandelure",
"chansey",
"charcadet",
"charizard",
"charjabug",
"charmander",
"charmeleon",
"chatot",
"cherrim",
"cherubi",
"chesnaught",
"chespin",
"chewtle",
"chikorita",
"chimchar",
"chimecho",
"chinchou",
"chingling",
"cinccino",
"cinderace",
"clamperl",
"clauncher",
"clawitzer",
"claydol",
"clefable",
"clefairy",
"cleffa",
"clobbopus",
"clodsire",
"cloyster",
"coalossal",
"cobalion",
"cofagrigus",
"combee",
"combusken",
"comfey",
"conkeldurr",
"copperajah",
"corphish",
"corsola",
"corviknight",
"corvisquire",
"cosmoem",
"cosmog",
"cottonee",
"crabominable",
"crabrawler",
"cradily",
"cramorant",
"cranidos",
"crawdaunt",
"cresselia",
"croagunk",
"crobat",
"crocalor",
"croconaw",
"crustle",
"cryogonal",
"cubchoo",
"cubone",
"cufant",
"cursola",
"cutiefly",
"cyclizar",
"cyndaquil",
"dachsbun",
"darkrai",
"darmanitan-standard",
"dartrix",
"darumaka",
"decidueye",
"dedenne",
"deerling",
"deino",
"delcatty",
"delibird",
"delphox",
"deoxys-normal",
"dewgong",
"dewott",
"dewpider",
"dhelmise",
"dialga",
"diancie",
"diggersby",
"diglett",
"ditto",
"dodrio",
"doduo",
"dolliv",
"dondozo",
"donphan",
"dottler",
"doublade",
"dracovish",
"dracozolt",
"dragalge",
"dragapult",
"dragonair",
"dragonite",
"drakloak",
"drampa",
"drapion",
"dratini",
"drednaw",
"dreepy",
"drifblim",
"drifloon",
"drilbur",
"drizzile",
"drowzee",
"druddigon",
"dubwool",
"ducklett",
"dudunsparce-two-segment",
"dugtrio",
"dunsparce",
"duosion",
"duraludon",
"durant",
"dusclops",
"dusknoir",
"duskull",
"dustox",
"dwebble",
"eelektrik",
"eelektross",
"eevee",
"eiscue-ice",
"ekans",
"eldegoss",
"electabuzz",
"electivire",
"electrike",
"electrode",
"elekid",
"elgyem",
"emboar",
"emolga",
"empoleon",
"enamorus-incarnate",
"entei",
"escavalier",
"espathra",
"espeon",
"espurr",
"eternatus",
"excadrill",
"exeggcute",
"exeggutor",
"exploud",
"falinks",
"farfetchd",
"farigiraf",
"fearow",
"feebas",
"fennekin",
"feraligatr",
"ferroseed",
"ferrothorn",
"fidough",
"finizen",
"finneon",
"flaaffy",
"flabebe",
"flamigo",
"flapple",
"flareon",
"fletchinder",
"fletchling",
"flittle",
"floatzel",
"floette",
"floragato",
"florges",
"flutter-mane",
"flygon",
"fomantis",
"foongus",
"forretress",
"fraxure",
"frigibax",
"frillish",
"froakie",
"frogadier",
"froslass",
"frosmoth",
"fuecoco",
"furfrou",
"furret",
"gabite",
"gallade",
"galvantula",
"garbodor",
"garchomp",
"gardevoir",
"garganacl",
"gastly",
"gastrodon",
"genesect",
"gengar",
"geodude",
"gholdengo",
"gible",
"gigalith",
"gimmighoul",
"girafarig",
"giratina-altered",
"glaceon",
"glalie",
"glameow",
"glastrier",
"gligar",
"glimmet",
"glimmora",
"gliscor",
"gloom",
"gogoat",
"golbat",
"goldeen",
"golduck",
"golem",
"golett",
"golisopod",
"golurk",
"goodra",
"goomy",
"gorebyss",
"gossifleur",
"gothita",
"gothitelle",
"gothorita",
"gourgeist-average",
"grafaiai",
"granbull",
"grapploct",
"graveler",
"great-tusk",
"greavard",
"greedent",
"greninja",
"grimer",
"grimmsnarl",
"grookey",
"grotle",
"groudon",
"grovyle",
"growlithe",
"grubbin",
"grumpig",
"gulpin",
"gumshoos",
"gurdurr",
"guzzlord",
"gyarados",
"hakamo-o",
"happiny",
"hariyama",
"hatenna",
"hatterene",
"hattrem",
"haunter",
"hawlucha",
"haxorus",
"heatmor",
"heatran",
"heliolisk",
"helioptile",
"heracross",
"herdier",
"hippopotas",
"hippowdon",
"hitmonchan",
"hitmonlee",
"hitmontop",
"ho-oh",
"honchkrow",
"honedge",
"hoopa",
"hoothoot",
"hoppip",
"horsea",
"houndoom",
"houndour",
"houndstone",
"huntail",
"hydreigon",
"hypno",
"igglybuff",
"illumise",
"impidimp",
"incineroar",
"indeedee-male",
"infernape",
"inkay",
"inteleon",
"iron-bundle",
"iron-hands",
"iron-jugulis",
"iron-moth",
"iron-thorns",
"iron-treads",
"ivysaur",
"jangmo-o",
"jellicent",
"jigglypuff",
"jirachi",
"jolteon",
"joltik",
"jumpluff",
"jynx",
"kabuto",
"kabutops",
"kadabra",
"kakuna",
"kangaskhan",
"karrablast",
"kartana",
"kecleon",
"keldeo-ordinary",
"kilowattrel",
"kingambit",
"kingdra",
"kingler",
"kirlia",
"klang",
"klawf",
"kleavor",
"klefki",
"klink",
"klinklang",
"koffing",
"komala",
"kommo-o",
"krabby",
"kricketot",
"kricketune",
"krokorok",
"krookodile",
"kubfu",
"kyogre",
"kyurem",
"lairon",
"lampent",
"landorus-incarnate",
"lanturn",
"lapras",
"larvesta",
"larvitar",
"latias",
"latios",
"leafeon",
"leavanny",
"lechonk",
"ledian",
"ledyba",
"lickilicky",
"lickitung",
"liepard",
"lileep",
"lilligant",
"lillipup",
"linoone",
"litleo",
"litten",
"litwick",
"lokix",
"lombre",
"lopunny",
"lotad",
"loudred",
"lucario",
"ludicolo",
"lugia",
"lumineon",
"lunala",
"lunatone",
"lurantis",
"luvdisc",
"luxio",
"luxray",
"lycanroc-midday",
"mabosstiff",
"machamp",
"machoke",
"machop",
"magby",
"magcargo",
"magearna",
"magikarp",
"magmar",
"magmortar",
"magnemite",
"magneton",
"magnezone",
"makuhita",
"malamar",
"mamoswine",
"manaphy",
"mandibuzz",
"manectric",
"mankey",
"mantine",
"mantyke",
"maractus",
"mareanie",
"mareep",
"marill",
"marowak",
"marshadow",
"marshtomp",
"maschiff",
"masquerain",
"maushold-family-of-four",
"mawile",
"medicham",
"meditite",
"meganium",
"melmetal",
"meloetta-aria",
"meltan",
"meowscarada",
"meowstic-male",
"meowth",
"mesprit",
"metagross",
"metang",
"metapod",
"mew",
"mewtwo",
"mienfoo",
"mienshao",
"mightyena",
"milcery",
"milotic",
"miltank",
"mime-jr",
"mimikyu-disguised",
"minccino",
"minior-red-meteor",
"minun",
"misdreavus",
"mismagius",
"moltres",
"monferno",
"morelull",
"morgrem",
"morpeko-full-belly",
"mothim",
"mr-mime",
"mr-rime",
"mudbray",
"mudkip",
"mudsdale",
"muk",
"munchlax",
"munna",
"murkrow",
"musharna",
"nacli",
"naclstack",
"naganadel",
"natu",
"necrozma",
"nickit",
"nidoking",
"nidoqueen",
"nidoran-f",
"nidoran-m",
"nidorina",
"nidorino",
"nihilego",
"nincada",
"ninetales",
"ninjask",
"noctowl",
"noibat",
"noivern",
"nosepass",
"numel",
"nuzleaf",
"nymble",
"obstagoon",
"octillery",
"oddish",
"oinkologne-male",
"omanyte",
"omastar",
"onix",
"oranguru",
"orbeetle",
"oricorio-baile",
"orthworm",
"oshawott",
"overqwil",
"pachirisu",
"palafin-zero",
"palkia",
"palossand",
"palpitoad",
"pancham",
"pangoro",
"panpour",
"pansage",
"pansear",
"paras",
"parasect",
"passimian",
"patrat",
"pawmi",
"pawmo",
"pawmot",
"pawniard",
"pelipper",
"perrserker",
"persian",
"petilil",
"phanpy",
"phantump",
"pheromosa",
"phione",
"pichu",
"pidgeot",
"pidgeotto",
"pidgey",
"pidove",
"pignite",
"pikachu",
"pikipek",
"piloswine",
"pincurchin",
"pineco",
"pinsir",
"piplup",
"plusle",
"poipole",
"politoed",
"poliwag",
"poliwhirl",
"poliwrath",
"polteageist",
"ponyta",
"poochyena",
"popplio",
"porygon",
"porygon-z",
"porygon2",
"primarina",
"primeape",
"prinplup",
"probopass",
"psyduck",
"pumpkaboo-average",
"pupitar",
"purrloin",
"purugly",
"pyroar",
"pyukumuku",
"quagsire",
"quaquaval",
"quaxly",
"quaxwell",
"quilava",
"quilladin",
"qwilfish",
"raboot",
"rabsca",
"raichu",
"raikou",
"ralts",
"rampardos",
"rapidash",
"raticate",
"rattata",
"rayquaza",
"regice",
"regidrago",
"regieleki",
"regigigas",
"regirock",
"registeel",
"relicanth",
"rellor",
"remoraid",
"reshiram",
"reuniclus",
"revavroom",
"rhydon",
"rhyhorn",
"rhyperior",
"ribombee",
"rillaboom",
"riolu",
"rockruff",
"roggenrola",
"rolycoly",
"rookidee",
"roselia",
"roserade",
"rotom",
"rowlet",
"rufflet",
"runerigus",
"sableye",
"salamence",
"salandit",
"salazzle",
"samurott",
"sandaconda",
"sandile",
"sandshrew",
"sandslash",
"sandy-shocks",
"sandygast",
"sawk",
"sawsbuck",
"scatterbug",
"sceptile",
"scizor",
"scolipede",
"scorbunny",
"scovillain",
"scrafty",
"scraggy",
"scream-tail",
"scyther",
"seadra",
"seaking",
"sealeo",
"seedot",
"seel",
"seismitoad",
"sentret",
"serperior",
"servine",
"seviper",
"sewaddle",
"sharpedo",
"shaymin-land",
"shedinja",
"shelgon",
"shellder",
"shellos",
"shelmet",
"shieldon",
"shiftry",
"shiinotic",
"shinx",
"shroodle",
"shroomish",
"shuckle",
"shuppet",
"sigilyph",
"silcoon",
"silicobra",
"silvally",
"simipour",
"simisage",
"simisear",
"sinistea",
"sirfetchd",
"sizzlipede",
"skarmory",
"skeledirge",
"skiddo",
"skiploom",
"skitty",
"skorupi",
"skrelp",
"skuntank",
"skwovet",
"slaking",
"slakoth",
"sliggoo",
"slither-wing",
"slowbro",
"slowking",
"slowpoke",
"slugma",
"slurpuff",
"smeargle",
"smoliv",
"smoochum",
"sneasel",
"sneasler",
"snivy",
"snom",
"snorlax",
"snorunt",
"snover",
"snubbull",
"sobble",
"solgaleo",
"solosis",
"solrock",
"spearow",
"spectrier",
"spewpa",
"spheal",
"spidops",
"spinarak",
"spinda",
"spiritomb",
"spoink",
"sprigatito",
"spritzee",
"squawkabilly-green-plumage",
"squirtle",
"stakataka",
"stantler",
"staraptor",
"staravia",
"starly",
"starmie",
"staryu",
"steelix",
"steenee",
"stonjourner",
"stoutland",
"stufful",
"stunfisk",
"stunky",
"sudowoodo",
"suicune",
"sunflora",
"sunkern",
"surskit",
"swablu",
"swadloon",
"swalot",
"swampert",
"swanna",
"swellow",
"swinub",
"swirlix",
"swoobat",
"sylveon",
"tadbulb",
"taillow",
"talonflame",
"tandemaus",
"tangela",
"tangrowth",
"tapu-bulu",
"tapu-fini",
"tapu-koko",
"tapu-lele",
"tarountula",
"tatsugiri-curly",
"tauros",
"teddiursa",
"tentacool",
"tentacruel",
"tepig",
"terrakion",
"thievul",
"throh",
"thundurus-incarnate",
"thwackey",
"timburr",
"tinkatink",
"tinkaton",
"tinkatuff",
"tirtouga",
"toedscool",
"toedscruel",
"togedemaru",
"togekiss",
"togepi",
"togetic",
"torchic",
"torkoal",
"tornadus-incarnate",
"torracat",
"torterra",
"totodile",
"toucannon",
"toxapex",
"toxel",
"toxicroak",
"toxtricity-amped",
"tranquill",
"trapinch",
"treecko",
"trevenant",
"tropius",
"trubbish",
"trumbeak",
"tsareena",
"turtonator",
"turtwig",
"tympole",
"tynamo",
"type-null",
"typhlosion",
"tyranitar",
"tyrantrum",
"tyrogue",
"tyrunt",
"umbreon",
"unfezant",
"unown",
"ursaluna",
"ursaring",
"urshifu-single-strike",
"uxie",
"vanillish",
"vanillite",
"vanilluxe",
"vaporeon",
"varoom",
"veluza",
"venipede",
"venomoth",
"venonat",
"venusaur",
"vespiquen",
"vibrava",
"victini",
"victreebel",
"vigoroth",
"vikavolt",
"vileplume",
"virizion",
"vivillon",
"volbeat",
"volcanion",
"volcarona",
"voltorb",
"vullaby",
"vulpix",
"wailmer",
"wailord",
"walrein",
"wartortle",
"watchog",
"wattrel",
"weavile",
"weedle",
"weepinbell",
"weezing",
"whimsicott",
"whirlipede",
"whiscash",
"whismur",
"wigglytuff",
"wiglett",
"wimpod",
"wingull",
"wishiwashi-solo",
"wobbuffet",
"woobat",
"wooloo",
"wooper",
"wormadam-plant",
"wugtrio",
"wurmple",
"wynaut",
"wyrdeer",
"xatu",
"xerneas",
"xurkitree",
"yamask",
"yamper",
"yanma",
"yanmega",
"yungoos",
"yveltal",
"zacian",
"zamazenta",
"zangoose",
"zapdos",
"zarude",
"zebstrika",
"zekrom",
"zeraora",
"zigzagoon",
"zoroark",
"zorua",
"zubat",
"zweilous",
"zygarde-50"
] |
MiroJ/win_eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# win_eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0414
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
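A minimal inference sketch with this checkpoint (a hedged example assuming the standard `transformers` image-classification pipeline; the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and classify a local EuroSAT-style image patch.
classifier = pipeline("image-classification", model="MiroJ/win_eurosat")
print(classifier("satellite_patch.jpg"))  # placeholder path to a local image
```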
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6619 | 1.0 | 608 | 0.0794 | 0.9731 |
| 0.7688 | 2.0 | 1216 | 0.0729 | 0.9778 |
| 0.3537 | 2.9959 | 1821 | 0.0414 | 0.9866 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.2.0+cpu
- Datasets 2.0.0
- Tokenizers 0.21.0
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
vieanh/vit-sports-cls
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-sports-cls
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0838
- Accuracy: 0.9742
- Precision: 0.9743
- Recall: 0.9742
- F1: 0.9741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
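For reference, these hyperparameters roughly correspond to a `TrainingArguments` configuration like the sketch below (hedged: the output directory and any unspecified defaults are assumptions, not the original training script):

```python
from transformers import TrainingArguments

# Hedged sketch mirroring the hyperparameters listed above; the output_dir is assumed.
training_args = TrainingArguments(
    output_dir="vit-sports-cls",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```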
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.171 | 1.0 | 104 | 0.1729 | 0.9489 | 0.9493 | 0.9489 | 0.9489 |
| 0.0979 | 2.0 | 208 | 0.1356 | 0.9585 | 0.9597 | 0.9585 | 0.9583 |
| 0.0408 | 3.0 | 312 | 0.1184 | 0.9561 | 0.9571 | 0.9561 | 0.9561 |
| 0.0703 | 4.0 | 416 | 0.0892 | 0.9700 | 0.9701 | 0.9700 | 0.9699 |
| 0.1375 | 5.0 | 520 | 0.1029 | 0.9681 | 0.9683 | 0.9681 | 0.9682 |
| 0.0061 | 6.0 | 624 | 0.1073 | 0.9681 | 0.9688 | 0.9681 | 0.9682 |
| 0.0083 | 7.0 | 728 | 0.0795 | 0.9700 | 0.9701 | 0.9700 | 0.9700 |
| 0.0079 | 8.0 | 832 | 0.0754 | 0.9814 | 0.9816 | 0.9814 | 0.9814 |
| 0.0594 | 9.0 | 936 | 0.0714 | 0.9754 | 0.9756 | 0.9754 | 0.9754 |
| 0.0391 | 10.0 | 1040 | 0.0838 | 0.9742 | 0.9743 | 0.9742 | 0.9741 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"badminton",
"cricket",
"football",
"karate",
"swimming",
"tennis",
"wrestling"
] |
kclee111/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0681
- Accuracy: 0.9773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8977 | 1.0 | 152 | 0.1133 | 0.9653 |
| 0.6553 | 2.0 | 304 | 0.0772 | 0.9745 |
| 0.537 | 3.0 | 456 | 0.0681 | 0.9773 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
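The mapping from class indices to the fine-tuned EuroSAT label names is stored in the checkpoint's configuration; a minimal sketch to inspect it (assuming the config carries the standard `id2label` mapping used by Transformers image-classification checkpoints):

```python
from transformers import AutoConfig

# Print the index-to-label mapping carried by the fine-tuned checkpoint.
config = AutoConfig.from_pretrained("kclee111/swin-tiny-patch4-window7-224-finetuned-eurosat")
print(config.id2label)
```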
|
[
"river",
"sealake",
"annualcrop",
"forest",
"residential",
"highway",
"permanentcrop",
"pasture",
"herbaceousvegetation",
"industrial"
] |
Luan220703/vit-base-VietnameseFood
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-VietnameseFood
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
on a Vietnamese food dataset (https://huggingface.co/datasets/TuyenTrungLe/vietnamese_food_images):
more than 17k images in the training set, 2.5k in the validation set, and 5k in the test set.

It achieves the following results on the evaluation set:
- Loss: 1.2489
- Accuracy: 0.8925
Although the validation loss is quite high, the model performed well on the test set, with an accuracy of 0.8639 and a loss of 0.4871.

## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.4936 | 0.1818 | 100 | 1.5493 | 0.6901 |
| 0.848 | 0.3636 | 200 | 0.9488 | 0.7851 |
| 0.6619 | 0.5455 | 300 | 0.8240 | 0.7865 |
| 0.6868 | 0.7273 | 400 | 0.6671 | 0.8298 |
| 0.6127 | 0.9091 | 500 | 0.6296 | 0.8296 |
| 0.4413 | 1.0909 | 600 | 0.6003 | 0.8339 |
| 0.3484 | 1.2727 | 700 | 0.6349 | 0.8153 |
| 0.3529 | 1.4545 | 800 | 0.5235 | 0.8581 |
| 0.4104 | 1.6364 | 900 | 0.5407 | 0.8512 |
| 0.3097 | 1.8182 | 1000 | 0.5537 | 0.8423 |
| 0.2527 | 2.0 | 1100 | 0.4871 | 0.8639 |
| 0.1571 | 2.1818 | 1200 | 0.5507 | 0.8587 |
| 0.2164 | 2.3636 | 1300 | 0.5598 | 0.8585 |
| 0.1875 | 2.5455 | 1400 | 0.5787 | 0.8522 |
| 0.1314 | 2.7273 | 1500 | 0.5262 | 0.8643 |
| 0.1671 | 2.9091 | 1600 | 0.5686 | 0.8587 |
| 0.0807 | 3.0909 | 1700 | 0.5912 | 0.8633 |
| 0.0989 | 3.2727 | 1800 | 0.6392 | 0.8679 |
| 0.0586 | 3.4545 | 1900 | 0.6587 | 0.8651 |
| 0.0672 | 3.6364 | 2000 | 0.6542 | 0.8758 |
| 0.0342 | 3.8182 | 2100 | 0.6533 | 0.8786 |
| 0.0484 | 4.0 | 2200 | 0.7314 | 0.8756 |
| 0.0678 | 4.1818 | 2300 | 0.8517 | 0.8788 |
| 0.075 | 4.3636 | 2400 | 0.9576 | 0.8843 |
| 0.0201 | 4.5455 | 2500 | 1.0758 | 0.8845 |
| 0.1238 | 4.7273 | 2600 | 1.1375 | 0.8871 |
| 0.0434 | 4.9091 | 2700 | 1.2226 | 0.8877 |
| 0.0493 | 5.0909 | 2800 | 1.1938 | 0.8923 |
| 0.0055 | 5.2727 | 2900 | 1.2594 | 0.8903 |
| 0.0039 | 5.4545 | 3000 | 1.2709 | 0.8887 |
| 0.0445 | 5.6364 | 3100 | 1.2420 | 0.8921 |
| 0.0347 | 5.8182 | 3200 | 1.2609 | 0.8915 |
| 0.0657 | 6.0 | 3300 | 1.2489 | 0.8925 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"banh beo",
"banh bot loc",
"banh pia",
"banh tet",
"banh trang nuong",
"banh xeo",
"bun bo hue",
"bun dau mam tom",
"bun mam",
"bun rieu",
"bun thit nuong",
"ca kho to",
"banh can",
"canh chua",
"cao lau",
"chao long",
"com tam",
"goi cuon",
"hu tieu",
"mi quang",
"nem chua",
"pho",
"xoi xeo",
"banh canh",
"banh chung",
"banh cuon",
"banh duc",
"banh gio",
"banh khot",
"banh mi"
] |
alexasophia-24/Human-Action-Recognition-VIT-Base-patch16-224
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Human-Action-Recognition-VIT-Base-patch16-224
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4367
- Accuracy: 0.8687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
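For clarity, the total train batch size above is the product of the train batch size and the gradient accumulation steps: 64 × 4 = 256.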
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 10.2084 | 1.0 | 40 | 2.0027 | 0.4877 |
| 5.7018 | 2.0 | 80 | 0.7764 | 0.7774 |
| 3.1984 | 3.0 | 120 | 0.5612 | 0.8329 |
| 2.6944 | 4.0 | 160 | 0.5205 | 0.8437 |
| 2.4232 | 5.0 | 200 | 0.4874 | 0.8508 |
| 2.2387 | 6.0 | 240 | 0.4712 | 0.8567 |
| 2.0735 | 7.0 | 280 | 0.4715 | 0.8552 |
| 1.9519 | 8.0 | 320 | 0.4472 | 0.8587 |
| 1.8481 | 9.0 | 360 | 0.4504 | 0.8563 |
| 1.6348 | 10.0 | 400 | 0.4512 | 0.8583 |
| 1.6713 | 11.0 | 440 | 0.4621 | 0.8579 |
| 1.5573 | 12.0 | 480 | 0.4380 | 0.8659 |
| 1.5445 | 13.0 | 520 | 0.4347 | 0.8635 |
| 1.4436 | 14.0 | 560 | 0.4385 | 0.8683 |
| 1.388 | 15.0 | 600 | 0.4379 | 0.8679 |
| 1.4061 | 16.0 | 640 | 0.4391 | 0.8647 |
| 1.3256 | 17.0 | 680 | 0.4353 | 0.8671 |
| 1.3634 | 18.0 | 720 | 0.4360 | 0.8671 |
| 1.3661 | 19.0 | 760 | 0.4366 | 0.8679 |
| 1.3606 | 19.5063 | 780 | 0.4367 | 0.8687 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Tokenizers 0.21.0
|
[
"calling",
"clapping",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop"
] |
MiroJ/google_eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google_eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0650
- Accuracy: 0.9894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.1819 | 1.0 | 608 | 0.1604 | 0.9759 |
| 0.6554 | 2.0 | 1216 | 0.0953 | 0.9824 |
| 0.4079 | 2.9959 | 1821 | 0.0650 | 0.9894 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.2.0+cpu
- Datasets 2.0.0
- Tokenizers 0.21.0
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
Augusto777/swinv2-tiny-patch4-window8-256-RD-FIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-RD-FIX
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5014
- Accuracy: 0.7826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.8571 | 3 | 1.1955 | 0.4565 |
| No log | 1.8571 | 6 | 1.1280 | 0.5 |
| No log | 2.8571 | 9 | 1.0565 | 0.4783 |
| 4.8751 | 3.8571 | 12 | 0.9184 | 0.5870 |
| 4.8751 | 4.8571 | 15 | 0.8208 | 0.5870 |
| 4.8751 | 5.8571 | 18 | 0.7310 | 0.6087 |
| 3.6315 | 6.8571 | 21 | 0.6951 | 0.7174 |
| 3.6315 | 7.8571 | 24 | 0.6772 | 0.7174 |
| 3.6315 | 8.8571 | 27 | 0.6626 | 0.7174 |
| 2.8559 | 9.8571 | 30 | 0.5987 | 0.7826 |
| 2.8559 | 10.8571 | 33 | 0.5431 | 0.8261 |
| 2.8559 | 11.8571 | 36 | 0.6193 | 0.6739 |
| 2.8559 | 12.8571 | 39 | 0.6475 | 0.7174 |
| 2.3617 | 13.8571 | 42 | 0.5725 | 0.7174 |
| 2.3617 | 14.8571 | 45 | 0.5794 | 0.7826 |
| 2.3617 | 15.8571 | 48 | 0.5292 | 0.7826 |
| 2.1506 | 16.8571 | 51 | 0.5988 | 0.7391 |
| 2.1506 | 17.8571 | 54 | 0.6548 | 0.7174 |
| 2.1506 | 18.8571 | 57 | 0.5131 | 0.8261 |
| 1.9498 | 19.8571 | 60 | 0.4700 | 0.8478 |
| 1.9498 | 20.8571 | 63 | 0.5254 | 0.8043 |
| 1.9498 | 21.8571 | 66 | 0.5451 | 0.7826 |
| 1.9498 | 22.8571 | 69 | 0.5304 | 0.7609 |
| 1.422 | 23.8571 | 72 | 0.5105 | 0.8043 |
| 1.422 | 24.8571 | 75 | 0.4685 | 0.7826 |
| 1.422 | 25.8571 | 78 | 0.4875 | 0.8261 |
| 1.3044 | 26.8571 | 81 | 0.5492 | 0.7826 |
| 1.3044 | 27.8571 | 84 | 0.5202 | 0.7826 |
| 1.3044 | 28.8571 | 87 | 0.4737 | 0.8261 |
| 1.2464 | 29.8571 | 90 | 0.4398 | 0.8478 |
| 1.2464 | 30.8571 | 93 | 0.4753 | 0.8043 |
| 1.2464 | 31.8571 | 96 | 0.4913 | 0.8043 |
| 1.2464 | 32.8571 | 99 | 0.5262 | 0.7826 |
| 1.1614 | 33.8571 | 102 | 0.5280 | 0.7826 |
| 1.1614 | 34.8571 | 105 | 0.5252 | 0.7609 |
| 1.1614 | 35.8571 | 108 | 0.5127 | 0.7826 |
| 1.045 | 36.8571 | 111 | 0.5061 | 0.7826 |
| 1.045 | 37.8571 | 114 | 0.5012 | 0.7826 |
| 1.045 | 38.8571 | 117 | 0.5025 | 0.7826 |
| 0.9391 | 39.8571 | 120 | 0.5014 | 0.7826 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
hhffxx/my-food-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-food-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2693
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4352 | 1.0 | 125 | 0.4475 | 0.924 |
| 0.2204 | 2.0 | 250 | 0.2962 | 0.939 |
| 0.1395 | 3.0 | 375 | 0.2693 | 0.94 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"beignets",
"bruschetta",
"chicken_wings",
"hamburger",
"pork_chop",
"prime_rib",
"ramen"
] |
Felipecordeiiro/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0383
- Accuracy: 0.9887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4709 | 1.0 | 422 | 0.0610 | 0.9807 |
| 1.5976 | 2.0 | 844 | 0.0406 | 0.9868 |
| 1.1605 | 3.0 | 1266 | 0.0383 | 0.9887 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9"
] |
bmedeiros/vit-msn-small-ultralytics_yolo_cropped_lateral_flow_ivalidation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-ultralytics_yolo_cropped_lateral_flow_ivalidation
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4550
- Accuracy: 0.8373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9032 | 7 | 0.6079 | 0.7263 |
| 0.4464 | 1.9355 | 15 | 0.4912 | 0.8107 |
| 0.3464 | 2.9677 | 23 | 0.6082 | 0.6820 |
| 0.2864 | 4.0 | 31 | 0.5636 | 0.7234 |
| 0.2864 | 4.9032 | 38 | 0.4431 | 0.8121 |
| 0.2617 | 5.9355 | 46 | 0.5066 | 0.7322 |
| 0.2504 | 6.9677 | 54 | 0.4550 | 0.8373 |
| 0.2319 | 8.0 | 62 | 0.7023 | 0.6686 |
| 0.2319 | 8.9032 | 69 | 0.6887 | 0.6346 |
| 0.2338 | 9.9355 | 77 | 0.5075 | 0.8107 |
| 0.2163 | 10.9677 | 85 | 0.6170 | 0.7189 |
| 0.2024 | 12.0 | 93 | 0.7783 | 0.6139 |
| 0.2027 | 12.9032 | 100 | 0.9525 | 0.5059 |
| 0.2027 | 13.9355 | 108 | 0.7353 | 0.6805 |
| 0.2086 | 14.9677 | 116 | 0.7734 | 0.6479 |
| 0.1921 | 16.0 | 124 | 0.9112 | 0.5251 |
| 0.1827 | 16.9032 | 131 | 0.6997 | 0.6997 |
| 0.1827 | 17.9355 | 139 | 0.7572 | 0.6731 |
| 0.1854 | 18.9677 | 147 | 0.6843 | 0.7041 |
| 0.172 | 20.0 | 155 | 0.7237 | 0.6997 |
| 0.1703 | 20.9032 | 162 | 0.7698 | 0.6598 |
| 0.1587 | 21.9355 | 170 | 0.7597 | 0.6420 |
| 0.1587 | 22.9677 | 178 | 0.8517 | 0.5976 |
| 0.1673 | 24.0 | 186 | 0.6763 | 0.6672 |
| 0.1474 | 24.9032 | 193 | 0.8353 | 0.6420 |
| 0.1512 | 25.9355 | 201 | 0.7117 | 0.6953 |
| 0.1512 | 26.9677 | 209 | 0.8383 | 0.6169 |
| 0.1427 | 28.0 | 217 | 1.0619 | 0.5399 |
| 0.1501 | 28.9032 | 224 | 0.7946 | 0.6760 |
| 0.1325 | 29.9355 | 232 | 1.0962 | 0.5222 |
| 0.1314 | 30.9677 | 240 | 0.8824 | 0.6183 |
| 0.1314 | 32.0 | 248 | 0.8409 | 0.6331 |
| 0.1294 | 32.9032 | 255 | 0.8754 | 0.6021 |
| 0.1204 | 33.9355 | 263 | 0.8036 | 0.6716 |
| 0.1218 | 34.9677 | 271 | 0.8477 | 0.6568 |
| 0.1218 | 36.0 | 279 | 0.8739 | 0.6331 |
| 0.1217 | 36.1290 | 280 | 0.8748 | 0.6331 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"invalid",
"valid"
] |
SouthMemphis/vit-military-aircraft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-military-aircraft
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3643
- Accuracy: 0.9027
## Model description
More information needed
## Intended uses & limitations
More information needed
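With more than seventy aircraft classes, top-k predictions are often more useful than a single label; a hedged sketch (standard pipeline API assumed, image path is a placeholder):

```python
from transformers import pipeline

# Return the five most likely aircraft types for a photo.
classifier = pipeline("image-classification", model="SouthMemphis/vit-military-aircraft", top_k=5)
print(classifier("aircraft_photo.jpg"))  # placeholder path
```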
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.5924 | 0.0620 | 100 | 3.5675 | 0.1927 |
| 3.0189 | 0.1239 | 200 | 3.0313 | 0.3047 |
| 2.5541 | 0.1859 | 300 | 2.5575 | 0.3956 |
| 2.114 | 0.2478 | 400 | 2.2332 | 0.4571 |
| 1.9624 | 0.3098 | 500 | 1.9455 | 0.5596 |
| 1.6749 | 0.3717 | 600 | 1.7370 | 0.5787 |
| 1.5852 | 0.4337 | 700 | 1.4947 | 0.6439 |
| 1.1875 | 0.4957 | 800 | 1.4151 | 0.6468 |
| 1.5114 | 0.5576 | 900 | 1.2709 | 0.6820 |
| 1.3122 | 0.6196 | 1000 | 1.1940 | 0.6939 |
| 1.0721 | 0.6815 | 1100 | 1.0757 | 0.7261 |
| 0.8249 | 0.7435 | 1200 | 0.9666 | 0.7576 |
| 0.7944 | 0.8055 | 1300 | 0.9101 | 0.7708 |
| 0.8032 | 0.8674 | 1400 | 0.9011 | 0.7691 |
| 0.7479 | 0.9294 | 1500 | 0.7409 | 0.8067 |
| 0.5997 | 0.9913 | 1600 | 0.7326 | 0.8110 |
| 0.5005 | 1.0533 | 1700 | 0.6769 | 0.8211 |
| 0.4107 | 1.1152 | 1800 | 0.6375 | 0.8374 |
| 0.4596 | 1.1772 | 1900 | 0.6302 | 0.8304 |
| 0.2544 | 1.2392 | 2000 | 0.5805 | 0.8400 |
| 0.2983 | 1.3011 | 2100 | 0.5480 | 0.8501 |
| 0.3214 | 1.3631 | 2200 | 0.5053 | 0.8683 |
| 0.2384 | 1.4250 | 2300 | 0.4929 | 0.8713 |
| 0.2397 | 1.4870 | 2400 | 0.4664 | 0.8742 |
| 0.3448 | 1.5489 | 2500 | 0.4690 | 0.8755 |
| 0.3129 | 1.6109 | 2600 | 0.4351 | 0.8843 |
| 0.1027 | 1.6729 | 2700 | 0.4311 | 0.8846 |
| 0.2086 | 1.7348 | 2800 | 0.4088 | 0.8897 |
| 0.1683 | 1.7968 | 2900 | 0.4133 | 0.8919 |
| 0.2767 | 1.8587 | 3000 | 0.3851 | 0.8964 |
| 0.1582 | 1.9207 | 3100 | 0.3703 | 0.9018 |
| 0.1421 | 1.9827 | 3200 | 0.3643 | 0.9027 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
[
"a10",
"a400m",
"ag600",
"ah64",
"av8b",
"an124",
"an22",
"an225",
"an72",
"b1",
"b2",
"b21",
"b52",
"be200",
"c130",
"c17",
"c2",
"c390",
"c5",
"ch47",
"cl415",
"e2",
"e7",
"ef2000",
"f117",
"f14",
"f15",
"f16",
"f18",
"f22",
"f35",
"f4",
"h6",
"j10",
"j20",
"jas39",
"jf17",
"jh7",
"kc135",
"kf21",
"kj600",
"ka27",
"ka52",
"mq9",
"mi24",
"mi26",
"mi28",
"mig29",
"mig31",
"mirage2000",
"p3",
"rq4",
"rafale",
"sr71",
"su24",
"su25",
"su34",
"su57",
"tb001",
"tb2",
"tornado",
"tu160",
"tu22m",
"tu95",
"u2",
"uh60",
"us2",
"v22",
"vulcan",
"wz7",
"xb70",
"y20",
"yf23",
"z19"
] |
JMMM77/pneumonia_image_classification_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pneumonia_image_classification_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9616
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7397 | 1.0 | 82 | 1.4402 | 0.5625 |
| 0.6347 | 2.0 | 164 | 1.3682 | 0.625 |
| 0.5134 | 2.9693 | 243 | 0.9616 | 0.625 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"normal",
"pneumonia"
] |
Bastik22/pneumonia
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.5226172208786011
f1: 0.8527472527472527
precision: 0.7432950191570882
recall: 1.0
auc: 0.9095966687182644
accuracy: 0.7432950191570882
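For reference, the reported F1 is the harmonic mean of the precision and recall above: 2 × 0.7433 × 1.0 / (0.7433 + 1.0) ≈ 0.8527.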
|
[
"normal",
"pneumonia"
] |
Bastik22/pneumonia1
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.5015742182731628
f1: 0.8527472527472527
precision: 0.7432950191570882
recall: 1.0
auc: 0.9162130712417295
accuracy: 0.7432950191570882
|
[
"normal",
"pneumonia"
] |
a838264168/kvasir-v2-classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
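Pending an official snippet, a minimal sketch (hedged: this assumes the checkpoint is a standard Transformers image-classification model; the image path is a placeholder):

```python
from transformers import pipeline

# Assumed usage: standard image-classification head over the Kvasir v2 classes.
classifier = pipeline("image-classification", model="a838264168/kvasir-v2-classifier")
print(classifier("endoscopy_frame.jpg"))  # placeholder path
```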
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"dyed-lifted-polyps",
"dyed-resection-margins",
"esophagitis",
"normal-cecum",
"normal-pylorus",
"normal-z-line",
"polyps",
"ulcerative-colitis"
] |
Brightmzb/vit-base-beans-demo-v5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0147
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0541 | 1.5385 | 100 | 0.0242 | 1.0 |
| 0.014 | 3.0769 | 200 | 0.0147 | 1.0 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
MiroJ/facebook_eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook_eurosat
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0341
- Accuracy: 0.9894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5235 | 1.0 | 608 | 0.0824 | 0.9727 |
| 0.8113 | 2.0 | 1216 | 0.0579 | 0.9815 |
| 0.3605 | 2.9959 | 1821 | 0.0341 | 0.9894 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.2.0+cpu
- Datasets 2.0.0
- Tokenizers 0.21.0
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
Renegade-888/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2264
- Accuracy: 0.9350
## Model description
More information needed
## Intended uses & limitations
More information needed
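A hedged inference sketch using the processor/model API (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("Renegade-888/vit-base-oxford-iiit-pets")
model = AutoModelForImageClassification.from_pretrained("Renegade-888/vit-base-oxford-iiit-pets")

# Preprocess a single pet photo and take the argmax over the 37 breed logits.
image = Image.open("pet.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```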
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3603 | 1.0 | 370 | 0.3164 | 0.9120 |
| 0.1989 | 2.0 | 740 | 0.2547 | 0.9269 |
| 0.1646 | 3.0 | 1110 | 0.2423 | 0.9242 |
| 0.1393 | 4.0 | 1480 | 0.2316 | 0.9283 |
| 0.1231 | 5.0 | 1850 | 0.2303 | 0.9310 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
desarrolloasesoreslocales/cvt-13-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cvt-13-finetuned-eurosat
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9738
- Accuracy: 0.496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 12.096 | 1.0 | 18 | 2.5771 | 0.258 |
| 10.7352 | 2.0 | 36 | 2.3322 | 0.39 |
| 9.976 | 3.0 | 54 | 2.1288 | 0.456 |
| 9.4697 | 4.0 | 72 | 1.9970 | 0.496 |
| 8.751 | 4.7429 | 85 | 1.9738 | 0.496 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"circulación prohibida",
"circular por carriles de circulación reservada",
"estacionar en carril-taxi o en carril-bus",
"estacionar en el centro de la calzada",
"estacionar en espacio reservado",
"estacionar en espacio reservado para personas de movilidad reducida",
"estacionar en espacio reservado para vehículo eléctrico, sin tener esa condición",
"estacionar en intersección",
"estacionar en lugar prohibido por línea amarilla discontinua",
"estacionar en lugar prohibido por línea amarilla en zig-zag",
"estacionar en un carril bici",
"estacionar en un lugar donde se impide la retirada o vaciado de contenedores",
"estacionar en vado señalizado",
"estacionar en zonas de carga y descarga",
"estacionar o parar donde está prohibida la parada por la señal vertical correspondiente",
"estacionar o parar en doble fila",
"estacionar o parar en paso para peatones",
"estacionar o parar en un lugar prohibido por linea amarilla continua",
"estacionar o parar sobre acera",
"estacionar o parar un vehículo en rebaje en la acera para disminuidos físicos",
"estacionar un vehículo en zonas señalizadas con franjas en el pavimento (isleta)"
] |
desarrolloasesoreslocales/cvt-13-finetuned-AL
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cvt-13-finetuned-AL
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6319
- Accuracy: 0.2619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2095 | 1.0 | 11 | 2.6973 | 0.1905 |
| 2.1537 | 2.0 | 22 | 2.6885 | 0.1667 |
| 2.3281 | 3.0 | 33 | 2.6838 | 0.1429 |
| 2.3004 | 4.0 | 44 | 2.6759 | 0.1667 |
| 2.3531 | 5.0 | 55 | 2.6765 | 0.1905 |
| 2.4045 | 6.0 | 66 | 2.6647 | 0.1905 |
| 2.3842 | 7.0 | 77 | 2.6552 | 0.2143 |
| 2.4049 | 8.0 | 88 | 2.6505 | 0.2381 |
| 2.3972 | 9.0 | 99 | 2.6470 | 0.2143 |
| 2.4238 | 10.0 | 110 | 2.6428 | 0.2143 |
| 2.4359 | 11.0 | 121 | 2.6429 | 0.1905 |
| 2.4042 | 12.0 | 132 | 2.6383 | 0.2619 |
| 2.4737 | 13.0 | 143 | 2.6364 | 0.2381 |
| 2.4003 | 14.0 | 154 | 2.6280 | 0.2143 |
| 2.382 | 15.0 | 165 | 2.6319 | 0.2619 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"circulación prohibida",
"circular por carriles de circulación reservada",
"estacionar en carril-taxi o en carril-bus",
"estacionar en el centro de la calzada",
"estacionar en espacio reservado",
"estacionar en espacio reservado para personas de movilidad reducida",
"estacionar en espacio reservado para vehículo eléctrico, sin tener esa condición",
"estacionar en intersección",
"estacionar en lugar prohibido por línea amarilla discontinua",
"estacionar en lugar prohibido por línea amarilla en zig-zag",
"estacionar en un carril bici",
"estacionar en un lugar donde se impide la retirada o vaciado de contenedores",
"estacionar en vado señalizado",
"estacionar en zonas de carga y descarga",
"estacionar o parar donde está prohibida la parada por la señal vertical correspondiente",
"estacionar o parar en doble fila",
"estacionar o parar en paso para peatones",
"estacionar o parar en un lugar prohibido por linea amarilla continua",
"estacionar o parar sobre acera",
"estacionar o parar un vehículo en rebaje en la acera para disminuidos físicos",
"estacionar un vehículo en zonas señalizadas con franjas en el pavimento (isleta)"
] |
Bastik22/pneumonia3
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.8160488605499268
f1: 0.6666666666666666
precision: 0.5
recall: 1.0
auc: 0.8125
accuracy: 0.5
|
[
"normal",
"pneumonia"
] |
zavora/vit-beans-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-beans-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0681
- Accuracy: 0.9624
## Model description
More information needed
## Intended uses & limitations
More information needed
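While no official usage example is provided, a minimal inference sketch is shown below; it assumes the checkpoint loads with the standard image-classification `pipeline`, and the image path is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint named in this card.
classifier = pipeline("image-classification", model="zavora/vit-beans-classifier")

# "leaf.jpg" is a placeholder path to a bean-leaf photo.
predictions = classifier("leaf.jpg")
print(predictions[0])  # e.g. {"label": "healthy", "score": 0.99}
```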
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 65 | 0.1897 | 0.9248 |
| No log | 2.0 | 130 | 0.0980 | 0.9624 |
| No log | 3.0 | 195 | 0.0736 | 0.9699 |
| No log | 4.0 | 260 | 0.0687 | 0.9624 |
| No log | 5.0 | 325 | 0.0681 | 0.9624 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
Prahaladha/Indian_Food_Classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"burger",
"butter_naan",
"chai",
"chapati",
"chole_bhature",
"dal_makhani",
"dhokla",
"fried_rice",
"idli",
"jalebi",
"kaathi_rolls",
"kadai_paneer",
"kulfi",
"masala_dosa",
"momos",
"paani_puri",
"pakode",
"pav_bhaji",
"pizza",
"samosa"
] |
sunnyday910/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1903
- Accuracy: 0.9405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3761 | 1.0 | 370 | 0.2990 | 0.9283 |
| 0.2121 | 2.0 | 740 | 0.2239 | 0.9391 |
| 0.1462 | 3.0 | 1110 | 0.1997 | 0.9418 |
| 0.1392 | 4.0 | 1480 | 0.1912 | 0.9432 |
| 0.1417 | 5.0 | 1850 | 0.1865 | 0.9445 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
hoanbklucky/swin-tiny-patch4-window7-224-finetuned-noh
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-noh
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4349
- Accuracy: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5322 | 1.0 | 23 | 0.4647 | 0.7619 |
| 0.4535 | 2.0 | 46 | 0.4359 | 0.8194 |
| 0.3854 | 3.0 | 69 | 0.3514 | 0.8539 |
| 0.302 | 4.0 | 92 | 0.4349 | 0.8621 |
| 0.2571 | 5.0 | 115 | 0.5112 | 0.8095 |
| 0.2104 | 6.0 | 138 | 0.4453 | 0.8259 |
| 0.1702 | 7.0 | 161 | 0.5550 | 0.7833 |
| 0.1682 | 8.0 | 184 | 0.5313 | 0.7947 |
| 0.136 | 9.0 | 207 | 0.5452 | 0.8276 |
| 0.1415 | 9.5778 | 220 | 0.5352 | 0.8210 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.21.0
|
[
"normal",
"cancer"
] |
hoanbklucky/vit-base-patch16-224-finetuned-noh
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-noh
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5148
- Accuracy: 0.8210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4728 | 1.0 | 23 | 0.4540 | 0.7750 |
| 0.3998 | 2.0 | 46 | 0.4063 | 0.8128 |
| 0.3388 | 3.0 | 69 | 0.3919 | 0.8358 |
| 0.2665 | 4.0 | 92 | 0.4299 | 0.8539 |
| 0.2112 | 5.0 | 115 | 0.4299 | 0.8227 |
| 0.187 | 6.0 | 138 | 0.4721 | 0.8259 |
| 0.1363 | 7.0 | 161 | 0.4639 | 0.8440 |
| 0.119 | 8.0 | 184 | 0.5293 | 0.7898 |
| 0.1042 | 9.0 | 207 | 0.5141 | 0.8161 |
| 0.1153 | 9.5778 | 220 | 0.5148 | 0.8210 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.21.0
|
[
"normal",
"cancer"
] |
hoanbklucky/convnext-tiny-224-finetuned-noh
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-noh
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3730
- Accuracy: 0.8801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5518 | 1.0 | 23 | 0.5414 | 0.7619 |
| 0.5027 | 2.0 | 46 | 0.4834 | 0.7619 |
| 0.4616 | 3.0 | 69 | 0.4075 | 0.8358 |
| 0.4001 | 4.0 | 92 | 0.3601 | 0.8571 |
| 0.3698 | 5.0 | 115 | 0.3467 | 0.8768 |
| 0.3261 | 6.0 | 138 | 0.3730 | 0.8801 |
| 0.301 | 7.0 | 161 | 0.3728 | 0.8736 |
| 0.3071 | 8.0 | 184 | 0.3959 | 0.8374 |
| 0.264 | 9.0 | 207 | 0.3865 | 0.8604 |
| 0.2769 | 9.5778 | 220 | 0.3873 | 0.8621 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.21.0
|
[
"normal",
"cancer"
] |
hoanbklucky/efficientnet-b0-finetuned-noh
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# efficientnet-b0-finetuned-noh
This model is a fine-tuned version of [google/efficientnet-b0](https://huggingface.co/google/efficientnet-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3784
- Accuracy: 0.8883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6146 | 1.0 | 23 | 0.6139 | 0.7291 |
| 0.5116 | 2.0 | 46 | 0.4704 | 0.8309 |
| 0.4655 | 3.0 | 69 | 0.4233 | 0.8588 |
| 0.4331 | 4.0 | 92 | 0.4119 | 0.8604 |
| 0.4281 | 5.0 | 115 | 0.3897 | 0.8752 |
| 0.4001 | 6.0 | 138 | 0.4012 | 0.8719 |
| 0.3721 | 7.0 | 161 | 0.3861 | 0.8818 |
| 0.3979 | 8.0 | 184 | 0.3784 | 0.8883 |
| 0.3376 | 9.0 | 207 | 0.4171 | 0.8604 |
| 0.3984 | 9.5778 | 220 | 0.4139 | 0.8621 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.21.0
|
[
"normal",
"cancer"
] |
Kankanaghosh/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0099
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0865 | 1.5385 | 100 | 0.1435 | 0.9624 |
| 0.0347 | 3.0769 | 200 | 0.0099 | 1.0 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
vikas117/finetuned-ai-real
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ai-real
This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2686
- Accuracy: 0.9073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5454 | 0.5682 | 25 | 0.2972 | 0.8911 |
| 0.4058 | 1.1364 | 50 | 0.6324 | 0.7581 |
| 0.236 | 1.7045 | 75 | 0.2686 | 0.9073 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"ai",
"real"
] |
bikekowal/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.5198
- eval_accuracy: 0.0392
- eval_runtime: 5.7122
- eval_samples_per_second: 129.373
- eval_steps_per_second: 8.228
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
skshmjn/Pokemon-classifier-gen9-1025
|
# Model Card for Pokemon Classifier Gen9
## Model Overview
This is a fine-tuned ViT (Vision Transformer) model for Pokémon image classification. It is trained to classify images of all 1025 Pokémon species up to Generation 9.
## Intended Use
This model is designed for image classification tasks, specifically for identifying Pokémon characters. It can be used for:
- Pokémon-themed apps
- Educational projects
- Pokémon identification in images
**Note**: The model is not designed for general-purpose image classification.
## How to Use
Here's how you can load and use the model with the Hugging Face `transformers` library:
```python
from transformers import ViTForImageClassification, ViTImageProcessor
from PIL import Image
import torch
# Define the device
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load the model and image processor
model_id = "skshmjn/Pokemon-classifier-gen9-1025"
model = ViTForImageClassification.from_pretrained(model_id).to(device)
image_processor = ViTImageProcessor.from_pretrained(model_id)
# Load and process an image
img = Image.open('test.jpg').convert("RGB")
inputs = image_processor(images=img, return_tensors='pt').to(device)
# Make predictions
outputs = model(**inputs)
predicted_id = outputs.logits.argmax(-1).item()
predicted_pokemon = model.config.id2label[predicted_id]
# Print predicted class
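# (assumes the label ids follow National Pokédex order, i.e. bulbasaur = 0, so Pokédex number = id + 1)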
print(f"Predicted Pokémon Pokédex number: {predicted_id+1}")
print(f"Predicted Pokémon: {predicted_pokemon}")
|
[
"bulbasaur",
"ivysaur",
"venusaur",
"charmander",
"charmeleon",
"charizard",
"squirtle",
"wartortle",
"blastoise",
"caterpie",
"metapod",
"butterfree",
"weedle",
"kakuna",
"beedrill",
"pidgey",
"pidgeotto",
"pidgeot",
"rattata",
"raticate",
"spearow",
"fearow",
"ekans",
"arbok",
"pikachu",
"raichu",
"sandshrew",
"sandslash",
"nidoran♀",
"nidorina",
"nidoqueen",
"nidoran♂",
"nidorino",
"nidoking",
"clefairy",
"clefable",
"vulpix",
"ninetales",
"jigglypuff",
"wigglytuff",
"zubat",
"golbat",
"oddish",
"gloom",
"vileplume",
"paras",
"parasect",
"venonat",
"venomoth",
"diglett",
"dugtrio",
"meowth",
"persian",
"psyduck",
"golduck",
"mankey",
"primeape",
"growlithe",
"arcanine",
"poliwag",
"poliwhirl",
"poliwrath",
"abra",
"kadabra",
"alakazam",
"machop",
"machoke",
"machamp",
"bellsprout",
"weepinbell",
"victreebel",
"tentacool",
"tentacruel",
"geodude",
"graveler",
"golem",
"ponyta",
"rapidash",
"slowpoke",
"slowbro",
"magnemite",
"magneton",
"farfetch'd",
"doduo",
"dodrio",
"seel",
"dewgong",
"grimer",
"muk",
"shellder",
"cloyster",
"gastly",
"haunter",
"gengar",
"onix",
"drowzee",
"hypno",
"krabby",
"kingler",
"voltorb",
"electrode",
"exeggcute",
"exeggutor",
"cubone",
"marowak",
"hitmonlee",
"hitmonchan",
"lickitung",
"koffing",
"weezing",
"rhyhorn",
"rhydon",
"chansey",
"tangela",
"kangaskhan",
"horsea",
"seadra",
"goldeen",
"seaking",
"staryu",
"starmie",
"mr. mime",
"scyther",
"jynx",
"electabuzz",
"magmar",
"pinsir",
"tauros",
"magikarp",
"gyarados",
"lapras",
"ditto",
"eevee",
"vaporeon",
"jolteon",
"flareon",
"porygon",
"omanyte",
"omastar",
"kabuto",
"kabutops",
"aerodactyl",
"snorlax",
"articuno",
"zapdos",
"moltres",
"dratini",
"dragonair",
"dragonite",
"mewtwo",
"mew",
"chikorita",
"bayleef",
"meganium",
"cyndaquil",
"quilava",
"typhlosion",
"totodile",
"croconaw",
"feraligatr",
"sentret",
"furret",
"hoothoot",
"noctowl",
"ledyba",
"ledian",
"spinarak",
"ariados",
"crobat",
"chinchou",
"lanturn",
"pichu",
"cleffa",
"igglybuff",
"togepi",
"togetic",
"natu",
"xatu",
"mareep",
"flaaffy",
"ampharos",
"bellossom",
"marill",
"azumarill",
"sudowoodo",
"politoed",
"hoppip",
"skiploom",
"jumpluff",
"aipom",
"sunkern",
"sunflora",
"yanma",
"wooper",
"quagsire",
"espeon",
"umbreon",
"murkrow",
"slowking",
"misdreavus",
"unown",
"wobbuffet",
"girafarig",
"pineco",
"forretress",
"dunsparce",
"gligar",
"steelix",
"snubbull",
"granbull",
"qwilfish",
"scizor",
"shuckle",
"heracross",
"sneasel",
"teddiursa",
"ursaring",
"slugma",
"magcargo",
"swinub",
"piloswine",
"corsola",
"remoraid",
"octillery",
"delibird",
"mantine",
"skarmory",
"houndour",
"houndoom",
"kingdra",
"phanpy",
"donphan",
"porygon2",
"stantler",
"smeargle",
"tyrogue",
"hitmontop",
"smoochum",
"elekid",
"magby",
"miltank",
"blissey",
"raikou",
"entei",
"suicune",
"larvitar",
"pupitar",
"tyranitar",
"lugia",
"ho-oh",
"celebi",
"treecko",
"grovyle",
"sceptile",
"torchic",
"combusken",
"blaziken",
"mudkip",
"marshtomp",
"swampert",
"poochyena",
"mightyena",
"zigzagoon",
"linoone",
"wurmple",
"silcoon",
"beautifly",
"cascoon",
"dustox",
"lotad",
"lombre",
"ludicolo",
"seedot",
"nuzleaf",
"shiftry",
"taillow",
"swellow",
"wingull",
"pelipper",
"ralts",
"kirlia",
"gardevoir",
"surskit",
"masquerain",
"shroomish",
"breloom",
"slakoth",
"vigoroth",
"slaking",
"nincada",
"ninjask",
"shedinja",
"whismur",
"loudred",
"exploud",
"makuhita",
"hariyama",
"azurill",
"nosepass",
"skitty",
"delcatty",
"sableye",
"mawile",
"aron",
"lairon",
"aggron",
"meditite",
"medicham",
"electrike",
"manectric",
"plusle",
"minun",
"volbeat",
"illumise",
"roselia",
"gulpin",
"swalot",
"carvanha",
"sharpedo",
"wailmer",
"wailord",
"numel",
"camerupt",
"torkoal",
"spoink",
"grumpig",
"spinda",
"trapinch",
"vibrava",
"flygon",
"cacnea",
"cacturne",
"swablu",
"altaria",
"zangoose",
"seviper",
"lunatone",
"solrock",
"barboach",
"whiscash",
"corphish",
"crawdaunt",
"baltoy",
"claydol",
"lileep",
"cradily",
"anorith",
"armaldo",
"feebas",
"milotic",
"castform",
"kecleon",
"shuppet",
"banette",
"duskull",
"dusclops",
"tropius",
"chimecho",
"absol",
"wynaut",
"snorunt",
"glalie",
"spheal",
"sealeo",
"walrein",
"clamperl",
"huntail",
"gorebyss",
"relicanth",
"luvdisc",
"bagon",
"shelgon",
"salamence",
"beldum",
"metang",
"metagross",
"regirock",
"regice",
"registeel",
"latias",
"latios",
"kyogre",
"groudon",
"rayquaza",
"jirachi",
"deoxys",
"turtwig",
"grotle",
"torterra",
"chimchar",
"monferno",
"infernape",
"piplup",
"prinplup",
"empoleon",
"starly",
"staravia",
"staraptor",
"bidoof",
"bibarel",
"kricketot",
"kricketune",
"shinx",
"luxio",
"luxray",
"budew",
"roserade",
"cranidos",
"rampardos",
"shieldon",
"bastiodon",
"burmy",
"wormadam",
"mothim",
"combee",
"vespiquen",
"pachirisu",
"buizel",
"floatzel",
"cherubi",
"cherrim",
"shellos",
"gastrodon",
"ambipom",
"drifloon",
"drifblim",
"buneary",
"lopunny",
"mismagius",
"honchkrow",
"glameow",
"purugly",
"chingling",
"stunky",
"skuntank",
"bronzor",
"bronzong",
"bonsly",
"mime jr.",
"happiny",
"chatot",
"spiritomb",
"gible",
"gabite",
"garchomp",
"munchlax",
"riolu",
"lucario",
"hippopotas",
"hippowdon",
"skorupi",
"drapion",
"croagunk",
"toxicroak",
"carnivine",
"finneon",
"lumineon",
"mantyke",
"snover",
"abomasnow",
"weavile",
"magnezone",
"lickilicky",
"rhyperior",
"tangrowth",
"electivire",
"magmortar",
"togekiss",
"yanmega",
"leafeon",
"glaceon",
"gliscor",
"mamoswine",
"porygon-z",
"gallade",
"probopass",
"dusknoir",
"froslass",
"rotom",
"uxie",
"mesprit",
"azelf",
"dialga",
"palkia",
"heatran",
"regigigas",
"giratina",
"cresselia",
"phione",
"manaphy",
"darkrai",
"shaymin",
"arceus",
"victini",
"snivy",
"servine",
"serperior",
"tepig",
"pignite",
"emboar",
"oshawott",
"dewott",
"samurott",
"patrat",
"watchog",
"lillipup",
"herdier",
"stoutland",
"purrloin",
"liepard",
"pansage",
"simisage",
"pansear",
"simisear",
"panpour",
"simipour",
"munna",
"musharna",
"pidove",
"tranquill",
"unfezant",
"blitzle",
"zebstrika",
"roggenrola",
"boldore",
"gigalith",
"woobat",
"swoobat",
"drilbur",
"excadrill",
"audino",
"timburr",
"gurdurr",
"conkeldurr",
"tympole",
"palpitoad",
"seismitoad",
"throh",
"sawk",
"sewaddle",
"swadloon",
"leavanny",
"venipede",
"whirlipede",
"scolipede",
"cottonee",
"whimsicott",
"petilil",
"lilligant",
"basculin",
"sandile",
"krokorok",
"krookodile",
"darumaka",
"darmanitan",
"maractus",
"dwebble",
"crustle",
"scraggy",
"scrafty",
"sigilyph",
"yamask",
"cofagrigus",
"tirtouga",
"carracosta",
"archen",
"archeops",
"trubbish",
"garbodor",
"zorua",
"zoroark",
"minccino",
"cinccino",
"gothita",
"gothorita",
"gothitelle",
"solosis",
"duosion",
"reuniclus",
"ducklett",
"swanna",
"vanillite",
"vanillish",
"vanilluxe",
"deerling",
"sawsbuck",
"emolga",
"karrablast",
"escavalier",
"foongus",
"amoonguss",
"frillish",
"jellicent",
"alomomola",
"joltik",
"galvantula",
"ferroseed",
"ferrothorn",
"klink",
"klang",
"klinklang",
"tynamo",
"eelektrik",
"eelektross",
"elgyem",
"beheeyem",
"litwick",
"lampent",
"chandelure",
"axew",
"fraxure",
"haxorus",
"cubchoo",
"beartic",
"cryogonal",
"shelmet",
"accelgor",
"stunfisk",
"mienfoo",
"mienshao",
"druddigon",
"golett",
"golurk",
"pawniard",
"bisharp",
"bouffalant",
"rufflet",
"braviary",
"vullaby",
"mandibuzz",
"heatmor",
"durant",
"deino",
"zweilous",
"hydreigon",
"larvesta",
"volcarona",
"cobalion",
"terrakion",
"virizion",
"tornadus",
"thundurus",
"reshiram",
"zekrom",
"landorus",
"kyurem",
"keldeo",
"meloetta",
"genesect",
"chespin",
"quilladin",
"chesnaught",
"fennekin",
"braixen",
"delphox",
"froakie",
"frogadier",
"greninja",
"bunnelby",
"diggersby",
"fletchling",
"fletchinder",
"talonflame",
"scatterbug",
"spewpa",
"vivillon",
"litleo",
"pyroar",
"flabébé",
"floette",
"florges",
"skiddo",
"gogoat",
"pancham",
"pangoro",
"furfrou",
"espurr",
"meowstic",
"honedge",
"doublade",
"aegislash",
"spritzee",
"aromatisse",
"swirlix",
"slurpuff",
"inkay",
"malamar",
"binacle",
"barbaracle",
"skrelp",
"dragalge",
"clauncher",
"clawitzer",
"helioptile",
"heliolisk",
"tyrunt",
"tyrantrum",
"amaura",
"aurorus",
"sylveon",
"hawlucha",
"dedenne",
"carbink",
"goomy",
"sliggoo",
"goodra",
"klefki",
"phantump",
"trevenant",
"pumpkaboo",
"gourgeist",
"bergmite",
"avalugg",
"noibat",
"noivern",
"xerneas",
"yveltal",
"zygarde",
"diancie",
"hoopa",
"volcanion",
"rowlet",
"dartrix",
"decidueye",
"litten",
"torracat",
"incineroar",
"popplio",
"brionne",
"primarina",
"pikipek",
"trumbeak",
"toucannon",
"yungoos",
"gumshoos",
"grubbin",
"charjabug",
"vikavolt",
"crabrawler",
"crabominable",
"oricorio",
"cutiefly",
"ribombee",
"rockruff",
"lycanroc",
"wishiwashi",
"mareanie",
"toxapex",
"mudbray",
"mudsdale",
"dewpider",
"araquanid",
"fomantis",
"lurantis",
"morelull",
"shiinotic",
"salandit",
"salazzle",
"stufful",
"bewear",
"bounsweet",
"steenee",
"tsareena",
"comfey",
"oranguru",
"passimian",
"wimpod",
"golisopod",
"sandygast",
"palossand",
"pyukumuku",
"type: null",
"silvally",
"minior",
"komala",
"turtonator",
"togedemaru",
"mimikyu",
"bruxish",
"drampa",
"dhelmise",
"jangmo-o",
"hakamo-o",
"kommo-o",
"tapu koko",
"tapu lele",
"tapu bulu",
"tapu fini",
"cosmog",
"cosmoem",
"solgaleo",
"lunala",
"nihilego",
"buzzwole",
"pheromosa",
"xurkitree",
"celesteela",
"kartana",
"guzzlord",
"necrozma",
"magearna",
"marshadow",
"poipole",
"naganadel",
"stakataka",
"blacephalon",
"zeraora",
"meltan",
"melmetal",
"grookey",
"thwackey",
"rillaboom",
"scorbunny",
"raboot",
"cinderace",
"sobble",
"drizzile",
"inteleon",
"skwovet",
"greedent",
"rookidee",
"corvisquire",
"corviknight",
"blipbug",
"dottler",
"orbeetle",
"nickit",
"thievul",
"gossifleur",
"eldegoss",
"wooloo",
"dubwool",
"chewtle",
"drednaw",
"yamper",
"boltund",
"rolycoly",
"carkol",
"coalossal",
"applin",
"flapple",
"appletun",
"silicobra",
"sandaconda",
"cramorant",
"arrokuda",
"barraskewda",
"toxel",
"toxtricity",
"sizzlipede",
"centiskorch",
"clobbopus",
"grapploct",
"sinistea",
"polteageist",
"hatenna",
"hattrem",
"hatterene",
"impidimp",
"morgrem",
"grimmsnarl",
"obstagoon",
"perrserker",
"cursola",
"sirfetch'd",
"mr. rime",
"runerigus",
"milcery",
"alcremie",
"falinks",
"pincurchin",
"snom",
"frosmoth",
"stonjourner",
"eiscue",
"indeedee",
"morpeko",
"cufant",
"copperajah",
"dracozolt",
"arctozolt",
"dracovish",
"arctovish",
"duraludon",
"dreepy",
"drakloak",
"dragapult",
"zacian",
"zamazenta",
"eternatus",
"kubfu",
"urshifu",
"zarude",
"regieleki",
"regidrago",
"glastrier",
"spectrier",
"calyrex",
"wyrdeer",
"kleavor",
"ursaluna",
"basculegion",
"sneasler",
"overqwil",
"enamorus",
"sprigatito",
"floragato",
"meowscarada",
"fuecoco",
"crocalor",
"skeledirge",
"quaxly",
"quaxwell",
"quaquaval",
"lechonk",
"oinkologne",
"tarountula",
"spidops",
"nymble",
"lokix",
"pawmi",
"pawmo",
"pawmot",
"tandemaus",
"maushold",
"fidough",
"dachsbun",
"smoliv",
"dolliv",
"arboliva",
"squawkabilly",
"nacli",
"naclstack",
"garganacl",
"charcadet",
"armarouge",
"ceruledge",
"tadbulb",
"bellibolt",
"wattrel",
"kilowattrel",
"maschiff",
"mabosstiff",
"shroodle",
"grafaiai",
"bramblin",
"brambleghast",
"toedscool",
"toedscruel",
"klawf",
"capsakid",
"scovillain",
"rellor",
"rabsca",
"flittle",
"espathra",
"tinkatink",
"tinkatuff",
"tinkaton",
"wiglett",
"wugtrio",
"bombirdier",
"finizen",
"palafin",
"varoom",
"revavroom",
"cyclizar",
"orthworm",
"glimmet",
"glimmora",
"greavard",
"houndstone",
"flamigo",
"cetoddle",
"cetitan",
"veluza",
"dondozo",
"tatsugiri",
"annihilape",
"clodsire",
"farigiraf",
"dudunsparce",
"kingambit",
"great tusk",
"scream tail",
"brute bonnet",
"flutter mane",
"slither wing",
"sandy shocks",
"iron treads",
"iron bundle",
"iron hands",
"iron jugulis",
"iron moth",
"iron thorns",
"frigibax",
"arctibax",
"baxcalibur",
"gimmighoul",
"gholdengo",
"wo-chien",
"chien-pao",
"ting-lu",
"chi-yu",
"roaring moon",
"iron valiant",
"koraidon",
"miraidon",
"walking wake",
"iron leaves",
"dipplin",
"poltchageist",
"sinistcha",
"okidogi",
"munkidori",
"fezandipiti",
"ogerpon",
"archaludon",
"hydrapple",
"gouging fire",
"raging bolt",
"iron boulder",
"iron crown",
"terapagos",
"pecharunt"
] |
janjibDEV/vit-plantnet300k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-plantnet300k
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the mikehemberger/plantnet300K dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8831
- Accuracy: 0.8046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.2973 | 0.04 | 100 | 2.0799 | 0.6139 |
| 3.413 | 0.08 | 200 | 1.4738 | 0.7076 |
| 2.6718 | 0.12 | 300 | 1.2331 | 0.7479 |
| 2.308 | 0.16 | 400 | 1.0966 | 0.7701 |
| 2.2116 | 0.2 | 500 | 1.0115 | 0.7834 |
| 1.9719 | 0.24 | 600 | 0.9609 | 0.7910 |
| 1.8785 | 0.28 | 700 | 0.9247 | 0.798 |
| 1.7549 | 0.32 | 800 | 0.9014 | 0.8002 |
| 1.8103 | 0.36 | 900 | 0.8874 | 0.8031 |
| 1.7776 | 0.4 | 1000 | 0.8831 | 0.8046 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"60",
"41",
"40",
"45",
"164",
"172",
"88",
"2",
"181",
"162",
"35",
"46",
"203",
"124",
"84",
"153",
"31",
"3",
"11",
"4",
"189",
"8",
"201",
"19",
"122",
"83",
"25",
"0",
"17",
"196",
"173",
"30",
"66",
"135",
"104",
"112",
"32",
"193",
"68",
"14",
"152",
"141",
"198",
"171",
"170",
"16",
"42",
"187",
"125",
"200",
"26",
"123",
"197",
"61",
"98",
"165",
"54",
"131",
"137",
"47",
"74",
"87",
"27",
"144",
"100",
"36",
"167",
"199",
"7",
"175",
"9",
"166",
"15",
"138",
"168",
"92",
"28",
"97",
"140",
"149",
"174",
"73",
"154",
"55",
"157",
"191",
"139",
"5",
"49",
"176",
"184",
"148",
"51",
"188",
"103",
"132",
"118",
"108",
"113",
"126",
"202",
"146",
"186",
"1",
"38",
"151",
"6",
"133",
"52",
"72",
"99",
"147",
"20",
"34",
"56",
"109",
"102",
"114",
"29",
"39",
"89",
"10",
"65",
"121",
"75",
"80",
"78",
"59",
"53",
"79",
"143",
"136",
"82",
"185",
"128",
"120",
"81",
"158",
"94",
"48",
"76",
"13",
"64",
"43",
"182",
"119",
"50",
"192",
"101",
"12",
"85",
"159",
"190",
"24",
"111",
"18",
"150",
"155",
"90",
"77",
"44",
"91",
"22",
"179",
"161",
"62",
"63",
"160",
"115",
"70",
"110",
"180",
"96",
"130",
"142",
"23",
"67",
"58",
"169",
"177",
"163",
"116",
"71",
"178",
"129",
"106",
"105",
"37",
"107",
"134",
"195",
"21",
"156",
"57",
"127",
"183",
"69",
"86",
"95",
"93",
"194",
"145",
"117",
"33"
] |
Prahaladha/pose_classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"calling",
"clapping",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop"
] |
SaketR1/road-conditions
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# road-conditions
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1556
- Accuracy: 0.9518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 187 | 0.1757 | 0.9518 |
| No log | 2.0 | 374 | 0.1682 | 0.9578 |
| 0.1014 | 3.0 | 561 | 0.1556 | 0.9518 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Tokenizers 0.21.0
|
[
"good",
"poor",
"satisfactory",
"very_poor"
] |
vikas117/finetuned-ai-real-swin
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ai-real-swin
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2812
- Accuracy: 0.8871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3707 | 0.5682 | 25 | 0.2812 | 0.8871 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"ai",
"real"
] |
hannalj/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7072
- Accuracy: 0.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 1.3674 | 0.91 |
| 7.0705 | 2.0 | 16 | 0.7072 | 0.98 |
| 3.4816 | 2.6897 | 21 | 0.5873 | 0.98 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"architecture",
"beach",
"bus",
"dinosaur",
"elephant",
"flower",
"food",
"horse",
"mountain",
"tribe"
] |
Thao2202/vit-Facial-Expression-Recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3658
- Accuracy: 0.8753
- F1: 0.8737
- Precision: 0.8749
- Recall: 0.8753
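These are the standard classification metrics; a hedged sketch of a `compute_metrics` function that would report them is shown below (weighted averaging and scikit-learn are assumptions — the card does not state how the metrics were computed).
```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Return accuracy, F1, precision and recall for a Trainer evaluation step."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1, "precision": precision, "recall": recall}
```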
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 4.5618 | 0.2164 | 100 | 0.3710 | 0.8762 | 0.8746 | 0.8752 | 0.8762 |
| 4.6091 | 0.4328 | 200 | 0.3677 | 0.8761 | 0.8747 | 0.8762 | 0.8761 |
| 4.5423 | 0.6492 | 300 | 0.3695 | 0.8748 | 0.8730 | 0.8745 | 0.8748 |
| 4.6307 | 0.8656 | 400 | 0.3745 | 0.8711 | 0.8692 | 0.8730 | 0.8711 |
| 4.3953 | 1.0801 | 500 | 0.3745 | 0.8727 | 0.8711 | 0.8724 | 0.8727 |
| 4.341 | 1.2965 | 600 | 0.3803 | 0.8688 | 0.8674 | 0.8688 | 0.8688 |
| 4.5471 | 1.5128 | 700 | 0.3841 | 0.8713 | 0.8699 | 0.8710 | 0.8713 |
| 4.522 | 1.7292 | 800 | 0.3836 | 0.8679 | 0.8662 | 0.8678 | 0.8679 |
| 4.5596 | 1.9456 | 900 | 0.3885 | 0.8672 | 0.8649 | 0.8678 | 0.8672 |
| 4.1491 | 2.1601 | 1000 | 0.3849 | 0.8691 | 0.8677 | 0.8689 | 0.8691 |
| 4.1037 | 2.3765 | 1100 | 0.3906 | 0.8667 | 0.8647 | 0.8669 | 0.8667 |
| 4.0033 | 2.5929 | 1200 | 0.3784 | 0.8704 | 0.8687 | 0.8699 | 0.8704 |
| 3.9759 | 2.8093 | 1300 | 0.3677 | 0.8752 | 0.8737 | 0.8747 | 0.8752 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"angry",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
jcguerra10/vit-platzi-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-platzi-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1275 | 3.8462 | 500 | 0.0068 | 1.0 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
lohasingh/vit-Facial-Expression-Recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3806
- eval_accuracy: 0.8748
- eval_runtime: 375.2927
- eval_samples_per_second: 78.792
- eval_steps_per_second: 2.465
- epoch: 0.0433
- step: 20
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"angry",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
victorwkey/vit_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0137
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
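The training data is not documented, but the label list below matches the beans leaf-disease dataset; assuming that source, a hedged sketch of preparing images for this checkpoint:
```python
from datasets import load_dataset
from transformers import AutoImageProcessor

# Dataset assumed from the label set; the card itself does not name it.
dataset = load_dataset("beans", split="train")
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

def preprocess(batch):
    # Convert PIL images to pixel_values tensors and keep the class labels.
    inputs = processor(images=batch["image"], return_tensors="pt")
    inputs["labels"] = batch["labels"]
    return inputs

dataset.set_transform(preprocess)
```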
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1295 | 3.8462 | 500 | 0.0137 | 0.9925 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
JacobChao/vit-xray-pneumonia-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-xray-pneumonia-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0867
- Accuracy: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
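Usage is not documented; as a research illustration only (not a diagnostic tool), a sketch of single-image inference with the lower-level API, assuming the repo id from the card title and a placeholder image path:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "JacobChao/vit-xray-pneumonia-classification"  # assumed from the card title
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("chest_xray.png").convert("RGB")   # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
label = model.config.id2label[int(probs.argmax())]
print(label, round(float(probs.max()), 3))
```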
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.0067 | 0.9882 | 63 | 0.2101 | 0.9313 |
| 0.8054 | 1.9882 | 126 | 0.1542 | 0.9519 |
| 0.7482 | 2.9882 | 189 | 0.1328 | 0.9451 |
| 0.6 | 3.9882 | 252 | 0.1121 | 0.9588 |
| 0.5436 | 4.9882 | 315 | 0.1295 | 0.9494 |
| 0.4978 | 5.9882 | 378 | 0.1167 | 0.9605 |
| 0.4683 | 6.9882 | 441 | 0.1033 | 0.9622 |
| 0.4701 | 7.9882 | 504 | 0.1176 | 0.9579 |
| 0.3527 | 8.9882 | 567 | 0.1119 | 0.9571 |
| 0.3545 | 9.9882 | 630 | 0.0990 | 0.9639 |
| 0.3264 | 10.9882 | 693 | 0.0838 | 0.9717 |
| 0.3305 | 11.9882 | 756 | 0.0733 | 0.9734 |
| 0.2702 | 12.9882 | 819 | 0.0834 | 0.9717 |
| 0.2764 | 13.9882 | 882 | 0.0763 | 0.9734 |
| 0.286 | 14.9882 | 945 | 0.0867 | 0.9700 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"normal",
"pneumonia"
] |
tinutmap/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6238
- Accuracy: 0.903
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
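The dataset is not named in the card, but the label list below matches Food-101; assuming that source, a minimal sketch of loading a subset with 🤗 Datasets:
```python
from datasets import load_dataset

# Assumes the public food101 dataset; the card itself does not identify the data.
food = load_dataset("food101", split="train[:5000]")
food = food.train_test_split(test_size=0.2, seed=42)
print(food["train"].features["label"].names[:5])
```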
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 10.9945 | 1.0 | 63 | 2.5462 | 0.829 |
| 7.5619 | 2.0 | 126 | 1.8143 | 0.883 |
| 6.5257 | 2.96 | 186 | 1.6238 | 0.903 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1.post306
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
desarrolloasesoreslocales/cvt-13-normal
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cvt-13-normal
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0613
- Accuracy: 0.7790
## Model description
More information needed
## Intended uses & limitations
More information needed
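Intended use is not documented; below is a small sketch for inspecting the violation classes stored in the checkpoint configuration, assuming the repo id from the card title:
```python
from transformers import AutoConfig

# Repo id assumed from the card title.
config = AutoConfig.from_pretrained("desarrolloasesoreslocales/cvt-13-normal")
for idx in sorted(config.id2label):
    print(idx, config.id2label[idx])
```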
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 142
- eval_batch_size: 142
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 568
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.8652 | 0.7946 |
| 2.3799 | 2.0 | 14 | 0.8683 | 0.7912 |
| 2.4491 | 3.0 | 21 | 0.8807 | 0.7825 |
| 2.4491 | 4.0 | 28 | 0.9120 | 0.7851 |
| 2.3011 | 5.0 | 35 | 0.9865 | 0.7565 |
| 2.5444 | 6.0 | 42 | 0.9863 | 0.7643 |
| 2.5444 | 7.0 | 49 | 1.1580 | 0.7513 |
| 2.4127 | 8.0 | 56 | 1.1091 | 0.7383 |
| 2.8757 | 9.0 | 63 | 1.0644 | 0.7496 |
| 2.5231 | 10.0 | 70 | 1.0888 | 0.7400 |
| 2.5231 | 11.0 | 77 | 1.0668 | 0.7548 |
| 2.7538 | 12.0 | 84 | 1.0946 | 0.7435 |
| 2.7032 | 13.0 | 91 | 1.0676 | 0.7608 |
| 2.7032 | 14.0 | 98 | 1.0409 | 0.7426 |
| 2.4581 | 15.0 | 105 | 1.0679 | 0.7548 |
| 2.7023 | 16.0 | 112 | 1.0129 | 0.7487 |
| 2.7023 | 17.0 | 119 | 1.1501 | 0.7366 |
| 2.5456 | 18.0 | 126 | 1.0452 | 0.7426 |
| 2.7061 | 19.0 | 133 | 1.0034 | 0.7565 |
| 2.3491 | 20.0 | 140 | 1.0389 | 0.7574 |
| 2.3491 | 21.0 | 147 | 0.9999 | 0.7782 |
| 2.4926 | 22.0 | 154 | 1.0131 | 0.7652 |
| 2.5111 | 23.0 | 161 | 1.0940 | 0.7340 |
| 2.5111 | 24.0 | 168 | 1.0786 | 0.7582 |
| 2.3443 | 25.0 | 175 | 1.0768 | 0.7617 |
| 2.5738 | 26.0 | 182 | 0.9781 | 0.7782 |
| 2.5738 | 27.0 | 189 | 0.9955 | 0.7574 |
| 2.3528 | 28.0 | 196 | 1.0117 | 0.7669 |
| 2.599 | 29.0 | 203 | 1.0806 | 0.7660 |
| 2.3279 | 30.0 | 210 | 1.0101 | 0.7738 |
| 2.3279 | 31.0 | 217 | 1.0981 | 0.7617 |
| 2.5649 | 32.0 | 224 | 1.0185 | 0.7782 |
| 2.5432 | 33.0 | 231 | 1.1070 | 0.7591 |
| 2.5432 | 34.0 | 238 | 1.0705 | 0.7626 |
| 2.3521 | 35.0 | 245 | 1.0749 | 0.7574 |
| 2.5948 | 36.0 | 252 | 1.0508 | 0.7626 |
| 2.5948 | 37.0 | 259 | 1.0374 | 0.7712 |
| 2.3305 | 38.0 | 266 | 1.0249 | 0.7643 |
| 2.4833 | 39.0 | 273 | 1.0345 | 0.7712 |
| 2.1504 | 40.0 | 280 | 1.0252 | 0.7617 |
| 2.1504 | 41.0 | 287 | 1.0361 | 0.7574 |
| 2.4083 | 42.0 | 294 | 0.9939 | 0.7678 |
| 2.37 | 43.0 | 301 | 1.0186 | 0.7695 |
| 2.37 | 44.0 | 308 | 1.0861 | 0.7643 |
| 2.2043 | 45.0 | 315 | 1.0182 | 0.7643 |
| 2.3554 | 46.0 | 322 | 1.0584 | 0.7539 |
| 2.3554 | 47.0 | 329 | 1.0541 | 0.7617 |
| 2.1541 | 48.0 | 336 | 1.0967 | 0.7686 |
| 2.3739 | 49.0 | 343 | 1.1266 | 0.7721 |
| 2.1028 | 50.0 | 350 | 1.1116 | 0.7652 |
| 2.1028 | 51.0 | 357 | 1.0804 | 0.7643 |
| 2.3381 | 52.0 | 364 | 1.1142 | 0.7556 |
| 2.2902 | 53.0 | 371 | 1.1135 | 0.7652 |
| 2.2902 | 54.0 | 378 | 1.1024 | 0.7461 |
| 2.2452 | 55.0 | 385 | 1.0722 | 0.7626 |
| 2.4121 | 56.0 | 392 | 1.1089 | 0.7704 |
| 2.4121 | 57.0 | 399 | 1.0923 | 0.7548 |
| 2.2067 | 58.0 | 406 | 1.0811 | 0.7591 |
| 2.3894 | 59.0 | 413 | 1.1097 | 0.7634 |
| 2.2188 | 60.0 | 420 | 1.0988 | 0.7643 |
| 2.2188 | 61.0 | 427 | 1.0558 | 0.7686 |
| 2.2859 | 62.0 | 434 | 1.0569 | 0.7695 |
| 2.2293 | 63.0 | 441 | 1.1053 | 0.7643 |
| 2.2293 | 64.0 | 448 | 1.0962 | 0.7652 |
| 2.136 | 65.0 | 455 | 1.0505 | 0.7756 |
| 2.2507 | 66.0 | 462 | 1.0425 | 0.7799 |
| 2.2507 | 67.0 | 469 | 1.0703 | 0.7756 |
| 2.0269 | 68.0 | 476 | 1.0826 | 0.7695 |
| 2.2972 | 69.0 | 483 | 1.0569 | 0.7747 |
| 2.0192 | 70.0 | 490 | 1.0773 | 0.7695 |
| 2.0192 | 71.0 | 497 | 1.1000 | 0.7669 |
| 2.3668 | 72.0 | 504 | 1.1048 | 0.7712 |
| 2.1285 | 73.0 | 511 | 1.0883 | 0.7712 |
| 2.1285 | 74.0 | 518 | 1.0893 | 0.7738 |
| 2.0487 | 75.0 | 525 | 1.0644 | 0.7799 |
| 2.2508 | 76.0 | 532 | 1.0686 | 0.7764 |
| 2.2508 | 77.0 | 539 | 1.0759 | 0.7764 |
| 2.0141 | 78.0 | 546 | 1.0673 | 0.7756 |
| 2.1662 | 79.0 | 553 | 1.0610 | 0.7842 |
| 2.0567 | 80.0 | 560 | 1.0571 | 0.7851 |
| 2.0567 | 81.0 | 567 | 1.0682 | 0.7799 |
| 2.2602 | 82.0 | 574 | 1.0700 | 0.7782 |
| 2.3018 | 83.0 | 581 | 1.0703 | 0.7790 |
| 2.3018 | 84.0 | 588 | 1.0597 | 0.7825 |
| 2.0309 | 85.0 | 595 | 1.0560 | 0.7825 |
| 2.108 | 85.8 | 600 | 1.0613 | 0.7790 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"circulación prohibida",
"circular por carriles de circulación reservada",
"estacionar en carril-taxi o en carril-bus",
"estacionar en el centro de la calzada",
"estacionar en espacio reservado",
"estacionar en espacio reservado para personas de movilidad reducida",
"estacionar en espacio reservado para vehículo eléctrico, sin tener esa condición",
"estacionar en intersección",
"estacionar en lugar prohibido por línea amarilla discontinua",
"estacionar en lugar prohibido por línea amarilla en zig-zag",
"estacionar en un carril bici",
"estacionar en un lugar donde se impide la retirada o vaciado de contenedores",
"estacionar en vado señalizado",
"estacionar en zonas de carga y descarga",
"estacionar o parar donde está prohibida la parada por la señal vertical correspondiente",
"estacionar o parar en doble fila",
"estacionar o parar en paso para peatones",
"estacionar o parar en un lugar prohibido por linea amarilla continua",
"estacionar o parar sobre acera",
"estacionar o parar un vehículo en rebaje en la acera para disminuidos físicos",
"estacionar un vehículo en zonas señalizadas con franjas en el pavimento (isleta)"
] |
athiraet97/run_name
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run_name
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8473
- Accuracy: 0.2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
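The split behind the reported accuracy is not described; below is a hedged sketch of the kind of accuracy callback typically passed to `Trainer` for this metric, not the actual evaluation code:
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) tuple supplied by Trainer during evaluation.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```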
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"kulfi",
"masala_dosa",
"momos",
"pav_bhaji",
"pizza",
"samosa"
] |
liu-you/convnext-tiny-224-finetuned-eurosat-albumentations
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
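The model name suggests Albumentations-based augmentation on EuroSAT-style imagery, but no transforms are documented; the pipeline below is therefore only a typical example, not the one actually used:
```python
import albumentations as A
import numpy as np

# Hypothetical augmentation pipeline; the real transforms are not published in the card.
train_transform = A.Compose([
    A.Resize(224, 224),
    A.HorizontalFlip(p=0.5),
    A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

def augment(batch):
    # Convert each PIL image to a normalized CHW array for the model.
    batch["pixel_values"] = [
        train_transform(image=np.array(img.convert("RGB")))["image"].transpose(2, 0, 1)
        for img in batch["image"]
    ]
    return batch
```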
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
audaipurwala/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6098
- Accuracy: 0.908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 10.8023 | 1.0 | 63 | 2.4896 | 0.834 |
| 7.2983 | 2.0 | 126 | 1.7776 | 0.879 |
| 6.402 | 2.96 | 186 | 1.6098 | 0.908 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
ariG23498/vit_base_patch16_224.orig_in21k_ft_in1k.ft_food101
|
# Food Classification with ViTs
This repo contains a fine-tuned version of ViT Base on the Food101 dataset. It showcases a `timm` model being fine-tuned with the `transformers` `Trainer` API.
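A hedged sketch of the pattern being showcased: loading the timm checkpoint through the `transformers` timm wrapper (available in recent releases such as the 4.47 series) and handing it to `Trainer`. The arguments and dataset wiring are illustrative, not the exact recipe behind this repo.
```python
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# timm hub checkpoint loaded via the transformers timm wrapper.
checkpoint = "timm/vit_base_patch16_224.orig_in21k_ft_in1k"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=101,  # Food101 has 101 classes; the wrapper re-creates the head
)

# Illustrative arguments only; pass your prepared Food101 train/eval datasets to Trainer.
args = TrainingArguments(output_dir="vit-food101", num_train_epochs=1)
trainer = Trainer(model=model, args=args)
```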
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
ITSheep/breastcancer-ultrasound-ViT
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
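Starter code is not yet provided; as a hypothetical placeholder only, assuming a standard 🤗 image-classification checkpoint under the repo id in the title, and strictly for research illustration rather than clinical use:
```python
from transformers import pipeline

# Repo id assumed from the model title; not for clinical decision-making.
classifier = pipeline("image-classification", model="ITSheep/breastcancer-ultrasound-ViT")
print(classifier("ultrasound_scan.png"))  # placeholder image path
```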
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"benign",
"malignant",
"normal"
] |
victorwkey/vit-food101
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-food101
This model is a fine-tuned version of [Falconsai/nsfw_image_detection](https://huggingface.co/Falconsai/nsfw_image_detection) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1276 | 3.8462 | 500 | 0.0192 | 0.9925 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"among us",
"apex legends",
"fortnite",
"forza horizon",
"free fire",
"genshin impact",
"god of war",
"minecraft",
"roblox",
"terraria"
] |