model_id | model_card | model_labels |
---|---|---|
renaldidafa/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8706
- Accuracy: 0.275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
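For reference, a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (the output directory is a placeholder; the Adam betas and epsilon listed are the optimizer defaults):

```python
# Minimal sketch assuming the standard Trainer API; not the author's exact script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",             # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```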
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0054 | 1.0 | 20 | 1.9922 | 0.175 |
| 1.6509 | 2.0 | 40 | 1.9052 | 0.2375 |
| 1.4793 | 3.0 | 60 | 1.8706 | 0.275 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
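The "imagefolder dataset" these cards reference is the generic image loader from the `datasets` library; a hedged sketch (the directory path is a placeholder, with one subfolder per class, and subfolder names become the label names):

```python
# Sketch of loading an imagefolder-style dataset.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/images")  # placeholder path
print(dataset["train"].features["label"].names)  # label names come from folder names
```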
fathurim/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2386
- Accuracy: 0.5625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
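Here `total_train_batch_size` is derived rather than set directly: 16 per-device examples x 4 accumulation steps = 64. A hedged sketch of the corresponding `TrainingArguments` (output directory assumed):

```python
# Sketch only: gradient accumulation runs 4 forward/backward passes per
# optimizer step, so the effective batch is 16 * 4 = 64.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="image_classification",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    warmup_ratio=0.1,                    # linear warmup over the first 10% of steps
    num_train_epochs=30,
)
```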
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0874 | 1.0 | 10 | 2.0621 | 0.2313 |
| 2.036 | 2.0 | 20 | 2.0392 | 0.2375 |
| 1.9297 | 3.0 | 30 | 1.9592 | 0.3 |
| 1.7723 | 4.0 | 40 | 1.7877 | 0.3937 |
| 1.6184 | 5.0 | 50 | 1.6475 | 0.45 |
| 1.5407 | 6.0 | 60 | 1.5514 | 0.4875 |
| 1.4197 | 7.0 | 70 | 1.4967 | 0.4938 |
| 1.3092 | 8.0 | 80 | 1.4332 | 0.4813 |
| 1.1251 | 9.0 | 90 | 1.4457 | 0.4688 |
| 1.2081 | 10.0 | 100 | 1.3603 | 0.4938 |
| 0.9803 | 11.0 | 110 | 1.3501 | 0.5188 |
| 1.0105 | 12.0 | 120 | 1.3212 | 0.55 |
| 0.9264 | 13.0 | 130 | 1.2895 | 0.575 |
| 0.9229 | 14.0 | 140 | 1.2882 | 0.5188 |
| 0.9397 | 15.0 | 150 | 1.4027 | 0.475 |
| 0.8322 | 16.0 | 160 | 1.2824 | 0.5312 |
| 0.8185 | 17.0 | 170 | 1.3025 | 0.5 |
| 0.7592 | 18.0 | 180 | 1.3629 | 0.475 |
| 0.7416 | 19.0 | 190 | 1.3221 | 0.5437 |
| 0.6323 | 20.0 | 200 | 1.2714 | 0.5563 |
| 0.6453 | 21.0 | 210 | 1.3015 | 0.4938 |
| 0.6049 | 22.0 | 220 | 1.3065 | 0.5375 |
| 0.5919 | 23.0 | 230 | 1.2579 | 0.5375 |
| 0.5354 | 24.0 | 240 | 1.2428 | 0.55 |
| 0.6379 | 25.0 | 250 | 1.2884 | 0.5375 |
| 0.5681 | 26.0 | 260 | 1.2201 | 0.5938 |
| 0.4275 | 27.0 | 270 | 1.3199 | 0.4875 |
| 0.4791 | 28.0 | 280 | 1.3027 | 0.5312 |
| 0.4693 | 29.0 | 290 | 1.3737 | 0.4813 |
| 0.5528 | 30.0 | 300 | 1.3342 | 0.4688 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
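A label list like the one above is normally wired into the checkpoint config as `id2label`/`label2id` so predictions decode to names rather than indices; a minimal sketch:

```python
# Illustrative mapping from the eight emotion labels to class indices.
labels = ["anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise"]
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}
```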
mrisdi/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3578
- Accuracy: 0.5125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0796 | 1.0 | 10 | 2.0709 | 0.1562 |
| 2.0631 | 2.0 | 20 | 2.0496 | 0.225 |
| 2.0242 | 3.0 | 30 | 2.0148 | 0.2875 |
| 1.9387 | 4.0 | 40 | 1.9268 | 0.325 |
| 1.789 | 5.0 | 50 | 1.7454 | 0.3812 |
| 1.6216 | 6.0 | 60 | 1.5996 | 0.3937 |
| 1.4795 | 7.0 | 70 | 1.5577 | 0.375 |
| 1.3735 | 8.0 | 80 | 1.5090 | 0.4062 |
| 1.2889 | 9.0 | 90 | 1.4418 | 0.4313 |
| 1.2092 | 10.0 | 100 | 1.4209 | 0.425 |
| 1.1127 | 11.0 | 110 | 1.3828 | 0.4437 |
| 1.032 | 12.0 | 120 | 1.3507 | 0.4562 |
| 0.9616 | 13.0 | 130 | 1.3556 | 0.4875 |
| 0.9099 | 14.0 | 140 | 1.3204 | 0.5188 |
| 0.8425 | 15.0 | 150 | 1.3490 | 0.4688 |
| 0.806 | 16.0 | 160 | 1.3690 | 0.5062 |
| 0.7377 | 17.0 | 170 | 1.3344 | 0.5563 |
| 0.677 | 18.0 | 180 | 1.4178 | 0.4625 |
| 0.6071 | 19.0 | 190 | 1.3305 | 0.4875 |
| 0.5581 | 20.0 | 200 | 1.3070 | 0.5 |
| 0.5599 | 21.0 | 210 | 1.3245 | 0.4938 |
| 0.5222 | 22.0 | 220 | 1.3765 | 0.4562 |
| 0.4856 | 23.0 | 230 | 1.3345 | 0.5 |
| 0.458 | 24.0 | 240 | 1.2938 | 0.5188 |
| 0.4393 | 25.0 | 250 | 1.3380 | 0.5062 |
| 0.4239 | 26.0 | 260 | 1.3756 | 0.525 |
| 0.4443 | 27.0 | 270 | 1.4586 | 0.4813 |
| 0.4374 | 28.0 | 280 | 1.2996 | 0.55 |
| 0.3917 | 29.0 | 290 | 1.3222 | 0.5062 |
| 0.3986 | 30.0 | 300 | 1.4486 | 0.4813 |
| 0.353 | 31.0 | 310 | 1.5204 | 0.4562 |
| 0.3598 | 32.0 | 320 | 1.3027 | 0.5625 |
| 0.3538 | 33.0 | 330 | 1.6122 | 0.4313 |
| 0.3246 | 34.0 | 340 | 1.5237 | 0.4437 |
| 0.3089 | 35.0 | 350 | 1.4717 | 0.5125 |
| 0.3278 | 36.0 | 360 | 1.5666 | 0.45 |
| 0.2865 | 37.0 | 370 | 1.4377 | 0.5 |
| 0.2958 | 38.0 | 380 | 1.4766 | 0.4938 |
| 0.3036 | 39.0 | 390 | 1.5345 | 0.4375 |
| 0.286 | 40.0 | 400 | 1.4174 | 0.5062 |
| 0.3099 | 41.0 | 410 | 1.4087 | 0.4625 |
| 0.2801 | 42.0 | 420 | 1.4439 | 0.4813 |
| 0.2973 | 43.0 | 430 | 1.4712 | 0.4938 |
| 0.2892 | 44.0 | 440 | 1.4099 | 0.5188 |
| 0.2835 | 45.0 | 450 | 1.3011 | 0.5563 |
| 0.261 | 46.0 | 460 | 1.6512 | 0.4188 |
| 0.2589 | 47.0 | 470 | 1.5651 | 0.4375 |
| 0.2806 | 48.0 | 480 | 1.5194 | 0.4938 |
| 0.2749 | 49.0 | 490 | 1.4519 | 0.525 |
| 0.2482 | 50.0 | 500 | 1.4127 | 0.5188 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
itsTomLie/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8494
- Accuracy: 0.5875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500
- mixed_precision_training: Native AMP
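"Native AMP" refers to PyTorch's built-in automatic mixed precision; in Trainer terms that usually amounts to a single flag (a sketch under that assumption):

```python
# Sketch: mixed_precision_training "Native AMP" typically corresponds to
# fp16 autocast on CUDA via one TrainingArguments flag.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="image_classification",  # placeholder
    fp16=True,                          # enable native automatic mixed precision
)
```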
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2496 | 100.0 | 1000 | 1.5520 | 0.5125 |
| 0.1094 | 200.0 | 2000 | 1.6204 | 0.55 |
| 0.096 | 300.0 | 3000 | 1.9443 | 0.5375 |
| 0.0543 | 400.0 | 4000 | 2.0227 | 0.5437 |
| 0.0455 | 500.0 | 5000 | 2.0049 | 0.5563 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
anujbishtTx/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6404
- Accuracy: 0.898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7512 | 0.992 | 62 | 2.5606 | 0.827 |
| 1.8204 | 2.0 | 125 | 1.8020 | 0.891 |
| 1.6158 | 2.976 | 186 | 1.6404 | 0.898 |
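The Accuracy column in these tables is typically produced by a `compute_metrics` callback passed to the Trainer; a common sketch using the `evaluate` library (an assumption, not necessarily this author's code):

```python
# Hedged sketch of a standard accuracy callback for the Trainer.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```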
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cpu
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Devon12/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4301
- Accuracy: 0.4688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0867 | 1.0 | 10 | 2.0602 | 0.1938 |
| 2.0294 | 2.0 | 20 | 1.9887 | 0.2562 |
| 1.9159 | 3.0 | 30 | 1.8738 | 0.3438 |
| 1.763 | 4.0 | 40 | 1.7523 | 0.375 |
| 1.6138 | 5.0 | 50 | 1.6505 | 0.4 |
| 1.5141 | 6.0 | 60 | 1.5861 | 0.4125 |
| 1.4328 | 7.0 | 70 | 1.5303 | 0.45 |
| 1.3357 | 8.0 | 80 | 1.4986 | 0.475 |
| 1.2833 | 9.0 | 90 | 1.4628 | 0.4688 |
| 1.2248 | 10.0 | 100 | 1.4501 | 0.5 |
| 1.1796 | 11.0 | 110 | 1.3972 | 0.4875 |
| 1.1526 | 12.0 | 120 | 1.4359 | 0.4813 |
| 1.1177 | 13.0 | 130 | 1.4077 | 0.4813 |
| 1.1006 | 14.0 | 140 | 1.3942 | 0.5 |
| 1.0679 | 15.0 | 150 | 1.3934 | 0.4875 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Reyga/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [nateraw/vit-age-classifier](https://huggingface.co/nateraw/vit-age-classifier) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Accuracy: 0.9875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 1.5403 | 0.5375 |
| No log | 2.0 | 200 | 0.7882 | 0.725 |
| No log | 3.0 | 300 | 0.2481 | 0.9875 |
| No log | 4.0 | 400 | 0.1088 | 0.9875 |
| 0.8658 | 5.0 | 500 | 0.0824 | 0.9875 |
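"No log" in the Training Loss column is expected with default settings: the Trainer only reports training loss every `logging_steps` optimizer steps (500 by default), so evaluation rows that land before step 500 have nothing to show. Logging more often is one flag (a sketch):

```python
# Sketch: report training loss every 100 steps instead of the default 500.
from transformers import TrainingArguments

args = TrainingArguments(output_dir="results", logging_steps=100)  # placeholder dir
```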
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"0-2",
"3-9",
"10-19",
"20-29",
"30-39",
"40-49",
"50-59",
"60-69",
"more than 70"
] |
Yudsky/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [dennisjooo/emotion_classification](https://huggingface.co/dennisjooo/emotion_classification) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0965
- Accuracy: 0.6375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1559 | 1.0 | 20 | 1.2425 | 0.5437 |
| 1.1243 | 2.0 | 40 | 1.1168 | 0.6312 |
| 1.0982 | 3.0 | 60 | 1.1411 | 0.6312 |
| 1.1412 | 4.0 | 80 | 1.1407 | 0.6625 |
| 1.1165 | 5.0 | 100 | 1.1910 | 0.6188 |
| 1.0722 | 6.0 | 120 | 1.1595 | 0.6125 |
| 1.1606 | 7.0 | 140 | 1.1311 | 0.6562 |
| 1.0792 | 8.0 | 160 | 1.1579 | 0.5938 |
| 1.0923 | 9.0 | 180 | 1.2815 | 0.5563 |
| 1.1298 | 10.0 | 200 | 1.0916 | 0.675 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
dariel36/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8640
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 2.8449 | 0.0 |
| No log | 2.0 | 80 | 2.9103 | 0.0 |
| No log | 3.0 | 120 | 2.8640 | 0.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
syaha/Image-Classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Image-Classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2851
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 15
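Unlike most cards here, this run uses a cosine learning-rate schedule. The equivalent standalone helper in `transformers` looks like this (a sketch; the tiny Linear model is a stand-in, and 300 total steps matches the 15 epochs x 20 steps shown below):

```python
# Illustrative cosine schedule with no warmup.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(768, 8)  # stand-in for the ViT classification head
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=300
)
```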
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9706 | 1.0 | 20 | 1.9258 | 0.35 |
| 1.672 | 2.0 | 40 | 1.7025 | 0.4625 |
| 1.4489 | 3.0 | 60 | 1.5581 | 0.4313 |
| 1.2031 | 4.0 | 80 | 1.4534 | 0.5 |
| 0.9503 | 5.0 | 100 | 1.3794 | 0.5 |
| 0.758 | 6.0 | 120 | 1.3283 | 0.5312 |
| 0.6021 | 7.0 | 140 | 1.3007 | 0.5125 |
| 0.4784 | 8.0 | 160 | 1.2851 | 0.5563 |
| 0.3682 | 9.0 | 180 | 1.2815 | 0.525 |
| 0.3117 | 10.0 | 200 | 1.3074 | 0.5125 |
| 0.2753 | 11.0 | 220 | 1.2945 | 0.525 |
| 0.2585 | 12.0 | 240 | 1.2903 | 0.5375 |
| 0.2483 | 13.0 | 260 | 1.2903 | 0.5437 |
| 0.245 | 14.0 | 280 | 1.2927 | 0.5375 |
| 0.2459 | 15.0 | 300 | 1.2925 | 0.5375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
diwa02/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5246
- Accuracy: 0.4375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 1.7185 | 0.275 |
| 1.884 | 2.0 | 160 | 1.5676 | 0.4062 |
| 1.4761 | 3.0 | 240 | 1.5246 | 0.4375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
ruben09/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2493
- Accuracy: 0.5687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
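`cosine_with_restarts` also has a direct helper; a sketch (20 warmup steps matches the 0.1 ratio over the 200 total steps shown below, while `num_cycles` is an assumption):

```python
# Illustrative cosine-with-hard-restarts schedule.
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

optimizer = torch.optim.Adam(torch.nn.Linear(768, 8).parameters(), lr=5e-5)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer, num_warmup_steps=20, num_training_steps=200, num_cycles=1
)
```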
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0679 | 1.0 | 10 | 2.0574 | 0.175 |
| 2.0366 | 2.0 | 20 | 2.0083 | 0.2812 |
| 1.9469 | 3.0 | 30 | 1.9119 | 0.35 |
| 1.8166 | 4.0 | 40 | 1.7702 | 0.4125 |
| 1.6821 | 5.0 | 50 | 1.6176 | 0.45 |
| 1.5587 | 6.0 | 60 | 1.5747 | 0.425 |
| 1.4703 | 7.0 | 70 | 1.4444 | 0.5375 |
| 1.4032 | 8.0 | 80 | 1.4226 | 0.5312 |
| 1.3367 | 9.0 | 90 | 1.3937 | 0.5188 |
| 1.2889 | 10.0 | 100 | 1.3186 | 0.5375 |
| 1.2136 | 11.0 | 110 | 1.3313 | 0.55 |
| 1.1745 | 12.0 | 120 | 1.3027 | 0.5312 |
| 1.1477 | 13.0 | 130 | 1.3004 | 0.5375 |
| 1.1414 | 14.0 | 140 | 1.2442 | 0.55 |
| 1.1202 | 15.0 | 150 | 1.2957 | 0.5062 |
| 1.0923 | 16.0 | 160 | 1.3045 | 0.5125 |
| 1.0765 | 17.0 | 170 | 1.2533 | 0.5563 |
| 1.0678 | 18.0 | 180 | 1.2392 | 0.5437 |
| 1.0837 | 19.0 | 190 | 1.2750 | 0.5375 |
| 1.0562 | 20.0 | 200 | 1.2275 | 0.5625 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Vicmengmeng/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6255
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6779 | 0.992 | 62 | 2.5162 | 0.822 |
| 1.8259 | 2.0 | 125 | 1.8007 | 0.87 |
| 1.604 | 2.976 | 186 | 1.6255 | 0.9 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Stormlazer/vit-emotion-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotion-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3912
- Accuracy: 0.5687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.058 | 1.0 | 80 | 1.9682 | 0.3063 |
| 1.7534 | 2.0 | 160 | 1.7016 | 0.3875 |
| 1.5632 | 3.0 | 240 | 1.5568 | 0.4688 |
| 1.2999 | 4.0 | 320 | 1.4694 | 0.5437 |
| 1.1246 | 5.0 | 400 | 1.3912 | 0.5687 |
| 0.9904 | 6.0 | 480 | 1.3551 | 0.5625 |
| 0.8557 | 7.0 | 560 | 1.3209 | 0.5625 |
| 0.7612 | 8.0 | 640 | 1.3006 | 0.5625 |
| 0.6658 | 9.0 | 720 | 1.2911 | 0.5687 |
| 0.6531 | 10.0 | 800 | 1.2854 | 0.5563 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
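Once pushed to the Hub, any of these checkpoints can be queried the same way; a hedged usage sketch (the image path is a placeholder):

```python
# Illustrative inference with the image-classification pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="Stormlazer/vit-emotion-classification")
print(classifier("face.jpg"))  # placeholder image path; returns labels with scores
```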
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
kiwinonono/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5590
- Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6612 | 1.0 | 40 | 3.9513 | 0.0 |
| 0.8129 | 2.0 | 80 | 3.9721 | 0.025 |
| 0.3799 | 3.0 | 120 | 4.3376 | 0.0125 |
| 0.0946 | 4.0 | 160 | 4.4142 | 0.0563 |
| 0.019 | 5.0 | 200 | 4.5590 | 0.0625 |
| 0.0062 | 6.0 | 240 | 4.9286 | 0.0437 |
| 0.0039 | 7.0 | 280 | 5.0577 | 0.0437 |
| 0.0028 | 8.0 | 320 | 5.1624 | 0.0437 |
| 0.0024 | 9.0 | 360 | 5.2316 | 0.0437 |
| 0.0023 | 10.0 | 400 | 5.2923 | 0.0437 |
| 0.0019 | 11.0 | 440 | 5.3317 | 0.0375 |
| 0.0017 | 12.0 | 480 | 5.3658 | 0.0375 |
| 0.0016 | 13.0 | 520 | 5.3915 | 0.0375 |
| 0.0016 | 14.0 | 560 | 5.4004 | 0.0375 |
| 0.0016 | 15.0 | 600 | 5.4022 | 0.0375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
michellewidjaja/EmotionAgeModel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3452
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5498 | 0.4188 |
| 1.7801 | 2.0 | 80 | 1.4184 | 0.4938 |
| 0.8728 | 3.0 | 120 | 1.3452 | 0.5 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
ibnuls/ibnuls
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ibnuls
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7402
- Accuracy: 0.3937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 2.0705 | 0.1125 |
| 2.0592 | 2.0 | 10 | 2.0489 | 0.1375 |
| 2.0592 | 3.0 | 15 | 2.0209 | 0.1938 |
| 1.956 | 4.0 | 20 | 1.9848 | 0.2437 |
| 1.956 | 5.0 | 25 | 1.9454 | 0.2875 |
| 1.8228 | 6.0 | 30 | 1.9015 | 0.3187 |
| 1.8228 | 7.0 | 35 | 1.8645 | 0.35 |
| 1.6978 | 8.0 | 40 | 1.8305 | 0.3625 |
| 1.6978 | 9.0 | 45 | 1.8024 | 0.3625 |
| 1.5961 | 10.0 | 50 | 1.7789 | 0.3688 |
| 1.5961 | 11.0 | 55 | 1.7616 | 0.375 |
| 1.5232 | 12.0 | 60 | 1.7490 | 0.3812 |
| 1.5232 | 13.0 | 65 | 1.7402 | 0.3937 |
| 1.4781 | 14.0 | 70 | 1.7346 | 0.3937 |
| 1.4781 | 15.0 | 75 | 1.7323 | 0.3937 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
FellOffTheStairs/Emotional_Recognition_New1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Emotional_Recognition_New1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
yudhaananda/emotion_recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_recognition
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7451
- Accuracy: 0.4125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 2.0629 | 0.1625 |
| 2.0494 | 2.0 | 10 | 2.0216 | 0.2375 |
| 2.0494 | 3.0 | 15 | 1.9567 | 0.3438 |
| 1.8758 | 4.0 | 20 | 1.8914 | 0.3937 |
| 1.8758 | 5.0 | 25 | 1.8314 | 0.3937 |
| 1.6857 | 6.0 | 30 | 1.7821 | 0.3812 |
| 1.6857 | 7.0 | 35 | 1.7451 | 0.4125 |
| 1.5477 | 8.0 | 40 | 1.7205 | 0.4125 |
| 1.5477 | 9.0 | 45 | 1.7058 | 0.4125 |
| 1.4739 | 10.0 | 50 | 1.7010 | 0.4125 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
aneirady/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [nateraw/vit-age-classifier](https://huggingface.co/nateraw/vit-age-classifier) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7301
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0108 | 1.0 | 40 | 1.9813 | 0.325 |
| 1.7144 | 2.0 | 80 | 1.8097 | 0.45 |
| 1.5272 | 3.0 | 120 | 1.7301 | 0.5 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
FellOffTheStairs/Emotional_Recognition_New2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Emotional_Recognition_New2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
AlCyede/emotion-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7380
- Accuracy: 0.45
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
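Worth noting when proofreading this card: the run totals 25 epochs x 10 steps = 250 optimizer steps, fewer than the 500 warmup steps, so the learning rate is still ramping toward 2e-05 when training ends, which plausibly explains the slow accuracy climb below. A quick check:

```python
# Illustrative check: with linear warmup, the LR at the final step is only
# half the configured peak because warmup never completes.
peak_lr, warmup_steps, total_steps = 2e-5, 500, 25 * 10
final_lr = peak_lr * min(1.0, total_steps / warmup_steps)
print(final_lr)  # 1e-05
```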
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 2.0828 | 0.1688 |
| No log | 2.0 | 20 | 2.0820 | 0.1688 |
| No log | 3.0 | 30 | 2.0807 | 0.175 |
| No log | 4.0 | 40 | 2.0789 | 0.1875 |
| No log | 5.0 | 50 | 2.0763 | 0.1938 |
| No log | 6.0 | 60 | 2.0733 | 0.1875 |
| No log | 7.0 | 70 | 2.0697 | 0.1875 |
| No log | 8.0 | 80 | 2.0656 | 0.1875 |
| No log | 9.0 | 90 | 2.0605 | 0.2125 |
| No log | 10.0 | 100 | 2.0540 | 0.2313 |
| No log | 11.0 | 110 | 2.0462 | 0.2625 |
| No log | 12.0 | 120 | 2.0369 | 0.2687 |
| No log | 13.0 | 130 | 2.0259 | 0.2687 |
| No log | 14.0 | 140 | 2.0117 | 0.2687 |
| No log | 15.0 | 150 | 1.9947 | 0.3125 |
| No log | 16.0 | 160 | 1.9763 | 0.2938 |
| No log | 17.0 | 170 | 1.9547 | 0.3125 |
| No log | 18.0 | 180 | 1.9313 | 0.325 |
| No log | 19.0 | 190 | 1.9075 | 0.35 |
| No log | 20.0 | 200 | 1.8817 | 0.3563 |
| No log | 21.0 | 210 | 1.8535 | 0.3812 |
| No log | 22.0 | 220 | 1.8244 | 0.4062 |
| No log | 23.0 | 230 | 1.7954 | 0.4188 |
| No log | 24.0 | 240 | 1.7664 | 0.4375 |
| No log | 25.0 | 250 | 1.7380 | 0.45 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
shadafifast/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3057
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6189 | 1.0 | 80 | 1.4882 | 0.3875 |
| 0.9746 | 2.0 | 160 | 1.3714 | 0.475 |
| 0.5452 | 3.0 | 240 | 1.3057 | 0.5 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
raffaelsiregar/emotions-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4692
- Accuracy: 0.4562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7426 | 1.0 | 40 | 1.4692 | 0.4562 |
| 0.4647 | 2.0 | 80 | 1.5033 | 0.4313 |
| 0.2527 | 3.0 | 120 | 1.5517 | 0.4813 |
| 0.1551 | 4.0 | 160 | 1.6071 | 0.4688 |
| 0.113 | 5.0 | 200 | 1.6474 | 0.475 |
| 0.0914 | 6.0 | 240 | 1.6752 | 0.45 |
| 0.0774 | 7.0 | 280 | 1.7003 | 0.45 |
| 0.0698 | 8.0 | 320 | 1.7336 | 0.4437 |
| 0.063 | 9.0 | 360 | 1.7595 | 0.45 |
| 0.0583 | 10.0 | 400 | 1.7778 | 0.4437 |
| 0.0551 | 11.0 | 440 | 1.7938 | 0.4375 |
| 0.0531 | 12.0 | 480 | 1.8082 | 0.4375 |
| 0.0509 | 13.0 | 520 | 1.8176 | 0.4437 |
| 0.0499 | 14.0 | 560 | 1.8230 | 0.4375 |
| 0.0494 | 15.0 | 600 | 1.8249 | 0.4375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
sandi-irvan/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1456
- Accuracy: 0.5125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0195 | 1.0 | 40 | 2.1213 | 0.5 |
| 0.0183 | 2.0 | 80 | 2.1614 | 0.5062 |
| 0.0178 | 3.0 | 120 | 2.1468 | 0.5062 |
| 0.0172 | 4.0 | 160 | 2.1430 | 0.5125 |
| 0.017 | 5.0 | 200 | 2.1456 | 0.5125 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
tangg555/clip-vit-large-patch14-finetuned-openai-clip-vit-large-patch14-emnist-letter
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-large-patch14-finetuned-openai-clip-vit-large-patch14-emnist-letter
This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1347
- Accuracy: 0.9527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
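A plausible setup for these CLIP-on-EMNIST-letter fine-tunes (a sketch, not the author's confirmed code; it assumes the checkpoint is loaded with a fresh 26-way classification head):

```python
# Hedged sketch: CLIP vision encoder with a new linear head for 26 letters.
from transformers import AutoImageProcessor, AutoModelForImageClassification

letters = [chr(c) for c in range(ord("a"), ord("z") + 1)]
model = AutoModelForImageClassification.from_pretrained(
    "openai/clip-vit-large-patch14",
    num_labels=len(letters),
    id2label=dict(enumerate(letters)),
    label2id={letter: i for i, letter in enumerate(letters)},
)
processor = AutoImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
```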
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.917 | 0.9994 | 877 | 0.3145 | 0.8863 |
| 0.7921 | 2.0 | 1755 | 0.2430 | 0.9152 |
| 0.7534 | 2.9994 | 2632 | 0.1897 | 0.9295 |
| 0.6294 | 4.0 | 3510 | 0.1720 | 0.9373 |
| 0.6219 | 4.9994 | 4387 | 0.1750 | 0.9390 |
| 0.5749 | 6.0 | 5265 | 0.1486 | 0.9486 |
| 0.5308 | 6.9994 | 6142 | 0.1431 | 0.9494 |
| 0.5505 | 8.0 | 7020 | 0.1429 | 0.9499 |
| 0.4938 | 8.9994 | 7897 | 0.1425 | 0.9496 |
| 0.5342 | 9.9943 | 8770 | 0.1347 | 0.9527 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z"
] |
tangg555/clip-vit-base-patch16-finetuned-openai-clip-vit-base-patch16-emnist-letter
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-base-patch16-finetuned-openai-clip-vit-base-patch16-emnist-letter
This model is a fine-tuned version of [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1460
- Accuracy: 0.9468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.0535 | 0.9994 | 877 | 0.3616 | 0.8803 |
| 0.9258 | 2.0 | 1755 | 0.2692 | 0.9015 |
| 0.7331 | 2.9994 | 2632 | 0.2283 | 0.9207 |
| 0.7137 | 4.0 | 3510 | 0.1815 | 0.9353 |
| 0.6585 | 4.9994 | 4387 | 0.1889 | 0.9324 |
| 0.6366 | 6.0 | 5265 | 0.1688 | 0.9376 |
| 0.6284 | 6.9994 | 6142 | 0.1565 | 0.9424 |
| 0.5834 | 8.0 | 7020 | 0.1541 | 0.9433 |
| 0.5159 | 8.9994 | 7897 | 0.1425 | 0.9485 |
| 0.5233 | 9.9943 | 8770 | 0.1460 | 0.9468 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z"
] |
tangg555/clip-vit-large-patch14-336-finetuned-openai-clip-vit-large-patch14-336-emnist-letter
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-large-patch14-336-finetuned-openai-clip-vit-large-patch14-336-emnist-letter
This model is a fine-tuned version of [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1417
- Accuracy: 0.9489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.8898 | 0.9994 | 877 | 0.3490 | 0.8834 |
| 0.8457 | 2.0 | 1755 | 0.2191 | 0.9224 |
| 0.7238 | 2.9994 | 2632 | 0.2113 | 0.9263 |
| 0.6685 | 4.0 | 3510 | 0.1838 | 0.9348 |
| 0.6236 | 4.9994 | 4387 | 0.2217 | 0.9205 |
| 0.6011 | 6.0 | 5265 | 0.1732 | 0.9386 |
| 0.6129 | 6.9994 | 6142 | 0.1582 | 0.9419 |
| 0.5606 | 8.0 | 7020 | 0.1478 | 0.9465 |
| 0.5136 | 8.9994 | 7897 | 0.1488 | 0.9458 |
| 0.4659 | 9.9943 | 8770 | 0.1417 | 0.9489 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z"
] |
tangg555/clip-vit-base-patch32-finetuned-openai-clip-vit-base-patch32-emnist-letter
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-base-patch32-finetuned-openai-clip-vit-base-patch32-emnist-letter
This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1524
- Accuracy: 0.9465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.0859 | 0.9994 | 877 | 0.4055 | 0.8640 |
| 0.927 | 2.0 | 1755 | 0.3652 | 0.8782 |
| 0.83 | 2.9994 | 2632 | 0.2687 | 0.9066 |
| 0.7747 | 4.0 | 3510 | 0.2356 | 0.9189 |
| 0.7545 | 4.9994 | 4387 | 0.2147 | 0.9245 |
| 0.6461 | 6.0 | 5265 | 0.1889 | 0.9320 |
| 0.6457 | 6.9994 | 6142 | 0.1784 | 0.9354 |
| 0.6796 | 8.0 | 7020 | 0.1659 | 0.9412 |
| 0.5502 | 8.9994 | 7897 | 0.1548 | 0.9461 |
| 0.5797 | 9.9943 | 8770 | 0.1524 | 0.9465 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z"
] |
amauriciogonzalez/dinov2-base-fa-disabled-finetuned-har
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base-fa-disabled-finetuned-har
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3027
- Accuracy: 0.9164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
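"fa-disabled" in the model name plausibly refers to running DINOv2 without fused/flash attention; a sketch under that assumption, with a 15-way head for the activity labels listed after this card:

```python
# Sketch only: load DINOv2 for classification with eager (non-fused) attention.
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-base",
    num_labels=15,
    attn_implementation="eager",  # assumption: "fa-disabled" means no fast attention
)
```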
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.8554 | 0.9910 | 83 | 0.5252 | 0.8323 |
| 0.8162 | 1.9940 | 167 | 0.4597 | 0.8598 |
| 0.7303 | 2.9970 | 251 | 0.4403 | 0.8587 |
| 0.5644 | 4.0 | 335 | 0.3922 | 0.8746 |
| 0.5672 | 4.9910 | 418 | 0.3784 | 0.8857 |
| 0.454 | 5.9940 | 502 | 0.3856 | 0.8831 |
| 0.4379 | 6.9970 | 586 | 0.3510 | 0.8889 |
| 0.3356 | 8.0 | 670 | 0.3187 | 0.9063 |
| 0.2877 | 8.9910 | 753 | 0.3209 | 0.9116 |
| 0.2717 | 9.9104 | 830 | 0.3027 | 0.9164 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"calling",
"clapping",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop"
] |
raffaelsiregar/rgai-emotions-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5006
- Accuracy: 0.4813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0441 | 1.0 | 40 | 2.0365 | 0.25 |
| 1.9219 | 2.0 | 80 | 1.9451 | 0.3063 |
| 1.7429 | 3.0 | 120 | 1.8213 | 0.375 |
| 1.5854 | 4.0 | 160 | 1.7126 | 0.4188 |
| 1.4913 | 5.0 | 200 | 1.6547 | 0.4688 |
| 1.3673 | 6.0 | 240 | 1.6200 | 0.4813 |
| 1.2713 | 7.0 | 280 | 1.5822 | 0.475 |
| 1.1907 | 8.0 | 320 | 1.5639 | 0.4875 |
| 1.0516 | 9.0 | 360 | 1.5441 | 0.4875 |
| 1.0037 | 10.0 | 400 | 1.5285 | 0.4813 |
| 0.9538 | 11.0 | 440 | 1.5229 | 0.4813 |
| 0.8983 | 12.0 | 480 | 1.5100 | 0.4813 |
| 0.8616 | 13.0 | 520 | 1.5016 | 0.4938 |
| 0.8417 | 14.0 | 560 | 1.5024 | 0.4813 |
| 0.8078 | 15.0 | 600 | 1.5006 | 0.4813 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
smartgmin/eyesCare_firstTryEntrnal_mix_model-1
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# eyesCare_firstTryEntrnal_mix_model-1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results at the final training epoch:
- Train Loss: 0.0066
- Train Accuracy: 0.8616
- Train Top-3-accuracy: 0.9785
- Validation Loss: 1.9942
- Validation Accuracy: 0.8627
- Validation Top-3-accuracy: 0.9787
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 4950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
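The serialized optimizer above can be reconstructed roughly as follows. This is a hedged sketch of the configuration, not the original training code.

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear (power=1.0) decay from 4e-05 to 0 over 4950 steps, as serialized above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=4e-05,
    decay_steps=4950,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    weight_decay_rate=0.001,
)
```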
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.3981 | 0.3217 | 0.7428 | 1.1812 | 0.4135 | 0.8283 | 0 |
| 1.1137 | 0.4540 | 0.8600 | 1.0974 | 0.4763 | 0.8802 | 1 |
| 0.9296 | 0.5034 | 0.8955 | 1.0739 | 0.5231 | 0.9065 | 2 |
| 0.7444 | 0.5473 | 0.9155 | 1.1126 | 0.5663 | 0.9225 | 3 |
| 0.5534 | 0.5880 | 0.9285 | 1.1673 | 0.6076 | 0.9342 | 4 |
| 0.4105 | 0.6261 | 0.9387 | 1.1547 | 0.6422 | 0.9428 | 5 |
| 0.2830 | 0.6586 | 0.9462 | 1.3119 | 0.6729 | 0.9493 | 6 |
| 0.1984 | 0.6874 | 0.9519 | 1.3821 | 0.6990 | 0.9540 | 7 |
| 0.1224 | 0.7104 | 0.9559 | 1.4778 | 0.7213 | 0.9576 | 8 |
| 0.1021 | 0.7313 | 0.9591 | 1.5426 | 0.7400 | 0.9603 | 9 |
| 0.1017 | 0.7478 | 0.9615 | 1.6387 | 0.7545 | 0.9625 | 10 |
| 0.0646 | 0.7613 | 0.9635 | 1.6226 | 0.7678 | 0.9644 | 11 |
| 0.0500 | 0.7738 | 0.9654 | 1.6646 | 0.7793 | 0.9662 | 12 |
| 0.0571 | 0.7843 | 0.9669 | 1.7492 | 0.7890 | 0.9675 | 13 |
| 0.0248 | 0.7935 | 0.9682 | 1.6984 | 0.7978 | 0.9689 | 14 |
| 0.0185 | 0.8020 | 0.9695 | 1.7302 | 0.8059 | 0.9701 | 15 |
| 0.0145 | 0.8096 | 0.9707 | 1.7669 | 0.8129 | 0.9712 | 16 |
| 0.0129 | 0.8163 | 0.9718 | 1.7972 | 0.8193 | 0.9722 | 17 |
| 0.0116 | 0.8223 | 0.9727 | 1.8276 | 0.8251 | 0.9732 | 18 |
| 0.0106 | 0.8277 | 0.9736 | 1.8544 | 0.8302 | 0.9739 | 19 |
| 0.0098 | 0.8326 | 0.9743 | 1.8792 | 0.8348 | 0.9746 | 20 |
| 0.0091 | 0.8370 | 0.9749 | 1.9012 | 0.8391 | 0.9752 | 21 |
| 0.0085 | 0.8411 | 0.9755 | 1.9212 | 0.8430 | 0.9758 | 22 |
| 0.0080 | 0.8448 | 0.9761 | 1.9391 | 0.8465 | 0.9763 | 23 |
| 0.0076 | 0.8482 | 0.9766 | 1.9547 | 0.8498 | 0.9768 | 24 |
| 0.0073 | 0.8513 | 0.9770 | 1.9682 | 0.8527 | 0.9772 | 25 |
| 0.0070 | 0.8542 | 0.9774 | 1.9789 | 0.8555 | 0.9777 | 26 |
| 0.0068 | 0.8568 | 0.9778 | 1.9871 | 0.8580 | 0.9780 | 27 |
| 0.0067 | 0.8593 | 0.9782 | 1.9924 | 0.8605 | 0.9784 | 28 |
| 0.0066 | 0.8616 | 0.9785 | 1.9942 | 0.8627 | 0.9787 | 29 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"mild",
"moderate",
"no_dr",
"proliferative_dr",
"severe"
] |
candylion/vit-base-beans-demo-v5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0564 | 1.5385 | 100 | 0.0384 |
| 0.0204 | 3.0769 | 200 | 0.0315 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
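For reference, a quick way to inspect the class mapping shipped with a checkpoint like this one; the printed mapping is an assumption based on the three labels listed for this repository.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("candylion/vit-base-beans-demo-v5")
print(config.id2label)
# expected: {0: 'angular_leaf_spot', 1: 'bean_rust', 2: 'healthy'}
```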
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
smartgmin/glacoma_andOther_model1
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# glacoma_andOther_model1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results at the final training epoch:
- Train Loss: 0.0575
- Train Accuracy: 0.9403
- Train Top-3-accuracy: 0.9984
- Validation Loss: 0.2329
- Validation Accuracy: 0.9442
- Validation Top-3-accuracy: 0.9985
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1266, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.5871 | 0.7237 | 0.9808 | 0.3574 | 0.8358 | 0.9916 | 0 |
| 0.2606 | 0.8643 | 0.9942 | 0.2785 | 0.8821 | 0.9958 | 1 |
| 0.1643 | 0.8966 | 0.9966 | 0.2490 | 0.9077 | 0.9971 | 2 |
| 0.1114 | 0.9168 | 0.9975 | 0.2644 | 0.9239 | 0.9978 | 3 |
| 0.0797 | 0.9301 | 0.9980 | 0.2345 | 0.9353 | 0.9982 | 4 |
| 0.0575 | 0.9403 | 0.9984 | 0.2329 | 0.9442 | 0.9985 | 5 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"cataract",
"diabetic_retinopathy",
"glaucoma",
"normal"
] |
AlaaHussien/dinov2-base-finetuned-Leukemia
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base-finetuned-Leukemia
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.0541 | 0.9954 | 162 | 0.8475 | 0.9018 | 0.9047 |
| 0.0667 | 1.9969 | 325 | 0.0745 | 0.9785 | 0.9780 |
| 0.1317 | 2.9985 | 488 | 0.0159 | 0.9939 | 0.9939 |
| 0.0187 | 4.0 | 651 | 0.0771 | 0.9877 | 0.9878 |
| 0.0762 | 4.9954 | 813 | 0.1135 | 0.9877 | 0.9878 |
| 0.006 | 5.9969 | 976 | 0.0502 | 0.9969 | 0.9969 |
| 0.1322 | 6.9985 | 1139 | 0.0357 | 0.9969 | 0.9969 |
| 0.0332 | 8.0 | 1302 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 8.9954 | 1464 | 0.0004 | 1.0 | 1.0 |
| 0.0 | 9.9539 | 1620 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"benign",
"early",
"pre",
"pro"
] |
crocutacrocuto/dinov2-base-MEG-2
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
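Pending official instructions, a minimal sketch, assuming this checkpoint exposes an image-classification head (the label list in this repository suggests camera-trap species classes); the input filename is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="crocutacrocuto/dinov2-base-MEG-2")
predictions = classifier("camera_trap_frame.jpg")  # path, URL, or PIL.Image
print(predictions)  # e.g. [{'label': 'leopard', 'score': ...}, ...]
```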
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"bird",
"black-and-white colobus",
"blue duiker",
"buffalo",
"bushbuck",
"bushpig",
"cattle",
"chimpanzee",
"civet_genet",
"dog_jackal",
"elephant",
"galago_potto",
"goat",
"golden cat",
"gorilla",
"guineafowl",
"honey badger",
"leopard",
"mandrill",
"mongoose",
"monkey",
"olive baboon",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirel",
"water chevrotain",
"yellow-backed duiker"
] |
crocutacrocuto/dinov2-base-MEGW-2
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
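In the absence of official usage code, a hedged sketch using the processor + model API, assuming the checkpoint carries an image-classification head; the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "crocutacrocuto/dinov2-base-MEGW-2"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("frame.jpg")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```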
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"bird",
"black-and-white colobus",
"blue duiker",
"buffalo",
"bushbuck",
"bushpig",
"cattle",
"chimpanzee",
"civet_genet",
"dog_jackal",
"elephant",
"galago_potto",
"goat",
"golden cat",
"gorilla",
"guineafowl",
"honey badger",
"leopard",
"mandrill",
"mongoose",
"monkey",
"olive baboon",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirel",
"water chevrotain",
"yellow-backed duiker"
] |
franibm/autotrain-Chiara2
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.2632964849472046
f1: 0.9655172413793104
precision: 1.0
recall: 0.9333333333333333
auc: 0.9866666666666667
accuracy: 0.9666666666666667
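As a consistency check, the reported f1 is the harmonic mean of the reported precision and recall:

```python
precision, recall = 1.0, 0.9333333333333333
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(f1)  # 0.9655172413793104, matching the value above
```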
|
[
"manipulated-images",
"non-manipulated-images"
] |
JunyaoPu/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6549
- Accuracy: 0.889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7314 | 0.992 | 62 | 2.5575 | 0.817 |
| 1.8582 | 2.0 | 125 | 1.8171 | 0.879 |
| 1.6305 | 2.976 | 186 | 1.6549 | 0.889 |
### Framework versions
- Transformers 4.44.2
- Pytorch 1.11.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
fanaf91318/recomendation-system-v2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8203
- Accuracy: 0.6505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7295 | 1.0 | 612 | 3.7145 | 0.4203 |
| 2.6027 | 2.0 | 1224 | 2.7499 | 0.5296 |
| 2.1701 | 3.0 | 1836 | 2.2983 | 0.5803 |
| 1.8428 | 4.0 | 2448 | 2.0223 | 0.6222 |
| 1.7442 | 5.0 | 3060 | 1.8794 | 0.6442 |
| 1.6609 | 6.0 | 3672 | 1.8203 | 0.6505 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"adsl modemlari (telefon tarmog‘i uchun)",
"abiturient kitoblari",
"avtomobil breloklari uchun gʻiloflar",
"idishlarni yuvish uchun suyuqliklar",
"infraqizil isitgichlar",
"intim gigiyena",
"intim tovarlar",
"issiqlik pardalari",
"issiqlik pushkalari",
"issiqlik ventilyatorlari",
"jahon adabiyoti",
"jildlar va organayzerlar",
"kabellar, adapterlar",
"avtomobil diskalari",
"kaminlar",
"kastryulkalar",
"kastryullar",
"kir yuvish, quritish uskunalari va aksessuarlari",
"klassik oshxona mebellari",
"kofe mashinalari va maydalagichlar",
"komod",
"kompleks yog‘lar",
"kompyuter korpus kulerlari",
"kompyuter korpuslari",
"avtomobil ehtiyot qismlari",
"konvektorlar",
"koriklar",
"kosmetika jihozlari",
"ko‘chat o‘tqazish va ekish jihozlari",
"ko‘chma akustika vositalari",
"ko‘chma changyutgichlar",
"ko‘chma printerlar",
"ko‘zlar",
"lerom",
"maktab to'plamlari",
"avtomobil karcherlari",
"mangallar, o‘choqlar",
"manikyur uchun instrumentlar",
"massajyorlar",
"maysalar parvarishi",
"mikrofonlar",
"mikroto‘lqinli pechlar va pechlar",
"miskalar",
"modemlar",
"monobloklar",
"moyli isitgichlar",
"avtomobil kompressorlari",
"multivarkalar",
"muzlatgichlar",
"namlantirish",
"niqoblar",
"noutbuk tagliklari",
"noutbuk uchun sumkalar",
"noutbuklar",
"noutbuklarni zaryadlovchi qurilmalar",
"o't oldirish simlari",
"o'yin kreslolari",
"avtomobil kuzuflari",
"ofis jihozlari uchun sarflanadigan materiallar",
"ofis to‘plami",
"olmos rasmlari",
"operativ xotira (ddr)",
"oshxona anjomlari",
"oshxona anjomlari uchun organayzerlar",
"oshxona idish-tovoqlari",
"oshxona mashinalari va kombaynlari",
"oshxona plitalari",
"oshxona tarozisi",
"avtomobil monitorlari",
"oshxona to'plamlari",
"otvyortkalar",
"oyna tozalagichlar",
"oziq-ovqat idishlari",
"oziqlantirib namlantiruvchi",
"o‘lchov asboblari",
"o‘rnatiladigan maishiy texnika",
"o‘zbek adabiyoti",
"parfyum sovun",
"parfyumni tozalash vositasi",
"avtomobil moylari",
"patchi",
"payvandlash uskunalari",
"pazllar",
"penal va komod",
"penal va televizor qo‘yish mebeli",
"perforatorlar, otboyka bolg‘alari",
"pichoq va taxtachalar",
"pishiriqlar uchun qoliplar",
"playstation",
"pnevmatik jihozlar",
"avtomobil o'rindiqlari",
"pol tarozilari",
"pol va stol chiroqlari",
"portmone va hamyonlar",
"printerlar va skanerlar",
"protsessorlar",
"proyektorlar",
"pul (kupyura) sanash mashinalari",
"qattiq disklar, ssd va tarmoq adapterlari",
"qizlar uchun svitshotlar",
"qoshlar",
"avtomobil shinalari",
"qozonlar",
"qo‘l uchun",
"quloqchinlar",
"qurilish fenlari",
"quvvat manbalari",
"quyosh nurlariga qarshi",
"radiotelefonlar",
"rahbariyat uchun kreslolar",
"raqamli rasmlar",
"rul boshqaruvi",
"akkumulyatorlar va zaryadlovchi qurilmalar",
"avtomobil uchun hushbo'ylantirgichlar",
"slonim mebel",
"salon uchun jihozlar",
"sariyog‘ idishlari",
"sendvich pishiruvchilar",
"shakar idishlari",
"shamdonlar",
"shampun",
"sharbat chiqargichlar",
"shatura",
"shifobaxsh yog‘lar",
"avtomobil uchun telefon tutqichlari",
"shinalar va g'ildiraklar uchun parvarish",
"shisha qopqoqlar",
"shlifovka uskunalari",
"shoxqirqqichlar, butaqirqqichlar, tokqaychilar",
"sichqoncha tagliklari",
"simsiz quloqchinlar",
"simsiz quvvatlagichlar",
"skanerlar",
"smart-soat va fitnes-brasletlar uchun aksessuarlar",
"smart-soatlar",
"avtomobillar uchun chiqarish tizimi",
"smartfonlar",
"soatlar",
"soch olish mashinkalari va trimmerlar",
"soch uchun niqob",
"sovundonlar va tokchalar",
"sovutgichlar",
"sport oziq-ovqatlari",
"sport qo'lqoplari",
"sport va sayohat uchun butilkalar",
"stanok jihozlari",
"avtovizitkalar",
"stendlar va ushlagichlar",
"stiluslar",
"stol kompyuterlari",
"stol xizmati",
"suv isitgichlari",
"suv kulerlari",
"suv nasoslari",
"tarmoq adapterlari",
"tarmoq filtrlari va uzatma kabellari",
"tarmoq kalitlari",
"ayollar kiyimlari",
"tashqi akkumulyator (power bank)",
"ta’mir uchun mayda asbob-anjomlar",
"telefon g‘iloflari",
"telefon uchun himoya oynalari",
"telefon uchun stabilizatorlar",
"telefonlar va planshetlar uchun tagliklar",
"terma chinni idishlar",
"termoidishlar",
"termometrlar",
"tikuv mashinalari",
"ayollar oyoq-kiyimlari",
"tizim platalari",
"tosterlar",
"tovalar",
"tozalovchi",
"tugmachali telefonlar",
"tunchiroqlar va krovat yoni chiroqlari",
"tyuning va tashqi dekor",
"usb fleshkalar",
"uzatma simlari va adapterlar",
"uzluksiz quvvat manbalari",
"ayollar pardozi",
"vafli pishirgichlar",
"veb-kameralar",
"videokameralar",
"videokartalar",
"videokonferensiya tizimlari",
"videokuzatuv kameralari",
"wi-fi adapterlari",
"wi-fi kuchaytirgichlari",
"wifi routerlar (optika)",
"xodimlar uchun kreslolar",
"ayollar sumkalari va ryukzaklari",
"xotira kartalari",
"yog‘och kesish jihozlari",
"yoshartiruvchi",
"yumshoq mebel to‘plamlari",
"yuviladigan kiyimlar uchun korzinalar",
"yuvish va avtotozalash vositalari",
"yuz uchun",
"zamin qoplamalari",
"zamonaviy o‘zbek adabiyoti",
"zaryadlovchi qurilmalar va usb kabellari",
"ayollar uchun futbolkalar",
"ziravorlar uchun aksessuarlar",
"akkumulyatory",
"aksessuary",
"antifriz",
"avto signalizatsiya",
"avtomobil koriklari",
"biznes kitoblari",
"bolalar samokati",
"bug' generatorlari",
"chay va kofe",
"bankalar va maxsus idishlar",
"dazmol taxtalari",
"dekoratsiyalar",
"elektr generatorlari",
"hovuzlar",
"kiyim quritgichlar",
"klaviatura va sichqoncha",
"kompyuter kolonkalari",
"konditsionerlar",
"monitorlar",
"namlagichlar",
"alifbo",
"barbekyu uchun panjaralar",
"narvon",
"planshetlar",
"rele va stabilizatorlar",
"shirinliklar",
"soch uchun gellar",
"televizor aksessuarlar",
"televizorlar",
"videoregistratorlar",
"ароматы для дома",
"barcha oila uchun kiyim-kechak",
"begona o‘tlarni yulgichlar",
"belkuraklar",
"bijuteriya",
"bio-kaminlar",
"bir zumda chop etuvchi fotoapparatlar",
"blenderlar va mikserlar",
"bluetooth-garnituralar",
"bolalar arg'imchoqlari",
"aqlli dinamiklar",
"bolalar kalyaskalari",
"bolalar kitoblari",
"bolalar moshinalari",
"bolalar o'yinchog'lari",
"bolalar tagliklari",
"bolalar velosipedlari",
"bo‘yash uchun jihozlar",
"bug‘ qozonlari",
"bug‘li tozalagichlar",
"burchak divanlar to‘plami",
"aqlli yoritish to‘plamlari va aksessuarlari",
"changyutgichlar",
"chemodanlar",
"chiqindi chelaklari va baklari",
"cho'tkalar, changyutgichlar, moplar, cho'tkalar",
"cho'tkalar, gubkalar, salfetkalar",
"choynaklar",
"dazmollar va aksessuarlar",
"dispenserlar, dozatorlar",
"divanlar",
"domofonlar",
"archa o‘yinchoqlari va bezaklari",
"drellar va shurupovertlar",
"dudbo‘ronlar",
"dudoqlar",
"dush tizimlari",
"era",
"ekshn-kameralar",
"elektr arralar",
"elektr choynaklar",
"elektr skuterlari",
"elektrik tish cho‘tkalari",
"asboblarni saqlash",
"epilatorlar va ayollar elektr soqollari",
"erkaklar kiyimlari",
"erkaklar sumkalari va ryukzaklari",
"erkaklar uchun elektr ustaralar",
"fenlar va sochni to'g'rilash vositasini",
"fitnes anjomlari",
"fitnes-bilaguzuklar",
"fotoapparatlar",
"fri pishiruvchi",
"futbolkalar",
"atirlar",
"g'ildirak disklari",
"gaz qozonlari",
"gorka",
"go‘shtqiymalagichlar",
"grafik planshetlar",
"gril",
"gril uchun aksessuarlar",
"halqali lampalar va telefonlar uchun tripodlar",
"halqasimon chiroqlar",
"hammom uchun kranlar",
"avtochiroq",
"hammom va hojatxona uchun mahsulotlar",
"hammom va oshxona uchun tashkilotchilar",
"harakat va eshik ochilishi datchiklari",
"hasharotlar va kemiruvchilarga qarshi vositalar",
"havo kompressorlari",
"hi-tech oshxona mebellari",
"hujjatli kitoblar",
"ip-telefonlar",
"ichimliklar uchun idishlar",
"idish yuvish mashinalari"
] |
hamnabint/vit-Facial-Expression-Recognition_checkpoints
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition_checkpoints
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1826
- Accuracy: 0.5886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.4533 | 2.2663 | 100 | 1.3534 | 0.4619 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"angry",
"happy",
"neutral",
"sad",
"surprise"
] |
franibm/autotrain-Chiara3
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.6958691477775574
f1: 0.6666666666666666
precision: 0.5
recall: 1.0
auc: 0.4496
accuracy: 0.5
|
[
"manipulated-images",
"non-manipulated-images"
] |
semihdervis/cat-emotion-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cat-emotions
You can try out the model live [here](https://cat-emotion-classifier.streamlit.app/), and check out the [GitHub repository](https://github.com/semihdervis/cat-emotion-classifier) for more details.
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on a custom dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0160
- Accuracy: 0.6353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3361 | 3.125 | 100 | 1.0125 | 0.6548 |
| 0.0723 | 6.25 | 200 | 0.9043 | 0.7381 |
| 0.0321 | 9.375 | 300 | 0.9268 | 0.7143 |
### Framework versions
- Transformers 4.44.1
- Pytorch 2.2.2+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"angry",
"disgusted",
"happy",
"normal",
"sad",
"scared",
"surprised"
] |
abdumalikov/image-classification-v1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recomendation-system
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3870
- Accuracy: 0.5658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.7526 | 1.0 | 612 | 4.7474 | 0.2541 |
| 3.9574 | 2.0 | 1224 | 3.8794 | 0.4050 |
| 3.4665 | 3.0 | 1836 | 3.3852 | 0.4621 |
| 3.0017 | 4.0 | 2448 | 3.0551 | 0.4944 |
| 2.7217 | 5.0 | 3060 | 2.8251 | 0.5137 |
| 2.5752 | 6.0 | 3672 | 2.6569 | 0.5399 |
| 2.5064 | 7.0 | 4284 | 2.5447 | 0.5501 |
| 2.3956 | 8.0 | 4896 | 2.4493 | 0.5631 |
| 2.1768 | 9.0 | 5508 | 2.4040 | 0.5631 |
| 2.2168 | 10.0 | 6120 | 2.3870 | 0.5658 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"adsl modemlari (telefon tarmog‘i uchun)",
"abiturient kitoblari",
"avtomobil breloklari uchun gʻiloflar",
"idishlarni yuvish uchun suyuqliklar",
"infraqizil isitgichlar",
"intim gigiyena",
"intim tovarlar",
"issiqlik pardalari",
"issiqlik pushkalari",
"issiqlik ventilyatorlari",
"jahon adabiyoti",
"jildlar va organayzerlar",
"kabellar, adapterlar",
"avtomobil diskalari",
"kaminlar",
"kastryulkalar",
"kastryullar",
"kir yuvish, quritish uskunalari va aksessuarlari",
"klassik oshxona mebellari",
"kofe mashinalari va maydalagichlar",
"komod",
"kompleks yog‘lar",
"kompyuter korpus kulerlari",
"kompyuter korpuslari",
"avtomobil ehtiyot qismlari",
"konvektorlar",
"koriklar",
"kosmetika jihozlari",
"ko‘chat o‘tqazish va ekish jihozlari",
"ko‘chma akustika vositalari",
"ko‘chma changyutgichlar",
"ko‘chma printerlar",
"ko‘zlar",
"lerom",
"maktab to'plamlari",
"avtomobil karcherlari",
"mangallar, o‘choqlar",
"manikyur uchun instrumentlar",
"massajyorlar",
"maysalar parvarishi",
"mikrofonlar",
"mikroto‘lqinli pechlar va pechlar",
"miskalar",
"modemlar",
"monobloklar",
"moyli isitgichlar",
"avtomobil kompressorlari",
"multivarkalar",
"muzlatgichlar",
"namlantirish",
"niqoblar",
"noutbuk tagliklari",
"noutbuk uchun sumkalar",
"noutbuklar",
"noutbuklarni zaryadlovchi qurilmalar",
"o't oldirish simlari",
"o'yin kreslolari",
"avtomobil kuzuflari",
"ofis jihozlari uchun sarflanadigan materiallar",
"ofis to‘plami",
"olmos rasmlari",
"operativ xotira (ddr)",
"oshxona anjomlari",
"oshxona anjomlari uchun organayzerlar",
"oshxona idish-tovoqlari",
"oshxona mashinalari va kombaynlari",
"oshxona plitalari",
"oshxona tarozisi",
"avtomobil monitorlari",
"oshxona to'plamlari",
"otvyortkalar",
"oyna tozalagichlar",
"oziq-ovqat idishlari",
"oziqlantirib namlantiruvchi",
"o‘lchov asboblari",
"o‘rnatiladigan maishiy texnika",
"o‘zbek adabiyoti",
"parfyum sovun",
"parfyumni tozalash vositasi",
"avtomobil moylari",
"patchi",
"payvandlash uskunalari",
"pazllar",
"penal va komod",
"penal va televizor qo‘yish mebeli",
"perforatorlar, otboyka bolg‘alari",
"pichoq va taxtachalar",
"pishiriqlar uchun qoliplar",
"playstation",
"pnevmatik jihozlar",
"avtomobil o'rindiqlari",
"pol tarozilari",
"pol va stol chiroqlari",
"portmone va hamyonlar",
"printerlar va skanerlar",
"protsessorlar",
"proyektorlar",
"pul (kupyura) sanash mashinalari",
"qattiq disklar, ssd va tarmoq adapterlari",
"qizlar uchun svitshotlar",
"qoshlar",
"avtomobil shinalari",
"qozonlar",
"qo‘l uchun",
"quloqchinlar",
"qurilish fenlari",
"quvvat manbalari",
"quyosh nurlariga qarshi",
"radiotelefonlar",
"rahbariyat uchun kreslolar",
"raqamli rasmlar",
"rul boshqaruvi",
"akkumulyatorlar va zaryadlovchi qurilmalar",
"avtomobil uchun hushbo'ylantirgichlar",
"slonim mebel",
"salon uchun jihozlar",
"sariyog‘ idishlari",
"sendvich pishiruvchilar",
"shakar idishlari",
"shamdonlar",
"shampun",
"sharbat chiqargichlar",
"shatura",
"shifobaxsh yog‘lar",
"avtomobil uchun telefon tutqichlari",
"shinalar va g'ildiraklar uchun parvarish",
"shisha qopqoqlar",
"shlifovka uskunalari",
"shoxqirqqichlar, butaqirqqichlar, tokqaychilar",
"sichqoncha tagliklari",
"simsiz quloqchinlar",
"simsiz quvvatlagichlar",
"skanerlar",
"smart-soat va fitnes-brasletlar uchun aksessuarlar",
"smart-soatlar",
"avtomobillar uchun chiqarish tizimi",
"smartfonlar",
"soatlar",
"soch olish mashinkalari va trimmerlar",
"soch uchun niqob",
"sovundonlar va tokchalar",
"sovutgichlar",
"sport oziq-ovqatlari",
"sport qo'lqoplari",
"sport va sayohat uchun butilkalar",
"stanok jihozlari",
"avtovizitkalar",
"stendlar va ushlagichlar",
"stiluslar",
"stol kompyuterlari",
"stol xizmati",
"suv isitgichlari",
"suv kulerlari",
"suv nasoslari",
"tarmoq adapterlari",
"tarmoq filtrlari va uzatma kabellari",
"tarmoq kalitlari",
"ayollar kiyimlari",
"tashqi akkumulyator (power bank)",
"ta’mir uchun mayda asbob-anjomlar",
"telefon g‘iloflari",
"telefon uchun himoya oynalari",
"telefon uchun stabilizatorlar",
"telefonlar va planshetlar uchun tagliklar",
"terma chinni idishlar",
"termoidishlar",
"termometrlar",
"tikuv mashinalari",
"ayollar oyoq-kiyimlari",
"tizim platalari",
"tosterlar",
"tovalar",
"tozalovchi",
"tugmachali telefonlar",
"tunchiroqlar va krovat yoni chiroqlari",
"tyuning va tashqi dekor",
"usb fleshkalar",
"uzatma simlari va adapterlar",
"uzluksiz quvvat manbalari",
"ayollar pardozi",
"vafli pishirgichlar",
"veb-kameralar",
"videokameralar",
"videokartalar",
"videokonferensiya tizimlari",
"videokuzatuv kameralari",
"wi-fi adapterlari",
"wi-fi kuchaytirgichlari",
"wifi routerlar (optika)",
"xodimlar uchun kreslolar",
"ayollar sumkalari va ryukzaklari",
"xotira kartalari",
"yog‘och kesish jihozlari",
"yoshartiruvchi",
"yumshoq mebel to‘plamlari",
"yuviladigan kiyimlar uchun korzinalar",
"yuvish va avtotozalash vositalari",
"yuz uchun",
"zamin qoplamalari",
"zamonaviy o‘zbek adabiyoti",
"zaryadlovchi qurilmalar va usb kabellari",
"ayollar uchun futbolkalar",
"ziravorlar uchun aksessuarlar",
"akkumulyatory",
"aksessuary",
"antifriz",
"avto signalizatsiya",
"avtomobil koriklari",
"biznes kitoblari",
"bolalar samokati",
"bug' generatorlari",
"chay va kofe",
"bankalar va maxsus idishlar",
"dazmol taxtalari",
"dekoratsiyalar",
"elektr generatorlari",
"hovuzlar",
"kiyim quritgichlar",
"klaviatura va sichqoncha",
"kompyuter kolonkalari",
"konditsionerlar",
"monitorlar",
"namlagichlar",
"alifbo",
"barbekyu uchun panjaralar",
"narvon",
"planshetlar",
"rele va stabilizatorlar",
"shirinliklar",
"soch uchun gellar",
"televizor aksessuarlar",
"televizorlar",
"videoregistratorlar",
"ароматы для дома",
"barcha oila uchun kiyim-kechak",
"begona o‘tlarni yulgichlar",
"belkuraklar",
"bijuteriya",
"bio-kaminlar",
"bir zumda chop etuvchi fotoapparatlar",
"blenderlar va mikserlar",
"bluetooth-garnituralar",
"bolalar arg'imchoqlari",
"aqlli dinamiklar",
"bolalar kalyaskalari",
"bolalar kitoblari",
"bolalar moshinalari",
"bolalar o'yinchog'lari",
"bolalar tagliklari",
"bolalar velosipedlari",
"bo‘yash uchun jihozlar",
"bug‘ qozonlari",
"bug‘li tozalagichlar",
"burchak divanlar to‘plami",
"aqlli yoritish to‘plamlari va aksessuarlari",
"changyutgichlar",
"chemodanlar",
"chiqindi chelaklari va baklari",
"cho'tkalar, changyutgichlar, moplar, cho'tkalar",
"cho'tkalar, gubkalar, salfetkalar",
"choynaklar",
"dazmollar va aksessuarlar",
"dispenserlar, dozatorlar",
"divanlar",
"domofonlar",
"archa o‘yinchoqlari va bezaklari",
"drellar va shurupovertlar",
"dudbo‘ronlar",
"dudoqlar",
"dush tizimlari",
"era",
"ekshn-kameralar",
"elektr arralar",
"elektr choynaklar",
"elektr skuterlari",
"elektrik tish cho‘tkalari",
"asboblarni saqlash",
"epilatorlar va ayollar elektr soqollari",
"erkaklar kiyimlari",
"erkaklar sumkalari va ryukzaklari",
"erkaklar uchun elektr ustaralar",
"fenlar va sochni to'g'rilash vositasini",
"fitnes anjomlari",
"fitnes-bilaguzuklar",
"fotoapparatlar",
"fri pishiruvchi",
"futbolkalar",
"atirlar",
"g'ildirak disklari",
"gaz qozonlari",
"gorka",
"go‘shtqiymalagichlar",
"grafik planshetlar",
"gril",
"gril uchun aksessuarlar",
"halqali lampalar va telefonlar uchun tripodlar",
"halqasimon chiroqlar",
"hammom uchun kranlar",
"avtochiroq",
"hammom va hojatxona uchun mahsulotlar",
"hammom va oshxona uchun tashkilotchilar",
"harakat va eshik ochilishi datchiklari",
"hasharotlar va kemiruvchilarga qarshi vositalar",
"havo kompressorlari",
"hi-tech oshxona mebellari",
"hujjatli kitoblar",
"ip-telefonlar",
"ichimliklar uchun idishlar",
"idish yuvish mashinalari"
] |
blackhole-boys/recommendation-system-v1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Recommendation-system
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8203
- Accuracy: 0.6505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7295 | 1.0 | 612 | 3.7145 | 0.4203 |
| 2.6027 | 2.0 | 1224 | 2.7499 | 0.5296 |
| 2.1701 | 3.0 | 1836 | 2.2983 | 0.5803 |
| 1.8428 | 4.0 | 2448 | 2.0223 | 0.6222 |
| 1.7442 | 5.0 | 3060 | 1.8794 | 0.6442 |
| 1.6609 | 6.0 | 3672 | 1.8203 | 0.6505 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"adsl modemlari (telefon tarmog‘i uchun)",
"abiturient kitoblari",
"akkumulyatorlar va zaryadlovchi qurilmalar",
"alifbo",
"aqlli dinamiklar",
"aqlli yoritish to‘plamlari va aksessuarlari",
"archa o‘yinchoqlari va bezaklari",
"asboblarni saqlash",
"atirlar",
"avtochiroq",
"avtomobil breloklari uchun gʻiloflar",
"avtomobil diskalari",
"avtomobil ehtiyot qismlari",
"avtomobil karcherlari",
"avtomobil kompressorlari",
"avtomobil kuzuflari",
"avtomobil monitorlari",
"avtomobil moylari",
"avtomobil o'rindiqlari",
"avtomobil shinalari",
"avtomobil uchun hushbo'ylantirgichlar",
"avtomobil uchun telefon tutqichlari",
"avtomobillar uchun chiqarish tizimi",
"avtovizitkalar",
"ayollar kiyimlari",
"ayollar oyoq-kiyimlari",
"ayollar pardozi",
"ayollar sumkalari va ryukzaklari",
"ayollar uchun futbolkalar",
"bankalar va maxsus idishlar",
"barbekyu uchun panjaralar",
"barcha oila uchun kiyim-kechak",
"begona o‘tlarni yulgichlar",
"belkuraklar",
"bijuteriya",
"bio-kaminlar",
"bir zumda chop etuvchi fotoapparatlar",
"blenderlar va mikserlar",
"bluetooth-garnituralar",
"bolalar arg'imchoqlari",
"bolalar kalyaskalari",
"bolalar kitoblari",
"bolalar moshinalari",
"bolalar o'yinchog'lari",
"bolalar tagliklari",
"bolalar velosipedlari",
"bo‘yash uchun jihozlar",
"bug‘ qozonlari",
"bug‘li tozalagichlar",
"burchak divanlar to‘plami",
"changyutgichlar",
"chemodanlar",
"chiqindi chelaklari va baklari",
"cho'tkalar, changyutgichlar, moplar, cho'tkalar",
"cho'tkalar, gubkalar, salfetkalar",
"choynaklar",
"dazmollar va aksessuarlar",
"dispenserlar, dozatorlar",
"divanlar",
"domofonlar",
"drellar va shurupovertlar",
"dudbo‘ronlar",
"dudoqlar",
"dush tizimlari",
"era",
"ekshn-kameralar",
"elektr arralar",
"elektr choynaklar",
"elektr skuterlari",
"elektrik tish cho‘tkalari",
"epilatorlar va ayollar elektr soqollari",
"erkaklar kiyimlari",
"erkaklar sumkalari va ryukzaklari",
"erkaklar uchun elektr ustaralar",
"fenlar va sochni to'g'rilash vositasini",
"fitnes anjomlari",
"fitnes-bilaguzuklar",
"fotoapparatlar",
"fri pishiruvchi",
"futbolkalar",
"g'ildirak disklari",
"gaz qozonlari",
"gorka",
"go‘shtqiymalagichlar",
"grafik planshetlar",
"gril",
"gril uchun aksessuarlar",
"halqali lampalar va telefonlar uchun tripodlar",
"halqasimon chiroqlar",
"hammom uchun kranlar",
"hammom va hojatxona uchun mahsulotlar",
"hammom va oshxona uchun tashkilotchilar",
"harakat va eshik ochilishi datchiklari",
"hasharotlar va kemiruvchilarga qarshi vositalar",
"havo kompressorlari",
"hi-tech oshxona mebellari",
"hujjatli kitoblar",
"ip-telefonlar",
"ichimliklar uchun idishlar",
"idish yuvish mashinalari",
"idishlarni yuvish uchun suyuqliklar",
"infraqizil isitgichlar",
"intim gigiyena",
"intim tovarlar",
"issiqlik pardalari",
"issiqlik pushkalari",
"issiqlik ventilyatorlari",
"jahon adabiyoti",
"jildlar va organayzerlar",
"kabellar, adapterlar",
"kaminlar",
"kastryulkalar",
"kastryullar",
"kir yuvish, quritish uskunalari va aksessuarlari",
"klassik oshxona mebellari",
"kofe mashinalari va maydalagichlar",
"komod",
"kompleks yog‘lar",
"kompyuter korpus kulerlari",
"kompyuter korpuslari",
"konvektorlar",
"koriklar",
"kosmetika jihozlari",
"ko‘chat o‘tqazish va ekish jihozlari",
"ko‘chma akustika vositalari",
"ko‘chma changyutgichlar",
"ko‘chma printerlar",
"ko‘zlar",
"lerom",
"maktab to'plamlari",
"mangallar, o‘choqlar",
"manikyur uchun instrumentlar",
"massajyorlar",
"maysalar parvarishi",
"mikrofonlar",
"mikroto‘lqinli pechlar va pechlar",
"miskalar",
"modemlar",
"monobloklar",
"moyli isitgichlar",
"multivarkalar",
"muzlatgichlar",
"namlantirish",
"niqoblar",
"noutbuk tagliklari",
"noutbuk uchun sumkalar",
"noutbuklar",
"noutbuklarni zaryadlovchi qurilmalar",
"o't oldirish simlari",
"o'yin kreslolari",
"ofis jihozlari uchun sarflanadigan materiallar",
"ofis to‘plami",
"olmos rasmlari",
"operativ xotira (ddr)",
"oshxona anjomlari",
"oshxona anjomlari uchun organayzerlar",
"oshxona idish-tovoqlari",
"oshxona mashinalari va kombaynlari",
"oshxona plitalari",
"oshxona tarozisi",
"oshxona to'plamlari",
"otvyortkalar",
"oyna tozalagichlar",
"oziq-ovqat idishlari",
"oziqlantirib namlantiruvchi",
"o‘lchov asboblari",
"o‘rnatiladigan maishiy texnika",
"o‘zbek adabiyoti",
"parfyum sovun",
"parfyumni tozalash vositasi",
"patchi",
"payvandlash uskunalari",
"pazllar",
"penal va komod",
"penal va televizor qo‘yish mebeli",
"perforatorlar, otboyka bolg‘alari",
"pichoq va taxtachalar",
"pishiriqlar uchun qoliplar",
"playstation",
"pnevmatik jihozlar",
"pol tarozilari",
"pol va stol chiroqlari",
"portmone va hamyonlar",
"printerlar va skanerlar",
"protsessorlar",
"proyektorlar",
"pul (kupyura) sanash mashinalari",
"qattiq disklar, ssd va tarmoq adapterlari",
"qizlar uchun svitshotlar",
"qoshlar",
"qozonlar",
"qo‘l uchun",
"quloqchinlar",
"qurilish fenlari",
"quvvat manbalari",
"quyosh nurlariga qarshi",
"radiotelefonlar",
"rahbariyat uchun kreslolar",
"raqamli rasmlar",
"rul boshqaruvi",
"slonim mebel",
"salon uchun jihozlar",
"sariyog‘ idishlari",
"sendvich pishiruvchilar",
"shakar idishlari",
"shamdonlar",
"shampun",
"sharbat chiqargichlar",
"shatura",
"shifobaxsh yog‘lar",
"shinalar va g'ildiraklar uchun parvarish",
"shisha qopqoqlar",
"shlifovka uskunalari",
"shoxqirqqichlar, butaqirqqichlar, tokqaychilar",
"sichqoncha tagliklari",
"simsiz quloqchinlar",
"simsiz quvvatlagichlar",
"skanerlar",
"smart-soat va fitnes-brasletlar uchun aksessuarlar",
"smart-soatlar",
"smartfonlar",
"soatlar",
"soch olish mashinkalari va trimmerlar",
"soch uchun niqob",
"sovundonlar va tokchalar",
"sovutgichlar",
"sport oziq-ovqatlari",
"sport qo'lqoplari",
"sport va sayohat uchun butilkalar",
"stanok jihozlari",
"stendlar va ushlagichlar",
"stiluslar",
"stol kompyuterlari",
"stol xizmati",
"suv isitgichlari",
"suv kulerlari",
"suv nasoslari",
"tarmoq adapterlari",
"tarmoq filtrlari va uzatma kabellari",
"tarmoq kalitlari",
"tashqi akkumulyator (power bank)",
"ta’mir uchun mayda asbob-anjomlar",
"telefon g‘iloflari",
"telefon uchun himoya oynalari",
"telefon uchun stabilizatorlar",
"telefonlar va planshetlar uchun tagliklar",
"terma chinni idishlar",
"termoidishlar",
"termometrlar",
"tikuv mashinalari",
"tizim platalari",
"tosterlar",
"tovalar",
"tozalovchi",
"tugmachali telefonlar",
"tunchiroqlar va krovat yoni chiroqlari",
"tyuning va tashqi dekor",
"usb fleshkalar",
"uzatma simlari va adapterlar",
"uzluksiz quvvat manbalari",
"vafli pishirgichlar",
"veb-kameralar",
"videokameralar",
"videokartalar",
"videokonferensiya tizimlari",
"videokuzatuv kameralari",
"wi-fi adapterlari",
"wi-fi kuchaytirgichlari",
"wifi routerlar (optika)",
"xodimlar uchun kreslolar",
"xotira kartalari",
"yog‘och kesish jihozlari",
"yoshartiruvchi",
"yumshoq mebel to‘plamlari",
"yuviladigan kiyimlar uchun korzinalar",
"yuvish va avtotozalash vositalari",
"yuz uchun",
"zamin qoplamalari",
"zamonaviy o‘zbek adabiyoti",
"zaryadlovchi qurilmalar va usb kabellari",
"ziravorlar uchun aksessuarlar",
"akkumulyatory",
"aksessuary",
"antifriz",
"avto signalizatsiya",
"avtomobil koriklari",
"biznes kitoblari",
"bolalar samokati",
"bug' generatorlari",
"chay va kofe",
"dazmol taxtalari",
"dekoratsiyalar",
"elektr generatorlari",
"hovuzlar",
"kiyim quritgichlar",
"klaviatura va sichqoncha",
"kompyuter kolonkalari",
"konditsionerlar",
"monitorlar",
"namlagichlar",
"narvon",
"planshetlar",
"rele va stabilizatorlar",
"shirinliklar",
"soch uchun gellar",
"televizor aksessuarlar",
"televizorlar",
"videoregistratorlar",
"ароматы для дома"
] |
Lez94/classifier-posterior-glare-removal
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-posterior-glare-removal
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the classifier_posterior_glare_removal_256_crop_s1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4990
- Accuracy: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.626 | 0.8065 | 50 | 0.5622 | 0.7582 |
| 0.4848 | 1.6129 | 100 | 0.5952 | 0.6675 |
| 0.2195 | 2.4194 | 150 | 0.5258 | 0.8325 |
| 0.1967 | 3.2258 | 200 | 0.5911 | 0.7960 |
| 0.2945 | 4.0323 | 250 | 0.4966 | 0.8300 |
| 0.1866 | 4.8387 | 300 | 0.5222 | 0.8350 |
| 0.1211 | 5.6452 | 350 | 0.5328 | 0.8426 |
| 0.1666 | 6.4516 | 400 | 0.5545 | 0.8426 |
| 0.0737 | 7.2581 | 450 | 0.5327 | 0.8526 |
| 0.0314 | 8.0645 | 500 | 0.5208 | 0.8526 |
| 0.0329 | 8.8710 | 550 | 0.5773 | 0.8489 |
| 0.0497 | 9.6774 | 600 | 0.5994 | 0.8489 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"good",
"bad"
] |
crocutacrocuto/convnext-base-224-MEG_C-3
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
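Pending the authors' own snippet, a minimal inference sketch (assuming the checkpoint works with the standard image-classification pipeline; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="crocutacrocuto/convnext-base-224-MEG_C-3")
print(classifier("camera_trap_frame.jpg"))  # placeholder path to a local image
```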
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"bird",
"black-and-white colobus",
"blue duiker",
"buffalo",
"bushbuck",
"bushpig",
"cattle",
"chimpanzee",
"civet_genet",
"dog_jackal",
"elephant",
"galago_potto",
"goat",
"golden cat",
"gorilla",
"guineafowl",
"honey badger",
"leopard",
"mandrill",
"mongoose",
"monkey",
"olive baboon",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirel",
"water chevrotain",
"yellow-backed duiker"
] |
Glainez/waste_encoder_II
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
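Pending the authors' own snippet, a hedged sketch using the lower-level API (assuming a standard image-classification head; the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("Glainez/waste_encoder_II")
model = AutoModelForImageClassification.from_pretrained("Glainez/waste_encoder_II")

image = Image.open("waste_item.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map the highest-scoring logit back to its class name.
print(model.config.id2label[logits.argmax(-1).item()])
```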
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"0",
"1",
"10",
"11",
"12",
"13",
"14",
"15",
"16",
"17",
"18",
"19",
"2",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"27",
"28",
"29",
"3",
"30",
"31",
"32",
"33",
"34",
"35",
"36",
"37",
"38",
"39",
"4",
"40",
"41",
"42",
"43",
"44",
"45",
"46",
"47",
"48",
"49",
"5",
"50",
"51",
"52",
"53",
"54",
"55",
"56",
"57",
"58",
"59",
"6",
"60",
"61",
"62",
"63",
"64",
"65",
"66",
"67",
"68",
"69",
"7",
"70",
"71",
"72",
"73",
"74",
"75",
"76",
"77",
"78",
"79",
"8",
"80",
"81",
"82",
"83",
"84",
"85",
"86",
"87",
"88",
"89",
"9",
"90",
"91",
"92",
"93",
"94",
"95",
"96",
"97",
"98",
"99"
] |
Ryukijano/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1798
- Accuracy: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 512
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 12 | 2.6101 | 0.5223 |
| No log | 2.0 | 24 | 1.7190 | 0.8227 |
| No log | 3.0 | 36 | 1.0833 | 0.8890 |
| No log | 4.0 | 48 | 0.7011 | 0.9120 |
| No log | 5.0 | 60 | 0.5052 | 0.9242 |
| No log | 6.0 | 72 | 0.4097 | 0.9310 |
| No log | 7.0 | 84 | 0.3560 | 0.9350 |
| No log | 8.0 | 96 | 0.3237 | 0.9337 |
| 1.1364 | 9.0 | 108 | 0.3008 | 0.9378 |
| 1.1364 | 10.0 | 120 | 0.2833 | 0.9364 |
| 1.1364 | 11.0 | 132 | 0.2694 | 0.9391 |
| 1.1364 | 12.0 | 144 | 0.2586 | 0.9391 |
| 1.1364 | 13.0 | 156 | 0.2498 | 0.9418 |
| 1.1364 | 14.0 | 168 | 0.2423 | 0.9405 |
| 1.1364 | 15.0 | 180 | 0.2359 | 0.9405 |
| 1.1364 | 16.0 | 192 | 0.2303 | 0.9459 |
| 0.2326 | 17.0 | 204 | 0.2259 | 0.9405 |
| 0.2326 | 18.0 | 216 | 0.2222 | 0.9405 |
| 0.2326 | 19.0 | 228 | 0.2178 | 0.9432 |
| 0.2326 | 20.0 | 240 | 0.2146 | 0.9445 |
| 0.2326 | 21.0 | 252 | 0.2114 | 0.9432 |
| 0.2326 | 22.0 | 264 | 0.2087 | 0.9445 |
| 0.2326 | 23.0 | 276 | 0.2061 | 0.9432 |
| 0.2326 | 24.0 | 288 | 0.2040 | 0.9459 |
| 0.1651 | 25.0 | 300 | 0.2018 | 0.9459 |
| 0.1651 | 26.0 | 312 | 0.2000 | 0.9445 |
| 0.1651 | 27.0 | 324 | 0.1985 | 0.9459 |
| 0.1651 | 28.0 | 336 | 0.1968 | 0.9472 |
| 0.1651 | 29.0 | 348 | 0.1948 | 0.9459 |
| 0.1651 | 30.0 | 360 | 0.1939 | 0.9459 |
| 0.1651 | 31.0 | 372 | 0.1924 | 0.9459 |
| 0.1651 | 32.0 | 384 | 0.1915 | 0.9459 |
| 0.1651 | 33.0 | 396 | 0.1909 | 0.9459 |
| 0.134 | 34.0 | 408 | 0.1894 | 0.9472 |
| 0.134 | 35.0 | 420 | 0.1883 | 0.9459 |
| 0.134 | 36.0 | 432 | 0.1877 | 0.9472 |
| 0.134 | 37.0 | 444 | 0.1866 | 0.9486 |
| 0.134 | 38.0 | 456 | 0.1863 | 0.9472 |
| 0.134 | 39.0 | 468 | 0.1851 | 0.9486 |
| 0.134 | 40.0 | 480 | 0.1843 | 0.9472 |
| 0.134 | 41.0 | 492 | 0.1837 | 0.9472 |
| 0.1128 | 42.0 | 504 | 0.1831 | 0.9459 |
| 0.1128 | 43.0 | 516 | 0.1828 | 0.9472 |
| 0.1128 | 44.0 | 528 | 0.1822 | 0.9472 |
| 0.1128 | 45.0 | 540 | 0.1816 | 0.9472 |
| 0.1128 | 46.0 | 552 | 0.1808 | 0.9459 |
| 0.1128 | 47.0 | 564 | 0.1804 | 0.9459 |
| 0.1128 | 48.0 | 576 | 0.1802 | 0.9459 |
| 0.1128 | 49.0 | 588 | 0.1796 | 0.9459 |
| 0.0999 | 50.0 | 600 | 0.1793 | 0.9472 |
| 0.0999 | 51.0 | 612 | 0.1792 | 0.9486 |
| 0.0999 | 52.0 | 624 | 0.1787 | 0.9472 |
| 0.0999 | 53.0 | 636 | 0.1784 | 0.9472 |
| 0.0999 | 54.0 | 648 | 0.1780 | 0.9459 |
| 0.0999 | 55.0 | 660 | 0.1778 | 0.9445 |
| 0.0999 | 56.0 | 672 | 0.1772 | 0.9445 |
| 0.0999 | 57.0 | 684 | 0.1769 | 0.9472 |
| 0.0999 | 58.0 | 696 | 0.1768 | 0.9472 |
| 0.0894 | 59.0 | 708 | 0.1766 | 0.9472 |
| 0.0894 | 60.0 | 720 | 0.1763 | 0.9472 |
| 0.0894 | 61.0 | 732 | 0.1762 | 0.9486 |
| 0.0894 | 62.0 | 744 | 0.1760 | 0.9472 |
| 0.0894 | 63.0 | 756 | 0.1755 | 0.9459 |
| 0.0894 | 64.0 | 768 | 0.1752 | 0.9459 |
| 0.0894 | 65.0 | 780 | 0.1749 | 0.9459 |
| 0.0894 | 66.0 | 792 | 0.1749 | 0.9459 |
| 0.0828 | 67.0 | 804 | 0.1746 | 0.9472 |
| 0.0828 | 68.0 | 816 | 0.1745 | 0.9459 |
| 0.0828 | 69.0 | 828 | 0.1745 | 0.9459 |
| 0.0828 | 70.0 | 840 | 0.1744 | 0.9459 |
| 0.0828 | 71.0 | 852 | 0.1740 | 0.9459 |
| 0.0828 | 72.0 | 864 | 0.1741 | 0.9459 |
| 0.0828 | 73.0 | 876 | 0.1737 | 0.9459 |
| 0.0828 | 74.0 | 888 | 0.1739 | 0.9459 |
| 0.0778 | 75.0 | 900 | 0.1739 | 0.9459 |
| 0.0778 | 76.0 | 912 | 0.1737 | 0.9459 |
| 0.0778 | 77.0 | 924 | 0.1735 | 0.9459 |
| 0.0778 | 78.0 | 936 | 0.1733 | 0.9459 |
| 0.0778 | 79.0 | 948 | 0.1732 | 0.9459 |
| 0.0778 | 80.0 | 960 | 0.1732 | 0.9459 |
| 0.0778 | 81.0 | 972 | 0.1730 | 0.9459 |
| 0.0778 | 82.0 | 984 | 0.1730 | 0.9459 |
| 0.0778 | 83.0 | 996 | 0.1730 | 0.9459 |
| 0.0738 | 84.0 | 1008 | 0.1729 | 0.9459 |
| 0.0738 | 85.0 | 1020 | 0.1727 | 0.9459 |
| 0.0738 | 86.0 | 1032 | 0.1726 | 0.9459 |
| 0.0738 | 87.0 | 1044 | 0.1726 | 0.9459 |
| 0.0738 | 88.0 | 1056 | 0.1726 | 0.9459 |
| 0.0738 | 89.0 | 1068 | 0.1726 | 0.9459 |
| 0.0738 | 90.0 | 1080 | 0.1725 | 0.9459 |
| 0.0738 | 91.0 | 1092 | 0.1724 | 0.9459 |
| 0.0715 | 92.0 | 1104 | 0.1724 | 0.9459 |
| 0.0715 | 93.0 | 1116 | 0.1723 | 0.9459 |
| 0.0715 | 94.0 | 1128 | 0.1723 | 0.9459 |
| 0.0715 | 95.0 | 1140 | 0.1723 | 0.9459 |
| 0.0715 | 96.0 | 1152 | 0.1722 | 0.9459 |
| 0.0715 | 97.0 | 1164 | 0.1722 | 0.9459 |
| 0.0715 | 98.0 | 1176 | 0.1722 | 0.9459 |
| 0.0715 | 99.0 | 1188 | 0.1722 | 0.9459 |
| 0.0701 | 100.0 | 1200 | 0.1722 | 0.9459 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
sha000/ppmr-classifier-google-vit-base-patch16-224-in21k-5-epochs
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
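Pending the authors' own snippet, a minimal sketch (assuming pipeline compatibility; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="sha000/ppmr-classifier-google-vit-base-patch16-224-in21k-5-epochs",
)
print(classifier("sample.jpg"))  # returns scores for the "control" and "ppmr" labels
```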
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"control",
"ppmr"
] |
tiesenx14/vit-base-patch16-224-car-angle
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
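Pending the authors' own snippet, a minimal sketch (assuming pipeline compatibility; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="tiesenx14/vit-base-patch16-224-car-angle")
# top_k=3 returns the three most likely viewing angles for the photo
print(classifier("car_photo.jpg", top_k=3))  # placeholder path
```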
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"ben_phai_dau_xe",
"ben_phai_duoi_xe",
"ben_phai_xe",
"ben_trai_dau_xe",
"ben_trai_duoi_xe",
"ben_trai_xe",
"chinh_dien_dau_xe",
"chinh_dien_duoi_xe",
"khong_xac_dinh"
] |
itsTomLie/Jaundice_Classifier
|
## Model Usage
```python
import gradio as gr
import numpy as np
from PIL import Image
from transformers import pipeline

# Load the classifier once at startup so the model is not reloaded on every request.
pipe = pipeline("image-classification", model="itsTomLie/Jaundice_Classifier")

def predict_image(image):
    # Gradio passes the upload as a numpy array; a file path is also accepted.
    if isinstance(image, np.ndarray):
        image = Image.fromarray(image.astype("uint8"))
    elif isinstance(image, str):
        image = Image.open(image)
    result = pipe(image)
    label = result[0]["label"]
    confidence = result[0]["score"]
    print(f"Prediction: {label}, Confidence: {confidence}")
    return label, confidence

interface = gr.Interface(
    fn=predict_image,
    inputs=gr.Image(type="numpy", label="Upload an Image"),
    outputs=[gr.Textbox(label="Prediction"), gr.Textbox(label="Confidence")],
)

interface.launch(debug=True)
```
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"jaundiced eyes",
"normal eyes"
] |
gerbejon/webpage_labeling_classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# webpage_labeling_classifier
This model is a fine-tuned version of [gerbejon/webpage_labeling_classifier](https://huggingface.co/gerbejon/webpage_labeling_classifier) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1555
- Accuracy: 0.9416
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.2002 | 0.9968 | 78 | 0.1917 | 0.9281 |
| 0.2191 | 1.9936 | 156 | 0.2132 | 0.9097 |
| 0.2067 | 2.9904 | 234 | 0.2522 | 0.9065 |
| 0.1751 | 4.0 | 313 | 0.1931 | 0.9217 |
| 0.1346 | 4.9968 | 391 | 0.1933 | 0.9241 |
| 0.1448 | 5.9936 | 469 | 0.1816 | 0.9313 |
| 0.1389 | 6.9904 | 547 | 0.2027 | 0.9209 |
| 0.1387 | 8.0 | 626 | 0.1696 | 0.9384 |
| 0.1234 | 8.9968 | 704 | 0.1758 | 0.9345 |
| 0.1196 | 9.9936 | 782 | 0.1848 | 0.9305 |
| 0.1213 | 10.9904 | 860 | 0.1769 | 0.9400 |
| 0.1287 | 12.0 | 939 | 0.1421 | 0.9488 |
| 0.117 | 12.9968 | 1017 | 0.2046 | 0.9241 |
| 0.1433 | 13.9936 | 1095 | 0.1769 | 0.9369 |
| 0.0988 | 14.9904 | 1173 | 0.1494 | 0.9496 |
| 0.1136 | 16.0 | 1252 | 0.1571 | 0.9424 |
| 0.086 | 16.9968 | 1330 | 0.1712 | 0.9384 |
| 0.089 | 17.9936 | 1408 | 0.1437 | 0.9440 |
| 0.0991 | 18.9904 | 1486 | 0.1510 | 0.9448 |
| 0.0824 | 19.9361 | 1560 | 0.1555 | 0.9416 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"interesting",
"uninteresting"
] |
crocutacrocuto/convnext-base-224-MEGW_C-3
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
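Pending the authors' own snippet, the same hedged pipeline sketch applies to this checkpoint (placeholder paths; passing a list runs batched inference):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="crocutacrocuto/convnext-base-224-MEGW_C-3")
print(classifier(["frame_001.jpg", "frame_002.jpg"]))  # placeholder paths
```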
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"bird",
"black-and-white colobus",
"blue duiker",
"buffalo",
"bushbuck",
"bushpig",
"cattle",
"chimpanzee",
"civet_genet",
"dog_jackal",
"elephant",
"galago_potto",
"goat",
"golden cat",
"gorilla",
"guineafowl",
"honey badger",
"leopard",
"mandrill",
"mongoose",
"monkey",
"olive baboon",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirel",
"water chevrotain",
"yellow-backed duiker"
] |
hamnabint/swin-transformer-results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-transformer-results
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8055
- Accuracy: 0.6794
- F1: 0.6810
- Precision: 0.6904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|
| 1.0686 | 0.1952 | 500 | 1.0585 | 0.5266 | 0.5042 | 0.5355 |
| 1.3283 | 0.3903 | 1000 | 1.0015 | 0.5722 | 0.5794 | 0.6006 |
| 0.991 | 0.5855 | 1500 | 0.9601 | 0.5828 | 0.5865 | 0.6194 |
| 0.7919 | 0.7806 | 2000 | 0.9066 | 0.6135 | 0.6191 | 0.6580 |
| 0.9748 | 0.9758 | 2500 | 0.8327 | 0.6460 | 0.6443 | 0.6458 |
| 0.7183 | 1.1710 | 3000 | 0.8808 | 0.6421 | 0.6419 | 0.6638 |
| 0.769 | 1.3661 | 3500 | 0.8454 | 0.6526 | 0.6483 | 0.6553 |
| 0.8558 | 1.5613 | 4000 | 0.8773 | 0.6482 | 0.6364 | 0.6454 |
| 0.6713 | 1.7564 | 4500 | 0.8338 | 0.6561 | 0.6560 | 0.6711 |
| 0.7476 | 1.9516 | 5000 | 0.8083 | 0.6632 | 0.6636 | 0.6690 |
| 0.6896 | 2.1468 | 5500 | 0.8055 | 0.6794 | 0.6810 | 0.6904 |
| 0.648 | 2.3419 | 6000 | 0.8252 | 0.6697 | 0.6726 | 0.6822 |
| 0.5969 | 2.5371 | 6500 | 0.8179 | 0.6697 | 0.6676 | 0.6661 |
| 0.7098 | 2.7322 | 7000 | 0.8139 | 0.6724 | 0.6705 | 0.6698 |
| 0.5318 | 2.9274 | 7500 | 0.8033 | 0.6790 | 0.6783 | 0.6793 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"happy",
"sad",
"angry",
"neutral"
] |
krasuluk/vit-base-oxford-pets-krasuluk
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-pets-krasuluk
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2309
- Accuracy: 0.9364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3808 | 1.0 | 370 | 0.3351 | 0.9161 |
| 0.2093 | 2.0 | 740 | 0.2660 | 0.9147 |
| 0.1548 | 3.0 | 1110 | 0.2464 | 0.9202 |
| 0.1471 | 4.0 | 1480 | 0.2405 | 0.9269 |
| 0.12 | 5.0 | 1850 | 0.2379 | 0.9229 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
smartgmin/traynothein_resize_treeclasss
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# traynothein_resize_treeclasss
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0426
- Train Accuracy: 0.9814
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.0803
- Validation Accuracy: 0.9823
- Validation Top-3-accuracy: 1.0
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 504, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
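The serialized optimizer dictionary above corresponds, roughly, to the following Keras/transformers construction (a hedged reconstruction, not the authors' script):
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear warmdown of the learning rate over the full training run.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=504,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    weight_decay_rate=0.01,
)
```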
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.4021 | 0.8416 | 1.0 | 0.1892 | 0.9342 | 1.0 | 0 |
| 0.1232 | 0.9479 | 1.0 | 0.1078 | 0.9574 | 1.0 | 1 |
| 0.0852 | 0.9635 | 1.0 | 0.1014 | 0.9678 | 1.0 | 2 |
| 0.0597 | 0.9712 | 1.0 | 0.0798 | 0.9740 | 1.0 | 3 |
| 0.0549 | 0.9761 | 1.0 | 0.0891 | 0.9777 | 1.0 | 4 |
| 0.0485 | 0.9790 | 1.0 | 0.0754 | 0.9803 | 1.0 | 5 |
| 0.0426 | 0.9814 | 1.0 | 0.0803 | 0.9823 | 1.0 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"dr",
"cataract",
"normal"
] |
smartgmin/traynothein_resize_foreclasss
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# traynothein_resize_foreclasss
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0744
- Train Accuracy: 0.9404
- Train Top-3-accuracy: 0.9991
- Validation Loss: 0.2720
- Validation Accuracy: 0.9431
- Validation Top-3-accuracy: 0.9991
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 658, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.6708 | 0.7378 | 0.9752 | 0.4218 | 0.8246 | 0.9933 | 0 |
| 0.3109 | 0.8569 | 0.9956 | 0.3083 | 0.8754 | 0.9968 | 1 |
| 0.2024 | 0.8899 | 0.9975 | 0.2776 | 0.9011 | 0.9979 | 2 |
| 0.1370 | 0.9104 | 0.9982 | 0.2734 | 0.9170 | 0.9985 | 3 |
| 0.0996 | 0.9237 | 0.9986 | 0.2775 | 0.9288 | 0.9988 | 4 |
| 0.0814 | 0.9334 | 0.9989 | 0.2695 | 0.9372 | 0.9990 | 5 |
| 0.0744 | 0.9404 | 0.9991 | 0.2720 | 0.9431 | 0.9991 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"dr",
"cataract",
"glaucoma",
"normal"
] |
binbinao/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
sanali209/imclasif-genres-v001out
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imclasif-genres-v001out
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 312 | 0.4142 |
| 0.4142 | 2.0 | 624 | 0.3919 |
| 0.4142 | 3.0 | 936 | 0.3848 |
| 0.3329 | 4.0 | 1248 | 0.3849 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"3d renderer",
"combined",
"drawing",
"other",
"photo",
"pixel art",
"text"
] |
smartgmin/Entrenal_eyes_5clasess_withOther_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrenal_eyes_5clasess_withOther_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0845
- Train Accuracy: 0.9283
- Train Top-3-accuracy: 0.9936
- Validation Loss: 0.4386
- Validation Accuracy: 0.9313
- Validation Top-3-accuracy: 0.9940
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 847, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.8358 | 0.6703 | 0.9165 | 0.5139 | 0.7995 | 0.9693 | 0 |
| 0.3540 | 0.8366 | 0.9783 | 0.4737 | 0.8589 | 0.9835 | 1 |
| 0.2235 | 0.8749 | 0.9862 | 0.3874 | 0.8876 | 0.9884 | 2 |
| 0.1607 | 0.8972 | 0.9898 | 0.4559 | 0.9045 | 0.9908 | 3 |
| 0.1204 | 0.9109 | 0.9914 | 0.4410 | 0.9163 | 0.9921 | 4 |
| 0.0961 | 0.9208 | 0.9927 | 0.4393 | 0.9246 | 0.9932 | 5 |
| 0.0845 | 0.9283 | 0.9936 | 0.4386 | 0.9313 | 0.9940 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"dr",
"cataract",
"glaucoma",
"normal",
"other"
] |
raj777/vit-base-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-pets
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the rokmr/pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0765
- Accuracy: 0.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0428 | 1.7544 | 100 | 0.0765 | 0.98 |
| 0.0089 | 3.5088 | 200 | 0.0770 | 0.98 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"cat",
"dog",
"rabbit"
] |
MohamedFiyaz/indian-food-classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
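In the absence of repo-specific instructions, a minimal sketch using the 🤗 `image-classification` pipeline; that the checkpoint supports this pipeline is an assumption, and the image path is a placeholder:
```python
from transformers import pipeline

# Hedged example: assumes the checkpoint is a standard Hub image classifier.
classifier = pipeline("image-classification", model="MohamedFiyaz/indian-food-classifier")
print(classifier("food_photo.jpg"))  # "food_photo.jpg" is a placeholder path
```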
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14"
] |
nst-t/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nst-t/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results during training and on the validation set:
- Train Loss: 0.3636
- Validation Loss: 0.3328
- Train Accuracy: 0.922
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8054 | 1.6563 | 0.819 | 0 |
| 1.2309 | 0.8322 | 0.879 | 1 |
| 0.6857 | 0.5211 | 0.913 | 2 |
| 0.4820 | 0.4088 | 0.911 | 3 |
| 0.3636 | 0.3328 | 0.922 | 4 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
DouglasBraga/swin-tiny-patch4-window7-224-finetuned-leukemia-08-2024.v1.2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-leukemia-08-2024.v1.2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8305
- Accuracy: 0.7025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
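The card only records that training used the `imagefolder` loader; a hedged sketch of that loading pattern follows (the directory path is a placeholder, not from this repo):
```python
from datasets import load_dataset

# Sketch of the `imagefolder` loading pattern; "data/leukemia" is a placeholder.
dataset = load_dataset("imagefolder", data_dir="data/leukemia")
print(dataset["train"].features)
```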
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3704 | 0.9984 | 312 | 0.7881 | 0.6388 |
| 0.2726 | 2.0 | 625 | 0.8305 | 0.7025 |
| 0.2103 | 2.9952 | 936 | 0.8708 | 0.692 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
[
"all",
"hem"
] |
rukundob451/swin-tiny-patch4-window7-224-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-papsmear
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3057
- Accuracy: 0.8971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
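The effective batch size of 32 comes from 8-image batches accumulated over 4 gradient steps. A minimal PyTorch sketch of that pattern, where `model`, `train_loader`, `optimizer`, and `loss_fn` are assumed to exist (this is not the Trainer's internal code):
```python
# Gradient accumulation sketch: 8-image batches, 4 accumulation steps,
# effective batch size 32. The loss is scaled so gradients average correctly.
accum_steps = 4
optimizer.zero_grad()
for step, (images, labels) in enumerate(train_loader):
    loss = loss_fn(model(images), labels) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```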
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.4898 | 0.9935 | 38 | 1.3709 | 0.4632 |
| 0.8902 | 1.9869 | 76 | 0.9261 | 0.6324 |
| 0.9107 | 2.9804 | 114 | 0.8400 | 0.6397 |
| 0.564 | 4.0 | 153 | 0.6937 | 0.7279 |
| 0.5563 | 4.9935 | 191 | 0.5622 | 0.7647 |
| 0.3851 | 5.9869 | 229 | 0.5238 | 0.8015 |
| 0.3327 | 6.9804 | 267 | 0.6382 | 0.7941 |
| 0.2469 | 8.0 | 306 | 0.4330 | 0.8456 |
| 0.2903 | 8.9935 | 344 | 0.4212 | 0.8309 |
| 0.1861 | 9.9869 | 382 | 0.4140 | 0.8529 |
| 0.1533 | 10.9804 | 420 | 0.3810 | 0.8603 |
| 0.1017 | 12.0 | 459 | 0.3565 | 0.8603 |
| 0.1285 | 12.9935 | 497 | 0.3057 | 0.8971 |
| 0.1377 | 13.9869 | 535 | 0.3058 | 0.8824 |
| 0.1033 | 14.9020 | 570 | 0.3140 | 0.8824 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
AlCyede/berry-and-berrylike-fruit-classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
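As a starting point, a sketch under the assumption that this is a standard Hub image classifier (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "AlCyede/berry-and-berrylike-fruit-classification"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

# "berries.jpg" is a placeholder path.
image = Image.open("berries.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```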
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"blueberry",
"cherry",
"cherry rainier",
"grape blue",
"grape pink",
"grape white",
"huckleberry",
"mulberry",
"raspberry",
"redcurrant",
"tomato cherry red"
] |
platzi/platzi-vit-model-einoa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-einoa
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0381
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1295 | 3.8462 | 500 | 0.0381 | 0.9925 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
einoa04/human_action_recognition_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# human_action_recognition_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8069
- Accuracy: 0.0659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.3102 | 0.3175 | 500 | 3.5439 | 0.0761 |
| 0.9861 | 0.6349 | 1000 | 4.1324 | 0.065 |
| 0.8791 | 0.9524 | 1500 | 4.6708 | 0.0752 |
| 0.5281 | 1.2698 | 2000 | 5.0605 | 0.0980 |
| 0.4598 | 1.5873 | 2500 | 6.1627 | 0.0437 |
| 0.4733 | 1.9048 | 3000 | 5.6746 | 0.0754 |
| 0.2844 | 2.2222 | 3500 | 6.5390 | 0.0746 |
| 0.1697 | 2.5397 | 4000 | 6.9396 | 0.0537 |
| 0.1697 | 2.8571 | 4500 | 7.1644 | 0.0672 |
| 0.1013 | 3.1746 | 5000 | 7.4083 | 0.0619 |
| 0.0556 | 3.4921 | 5500 | 7.4283 | 0.0694 |
| 0.0338 | 3.8095 | 6000 | 7.8069 | 0.0659 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"calling",
"clapping",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music"
] |
djbp/NMM_Classification_base_V10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NMM_Classification_base_V10
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4066
- Accuracy: 0.8349
- Auc Overall: 0.9379
- Auc Class 0: 0.9614
- Auc Class 1: 0.9315
- Auc Class 2: 0.9207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 1.13.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"invalid",
"mid market",
"non mid market"
] |
AfiqN/convnext-tiny-2262-mango-disease-dropout
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
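A minimal, hedged sketch for trying the checkpoint with the `image-classification` pipeline, returning the three most likely classes (assumed usage; the image path is a placeholder):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="AfiqN/convnext-tiny-2262-mango-disease-dropout")
# "leaf.jpg" is a placeholder path.
for pred in clf("leaf.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```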
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"anthracnose",
"bacterial canker",
"cutting weevil",
"die back",
"gall midge",
"healthy",
"powdery mildew",
"sooty mould"
] |
awanicka/TransparentBagClassifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TransparentBagClassifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
- Accuracy: 0.9955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0694 | 1.0 | 158 | 0.0719 | 0.9821 |
| 0.0871 | 2.0 | 316 | 0.0411 | 0.9955 |
| 0.0561 | 3.0 | 474 | 0.0419 | 0.9910 |
| 0.0673 | 4.0 | 632 | 0.0424 | 0.9865 |
| 0.0099 | 5.0 | 790 | 0.0517 | 0.9821 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"non_transparent",
"transparent"
] |
smartgmin/Entrnal_eyes_data_4class_resize_224_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_eyes_data_4class_resize_224_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results during training and on the validation set:
- Train Loss: 0.0823
- Train Accuracy: 0.9261
- Train Top-3-accuracy: 0.9972
- Validation Loss: 0.2588
- Validation Accuracy: 0.9299
- Validation Top-3-accuracy: 0.9974
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 651, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.7993 | 0.6130 | 0.9518 | 0.5184 | 0.7611 | 0.9833 | 0 |
| 0.3482 | 0.8052 | 0.9881 | 0.3126 | 0.8382 | 0.9913 | 1 |
| 0.2260 | 0.8597 | 0.9929 | 0.2990 | 0.8739 | 0.9942 | 2 |
| 0.1576 | 0.8861 | 0.9949 | 0.2597 | 0.8954 | 0.9956 | 3 |
| 0.1191 | 0.9041 | 0.9960 | 0.2642 | 0.9106 | 0.9964 | 4 |
| 0.0933 | 0.9167 | 0.9967 | 0.2598 | 0.9216 | 0.9970 | 5 |
| 0.0823 | 0.9261 | 0.9972 | 0.2588 | 0.9299 | 0.9974 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"cataract",
"diabetic_retinopathy",
"glaucoma",
"normal"
] |
crocutacrocuto/dinov2-base-MEGWA-3
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
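A sketch only, assuming a DINOv2 backbone with a classification head as the repo name suggests; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "crocutacrocuto/dinov2-base-MEGWA-3"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

# "camera_trap.jpg" is a placeholder path.
inputs = processor(images=Image.open("camera_trap.jpg"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
top = probs.argmax().item()
print(model.config.id2label[top], round(probs[top].item(), 3))
```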
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"bird",
"black-and-white colobus",
"blue duiker",
"buffalo",
"bushbuck",
"bushpig",
"cattle",
"chimpanzee",
"civet_genet",
"dog_jackal",
"elephant",
"galago_potto",
"goat",
"golden cat",
"gorilla",
"guineafowl",
"honey badger",
"leopard",
"mandrill",
"mongoose",
"monkey",
"olive baboon",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirel",
"water chevrotain",
"yellow-backed duiker"
] |
smartgmin/Entrnal_eyes_data_5class_RVO_resize_224_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_eyes_data_5class_RVO_resize_224_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results during training and on the validation set:
- Train Loss: 0.0870
- Train Accuracy: 0.9372
- Train Top-3-accuracy: 0.9944
- Validation Loss: 0.2468
- Validation Accuracy: 0.9406
- Validation Top-3-accuracy: 0.9948
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 784, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.9323 | 0.6128 | 0.8870 | 0.4850 | 0.7838 | 0.9644 | 0 |
| 0.3507 | 0.8315 | 0.9758 | 0.3223 | 0.8593 | 0.9822 | 1 |
| 0.2174 | 0.8787 | 0.9858 | 0.2710 | 0.8925 | 0.9883 | 2 |
| 0.1573 | 0.9034 | 0.9899 | 0.3544 | 0.9108 | 0.9911 | 3 |
| 0.1231 | 0.9172 | 0.9920 | 0.2527 | 0.9235 | 0.9928 | 4 |
| 0.0963 | 0.9287 | 0.9934 | 0.2485 | 0.9333 | 0.9940 | 5 |
| 0.0870 | 0.9372 | 0.9944 | 0.2468 | 0.9406 | 0.9948 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"dp",
"rvo",
"cataract",
"glaucoma",
"normal"
] |
smartgmin/Entrnal_eyes_data_5class_RVO_newNormal_resize_224_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_eyes_data_5class_RVO_newNormal_resize_224_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results during training and on the validation set:
- Train Loss: 0.0885
- Train Accuracy: 0.9332
- Train Top-3-accuracy: 0.9946
- Validation Loss: 0.2622
- Validation Accuracy: 0.9369
- Validation Top-3-accuracy: 0.9950
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 777, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.9273 | 0.5844 | 0.9067 | 0.5047 | 0.7651 | 0.9651 | 0 |
| 0.3467 | 0.8197 | 0.9763 | 0.3231 | 0.8519 | 0.9828 | 1 |
| 0.2263 | 0.8717 | 0.9862 | 0.3327 | 0.8846 | 0.9886 | 2 |
| 0.1624 | 0.8956 | 0.9902 | 0.2742 | 0.9047 | 0.9914 | 3 |
| 0.1247 | 0.9124 | 0.9923 | 0.2696 | 0.9190 | 0.9931 | 4 |
| 0.1000 | 0.9243 | 0.9937 | 0.2560 | 0.9292 | 0.9942 | 5 |
| 0.0885 | 0.9332 | 0.9946 | 0.2622 | 0.9369 | 0.9950 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"dp",
"rvo",
"cataract",
"glaucoma",
"normal"
] |
MakAIHealthLab/deit-tiny-patch16-224-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-tiny-patch16-224-finetuned-papsmear
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4747
- Accuracy: 0.8235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.5381 | 0.9935 | 38 | 1.4222 | 0.3897 |
| 1.172 | 1.9869 | 76 | 1.1008 | 0.5882 |
| 0.8361 | 2.9804 | 114 | 0.8529 | 0.6618 |
| 0.6869 | 4.0 | 153 | 0.9582 | 0.6324 |
| 0.4995 | 4.9935 | 191 | 0.6926 | 0.7574 |
| 0.4576 | 5.9869 | 229 | 0.4967 | 0.8529 |
| 0.4187 | 6.9804 | 267 | 0.5350 | 0.8162 |
| 0.4075 | 8.0 | 306 | 0.4903 | 0.8088 |
| 0.3585 | 8.9935 | 344 | 0.5252 | 0.7868 |
| 0.3528 | 9.9869 | 382 | 0.5027 | 0.8088 |
| 0.2788 | 10.9804 | 420 | 0.4503 | 0.8456 |
| 0.2419 | 12.0 | 459 | 0.4857 | 0.8309 |
| 0.2544 | 12.9935 | 497 | 0.5543 | 0.7868 |
| 0.2591 | 13.9869 | 535 | 0.4839 | 0.8382 |
| 0.207 | 14.9020 | 570 | 0.4747 | 0.8235 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
mirluvams/swinv2-base-patch4-window16-256-popocatepetl
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
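A hedged sketch with the `image-classification` pipeline; `device=0` assumes a GPU is available, and the image path is a placeholder:
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="mirluvams/swinv2-base-patch4-window16-256-popocatepetl",
    device=0,  # assumption: a GPU is available; omit to run on CPU
)
print(clf("popo.jpg"))  # "popo.jpg" is a placeholder path
```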
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3"
] |
Davalejo/vitModel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vitModel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0137
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.149 | 3.8462 | 500 | 0.0137 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
mariamoracrossitcr/vit-base-beans-demo-v18Set
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v18Set
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
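The beans dataset named above is available on the Hugging Face Hub; a minimal loading sketch follows:
```python
from datasets import load_dataset

# Loads the beans dataset referenced in this card.
beans = load_dataset("beans")
print(beans["train"].features)
```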
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0828 | 1.5385 | 100 | 0.1131 | 0.9699 |
| 0.0145 | 3.0769 | 200 | 0.0830 | 0.9774 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 2.17.0
- Tokenizers 0.21.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
rukundob451/vit-tiny-patch16-224-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny-patch16-224-finetuned-papsmear
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2121
- Accuracy: 0.9338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.4005 | 0.9935 | 38 | 1.2214 | 0.5294 |
| 0.8877 | 1.9869 | 76 | 1.0727 | 0.6691 |
| 0.603 | 2.9804 | 114 | 0.6807 | 0.7574 |
| 0.465 | 4.0 | 153 | 0.6485 | 0.7574 |
| 0.432 | 4.9935 | 191 | 0.5024 | 0.8015 |
| 0.2957 | 5.9869 | 229 | 0.4485 | 0.8162 |
| 0.2203 | 6.9804 | 267 | 0.3850 | 0.8529 |
| 0.236 | 8.0 | 306 | 0.3628 | 0.8456 |
| 0.1857 | 8.9935 | 344 | 0.2930 | 0.8824 |
| 0.1907 | 9.9869 | 382 | 0.2121 | 0.9338 |
| 0.1546 | 10.9804 | 420 | 0.2242 | 0.9265 |
| 0.1375 | 12.0 | 459 | 0.1918 | 0.9191 |
| 0.1237 | 12.9935 | 497 | 0.1809 | 0.9338 |
| 0.1637 | 13.9869 | 535 | 0.1774 | 0.9338 |
| 0.0803 | 14.9020 | 570 | 0.1882 | 0.9338 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
rukundob451/convnext-tiny-224-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-papsmear
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4893
- Accuracy: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.706 | 0.9935 | 38 | 1.6028 | 0.2794 |
| 1.3867 | 1.9869 | 76 | 1.2961 | 0.4853 |
| 1.0784 | 2.9804 | 114 | 1.0588 | 0.5221 |
| 0.9128 | 4.0 | 153 | 0.8886 | 0.6618 |
| 0.7466 | 4.9935 | 191 | 0.8913 | 0.6029 |
| 0.6886 | 5.9869 | 229 | 0.7380 | 0.7059 |
| 0.6198 | 6.9804 | 267 | 0.7622 | 0.7132 |
| 0.6001 | 8.0 | 306 | 0.7083 | 0.6838 |
| 0.5542 | 8.9935 | 344 | 0.5909 | 0.7721 |
| 0.5161 | 9.9869 | 382 | 0.5909 | 0.7574 |
| 0.4631 | 10.9804 | 420 | 0.5677 | 0.7721 |
| 0.4284 | 12.0 | 459 | 0.5229 | 0.7868 |
| 0.4334 | 12.9935 | 497 | 0.5160 | 0.8015 |
| 0.4386 | 13.9869 | 535 | 0.4788 | 0.8015 |
| 0.3728 | 14.9020 | 570 | 0.4893 | 0.8088 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
DiegoBraz/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4382
- Accuracy: 0.8727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9032 | 7 | 2.3727 | 0.2 |
| 2.3966 | 1.9355 | 15 | 2.2910 | 0.3182 |
| 2.3131 | 2.9677 | 23 | 2.1218 | 0.4091 |
| 2.072 | 4.0 | 31 | 1.8349 | 0.4545 |
| 2.072 | 4.9032 | 38 | 1.4635 | 0.5364 |
| 1.5528 | 5.9355 | 46 | 1.1036 | 0.6636 |
| 1.0472 | 6.9677 | 54 | 0.9273 | 0.7273 |
| 0.7989 | 8.0 | 62 | 0.8008 | 0.7909 |
| 0.7989 | 8.9032 | 69 | 0.7359 | 0.7818 |
| 0.604 | 9.9355 | 77 | 0.7283 | 0.7909 |
| 0.5228 | 10.9677 | 85 | 0.5897 | 0.8364 |
| 0.4734 | 12.0 | 93 | 0.6503 | 0.8182 |
| 0.3987 | 12.9032 | 100 | 0.5785 | 0.8273 |
| 0.3987 | 13.9355 | 108 | 0.6091 | 0.8182 |
| 0.3742 | 14.9677 | 116 | 0.5278 | 0.8455 |
| 0.3588 | 16.0 | 124 | 0.5279 | 0.8545 |
| 0.3536 | 16.9032 | 131 | 0.5189 | 0.8364 |
| 0.3536 | 17.9355 | 139 | 0.5036 | 0.8545 |
| 0.331 | 18.9677 | 147 | 0.5327 | 0.8364 |
| 0.2836 | 20.0 | 155 | 0.4717 | 0.8636 |
| 0.2785 | 20.9032 | 162 | 0.4598 | 0.8545 |
| 0.2439 | 21.9355 | 170 | 0.4783 | 0.8545 |
| 0.2439 | 22.9677 | 178 | 0.4948 | 0.8545 |
| 0.2779 | 24.0 | 186 | 0.4884 | 0.8455 |
| 0.2167 | 24.9032 | 193 | 0.5084 | 0.8545 |
| 0.2164 | 25.9355 | 201 | 0.4715 | 0.8545 |
| 0.2164 | 26.9677 | 209 | 0.5503 | 0.8273 |
| 0.2342 | 28.0 | 217 | 0.4980 | 0.8273 |
| 0.216 | 28.9032 | 224 | 0.4241 | 0.8545 |
| 0.1986 | 29.9355 | 232 | 0.4466 | 0.8545 |
| 0.1919 | 30.9677 | 240 | 0.4558 | 0.8636 |
| 0.1919 | 32.0 | 248 | 0.4390 | 0.8636 |
| 0.1958 | 32.9032 | 255 | 0.4379 | 0.8545 |
| 0.1693 | 33.9355 | 263 | 0.4424 | 0.8455 |
| 0.2158 | 34.9677 | 271 | 0.4524 | 0.8364 |
| 0.2158 | 36.0 | 279 | 0.4388 | 0.8545 |
| 0.1578 | 36.9032 | 286 | 0.4327 | 0.8545 |
| 0.1866 | 37.9355 | 294 | 0.4528 | 0.8455 |
| 0.1664 | 38.9677 | 302 | 0.4533 | 0.8455 |
| 0.1757 | 40.0 | 310 | 0.4492 | 0.8545 |
| 0.1757 | 40.9032 | 317 | 0.4418 | 0.8636 |
| 0.1542 | 41.9355 | 325 | 0.4412 | 0.8636 |
| 0.144 | 42.9677 | 333 | 0.4438 | 0.8545 |
| 0.1647 | 44.0 | 341 | 0.4411 | 0.8636 |
| 0.1647 | 44.9032 | 348 | 0.4383 | 0.8636 |
| 0.1418 | 45.1613 | 350 | 0.4382 | 0.8727 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"ak-47",
"awp",
"famas",
"galil-ar",
"glock",
"m4a1",
"m4a4",
"p-90",
"sg-553",
"ump",
"usp"
] |
vishalkatheriya18/neck_vit
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
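Since the card leaves this blank, here is a minimal, hedged sketch assuming the checkpoint loads as a standard 🤗 image-classification model; the image path is a placeholder:
```python
from transformers import pipeline

# Hedged sketch: assumes vishalkatheriya18/neck_vit is a standard
# image-classification checkpoint; "shirt.jpg" is a placeholder path.
classifier = pipeline("image-classification", model="vishalkatheriya18/neck_vit")
print(classifier("shirt.jpg"))  # e.g. [{"label": "round neck", "score": ...}, ...]
```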
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"mandarin neck",
"notch neck",
"round neck",
"shirt collar",
"v neck"
] |
sailinginnocent/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0662
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
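A hedged sketch of how such a run is typically set up (the card itself gives no code; `load_dataset("beans")` refers to the public beans dataset the card names):
```python
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hedged sketch: loads the beans dataset named by the card and the base
# checkpoint it was fine-tuned from; the label mapping comes from the dataset.
dataset = load_dataset("beans")
labels = dataset["train"].features["labels"].names  # angular_leaf_spot, bean_rust, healthy
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
)
```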
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2797 | 1.0 | 130 | 0.2151 | 0.9624 |
| 0.1295 | 2.0 | 260 | 0.1254 | 0.9774 |
| 0.1402 | 3.0 | 390 | 0.0957 | 0.9774 |
| 0.0819 | 4.0 | 520 | 0.0662 | 0.9850 |
| 0.1172 | 5.0 | 650 | 0.0822 | 0.9699 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu124
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
CodeMania/Vehicle_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CodeMania/Vehicle_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4395
- Validation Loss: 0.5309
- Train Accuracy: 0.8601
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 13595, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
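This serialized Keras config (AdamWeightDecay with a linear PolynomialDecay schedule) is what `transformers.create_optimizer` produces; a hedged sketch reproducing it, with `decay_steps` taken straight from the dict above:
```python
from transformers import create_optimizer

# Hedged sketch: rebuilds AdamWeightDecay with the PolynomialDecay schedule
# from the serialized config above (3e-05 decayed linearly over 13595 steps).
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=13595,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```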
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.8665 | 1.3278 | 0.6654 | 0 |
| 1.1317 | 0.9559 | 0.7569 | 1 |
| 0.7964 | 0.7558 | 0.7908 | 2 |
| 0.5967 | 0.6633 | 0.8183 | 3 |
| 0.4395 | 0.5309 | 0.8601 | 4 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"auto-rickshaw",
"bicycle",
"truck",
"van",
"bus",
"car",
"e-rickshaw",
"mini-bus",
"mini-truck",
"motorcycle",
"rickshaw",
"tractor"
] |
smartgmin/Entrnal_eyes_data_7class_allNew_withother_resize_224_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_eyes_data_7class_allNew_withother_resize_224_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0693
- Train Accuracy: 0.9107
- Train Top-3-accuracy: 0.9914
- Validation Loss: 0.4731
- Validation Accuracy: 0.9137
- Validation Top-3-accuracy: 0.9918
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1580, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
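The Top-3-accuracy column is the standard Keras top-k metric; a hedged sketch of how such a model is typically compiled to report it (`model` and `optimizer` are assumed to exist, and integer labels are assumed, hence the sparse variants):
```python
import tensorflow as tf

# Hedged sketch: Keras metrics matching the card's Accuracy and
# Top-3-accuracy columns; assumes integer (sparse) class labels.
metrics = [
    tf.keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
    tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top-3-accuracy"),
]
# model.compile(optimizer=optimizer, metrics=metrics)
```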
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.1195 | 0.5630 | 0.8481 | 0.7181 | 0.7020 | 0.9377 | 0 |
| 0.5314 | 0.7457 | 0.9559 | 0.5566 | 0.7758 | 0.9668 | 1 |
| 0.3817 | 0.7982 | 0.9725 | 0.4695 | 0.8146 | 0.9767 | 2 |
| 0.2853 | 0.8284 | 0.9795 | 0.4379 | 0.8405 | 0.9819 | 3 |
| 0.2111 | 0.8515 | 0.9837 | 0.4234 | 0.8605 | 0.9852 | 4 |
| 0.1475 | 0.8695 | 0.9864 | 0.4329 | 0.8767 | 0.9874 | 5 |
| 0.1070 | 0.8835 | 0.9882 | 0.4625 | 0.8896 | 0.9890 | 6 |
| 0.0847 | 0.8948 | 0.9896 | 0.4766 | 0.8993 | 0.9901 | 7 |
| 0.0745 | 0.9035 | 0.9906 | 0.4688 | 0.9073 | 0.9910 | 8 |
| 0.0693 | 0.9107 | 0.9914 | 0.4731 | 0.9137 | 0.9918 | 9 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"dp",
"other",
"rop",
"rvo",
"cataract",
"glaucoma",
"normal"
] |
smartgmin/Entrnal_eyes_data_6class_allNew_not_other_resize_224_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_eyes_data_6class_allNew_not_other_resize_224_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0917
- Train Accuracy: 0.9468
- Train Top-3-accuracy: 0.9954
- Validation Loss: 0.2431
- Validation Accuracy: 0.9496
- Validation Top-3-accuracy: 0.9957
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 917, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.9132 | 0.6556 | 0.9035 | 0.4997 | 0.8161 | 0.9711 | 0 |
| 0.3301 | 0.8571 | 0.9805 | 0.3293 | 0.8811 | 0.9856 | 1 |
| 0.2152 | 0.8971 | 0.9883 | 0.2990 | 0.9090 | 0.9902 | 2 |
| 0.1612 | 0.9176 | 0.9915 | 0.2913 | 0.9244 | 0.9926 | 3 |
| 0.1231 | 0.9302 | 0.9933 | 0.2531 | 0.9354 | 0.9940 | 4 |
| 0.1020 | 0.9397 | 0.9945 | 0.2420 | 0.9436 | 0.9950 | 5 |
| 0.0917 | 0.9468 | 0.9954 | 0.2431 | 0.9496 | 0.9957 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"dp",
"rop",
"rvo",
"cataract",
"glaucoma",
"normal"
] |
smartgmin/Entrnal_eyes_data_6_true_agoiment211_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_eyes_data_6_true_agoiment211_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1455
- Train Accuracy: 0.9282
- Train Top-3-accuracy: 0.9908
- Validation Loss: 0.3319
- Validation Accuracy: 0.9322
- Validation Top-3-accuracy: 0.9914
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 434, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.1623 | 0.5608 | 0.8521 | 0.7419 | 0.7200 | 0.9394 | 0 |
| 0.5255 | 0.7824 | 0.9588 | 0.4509 | 0.8190 | 0.9701 | 1 |
| 0.3218 | 0.8454 | 0.9759 | 0.3839 | 0.8644 | 0.9803 | 2 |
| 0.2230 | 0.8794 | 0.9830 | 0.3494 | 0.8923 | 0.9852 | 3 |
| 0.1755 | 0.9022 | 0.9868 | 0.3445 | 0.9104 | 0.9882 | 4 |
| 0.1539 | 0.9173 | 0.9892 | 0.3343 | 0.9231 | 0.9901 | 5 |
| 0.1455 | 0.9282 | 0.9908 | 0.3319 | 0.9322 | 0.9914 | 6 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"cataract",
"dr",
"glaucoma",
"rvo",
"normal"
] |
smartgmin/Entrnal_eyes_data_6_true_agoiment211_model2
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_eyes_data_6_true_agoiment211_model2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0883
- Train Accuracy: 0.9406
- Train Top-3-accuracy: 0.9940
- Validation Loss: 0.2930
- Validation Accuracy: 0.9430
- Validation Top-3-accuracy: 0.9943
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 620, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.1642 | 0.5159 | 0.8895 | 0.8054 | 0.6679 | 0.9485 | 0 |
| 0.5389 | 0.7360 | 0.9637 | 0.4377 | 0.7847 | 0.9737 | 1 |
| 0.3063 | 0.8169 | 0.9788 | 0.3756 | 0.8425 | 0.9825 | 2 |
| 0.2024 | 0.8607 | 0.9848 | 0.3307 | 0.8758 | 0.9868 | 3 |
| 0.1515 | 0.8875 | 0.9882 | 0.3064 | 0.8976 | 0.9893 | 4 |
| 0.1205 | 0.9058 | 0.9902 | 0.2965 | 0.9127 | 0.9909 | 5 |
| 0.1071 | 0.9184 | 0.9916 | 0.2962 | 0.9234 | 0.9921 | 6 |
| 0.0969 | 0.9277 | 0.9926 | 0.2831 | 0.9316 | 0.9930 | 7 |
| 0.0948 | 0.9348 | 0.9934 | 0.2905 | 0.9379 | 0.9937 | 8 |
| 0.0883 | 0.9406 | 0.9940 | 0.2930 | 0.9430 | 0.9943 | 9 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"cataract",
"dr",
"glaucoma",
"rvo",
"normal"
] |
candylion/ViT_face
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_face
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the face dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
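The card tracks only validation loss; if accuracy were wanted as well, a hedged sketch of a `compute_metrics` callback for the Trainer (using the `evaluate` library) would look like this. It is not part of the original run, which logged only the loss:
```python
import numpy as np
import evaluate

# Hedged sketch: accuracy callback one could pass as Trainer(compute_metrics=...).
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```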
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 38 | 0.9844 |
| No log | 2.0 | 76 | 0.8261 |
| No log | 3.0 | 114 | 0.6908 |
| No log | 4.0 | 152 | 0.6297 |
| No log | 5.0 | 190 | 0.5770 |
| No log | 6.0 | 228 | 0.5463 |
| No log | 7.0 | 266 | 0.5250 |
| No log | 8.0 | 304 | 0.5263 |
| No log | 9.0 | 342 | 0.5306 |
| No log | 10.0 | 380 | 0.5240 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"류준열",
"송중기",
"현빈"
] |
crocutacrocuto/convnext-base-224-MEGW_C-5
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
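The card leaves this empty; a hedged sketch assuming the checkpoint is a standard 🤗 ConvNeXt image-classification model, with a placeholder image path:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hedged sketch: assumes a standard image-classification checkpoint;
# "camera_trap.jpg" is a placeholder path.
name = "crocutacrocuto/convnext-base-224-MEGW_C-5"
processor = AutoImageProcessor.from_pretrained(name)
model = AutoModelForImageClassification.from_pretrained(name)
inputs = processor(images=Image.open("camera_trap.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```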
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"bird",
"black-and-white colobus",
"blue duiker",
"buffalo",
"bushbuck",
"bushpig",
"cattle",
"chimpanzee",
"civet_genet",
"dog_jackal",
"elephant",
"galago_potto",
"goat",
"golden cat",
"gorilla",
"guineafowl",
"honey badger",
"leopard",
"mandrill",
"mongoose",
"monkey",
"olive baboon",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirel",
"water chevrotain",
"yellow-backed duiker"
] |
smartgmin/Entrnal_5class_agumm_last_newV6_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_5class_agumm_last_newV6_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0410
- Train Accuracy: 0.9612
- Train Top-3-accuracy: 0.9962
- Validation Loss: 0.3703
- Validation Accuracy: 0.9623
- Validation Top-3-accuracy: 0.9963
- Epoch: 12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1209, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.0109 | 0.5898 | 0.8913 | 0.5771 | 0.7468 | 0.9576 | 0 |
| 0.4103 | 0.7997 | 0.9708 | 0.4029 | 0.8329 | 0.9786 | 1 |
| 0.2249 | 0.8581 | 0.9827 | 0.3677 | 0.8769 | 0.9857 | 2 |
| 0.1584 | 0.8905 | 0.9877 | 0.3730 | 0.9010 | 0.9893 | 3 |
| 0.1164 | 0.9097 | 0.9904 | 0.3957 | 0.9169 | 0.9913 | 4 |
| 0.0841 | 0.9231 | 0.9920 | 0.3896 | 0.9285 | 0.9927 | 5 |
| 0.0676 | 0.9331 | 0.9932 | 0.3718 | 0.9373 | 0.9937 | 6 |
| 0.0561 | 0.9408 | 0.9941 | 0.3701 | 0.9440 | 0.9944 | 7 |
| 0.0500 | 0.9468 | 0.9947 | 0.3691 | 0.9493 | 0.9949 | 8 |
| 0.0461 | 0.9516 | 0.9952 | 0.3698 | 0.9535 | 0.9954 | 9 |
| 0.0435 | 0.9554 | 0.9956 | 0.3694 | 0.9570 | 0.9958 | 10 |
| 0.0418 | 0.9585 | 0.9959 | 0.3705 | 0.9598 | 0.9961 | 11 |
| 0.0410 | 0.9612 | 0.9962 | 0.3703 | 0.9623 | 0.9963 | 12 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"cataract",
"dr",
"glaucoma",
"rvo",
"normal"
] |
smartgmin/Entrnal_5class_agumm_last_newV7_model
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Entrnal_5class_agumm_last_newV7_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0959
- Train Accuracy: 0.9365
- Train Top-3-accuracy: 0.9913
- Validation Loss: 0.3424
- Validation Accuracy: 0.9390
- Validation Top-3-accuracy: 0.9917
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 620, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.1895 | 0.4833 | 0.8342 | 0.8125 | 0.6525 | 0.9200 | 0 |
| 0.5511 | 0.7329 | 0.9448 | 0.4587 | 0.7829 | 0.9601 | 1 |
| 0.3174 | 0.8164 | 0.9677 | 0.3909 | 0.8395 | 0.9735 | 2 |
| 0.2299 | 0.8576 | 0.9772 | 0.3711 | 0.8709 | 0.9802 | 3 |
| 0.1699 | 0.8824 | 0.9824 | 0.3564 | 0.8920 | 0.9842 | 4 |
| 0.1344 | 0.9003 | 0.9856 | 0.3389 | 0.9073 | 0.9865 | 5 |
| 0.1187 | 0.9131 | 0.9875 | 0.3391 | 0.9183 | 0.9884 | 6 |
| 0.1060 | 0.9229 | 0.9891 | 0.3424 | 0.9267 | 0.9898 | 7 |
| 0.0992 | 0.9304 | 0.9903 | 0.3426 | 0.9334 | 0.9908 | 8 |
| 0.0959 | 0.9365 | 0.9913 | 0.3424 | 0.9390 | 0.9917 | 9 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"cataract",
"dr",
"glaucoma",
"rvo",
"normal"
] |
sha000/ppmr-classifier-google-vit-base-patch16-224-in21k-8-epochs-full-balanced
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
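A hedged sketch for this two-class (control vs. ppmr) checkpoint, assuming it loads as a standard 🤗 image classifier; the input image path is a placeholder:
```python
from transformers import pipeline

# Hedged sketch: "scan.png" is a placeholder input image.
clf = pipeline(
    "image-classification",
    model="sha000/ppmr-classifier-google-vit-base-patch16-224-in21k-8-epochs-full-balanced",
)
print(clf("scan.png", top_k=2))  # scores for both "control" and "ppmr"
```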
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"control",
"ppmr"
] |
mrisdi/asl_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asl_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.0043
- eval_accuracy: 0.2019
- eval_runtime: 1.4504
- eval_samples_per_second: 71.703
- eval_steps_per_second: 2.758
- epoch: 21.5385
- step: 35
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
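A hedged sketch of these settings as 🤗 `TrainingArguments` (placeholder output directory; note the cosine schedule and the 32 × 8 = 256 effective batch):
```python
from transformers import TrainingArguments

# Hedged sketch: placeholder output_dir; values taken from the list above.
args = TrainingArguments(
    output_dir="asl_classification",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,   # 32 * 8 = 256 effective batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```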
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"a",
"b",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"c",
"u",
"v",
"w",
"x",
"y",
"z",
"d",
"e",
"f",
"g",
"h",
"i",
"j"
] |
mattwharper/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0641
- Accuracy: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2668 | 1.0 | 190 | 0.1167 | 0.9630 |
| 0.1794 | 2.0 | 380 | 0.0837 | 0.9711 |
| 0.1539 | 3.0 | 570 | 0.0641 | 0.9763 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
sha000/ppmr-classifier-microsoft-beit-base-patch16-224-pt22k-ft22k-8-epochs-full-balanced
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"control",
"ppmr"
] |
sha000/ppmr-classifier-facebook-deit-base-patch16-224-8-epochs-full-balanced
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"control",
"ppmr"
] |
hangpatrick92/TransparentBagClassifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TransparentBagClassifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3956
- Accuracy: 0.8598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.448 | 1.0 | 82 | 0.5725 | 0.7304 |
| 0.5097 | 2.0 | 164 | 0.4946 | 0.7652 |
| 0.452 | 3.0 | 246 | 0.4841 | 0.7565 |
| 0.3885 | 4.0 | 328 | 0.4812 | 0.7565 |
| 0.4743 | 5.0 | 410 | 0.4626 | 0.7739 |
| 0.4749 | 4.0 | 464 | 0.4572 | 0.7988 |
| 0.4319 | 5.0 | 580 | 0.3956 | 0.8598 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"non-transparent",
"transparent"
] |