model_id (string, 7–105 chars) | model_card (string, 1–130k chars) | model_labels (list, 2–80k items)
---|---|---
hkivancoral/smids_10x_deit_base_adamax_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_0001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0292
- Accuracy: 0.8985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
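These values map directly onto the Hugging Face `Trainer` API. Below is a minimal sketch of an equivalent setup, not the author's exact script: `train_ds`/`eval_ds` and the `output_dir` are placeholders, and `Trainer`'s default AdamW optimizer already uses the betas and epsilon listed above.

```python
# Hedged sketch of the configuration listed above. train_ds / eval_ds
# are assumed to already hold the imagefolder splits.
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    TrainingArguments,
)

checkpoint = "facebook/deit-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(checkpoint)  # used to preprocess images
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=3,                  # abnormal_sperm / non-sperm / normal_sperm
    ignore_mismatched_sizes=True,  # swap out the 1000-class ImageNet head
)

args = TrainingArguments(
    output_dir="smids_10x_deit_base_adamax_0001_fold2",  # placeholder
    learning_rate=1e-4,              # learning_rate: 0.0001
    per_device_train_batch_size=32,  # train_batch_size: 32
    per_device_eval_batch_size=32,   # eval_batch_size: 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio: 0.1
    num_train_epochs=50,
)
# Trainer(model=model, args=args, train_dataset=train_ds,
#         eval_dataset=eval_ds).train()
```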
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1628 | 1.0 | 750 | 0.2778 | 0.8885 |
| 0.0865 | 2.0 | 1500 | 0.3575 | 0.9002 |
| 0.0255 | 3.0 | 2250 | 0.5656 | 0.8885 |
| 0.0266 | 4.0 | 3000 | 0.5733 | 0.9002 |
| 0.0169 | 5.0 | 3750 | 0.7037 | 0.9002 |
| 0.0024 | 6.0 | 4500 | 0.6843 | 0.8918 |
| 0.0124 | 7.0 | 5250 | 0.7219 | 0.8985 |
| 0.0002 | 8.0 | 6000 | 0.8057 | 0.8935 |
| 0.002 | 9.0 | 6750 | 0.7964 | 0.8985 |
| 0.0 | 10.0 | 7500 | 0.7467 | 0.9002 |
| 0.0004 | 11.0 | 8250 | 0.6482 | 0.9002 |
| 0.0 | 12.0 | 9000 | 0.8380 | 0.8968 |
| 0.0 | 13.0 | 9750 | 0.8814 | 0.8902 |
| 0.0 | 14.0 | 10500 | 0.8400 | 0.8985 |
| 0.0 | 15.0 | 11250 | 0.7646 | 0.9068 |
| 0.0003 | 16.0 | 12000 | 0.8183 | 0.9068 |
| 0.0 | 17.0 | 12750 | 0.8397 | 0.8985 |
| 0.0 | 18.0 | 13500 | 0.9452 | 0.9052 |
| 0.0 | 19.0 | 14250 | 0.8230 | 0.9085 |
| 0.0 | 20.0 | 15000 | 0.8880 | 0.9085 |
| 0.0 | 21.0 | 15750 | 0.9221 | 0.9035 |
| 0.004 | 22.0 | 16500 | 0.9029 | 0.8968 |
| 0.0026 | 23.0 | 17250 | 0.9288 | 0.9002 |
| 0.0 | 24.0 | 18000 | 0.9054 | 0.9002 |
| 0.0 | 25.0 | 18750 | 0.9636 | 0.8952 |
| 0.0 | 26.0 | 19500 | 0.9715 | 0.8935 |
| 0.0035 | 27.0 | 20250 | 0.9914 | 0.9002 |
| 0.0 | 28.0 | 21000 | 1.0064 | 0.9002 |
| 0.0 | 29.0 | 21750 | 0.9401 | 0.8985 |
| 0.0 | 30.0 | 22500 | 0.9859 | 0.9002 |
| 0.0 | 31.0 | 23250 | 1.0344 | 0.8952 |
| 0.0 | 32.0 | 24000 | 1.0163 | 0.8935 |
| 0.0 | 33.0 | 24750 | 0.9803 | 0.9018 |
| 0.0 | 34.0 | 25500 | 1.0043 | 0.8918 |
| 0.0 | 35.0 | 26250 | 1.0148 | 0.8968 |
| 0.0 | 36.0 | 27000 | 1.0177 | 0.9035 |
| 0.0028 | 37.0 | 27750 | 1.0101 | 0.8985 |
| 0.0 | 38.0 | 28500 | 0.9955 | 0.9018 |
| 0.0 | 39.0 | 29250 | 1.0034 | 0.8985 |
| 0.0 | 40.0 | 30000 | 1.0043 | 0.8985 |
| 0.0 | 41.0 | 30750 | 0.9850 | 0.9002 |
| 0.0 | 42.0 | 31500 | 0.9989 | 0.8985 |
| 0.0 | 43.0 | 32250 | 1.0060 | 0.8985 |
| 0.0 | 44.0 | 33000 | 1.0153 | 0.8985 |
| 0.0024 | 45.0 | 33750 | 1.0191 | 0.8985 |
| 0.0 | 46.0 | 34500 | 1.0252 | 0.8985 |
| 0.0 | 47.0 | 35250 | 1.0250 | 0.8985 |
| 0.0 | 48.0 | 36000 | 1.0256 | 0.8985 |
| 0.0 | 49.0 | 36750 | 1.0285 | 0.8985 |
| 0.0 | 50.0 | 37500 | 1.0292 | 0.8985 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
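Each row of this table pairs a checkpoint (`model_id`) with its label vocabulary (`model_labels`, the checkpoint's `id2label` values). A minimal inference sketch for the row above, assuming the repository is public on the Hub and `sample.jpg` is a local image (both assumptions):

```python
# Hedged sketch: classify one image with the checkpoint from this row.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_base_adamax_0001_fold2",
)
print(classifier("sample.jpg"))  # placeholder image path
# Returns scores over the model_labels listed just above:
# abnormal_sperm, non-sperm, normal_sperm.
```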
hkivancoral/smids_10x_deit_small_rms_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5041
- Accuracy: 0.8817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1992 | 1.0 | 750 | 0.3642 | 0.8683 |
| 0.1164 | 2.0 | 1500 | 0.4047 | 0.88 |
| 0.0544 | 3.0 | 2250 | 0.5775 | 0.8733 |
| 0.0424 | 4.0 | 3000 | 0.7191 | 0.8717 |
| 0.0428 | 5.0 | 3750 | 0.8609 | 0.88 |
| 0.0178 | 6.0 | 4500 | 0.9003 | 0.8767 |
| 0.002 | 7.0 | 5250 | 0.9741 | 0.8883 |
| 0.0106 | 8.0 | 6000 | 1.0810 | 0.88 |
| 0.015 | 9.0 | 6750 | 1.0442 | 0.8767 |
| 0.0 | 10.0 | 7500 | 1.1184 | 0.875 |
| 0.0002 | 11.0 | 8250 | 1.1373 | 0.88 |
| 0.0 | 12.0 | 9000 | 1.0569 | 0.8833 |
| 0.0004 | 13.0 | 9750 | 1.1443 | 0.8783 |
| 0.0211 | 14.0 | 10500 | 1.1371 | 0.8717 |
| 0.0 | 15.0 | 11250 | 1.1887 | 0.8717 |
| 0.074 | 16.0 | 12000 | 1.0864 | 0.8767 |
| 0.0 | 17.0 | 12750 | 1.1543 | 0.8817 |
| 0.0061 | 18.0 | 13500 | 1.0371 | 0.8833 |
| 0.0 | 19.0 | 14250 | 1.2499 | 0.875 |
| 0.006 | 20.0 | 15000 | 1.1799 | 0.8733 |
| 0.0265 | 21.0 | 15750 | 1.2064 | 0.8783 |
| 0.0 | 22.0 | 16500 | 1.2139 | 0.875 |
| 0.0 | 23.0 | 17250 | 1.2392 | 0.8833 |
| 0.0 | 24.0 | 18000 | 1.3220 | 0.8717 |
| 0.0002 | 25.0 | 18750 | 1.3064 | 0.8717 |
| 0.0 | 26.0 | 19500 | 1.3677 | 0.8767 |
| 0.0 | 27.0 | 20250 | 1.2718 | 0.89 |
| 0.0 | 28.0 | 21000 | 1.2639 | 0.8783 |
| 0.0 | 29.0 | 21750 | 1.3047 | 0.8817 |
| 0.0 | 30.0 | 22500 | 1.3577 | 0.885 |
| 0.0 | 31.0 | 23250 | 1.3501 | 0.88 |
| 0.0 | 32.0 | 24000 | 1.3749 | 0.875 |
| 0.0 | 33.0 | 24750 | 1.4487 | 0.875 |
| 0.0 | 34.0 | 25500 | 1.3473 | 0.8867 |
| 0.0 | 35.0 | 26250 | 1.4090 | 0.885 |
| 0.0 | 36.0 | 27000 | 1.4862 | 0.8733 |
| 0.0 | 37.0 | 27750 | 1.3778 | 0.8783 |
| 0.0 | 38.0 | 28500 | 1.4730 | 0.8767 |
| 0.0 | 39.0 | 29250 | 1.4305 | 0.89 |
| 0.0 | 40.0 | 30000 | 1.4453 | 0.8833 |
| 0.0 | 41.0 | 30750 | 1.4782 | 0.8833 |
| 0.0 | 42.0 | 31500 | 1.4838 | 0.88 |
| 0.0 | 43.0 | 32250 | 1.4721 | 0.8867 |
| 0.0 | 44.0 | 33000 | 1.4749 | 0.8867 |
| 0.0 | 45.0 | 33750 | 1.4889 | 0.8867 |
| 0.0 | 46.0 | 34500 | 1.4951 | 0.885 |
| 0.0 | 47.0 | 35250 | 1.4970 | 0.8833 |
| 0.0 | 48.0 | 36000 | 1.4998 | 0.8833 |
| 0.0 | 49.0 | 36750 | 1.5024 | 0.8833 |
| 0.0 | 50.0 | 37500 | 1.5041 | 0.8817 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
adhisetiawan/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# adhisetiawan/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3809
- Validation Loss: 0.3324
- Train Accuracy: 0.914
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
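The serialized optimizer above is what `transformers.create_optimizer` produces for TensorFlow models: an `AdamWeightDecay` instance driven by a `PolynomialDecay` schedule. A minimal sketch that rebuilds it from the numbers in the config (no warmup appears there, so `num_warmup_steps=0` is an assumption):

```python
# Hedged sketch: rebuild the AdamWeightDecay + PolynomialDecay optimizer
# described in the config above.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,            # initial_learning_rate: 3e-05
    num_train_steps=20000,   # decay_steps: 20000, end_learning_rate: 0.0
    num_warmup_steps=0,      # assumption: the config shows no warmup
    weight_decay_rate=0.01,  # weight_decay_rate: 0.01
)
# model.compile(optimizer=optimizer, metrics=["accuracy"]) then trains
# in float32, matching training_precision above.
```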
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8115 | 1.6364 | 0.818 | 0 |
| 1.2239 | 0.8429 | 0.867 | 1 |
| 0.7043 | 0.4780 | 0.925 | 2 |
| 0.4916 | 0.3880 | 0.924 | 3 |
| 0.3809 | 0.3324 | 0.914 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
hkivancoral/smids_10x_deit_small_rms_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9311
- Accuracy: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1779 | 1.0 | 750 | 0.2473 | 0.905 |
| 0.139 | 2.0 | 1500 | 0.3262 | 0.8817 |
| 0.0937 | 3.0 | 2250 | 0.2997 | 0.9133 |
| 0.0255 | 4.0 | 3000 | 0.4034 | 0.9033 |
| 0.0426 | 5.0 | 3750 | 0.4840 | 0.9133 |
| 0.0099 | 6.0 | 4500 | 0.7148 | 0.9017 |
| 0.0272 | 7.0 | 5250 | 0.7135 | 0.9 |
| 0.0013 | 8.0 | 6000 | 0.7156 | 0.91 |
| 0.0252 | 9.0 | 6750 | 0.7066 | 0.9 |
| 0.0 | 10.0 | 7500 | 0.7258 | 0.91 |
| 0.0281 | 11.0 | 8250 | 0.8120 | 0.8967 |
| 0.0 | 12.0 | 9000 | 0.7428 | 0.91 |
| 0.0001 | 13.0 | 9750 | 0.7455 | 0.9183 |
| 0.0272 | 14.0 | 10500 | 0.7891 | 0.92 |
| 0.0 | 15.0 | 11250 | 0.8803 | 0.8967 |
| 0.0 | 16.0 | 12000 | 0.8867 | 0.9 |
| 0.0025 | 17.0 | 12750 | 0.8600 | 0.9067 |
| 0.0 | 18.0 | 13500 | 0.7993 | 0.9183 |
| 0.0 | 19.0 | 14250 | 0.8779 | 0.9133 |
| 0.0 | 20.0 | 15000 | 0.8996 | 0.9117 |
| 0.0004 | 21.0 | 15750 | 0.9765 | 0.8917 |
| 0.0157 | 22.0 | 16500 | 0.7715 | 0.92 |
| 0.0 | 23.0 | 17250 | 0.7227 | 0.91 |
| 0.0 | 24.0 | 18000 | 0.7725 | 0.9167 |
| 0.0 | 25.0 | 18750 | 0.8320 | 0.9117 |
| 0.0004 | 26.0 | 19500 | 0.9795 | 0.8967 |
| 0.0 | 27.0 | 20250 | 0.8537 | 0.9183 |
| 0.0 | 28.0 | 21000 | 0.8796 | 0.9033 |
| 0.0 | 29.0 | 21750 | 0.8896 | 0.9067 |
| 0.0035 | 30.0 | 22500 | 0.9700 | 0.9033 |
| 0.0 | 31.0 | 23250 | 0.8273 | 0.9117 |
| 0.0 | 32.0 | 24000 | 0.8778 | 0.91 |
| 0.0 | 33.0 | 24750 | 0.8576 | 0.9117 |
| 0.0 | 34.0 | 25500 | 0.8235 | 0.9167 |
| 0.0 | 35.0 | 26250 | 0.8389 | 0.9133 |
| 0.0 | 36.0 | 27000 | 0.8611 | 0.9133 |
| 0.0052 | 37.0 | 27750 | 0.9201 | 0.91 |
| 0.0 | 38.0 | 28500 | 0.9394 | 0.9117 |
| 0.0 | 39.0 | 29250 | 0.9985 | 0.91 |
| 0.0 | 40.0 | 30000 | 0.9682 | 0.9133 |
| 0.0 | 41.0 | 30750 | 0.9333 | 0.915 |
| 0.0 | 42.0 | 31500 | 0.9270 | 0.9167 |
| 0.0 | 43.0 | 32250 | 0.9299 | 0.915 |
| 0.0 | 44.0 | 33000 | 0.9241 | 0.9133 |
| 0.0 | 45.0 | 33750 | 0.9269 | 0.9133 |
| 0.0 | 46.0 | 34500 | 0.9286 | 0.915 |
| 0.0 | 47.0 | 35250 | 0.9293 | 0.915 |
| 0.0 | 48.0 | 36000 | 0.9293 | 0.915 |
| 0.0 | 49.0 | 36750 | 0.9307 | 0.915 |
| 0.0 | 50.0 | 37500 | 0.9311 | 0.9167 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_adamax_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9689
- Accuracy: 0.9083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1129 | 1.0 | 750 | 0.2718 | 0.9117 |
| 0.0722 | 2.0 | 1500 | 0.4401 | 0.905 |
| 0.0419 | 3.0 | 2250 | 0.5283 | 0.905 |
| 0.0006 | 4.0 | 3000 | 0.5496 | 0.915 |
| 0.0003 | 5.0 | 3750 | 0.5791 | 0.9133 |
| 0.0001 | 6.0 | 4500 | 0.6619 | 0.925 |
| 0.016 | 7.0 | 5250 | 0.6974 | 0.905 |
| 0.0111 | 8.0 | 6000 | 0.7232 | 0.9133 |
| 0.0091 | 9.0 | 6750 | 0.7489 | 0.91 |
| 0.0 | 10.0 | 7500 | 0.7230 | 0.9117 |
| 0.0 | 11.0 | 8250 | 0.7815 | 0.915 |
| 0.0009 | 12.0 | 9000 | 0.9909 | 0.8867 |
| 0.0 | 13.0 | 9750 | 0.7983 | 0.9133 |
| 0.0001 | 14.0 | 10500 | 0.8706 | 0.895 |
| 0.0 | 15.0 | 11250 | 0.8170 | 0.91 |
| 0.0 | 16.0 | 12000 | 0.7999 | 0.9133 |
| 0.0 | 17.0 | 12750 | 0.8248 | 0.915 |
| 0.0 | 18.0 | 13500 | 0.8530 | 0.9133 |
| 0.0 | 19.0 | 14250 | 0.8754 | 0.9117 |
| 0.0 | 20.0 | 15000 | 0.8771 | 0.915 |
| 0.0 | 21.0 | 15750 | 0.9189 | 0.9167 |
| 0.0 | 22.0 | 16500 | 0.8993 | 0.91 |
| 0.0 | 23.0 | 17250 | 0.9440 | 0.91 |
| 0.0 | 24.0 | 18000 | 0.8723 | 0.9167 |
| 0.0 | 25.0 | 18750 | 0.9352 | 0.905 |
| 0.0 | 26.0 | 19500 | 0.9409 | 0.9067 |
| 0.0 | 27.0 | 20250 | 0.8935 | 0.9083 |
| 0.0 | 28.0 | 21000 | 0.8793 | 0.9083 |
| 0.0 | 29.0 | 21750 | 0.8869 | 0.91 |
| 0.0 | 30.0 | 22500 | 0.8662 | 0.9083 |
| 0.0 | 31.0 | 23250 | 0.8978 | 0.9033 |
| 0.0 | 32.0 | 24000 | 0.9161 | 0.9033 |
| 0.0 | 33.0 | 24750 | 0.9084 | 0.9117 |
| 0.0 | 34.0 | 25500 | 0.9326 | 0.905 |
| 0.0 | 35.0 | 26250 | 0.9089 | 0.9083 |
| 0.0 | 36.0 | 27000 | 0.9257 | 0.9067 |
| 0.0 | 37.0 | 27750 | 0.9241 | 0.905 |
| 0.0 | 38.0 | 28500 | 0.9604 | 0.9067 |
| 0.0 | 39.0 | 29250 | 0.9400 | 0.905 |
| 0.0 | 40.0 | 30000 | 0.9471 | 0.905 |
| 0.0 | 41.0 | 30750 | 0.9447 | 0.905 |
| 0.0 | 42.0 | 31500 | 0.9592 | 0.9067 |
| 0.0 | 43.0 | 32250 | 0.9571 | 0.9067 |
| 0.0 | 44.0 | 33000 | 0.9622 | 0.9083 |
| 0.0 | 45.0 | 33750 | 0.9615 | 0.9083 |
| 0.0 | 46.0 | 34500 | 0.9640 | 0.9083 |
| 0.0 | 47.0 | 35250 | 0.9645 | 0.9083 |
| 0.0 | 48.0 | 36000 | 0.9645 | 0.9083 |
| 0.0 | 49.0 | 36750 | 0.9677 | 0.9083 |
| 0.0 | 50.0 | 37500 | 0.9689 | 0.9083 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
andrecastro/convnext-tiny-224-convnext
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-convnext
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0225
- Accuracy: 0.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
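Note how the effective batch size arises here: 8 samples per device × 4 gradient-accumulation steps = 32, which is why the card reports `total_train_batch_size: 32` even though each forward pass sees only 8 images (and likely why the last row logs epoch 14.99 rather than 15.0, since progress is counted in optimizer steps). A minimal sketch of just that part of the configuration, with a placeholder `output_dir`:

```python
# Hedged sketch: gradient accumulation as listed above. Weights update
# once every 4 batches of 8, i.e. an effective batch size of 32.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="convnext-tiny-224-convnext",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 * 4 = total_train_batch_size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=15,
)
```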
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2175 | 1.0 | 327 | 0.1708 | 0.9436 |
| 0.1476 | 2.0 | 654 | 0.0908 | 0.9672 |
| 0.0961 | 3.0 | 981 | 0.0428 | 0.9862 |
| 0.0677 | 4.0 | 1309 | 0.0654 | 0.9777 |
| 0.049 | 5.0 | 1636 | 0.0498 | 0.9857 |
| 0.0347 | 6.0 | 1963 | 0.0352 | 0.9886 |
| 0.0282 | 7.0 | 2290 | 0.0278 | 0.9913 |
| 0.0694 | 8.0 | 2618 | 0.0299 | 0.9918 |
| 0.0733 | 9.0 | 2945 | 0.0246 | 0.9938 |
| 0.0399 | 10.0 | 3272 | 0.0285 | 0.9918 |
| 0.0276 | 11.0 | 3599 | 0.0249 | 0.9933 |
| 0.0259 | 12.0 | 3927 | 0.0241 | 0.9942 |
| 0.0551 | 13.0 | 4254 | 0.0298 | 0.9920 |
| 0.0658 | 14.0 | 4581 | 0.0288 | 0.9924 |
| 0.0208 | 14.99 | 4905 | 0.0225 | 0.9951 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"anormal_total",
"normal_total"
] |
Bliu3/roadSigns
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoadSigns
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the GTSRB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0871
- Accuracy: 0.9914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2187 | 1.0 | 612 | 0.2436 | 0.9888 |
| 0.0742 | 2.0 | 1225 | 0.1142 | 0.9888 |
| 0.0516 | 3.0 | 1836 | 0.0871 | 0.9914 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43"
] |
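This card kept the Trainer's generic class names, so predictions from this checkpoint surface as `label_0` … `label_43` rather than sign names. A hedged sketch of attaching readable names before saving; the two example names are illustrative placeholders, and the repository is assumed to be public:

```python
# Hedged sketch: replace generic label_i ids with readable names.
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("Bliu3/roadSigns")
names = {0: "speed_limit_20", 1: "speed_limit_30"}  # placeholders; fill all 44 ids
model.config.id2label = {i: names.get(i, f"label_{i}") for i in range(44)}
model.config.label2id = {v: k for k, v in model.config.id2label.items()}
model.save_pretrained("roadSigns-named")  # or model.push_to_hub(...)
```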
hkivancoral/smids_10x_deit_base_adamax_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2336
- Accuracy: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2142 | 1.0 | 750 | 0.3742 | 0.87 |
| 0.1202 | 2.0 | 1500 | 0.5077 | 0.8833 |
| 0.0137 | 3.0 | 2250 | 0.5390 | 0.895 |
| 0.0129 | 4.0 | 3000 | 0.6551 | 0.9017 |
| 0.046 | 5.0 | 3750 | 0.8679 | 0.8817 |
| 0.0005 | 6.0 | 4500 | 0.8420 | 0.885 |
| 0.0 | 7.0 | 5250 | 0.8718 | 0.885 |
| 0.0001 | 8.0 | 6000 | 0.8975 | 0.8883 |
| 0.0001 | 9.0 | 6750 | 0.9922 | 0.8867 |
| 0.0 | 10.0 | 7500 | 0.9915 | 0.8967 |
| 0.0 | 11.0 | 8250 | 1.0413 | 0.89 |
| 0.0 | 12.0 | 9000 | 1.0425 | 0.895 |
| 0.0002 | 13.0 | 9750 | 0.9907 | 0.89 |
| 0.0 | 14.0 | 10500 | 1.0509 | 0.8933 |
| 0.0 | 15.0 | 11250 | 1.0442 | 0.895 |
| 0.0042 | 16.0 | 12000 | 1.0843 | 0.8833 |
| 0.0 | 17.0 | 12750 | 0.9781 | 0.9017 |
| 0.0 | 18.0 | 13500 | 0.9714 | 0.9033 |
| 0.0 | 19.0 | 14250 | 0.9845 | 0.9 |
| 0.0 | 20.0 | 15000 | 0.9929 | 0.9 |
| 0.0 | 21.0 | 15750 | 1.0161 | 0.8983 |
| 0.0 | 22.0 | 16500 | 1.0271 | 0.9017 |
| 0.0 | 23.0 | 17250 | 1.0328 | 0.9033 |
| 0.0 | 24.0 | 18000 | 1.0356 | 0.9067 |
| 0.0 | 25.0 | 18750 | 1.0729 | 0.905 |
| 0.0 | 26.0 | 19500 | 1.1105 | 0.9033 |
| 0.0 | 27.0 | 20250 | 1.0987 | 0.9067 |
| 0.0 | 28.0 | 21000 | 1.1302 | 0.9033 |
| 0.0 | 29.0 | 21750 | 1.1229 | 0.9033 |
| 0.0 | 30.0 | 22500 | 1.1633 | 0.9033 |
| 0.0 | 31.0 | 23250 | 1.1494 | 0.9067 |
| 0.0 | 32.0 | 24000 | 1.1783 | 0.905 |
| 0.0 | 33.0 | 24750 | 1.1734 | 0.905 |
| 0.0 | 34.0 | 25500 | 1.1907 | 0.905 |
| 0.0 | 35.0 | 26250 | 1.1837 | 0.905 |
| 0.0 | 36.0 | 27000 | 1.2014 | 0.9067 |
| 0.0 | 37.0 | 27750 | 1.2027 | 0.905 |
| 0.0 | 38.0 | 28500 | 1.2049 | 0.905 |
| 0.0 | 39.0 | 29250 | 1.2195 | 0.905 |
| 0.0 | 40.0 | 30000 | 1.2229 | 0.9067 |
| 0.0 | 41.0 | 30750 | 1.2277 | 0.9067 |
| 0.0 | 42.0 | 31500 | 1.2269 | 0.9067 |
| 0.0 | 43.0 | 32250 | 1.2286 | 0.9067 |
| 0.0 | 44.0 | 33000 | 1.2298 | 0.9067 |
| 0.0 | 45.0 | 33750 | 1.2302 | 0.9067 |
| 0.0 | 46.0 | 34500 | 1.2317 | 0.9067 |
| 0.0 | 47.0 | 35250 | 1.2325 | 0.9067 |
| 0.0 | 48.0 | 36000 | 1.2327 | 0.9067 |
| 0.0 | 49.0 | 36750 | 1.2332 | 0.9067 |
| 0.0 | 50.0 | 37500 | 1.2336 | 0.9067 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7859
- Accuracy: 0.9182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2608 | 1.0 | 751 | 0.2844 | 0.8898 |
| 0.1304 | 2.0 | 1502 | 0.3294 | 0.8765 |
| 0.111 | 3.0 | 2253 | 0.3516 | 0.9015 |
| 0.1003 | 4.0 | 3004 | 0.4446 | 0.8932 |
| 0.034 | 5.0 | 3755 | 0.5205 | 0.8982 |
| 0.0067 | 6.0 | 4506 | 0.6326 | 0.9015 |
| 0.0227 | 7.0 | 5257 | 0.8411 | 0.8815 |
| 0.0262 | 8.0 | 6008 | 0.8754 | 0.8865 |
| 0.0488 | 9.0 | 6759 | 0.7139 | 0.9098 |
| 0.0361 | 10.0 | 7510 | 0.7866 | 0.8948 |
| 0.0107 | 11.0 | 8261 | 0.8081 | 0.9048 |
| 0.0056 | 12.0 | 9012 | 0.7555 | 0.8998 |
| 0.0 | 13.0 | 9763 | 0.8196 | 0.9015 |
| 0.0016 | 14.0 | 10514 | 0.8589 | 0.9032 |
| 0.0 | 15.0 | 11265 | 0.8346 | 0.9098 |
| 0.0 | 16.0 | 12016 | 0.7703 | 0.9115 |
| 0.0 | 17.0 | 12767 | 0.8587 | 0.9032 |
| 0.0 | 18.0 | 13518 | 0.8122 | 0.9115 |
| 0.0 | 19.0 | 14269 | 0.8002 | 0.9048 |
| 0.0 | 20.0 | 15020 | 0.8446 | 0.9115 |
| 0.0 | 21.0 | 15771 | 0.8926 | 0.9048 |
| 0.0 | 22.0 | 16522 | 0.8190 | 0.9065 |
| 0.0 | 23.0 | 17273 | 0.7943 | 0.9098 |
| 0.0 | 24.0 | 18024 | 0.7616 | 0.9098 |
| 0.0 | 25.0 | 18775 | 0.7566 | 0.9149 |
| 0.0 | 26.0 | 19526 | 0.7309 | 0.9149 |
| 0.0 | 27.0 | 20277 | 0.7760 | 0.9032 |
| 0.0 | 28.0 | 21028 | 0.7849 | 0.9132 |
| 0.008 | 29.0 | 21779 | 0.7826 | 0.9149 |
| 0.0 | 30.0 | 22530 | 0.7666 | 0.9199 |
| 0.0 | 31.0 | 23281 | 0.7402 | 0.9199 |
| 0.0 | 32.0 | 24032 | 0.7484 | 0.9199 |
| 0.0 | 33.0 | 24783 | 0.7616 | 0.9165 |
| 0.0 | 34.0 | 25534 | 0.7803 | 0.9149 |
| 0.0 | 35.0 | 26285 | 0.7685 | 0.9199 |
| 0.0 | 36.0 | 27036 | 0.7685 | 0.9165 |
| 0.0 | 37.0 | 27787 | 0.7687 | 0.9199 |
| 0.0 | 38.0 | 28538 | 0.7876 | 0.9199 |
| 0.0 | 39.0 | 29289 | 0.7749 | 0.9215 |
| 0.0 | 40.0 | 30040 | 0.7734 | 0.9165 |
| 0.0 | 41.0 | 30791 | 0.7803 | 0.9199 |
| 0.0 | 42.0 | 31542 | 0.7799 | 0.9182 |
| 0.0 | 43.0 | 32293 | 0.7798 | 0.9182 |
| 0.0 | 44.0 | 33044 | 0.7789 | 0.9182 |
| 0.0 | 45.0 | 33795 | 0.7827 | 0.9199 |
| 0.0 | 46.0 | 34546 | 0.7810 | 0.9182 |
| 0.0 | 47.0 | 35297 | 0.7840 | 0.9182 |
| 0.0 | 48.0 | 36048 | 0.7837 | 0.9199 |
| 0.0 | 49.0 | 36799 | 0.7839 | 0.9199 |
| 0.0 | 50.0 | 37550 | 0.7859 | 0.9182 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_adamax_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5153
- Accuracy: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2391 | 1.0 | 751 | 0.2691 | 0.8881 |
| 0.1261 | 2.0 | 1502 | 0.2399 | 0.9082 |
| 0.1129 | 3.0 | 2253 | 0.2368 | 0.9165 |
| 0.0547 | 4.0 | 3004 | 0.2399 | 0.9215 |
| 0.0473 | 5.0 | 3755 | 0.3024 | 0.9165 |
| 0.0084 | 6.0 | 4506 | 0.3345 | 0.9182 |
| 0.001 | 7.0 | 5257 | 0.4000 | 0.9215 |
| 0.0092 | 8.0 | 6008 | 0.4032 | 0.9182 |
| 0.0007 | 9.0 | 6759 | 0.4106 | 0.9249 |
| 0.0001 | 10.0 | 7510 | 0.4482 | 0.9182 |
| 0.0001 | 11.0 | 8261 | 0.4776 | 0.9182 |
| 0.0 | 12.0 | 9012 | 0.4461 | 0.9215 |
| 0.0002 | 13.0 | 9763 | 0.4646 | 0.9199 |
| 0.0 | 14.0 | 10514 | 0.4721 | 0.9199 |
| 0.0 | 15.0 | 11265 | 0.4754 | 0.9232 |
| 0.0 | 16.0 | 12016 | 0.4752 | 0.9282 |
| 0.0 | 17.0 | 12767 | 0.4772 | 0.9265 |
| 0.0 | 18.0 | 13518 | 0.4906 | 0.9215 |
| 0.0 | 19.0 | 14269 | 0.4791 | 0.9182 |
| 0.0 | 20.0 | 15020 | 0.4897 | 0.9215 |
| 0.0 | 21.0 | 15771 | 0.5412 | 0.9132 |
| 0.0 | 22.0 | 16522 | 0.5200 | 0.9265 |
| 0.0 | 23.0 | 17273 | 0.4930 | 0.9249 |
| 0.0 | 24.0 | 18024 | 0.5327 | 0.9165 |
| 0.0 | 25.0 | 18775 | 0.4977 | 0.9182 |
| 0.0 | 26.0 | 19526 | 0.5032 | 0.9215 |
| 0.0 | 27.0 | 20277 | 0.5327 | 0.9165 |
| 0.0 | 28.0 | 21028 | 0.5170 | 0.9232 |
| 0.0022 | 29.0 | 21779 | 0.5055 | 0.9249 |
| 0.0 | 30.0 | 22530 | 0.4999 | 0.9232 |
| 0.0 | 31.0 | 23281 | 0.5556 | 0.9149 |
| 0.0 | 32.0 | 24032 | 0.5049 | 0.9249 |
| 0.0 | 33.0 | 24783 | 0.5110 | 0.9232 |
| 0.0 | 34.0 | 25534 | 0.5596 | 0.9115 |
| 0.0 | 35.0 | 26285 | 0.5071 | 0.9265 |
| 0.0 | 36.0 | 27036 | 0.5052 | 0.9249 |
| 0.0 | 37.0 | 27787 | 0.5090 | 0.9249 |
| 0.0 | 38.0 | 28538 | 0.5107 | 0.9249 |
| 0.0 | 39.0 | 29289 | 0.5094 | 0.9249 |
| 0.0 | 40.0 | 30040 | 0.5107 | 0.9249 |
| 0.0 | 41.0 | 30791 | 0.5100 | 0.9249 |
| 0.0 | 42.0 | 31542 | 0.5114 | 0.9249 |
| 0.0 | 43.0 | 32293 | 0.5123 | 0.9249 |
| 0.0 | 44.0 | 33044 | 0.5134 | 0.9249 |
| 0.0 | 45.0 | 33795 | 0.5146 | 0.9249 |
| 0.0 | 46.0 | 34546 | 0.5165 | 0.9249 |
| 0.0 | 47.0 | 35297 | 0.5154 | 0.9249 |
| 0.0 | 48.0 | 36048 | 0.5153 | 0.9249 |
| 0.0 | 49.0 | 36799 | 0.5157 | 0.9249 |
| 0.0 | 50.0 | 37550 | 0.5153 | 0.9249 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_adamax_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8747
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1226 | 1.0 | 750 | 0.2826 | 0.89 |
| 0.1 | 2.0 | 1500 | 0.3891 | 0.9017 |
| 0.0601 | 3.0 | 2250 | 0.4750 | 0.9133 |
| 0.0263 | 4.0 | 3000 | 0.4903 | 0.9117 |
| 0.0141 | 5.0 | 3750 | 0.6938 | 0.905 |
| 0.0263 | 6.0 | 4500 | 0.7445 | 0.895 |
| 0.0152 | 7.0 | 5250 | 0.5895 | 0.92 |
| 0.0017 | 8.0 | 6000 | 0.6688 | 0.92 |
| 0.0043 | 9.0 | 6750 | 0.6499 | 0.9167 |
| 0.0 | 10.0 | 7500 | 0.7976 | 0.91 |
| 0.011 | 11.0 | 8250 | 0.6899 | 0.9183 |
| 0.0 | 12.0 | 9000 | 0.6947 | 0.905 |
| 0.0 | 13.0 | 9750 | 0.8213 | 0.9067 |
| 0.0 | 14.0 | 10500 | 0.6853 | 0.9217 |
| 0.0 | 15.0 | 11250 | 0.8220 | 0.9117 |
| 0.0 | 16.0 | 12000 | 0.6897 | 0.9217 |
| 0.0038 | 17.0 | 12750 | 0.7869 | 0.9167 |
| 0.0004 | 18.0 | 13500 | 0.7956 | 0.9233 |
| 0.0031 | 19.0 | 14250 | 0.8745 | 0.9067 |
| 0.0 | 20.0 | 15000 | 0.7439 | 0.92 |
| 0.0 | 21.0 | 15750 | 0.7389 | 0.9233 |
| 0.0 | 22.0 | 16500 | 0.7135 | 0.925 |
| 0.0028 | 23.0 | 17250 | 0.7301 | 0.925 |
| 0.0 | 24.0 | 18000 | 0.7434 | 0.9283 |
| 0.0 | 25.0 | 18750 | 0.7538 | 0.9233 |
| 0.0 | 26.0 | 19500 | 0.7620 | 0.9233 |
| 0.0 | 27.0 | 20250 | 0.7318 | 0.9233 |
| 0.0 | 28.0 | 21000 | 0.7471 | 0.92 |
| 0.0 | 29.0 | 21750 | 0.7685 | 0.9183 |
| 0.0046 | 30.0 | 22500 | 0.8050 | 0.92 |
| 0.0 | 31.0 | 23250 | 0.7778 | 0.92 |
| 0.0 | 32.0 | 24000 | 0.7888 | 0.9217 |
| 0.0 | 33.0 | 24750 | 0.7725 | 0.9233 |
| 0.0 | 34.0 | 25500 | 0.8185 | 0.9167 |
| 0.0 | 35.0 | 26250 | 0.8187 | 0.92 |
| 0.0 | 36.0 | 27000 | 0.8276 | 0.92 |
| 0.0031 | 37.0 | 27750 | 0.8218 | 0.9217 |
| 0.0 | 38.0 | 28500 | 0.8408 | 0.92 |
| 0.0 | 39.0 | 29250 | 0.8462 | 0.9183 |
| 0.0 | 40.0 | 30000 | 0.8525 | 0.92 |
| 0.0 | 41.0 | 30750 | 0.8553 | 0.92 |
| 0.0 | 42.0 | 31500 | 0.8584 | 0.92 |
| 0.0 | 43.0 | 32250 | 0.8634 | 0.9183 |
| 0.0 | 44.0 | 33000 | 0.8639 | 0.92 |
| 0.0 | 45.0 | 33750 | 0.8671 | 0.92 |
| 0.0 | 46.0 | 34500 | 0.8704 | 0.92 |
| 0.0 | 47.0 | 35250 | 0.8722 | 0.92 |
| 0.0 | 48.0 | 36000 | 0.8729 | 0.92 |
| 0.0 | 49.0 | 36750 | 0.8728 | 0.92 |
| 0.0 | 50.0 | 37500 | 0.8747 | 0.92 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
andakm/cars_new_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andakm/cars_new_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0611
- Train Accuracy: 0.6863
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2295, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 2.0876 | 0.2941 | 0 |
| 1.8215 | 0.3922 | 1 |
| 1.5758 | 0.4510 | 2 |
| 1.3175 | 0.5490 | 3 |
| 1.0611 | 0.6863 | 4 |
### Framework versions
- Transformers 4.41.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
[
"1-series",
"3-series",
"4-series",
"5-series",
"6-series",
"7-series",
"8-series",
"m3",
"m4",
"m5"
] |
pitangent-ds/MobileNet-V2-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MobileNet-V2-food
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the ItsNotRohit/Food121-224 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6890
- Accuracy: 0.5793
- Recall: 0.5793
- Precision: 0.6006
- F1: 0.5769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 20329
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 20000
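Unlike the epoch-driven runs above, this run fixes the optimizer-step budget directly (`training_steps: 20000` with a cosine schedule and no warmup entry). A minimal sketch of that setup plus a `compute_metrics` function producing the four reported columns; scikit-learn with weighted averaging is an assumption, though it is consistent with recall matching accuracy in the table below:

```python
# Hedged sketch: step-budgeted cosine schedule and the four metrics
# (accuracy, recall, precision, F1) reported for this run.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="MobileNet-V2-food",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=128,
    seed=20329,
    lr_scheduler_type="cosine",
    max_steps=20000,                 # training_steps: 20000 (overrides epochs)
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {"accuracy": accuracy_score(labels, preds),
            "recall": recall, "precision": precision, "f1": f1}
```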
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 2.9653 | 0.33 | 2000 | 2.7802 | 0.3438 | 0.3438 | 0.3932 | 0.3105 |
| 2.3854 | 0.66 | 4000 | 2.3105 | 0.4440 | 0.4440 | 0.4979 | 0.4336 |
| 2.1576 | 0.99 | 6000 | 2.0508 | 0.4958 | 0.4958 | 0.5263 | 0.4837 |
| 1.9767 | 1.32 | 8000 | 1.9860 | 0.5086 | 0.5086 | 0.5504 | 0.4956 |
| 1.9215 | 1.65 | 10000 | 1.8312 | 0.5462 | 0.5462 | 0.5815 | 0.5390 |
| 1.782 | 1.98 | 12000 | 1.8554 | 0.5441 | 0.5441 | 0.5864 | 0.5431 |
| 1.7755 | 2.31 | 14000 | 1.9241 | 0.5308 | 0.5308 | 0.5841 | 0.5272 |
| 1.7006 | 2.64 | 16000 | 1.8625 | 0.5451 | 0.5451 | 0.6004 | 0.5466 |
| 1.7289 | 2.98 | 18000 | 1.8560 | 0.5432 | 0.5432 | 0.5940 | 0.5395 |
| 1.7296 | 3.31 | 20000 | 1.6890 | 0.5793 | 0.5793 | 0.6006 | 0.5769 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"biryani",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"chai",
"chapati",
"cheese_plate",
"cheesecake",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"chole_bhature",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"dabeli",
"dal",
"deviled_eggs",
"dhokla",
"donuts",
"dosa",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"idli",
"jalebi",
"kathi_rolls",
"kofta",
"kulfi",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"momos",
"mussels",
"naan",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pakoda",
"pancakes",
"pani_puri",
"panna_cotta",
"panner_butter_masala",
"pav_bhaji",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"vadapav",
"waffles"
] |
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0252
- Accuracy: 0.8885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2345 | 1.0 | 750 | 0.2862 | 0.8852 |
| 0.1226 | 2.0 | 1500 | 0.3492 | 0.8819 |
| 0.0786 | 3.0 | 2250 | 0.4125 | 0.8785 |
| 0.0821 | 4.0 | 3000 | 0.6690 | 0.8569 |
| 0.0058 | 5.0 | 3750 | 0.6467 | 0.8869 |
| 0.0246 | 6.0 | 4500 | 0.7107 | 0.8819 |
| 0.0249 | 7.0 | 5250 | 0.7670 | 0.8852 |
| 0.0023 | 8.0 | 6000 | 0.8535 | 0.8952 |
| 0.0094 | 9.0 | 6750 | 0.9434 | 0.8819 |
| 0.0202 | 10.0 | 7500 | 1.0037 | 0.8835 |
| 0.0073 | 11.0 | 8250 | 1.0395 | 0.8802 |
| 0.0 | 12.0 | 9000 | 0.9154 | 0.8935 |
| 0.0001 | 13.0 | 9750 | 1.0585 | 0.8785 |
| 0.0048 | 14.0 | 10500 | 0.9392 | 0.8952 |
| 0.0002 | 15.0 | 11250 | 0.9865 | 0.8885 |
| 0.0 | 16.0 | 12000 | 1.1010 | 0.8885 |
| 0.0 | 17.0 | 12750 | 1.0500 | 0.8968 |
| 0.0 | 18.0 | 13500 | 1.0366 | 0.8968 |
| 0.0 | 19.0 | 14250 | 0.9974 | 0.8735 |
| 0.0 | 20.0 | 15000 | 1.0266 | 0.8935 |
| 0.0 | 21.0 | 15750 | 0.9740 | 0.9018 |
| 0.0135 | 22.0 | 16500 | 1.0163 | 0.8935 |
| 0.0062 | 23.0 | 17250 | 1.0796 | 0.8835 |
| 0.0 | 24.0 | 18000 | 1.0547 | 0.8852 |
| 0.0 | 25.0 | 18750 | 1.0544 | 0.8918 |
| 0.0 | 26.0 | 19500 | 0.9809 | 0.8952 |
| 0.0089 | 27.0 | 20250 | 1.0367 | 0.8918 |
| 0.0 | 28.0 | 21000 | 1.0326 | 0.8835 |
| 0.0 | 29.0 | 21750 | 1.0069 | 0.8935 |
| 0.0 | 30.0 | 22500 | 1.0290 | 0.8968 |
| 0.0 | 31.0 | 23250 | 1.0034 | 0.8935 |
| 0.0 | 32.0 | 24000 | 0.9398 | 0.8985 |
| 0.0 | 33.0 | 24750 | 1.0178 | 0.8935 |
| 0.0 | 34.0 | 25500 | 1.0300 | 0.8902 |
| 0.0 | 35.0 | 26250 | 1.0140 | 0.8918 |
| 0.0 | 36.0 | 27000 | 1.0115 | 0.8902 |
| 0.0074 | 37.0 | 27750 | 1.0130 | 0.8918 |
| 0.0 | 38.0 | 28500 | 1.0096 | 0.8902 |
| 0.0 | 39.0 | 29250 | 1.0259 | 0.8935 |
| 0.0 | 40.0 | 30000 | 1.0330 | 0.8918 |
| 0.0 | 41.0 | 30750 | 1.0283 | 0.8902 |
| 0.0 | 42.0 | 31500 | 1.0254 | 0.8902 |
| 0.0 | 43.0 | 32250 | 1.0245 | 0.8869 |
| 0.0 | 44.0 | 33000 | 1.0228 | 0.8885 |
| 0.0024 | 45.0 | 33750 | 1.0249 | 0.8885 |
| 0.0 | 46.0 | 34500 | 1.0240 | 0.8885 |
| 0.0 | 47.0 | 35250 | 1.0247 | 0.8885 |
| 0.0 | 48.0 | 36000 | 1.0252 | 0.8885 |
| 0.0 | 49.0 | 36750 | 1.0256 | 0.8885 |
| 0.0 | 50.0 | 37500 | 1.0252 | 0.8885 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_adamax_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8741
- Accuracy: 0.8902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.216 | 1.0 | 750 | 0.2934 | 0.8852 |
| 0.1089 | 2.0 | 1500 | 0.3003 | 0.8885 |
| 0.0954 | 3.0 | 2250 | 0.3369 | 0.8852 |
| 0.0518 | 4.0 | 3000 | 0.4029 | 0.8902 |
| 0.0792 | 5.0 | 3750 | 0.4723 | 0.8819 |
| 0.0261 | 6.0 | 4500 | 0.5460 | 0.8885 |
| 0.0158 | 7.0 | 5250 | 0.6663 | 0.8819 |
| 0.0057 | 8.0 | 6000 | 0.6585 | 0.8802 |
| 0.0002 | 9.0 | 6750 | 0.6982 | 0.8918 |
| 0.0006 | 10.0 | 7500 | 0.7734 | 0.8918 |
| 0.0 | 11.0 | 8250 | 0.7967 | 0.8852 |
| 0.0 | 12.0 | 9000 | 0.8312 | 0.8885 |
| 0.0001 | 13.0 | 9750 | 0.7897 | 0.8869 |
| 0.0 | 14.0 | 10500 | 0.8305 | 0.8885 |
| 0.0 | 15.0 | 11250 | 0.7993 | 0.8869 |
| 0.0 | 16.0 | 12000 | 0.8582 | 0.8835 |
| 0.0 | 17.0 | 12750 | 0.8331 | 0.8902 |
| 0.0 | 18.0 | 13500 | 0.8238 | 0.8952 |
| 0.0 | 19.0 | 14250 | 0.8345 | 0.8952 |
| 0.0 | 20.0 | 15000 | 0.8413 | 0.8885 |
| 0.0 | 21.0 | 15750 | 0.8573 | 0.8918 |
| 0.0085 | 22.0 | 16500 | 0.8363 | 0.8918 |
| 0.005 | 23.0 | 17250 | 0.8431 | 0.8885 |
| 0.0 | 24.0 | 18000 | 0.8537 | 0.8902 |
| 0.0 | 25.0 | 18750 | 0.8479 | 0.8918 |
| 0.0 | 26.0 | 19500 | 0.8197 | 0.8869 |
| 0.0098 | 27.0 | 20250 | 0.8247 | 0.8869 |
| 0.0 | 28.0 | 21000 | 0.8273 | 0.8885 |
| 0.0 | 29.0 | 21750 | 0.8481 | 0.8918 |
| 0.0 | 30.0 | 22500 | 0.8486 | 0.8885 |
| 0.0 | 31.0 | 23250 | 0.8913 | 0.8935 |
| 0.0 | 32.0 | 24000 | 0.8769 | 0.8918 |
| 0.0 | 33.0 | 24750 | 0.8699 | 0.8885 |
| 0.0 | 34.0 | 25500 | 0.8861 | 0.8935 |
| 0.0 | 35.0 | 26250 | 0.8555 | 0.8852 |
| 0.0 | 36.0 | 27000 | 0.8657 | 0.8918 |
| 0.0099 | 37.0 | 27750 | 0.8602 | 0.8902 |
| 0.0 | 38.0 | 28500 | 0.8913 | 0.8918 |
| 0.0 | 39.0 | 29250 | 0.8649 | 0.8885 |
| 0.0 | 40.0 | 30000 | 0.8620 | 0.8885 |
| 0.0 | 41.0 | 30750 | 0.8685 | 0.8885 |
| 0.0 | 42.0 | 31500 | 0.8731 | 0.8902 |
| 0.0 | 43.0 | 32250 | 0.8772 | 0.8902 |
| 0.0 | 44.0 | 33000 | 0.8742 | 0.8902 |
| 0.0026 | 45.0 | 33750 | 0.8773 | 0.8902 |
| 0.0 | 46.0 | 34500 | 0.8745 | 0.8902 |
| 0.0 | 47.0 | 35250 | 0.8728 | 0.8885 |
| 0.0 | 48.0 | 36000 | 0.8716 | 0.8885 |
| 0.0 | 49.0 | 36750 | 0.8740 | 0.8885 |
| 0.0 | 50.0 | 37500 | 0.8741 | 0.8902 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2634
- Accuracy: 0.8965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5921 | 1.0 | 751 | 0.6125 | 0.7713 |
| 0.3591 | 2.0 | 1502 | 0.4405 | 0.8314 |
| 0.3047 | 3.0 | 2253 | 0.3783 | 0.8598 |
| 0.2551 | 4.0 | 3004 | 0.3472 | 0.8781 |
| 0.2903 | 5.0 | 3755 | 0.3280 | 0.8748 |
| 0.2606 | 6.0 | 4506 | 0.3152 | 0.8765 |
| 0.2793 | 7.0 | 5257 | 0.3058 | 0.8881 |
| 0.2466 | 8.0 | 6008 | 0.2984 | 0.8915 |
| 0.2613 | 9.0 | 6759 | 0.2938 | 0.8881 |
| 0.2309 | 10.0 | 7510 | 0.2878 | 0.8948 |
| 0.2362 | 11.0 | 8261 | 0.2856 | 0.8915 |
| 0.2432 | 12.0 | 9012 | 0.2826 | 0.8982 |
| 0.2244 | 13.0 | 9763 | 0.2806 | 0.8898 |
| 0.1581 | 14.0 | 10514 | 0.2786 | 0.8915 |
| 0.1869 | 15.0 | 11265 | 0.2757 | 0.8915 |
| 0.1863 | 16.0 | 12016 | 0.2739 | 0.8915 |
| 0.25 | 17.0 | 12767 | 0.2738 | 0.8898 |
| 0.1609 | 18.0 | 13518 | 0.2717 | 0.8932 |
| 0.1993 | 19.0 | 14269 | 0.2719 | 0.8881 |
| 0.1982 | 20.0 | 15020 | 0.2698 | 0.8915 |
| 0.1558 | 21.0 | 15771 | 0.2692 | 0.8982 |
| 0.1601 | 22.0 | 16522 | 0.2690 | 0.8998 |
| 0.1754 | 23.0 | 17273 | 0.2694 | 0.8898 |
| 0.1664 | 24.0 | 18024 | 0.2677 | 0.8932 |
| 0.1901 | 25.0 | 18775 | 0.2669 | 0.8948 |
| 0.2023 | 26.0 | 19526 | 0.2671 | 0.8965 |
| 0.1982 | 27.0 | 20277 | 0.2651 | 0.8965 |
| 0.2093 | 28.0 | 21028 | 0.2655 | 0.8998 |
| 0.1635 | 29.0 | 21779 | 0.2645 | 0.8982 |
| 0.1441 | 30.0 | 22530 | 0.2639 | 0.8998 |
| 0.1066 | 31.0 | 23281 | 0.2648 | 0.9015 |
| 0.227 | 32.0 | 24032 | 0.2644 | 0.9032 |
| 0.1777 | 33.0 | 24783 | 0.2648 | 0.8965 |
| 0.1846 | 34.0 | 25534 | 0.2641 | 0.8982 |
| 0.1694 | 35.0 | 26285 | 0.2643 | 0.8982 |
| 0.1711 | 36.0 | 27036 | 0.2639 | 0.8965 |
| 0.2649 | 37.0 | 27787 | 0.2640 | 0.9032 |
| 0.1541 | 38.0 | 28538 | 0.2641 | 0.8982 |
| 0.107 | 39.0 | 29289 | 0.2640 | 0.8932 |
| 0.2208 | 40.0 | 30040 | 0.2641 | 0.8998 |
| 0.1601 | 41.0 | 30791 | 0.2638 | 0.8948 |
| 0.1237 | 42.0 | 31542 | 0.2635 | 0.8965 |
| 0.1676 | 43.0 | 32293 | 0.2637 | 0.8932 |
| 0.1349 | 44.0 | 33044 | 0.2638 | 0.8965 |
| 0.1329 | 45.0 | 33795 | 0.2638 | 0.8965 |
| 0.177 | 46.0 | 34546 | 0.2636 | 0.8982 |
| 0.1141 | 47.0 | 35297 | 0.2635 | 0.8965 |
| 0.1694 | 48.0 | 36048 | 0.2636 | 0.8965 |
| 0.1549 | 49.0 | 36799 | 0.2634 | 0.8965 |
| 0.1223 | 50.0 | 37550 | 0.2634 | 0.8965 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8519
- Accuracy: 0.8983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.201 | 1.0 | 750 | 0.2389 | 0.905 |
| 0.2264 | 2.0 | 1500 | 0.3492 | 0.9033 |
| 0.086 | 3.0 | 2250 | 0.4323 | 0.8917 |
| 0.1151 | 4.0 | 3000 | 0.4359 | 0.8867 |
| 0.0552 | 5.0 | 3750 | 0.5210 | 0.8933 |
| 0.0373 | 6.0 | 4500 | 0.7270 | 0.885 |
| 0.0559 | 7.0 | 5250 | 0.6401 | 0.8917 |
| 0.0155 | 8.0 | 6000 | 0.8201 | 0.8883 |
| 0.0012 | 9.0 | 6750 | 0.7935 | 0.9017 |
| 0.0025 | 10.0 | 7500 | 0.8189 | 0.8983 |
| 0.0349 | 11.0 | 8250 | 0.9771 | 0.8933 |
| 0.0002 | 12.0 | 9000 | 0.9998 | 0.8817 |
| 0.0235 | 13.0 | 9750 | 0.8826 | 0.8883 |
| 0.003 | 14.0 | 10500 | 0.8505 | 0.9 |
| 0.0 | 15.0 | 11250 | 0.9776 | 0.895 |
| 0.0 | 16.0 | 12000 | 0.8400 | 0.905 |
| 0.0 | 17.0 | 12750 | 0.8401 | 0.9017 |
| 0.0 | 18.0 | 13500 | 0.9571 | 0.8967 |
| 0.0 | 19.0 | 14250 | 0.8971 | 0.89 |
| 0.0 | 20.0 | 15000 | 0.8936 | 0.8917 |
| 0.0005 | 21.0 | 15750 | 0.8479 | 0.8917 |
| 0.0 | 22.0 | 16500 | 0.8022 | 0.9083 |
| 0.0136 | 23.0 | 17250 | 0.7709 | 0.9083 |
| 0.0 | 24.0 | 18000 | 0.8730 | 0.8983 |
| 0.0 | 25.0 | 18750 | 0.9311 | 0.895 |
| 0.0 | 26.0 | 19500 | 0.8622 | 0.905 |
| 0.0 | 27.0 | 20250 | 0.8794 | 0.9017 |
| 0.0 | 28.0 | 21000 | 0.8617 | 0.9017 |
| 0.0 | 29.0 | 21750 | 0.8575 | 0.905 |
| 0.0 | 30.0 | 22500 | 0.8219 | 0.9033 |
| 0.0 | 31.0 | 23250 | 0.8966 | 0.9017 |
| 0.0 | 32.0 | 24000 | 0.8450 | 0.8967 |
| 0.0 | 33.0 | 24750 | 0.8688 | 0.8933 |
| 0.0 | 34.0 | 25500 | 0.8664 | 0.8933 |
| 0.0 | 35.0 | 26250 | 0.8160 | 0.9017 |
| 0.0 | 36.0 | 27000 | 0.8559 | 0.895 |
| 0.0 | 37.0 | 27750 | 0.8682 | 0.8967 |
| 0.0 | 38.0 | 28500 | 0.8657 | 0.8983 |
| 0.0 | 39.0 | 29250 | 0.8354 | 0.8983 |
| 0.0 | 40.0 | 30000 | 0.8506 | 0.8983 |
| 0.0 | 41.0 | 30750 | 0.8341 | 0.8967 |
| 0.0 | 42.0 | 31500 | 0.8506 | 0.9 |
| 0.0 | 43.0 | 32250 | 0.8450 | 0.8983 |
| 0.0 | 44.0 | 33000 | 0.8494 | 0.9 |
| 0.0 | 45.0 | 33750 | 0.8478 | 0.9 |
| 0.0 | 46.0 | 34500 | 0.8491 | 0.9 |
| 0.0 | 47.0 | 35250 | 0.8495 | 0.9 |
| 0.0 | 48.0 | 36000 | 0.8491 | 0.9 |
| 0.0 | 49.0 | 36750 | 0.8507 | 0.9 |
| 0.0 | 50.0 | 37500 | 0.8519 | 0.8983 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_adamax_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
- Accuracy: 0.915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.218 | 1.0 | 750 | 0.2550 | 0.9217 |
| 0.1784 | 2.0 | 1500 | 0.2609 | 0.915 |
| 0.0871 | 3.0 | 2250 | 0.2647 | 0.9133 |
| 0.0623 | 4.0 | 3000 | 0.3219 | 0.9117 |
| 0.0311 | 5.0 | 3750 | 0.3796 | 0.9133 |
| 0.0257 | 6.0 | 4500 | 0.4439 | 0.92 |
| 0.0152 | 7.0 | 5250 | 0.4788 | 0.9133 |
| 0.0014 | 8.0 | 6000 | 0.5433 | 0.9133 |
| 0.0002 | 9.0 | 6750 | 0.5937 | 0.9167 |
| 0.0001 | 10.0 | 7500 | 0.5984 | 0.9167 |
| 0.0001 | 11.0 | 8250 | 0.6383 | 0.905 |
| 0.0 | 12.0 | 9000 | 0.6698 | 0.915 |
| 0.0 | 13.0 | 9750 | 0.6800 | 0.9133 |
| 0.0009 | 14.0 | 10500 | 0.6448 | 0.9133 |
| 0.0 | 15.0 | 11250 | 0.6953 | 0.915 |
| 0.0 | 16.0 | 12000 | 0.6895 | 0.915 |
| 0.0 | 17.0 | 12750 | 0.6872 | 0.9133 |
| 0.0 | 18.0 | 13500 | 0.6939 | 0.9133 |
| 0.0 | 19.0 | 14250 | 0.7078 | 0.91 |
| 0.0 | 20.0 | 15000 | 0.7103 | 0.9083 |
| 0.0 | 21.0 | 15750 | 0.7293 | 0.91 |
| 0.0 | 22.0 | 16500 | 0.7093 | 0.9117 |
| 0.0 | 23.0 | 17250 | 0.6939 | 0.91 |
| 0.0 | 24.0 | 18000 | 0.7152 | 0.91 |
| 0.0 | 25.0 | 18750 | 0.7121 | 0.9133 |
| 0.0 | 26.0 | 19500 | 0.7140 | 0.9133 |
| 0.0 | 27.0 | 20250 | 0.7127 | 0.9133 |
| 0.0 | 28.0 | 21000 | 0.7205 | 0.9133 |
| 0.0 | 29.0 | 21750 | 0.7206 | 0.9133 |
| 0.0 | 30.0 | 22500 | 0.7149 | 0.9117 |
| 0.0 | 31.0 | 23250 | 0.7529 | 0.9067 |
| 0.0 | 32.0 | 24000 | 0.7394 | 0.9117 |
| 0.0 | 33.0 | 24750 | 0.7449 | 0.9117 |
| 0.0 | 34.0 | 25500 | 0.7551 | 0.91 |
| 0.0 | 35.0 | 26250 | 0.7241 | 0.9117 |
| 0.0 | 36.0 | 27000 | 0.7326 | 0.9133 |
| 0.0 | 37.0 | 27750 | 0.7616 | 0.91 |
| 0.0 | 38.0 | 28500 | 0.7478 | 0.9117 |
| 0.0 | 39.0 | 29250 | 0.7373 | 0.91 |
| 0.0 | 40.0 | 30000 | 0.7426 | 0.9133 |
| 0.0 | 41.0 | 30750 | 0.7400 | 0.91 |
| 0.0 | 42.0 | 31500 | 0.7458 | 0.9133 |
| 0.0 | 43.0 | 32250 | 0.7444 | 0.9133 |
| 0.0 | 44.0 | 33000 | 0.7461 | 0.9133 |
| 0.0 | 45.0 | 33750 | 0.7448 | 0.915 |
| 0.0 | 46.0 | 34500 | 0.7457 | 0.915 |
| 0.0 | 47.0 | 35250 | 0.7456 | 0.915 |
| 0.0 | 48.0 | 36000 | 0.7454 | 0.915 |
| 0.0 | 49.0 | 36750 | 0.7466 | 0.915 |
| 0.0 | 50.0 | 37500 | 0.7470 | 0.915 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2876
- Accuracy: 0.8985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.569 | 1.0 | 750 | 0.5992 | 0.7754 |
| 0.3171 | 2.0 | 1500 | 0.4389 | 0.8136 |
| 0.3055 | 3.0 | 2250 | 0.3875 | 0.8403 |
| 0.3488 | 4.0 | 3000 | 0.3592 | 0.8519 |
| 0.2869 | 5.0 | 3750 | 0.3460 | 0.8586 |
| 0.2849 | 6.0 | 4500 | 0.3276 | 0.8669 |
| 0.2527 | 7.0 | 5250 | 0.3203 | 0.8719 |
| 0.2343 | 8.0 | 6000 | 0.3120 | 0.8802 |
| 0.2514 | 9.0 | 6750 | 0.3061 | 0.8802 |
| 0.2509 | 10.0 | 7500 | 0.3037 | 0.8819 |
| 0.2672 | 11.0 | 8250 | 0.2976 | 0.8835 |
| 0.2698 | 12.0 | 9000 | 0.2941 | 0.8885 |
| 0.2268 | 13.0 | 9750 | 0.2935 | 0.8869 |
| 0.1786 | 14.0 | 10500 | 0.2916 | 0.8835 |
| 0.166 | 15.0 | 11250 | 0.2890 | 0.8885 |
| 0.192 | 16.0 | 12000 | 0.2883 | 0.8869 |
| 0.1507 | 17.0 | 12750 | 0.2880 | 0.8869 |
| 0.1815 | 18.0 | 13500 | 0.2859 | 0.8885 |
| 0.1982 | 19.0 | 14250 | 0.2882 | 0.8902 |
| 0.2 | 20.0 | 15000 | 0.2835 | 0.8869 |
| 0.1801 | 21.0 | 15750 | 0.2845 | 0.8902 |
| 0.1194 | 22.0 | 16500 | 0.2833 | 0.8918 |
| 0.1747 | 23.0 | 17250 | 0.2854 | 0.8935 |
| 0.1283 | 24.0 | 18000 | 0.2844 | 0.8935 |
| 0.1821 | 25.0 | 18750 | 0.2833 | 0.8952 |
| 0.2094 | 26.0 | 19500 | 0.2844 | 0.8918 |
| 0.1857 | 27.0 | 20250 | 0.2847 | 0.8985 |
| 0.1436 | 28.0 | 21000 | 0.2850 | 0.8952 |
| 0.1786 | 29.0 | 21750 | 0.2841 | 0.8985 |
| 0.1785 | 30.0 | 22500 | 0.2847 | 0.8952 |
| 0.1889 | 31.0 | 23250 | 0.2850 | 0.8952 |
| 0.1905 | 32.0 | 24000 | 0.2845 | 0.8968 |
| 0.1812 | 33.0 | 24750 | 0.2853 | 0.8935 |
| 0.1723 | 34.0 | 25500 | 0.2858 | 0.8952 |
| 0.1909 | 35.0 | 26250 | 0.2850 | 0.8952 |
| 0.1661 | 36.0 | 27000 | 0.2862 | 0.8985 |
| 0.1719 | 37.0 | 27750 | 0.2861 | 0.8935 |
| 0.156 | 38.0 | 28500 | 0.2859 | 0.9002 |
| 0.1428 | 39.0 | 29250 | 0.2859 | 0.8985 |
| 0.229 | 40.0 | 30000 | 0.2871 | 0.8985 |
| 0.2023 | 41.0 | 30750 | 0.2873 | 0.8968 |
| 0.1583 | 42.0 | 31500 | 0.2870 | 0.8968 |
| 0.1454 | 43.0 | 32250 | 0.2875 | 0.8985 |
| 0.1231 | 44.0 | 33000 | 0.2872 | 0.8985 |
| 0.1299 | 45.0 | 33750 | 0.2879 | 0.8985 |
| 0.1947 | 46.0 | 34500 | 0.2876 | 0.8985 |
| 0.1774 | 47.0 | 35250 | 0.2875 | 0.8985 |
| 0.1319 | 48.0 | 36000 | 0.2874 | 0.8968 |
| 0.1298 | 49.0 | 36750 | 0.2876 | 0.8985 |
| 0.1348 | 50.0 | 37500 | 0.2876 | 0.8985 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
jefercania/vit_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0457
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
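The beans dataset and its three classes can be loaded as follows (a sketch; the preprocessing used for training is not documented in this card):
```py
from datasets import load_dataset

ds = load_dataset("beans")
# The ClassLabel feature carries the three class names used by this model
print(ds["train"].features["labels"].names)  # ['angular_leaf_spot', 'bean_rust', 'healthy']
```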
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1391 | 3.85 | 500 | 0.0457 | 0.9850 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
jefercania/vit-beans-image-classification-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-beans-image-classification-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1321
- Accuracy: 0.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0532 | 3.85 | 500 | 0.1321 | 0.9699 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1802
- Accuracy: 0.8917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2287 | 1.0 | 750 | 0.3453 | 0.8783 |
| 0.1774 | 2.0 | 1500 | 0.3819 | 0.8783 |
| 0.1288 | 3.0 | 2250 | 0.5077 | 0.8617 |
| 0.0591 | 4.0 | 3000 | 0.5258 | 0.8767 |
| 0.013 | 5.0 | 3750 | 0.7962 | 0.87 |
| 0.0441 | 6.0 | 4500 | 0.7798 | 0.875 |
| 0.0022 | 7.0 | 5250 | 0.8865 | 0.8783 |
| 0.0168 | 8.0 | 6000 | 0.9982 | 0.8817 |
| 0.0002 | 9.0 | 6750 | 0.9825 | 0.8833 |
| 0.012 | 10.0 | 7500 | 0.9837 | 0.8883 |
| 0.0139 | 11.0 | 8250 | 1.0185 | 0.88 |
| 0.0283 | 12.0 | 9000 | 1.0469 | 0.8767 |
| 0.0013 | 13.0 | 9750 | 1.1375 | 0.885 |
| 0.0051 | 14.0 | 10500 | 1.1468 | 0.8817 |
| 0.0 | 15.0 | 11250 | 1.1486 | 0.875 |
| 0.0211 | 16.0 | 12000 | 1.0421 | 0.8867 |
| 0.0 | 17.0 | 12750 | 1.1215 | 0.8783 |
| 0.0 | 18.0 | 13500 | 1.1501 | 0.8917 |
| 0.0001 | 19.0 | 14250 | 1.2352 | 0.88 |
| 0.0002 | 20.0 | 15000 | 1.2860 | 0.8883 |
| 0.0 | 21.0 | 15750 | 1.1704 | 0.8833 |
| 0.0 | 22.0 | 16500 | 1.0833 | 0.8933 |
| 0.0 | 23.0 | 17250 | 1.1109 | 0.8933 |
| 0.0 | 24.0 | 18000 | 1.1424 | 0.8933 |
| 0.0 | 25.0 | 18750 | 1.0812 | 0.89 |
| 0.0 | 26.0 | 19500 | 1.1046 | 0.8917 |
| 0.0 | 27.0 | 20250 | 1.1453 | 0.8883 |
| 0.0 | 28.0 | 21000 | 1.1203 | 0.885 |
| 0.0 | 29.0 | 21750 | 1.1015 | 0.8933 |
| 0.0 | 30.0 | 22500 | 1.1212 | 0.8967 |
| 0.0 | 31.0 | 23250 | 1.1480 | 0.8883 |
| 0.0 | 32.0 | 24000 | 1.1454 | 0.8833 |
| 0.0 | 33.0 | 24750 | 1.1314 | 0.8867 |
| 0.0 | 34.0 | 25500 | 1.1208 | 0.885 |
| 0.0 | 35.0 | 26250 | 1.1448 | 0.8833 |
| 0.0 | 36.0 | 27000 | 1.1486 | 0.8833 |
| 0.0 | 37.0 | 27750 | 1.1572 | 0.885 |
| 0.0 | 38.0 | 28500 | 1.1406 | 0.8867 |
| 0.0 | 39.0 | 29250 | 1.1768 | 0.89 |
| 0.0 | 40.0 | 30000 | 1.1690 | 0.885 |
| 0.0 | 41.0 | 30750 | 1.1715 | 0.8883 |
| 0.0 | 42.0 | 31500 | 1.1720 | 0.89 |
| 0.0 | 43.0 | 32250 | 1.1654 | 0.8917 |
| 0.0 | 44.0 | 33000 | 1.1692 | 0.8917 |
| 0.0 | 45.0 | 33750 | 1.1750 | 0.8917 |
| 0.0 | 46.0 | 34500 | 1.1770 | 0.8917 |
| 0.0 | 47.0 | 35250 | 1.1783 | 0.8917 |
| 0.0 | 48.0 | 36000 | 1.1786 | 0.8917 |
| 0.0 | 49.0 | 36750 | 1.1796 | 0.8917 |
| 0.0 | 50.0 | 37500 | 1.1802 | 0.8917 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_adamax_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1898
- Accuracy: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2529 | 1.0 | 750 | 0.3083 | 0.8717 |
| 0.1742 | 2.0 | 1500 | 0.3045 | 0.8817 |
| 0.085 | 3.0 | 2250 | 0.3224 | 0.8917 |
| 0.0584 | 4.0 | 3000 | 0.3946 | 0.89 |
| 0.0231 | 5.0 | 3750 | 0.5149 | 0.8783 |
| 0.0068 | 6.0 | 4500 | 0.5629 | 0.8917 |
| 0.0006 | 7.0 | 5250 | 0.6401 | 0.8883 |
| 0.0003 | 8.0 | 6000 | 0.7450 | 0.8833 |
| 0.0003 | 9.0 | 6750 | 0.7717 | 0.8867 |
| 0.0001 | 10.0 | 7500 | 0.8308 | 0.8817 |
| 0.0 | 11.0 | 8250 | 0.8547 | 0.8867 |
| 0.0001 | 12.0 | 9000 | 0.8939 | 0.8817 |
| 0.0 | 13.0 | 9750 | 0.9205 | 0.8783 |
| 0.0 | 14.0 | 10500 | 0.9505 | 0.8783 |
| 0.0 | 15.0 | 11250 | 0.9627 | 0.885 |
| 0.0277 | 16.0 | 12000 | 0.9874 | 0.88 |
| 0.0 | 17.0 | 12750 | 0.9840 | 0.8783 |
| 0.0 | 18.0 | 13500 | 1.0047 | 0.885 |
| 0.0 | 19.0 | 14250 | 1.0103 | 0.8833 |
| 0.0 | 20.0 | 15000 | 1.0263 | 0.88 |
| 0.0 | 21.0 | 15750 | 1.0370 | 0.8867 |
| 0.0 | 22.0 | 16500 | 1.0526 | 0.885 |
| 0.0 | 23.0 | 17250 | 1.0655 | 0.8867 |
| 0.0 | 24.0 | 18000 | 1.0468 | 0.8917 |
| 0.0 | 25.0 | 18750 | 1.0862 | 0.88 |
| 0.0 | 26.0 | 19500 | 1.1000 | 0.8833 |
| 0.0 | 27.0 | 20250 | 1.0936 | 0.885 |
| 0.0 | 28.0 | 21000 | 1.1024 | 0.885 |
| 0.0 | 29.0 | 21750 | 1.1116 | 0.885 |
| 0.0 | 30.0 | 22500 | 1.1299 | 0.88 |
| 0.0 | 31.0 | 23250 | 1.1217 | 0.8833 |
| 0.0 | 32.0 | 24000 | 1.1546 | 0.8817 |
| 0.0 | 33.0 | 24750 | 1.1366 | 0.8833 |
| 0.0 | 34.0 | 25500 | 1.1374 | 0.8833 |
| 0.0 | 35.0 | 26250 | 1.1519 | 0.8833 |
| 0.0 | 36.0 | 27000 | 1.1620 | 0.8833 |
| 0.0 | 37.0 | 27750 | 1.1614 | 0.8833 |
| 0.0 | 38.0 | 28500 | 1.1586 | 0.8833 |
| 0.0 | 39.0 | 29250 | 1.1778 | 0.8833 |
| 0.0 | 40.0 | 30000 | 1.1771 | 0.8833 |
| 0.0 | 41.0 | 30750 | 1.1795 | 0.8833 |
| 0.0 | 42.0 | 31500 | 1.1811 | 0.8833 |
| 0.0 | 43.0 | 32250 | 1.1780 | 0.885 |
| 0.0 | 44.0 | 33000 | 1.1793 | 0.885 |
| 0.0 | 45.0 | 33750 | 1.1843 | 0.8833 |
| 0.0 | 46.0 | 34500 | 1.1866 | 0.8833 |
| 0.0 | 47.0 | 35250 | 1.1877 | 0.8833 |
| 0.0 | 48.0 | 36000 | 1.1886 | 0.8833 |
| 0.0 | 49.0 | 36750 | 1.1894 | 0.8833 |
| 0.0 | 50.0 | 37500 | 1.1898 | 0.8833 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
andrecastro/cvt-13
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cvt-13
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0361
- Accuracy: 0.9886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
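The reported total batch size follows from accumulating gradients over several forward passes before each optimizer step:
```py
# Effective batch size with gradient accumulation
per_device_train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32
```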
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4048 | 1.0 | 327 | 0.2161 | 0.9156 |
| 0.33 | 2.0 | 654 | 0.1320 | 0.9501 |
| 0.3147 | 3.0 | 981 | 0.1060 | 0.9612 |
| 0.2213 | 4.0 | 1309 | 0.0820 | 0.9742 |
| 0.3256 | 5.0 | 1636 | 0.0717 | 0.9750 |
| 0.3207 | 6.0 | 1963 | 0.1062 | 0.9626 |
| 0.2273 | 7.0 | 2290 | 0.0535 | 0.9797 |
| 0.2066 | 8.0 | 2618 | 0.0566 | 0.9817 |
| 0.2162 | 9.0 | 2945 | 0.0459 | 0.9828 |
| 0.2296 | 10.0 | 3272 | 0.0444 | 0.9851 |
| 0.187 | 11.0 | 3599 | 0.0348 | 0.9882 |
| 0.2208 | 12.0 | 3927 | 0.0505 | 0.9848 |
| 0.1855 | 13.0 | 4254 | 0.0371 | 0.9869 |
| 0.1875 | 14.0 | 4581 | 0.0384 | 0.9880 |
| 0.202 | 14.99 | 4905 | 0.0361 | 0.9886 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"anormal_total",
"normal_total"
] |
hkivancoral/smids_10x_deit_base_sgd_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2607
- Accuracy: 0.9083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5671 | 1.0 | 750 | 0.5928 | 0.7967 |
| 0.408 | 2.0 | 1500 | 0.4283 | 0.8517 |
| 0.3447 | 3.0 | 2250 | 0.3742 | 0.8633 |
| 0.3088 | 4.0 | 3000 | 0.3475 | 0.8683 |
| 0.2979 | 5.0 | 3750 | 0.3269 | 0.8733 |
| 0.2962 | 6.0 | 4500 | 0.3183 | 0.8767 |
| 0.2557 | 7.0 | 5250 | 0.3059 | 0.8817 |
| 0.2555 | 8.0 | 6000 | 0.2957 | 0.8817 |
| 0.2367 | 9.0 | 6750 | 0.2914 | 0.885 |
| 0.1949 | 10.0 | 7500 | 0.2859 | 0.8917 |
| 0.2488 | 11.0 | 8250 | 0.2846 | 0.8917 |
| 0.2475 | 12.0 | 9000 | 0.2777 | 0.895 |
| 0.1828 | 13.0 | 9750 | 0.2753 | 0.8983 |
| 0.2439 | 14.0 | 10500 | 0.2718 | 0.9017 |
| 0.2473 | 15.0 | 11250 | 0.2704 | 0.895 |
| 0.1928 | 16.0 | 12000 | 0.2696 | 0.895 |
| 0.1843 | 17.0 | 12750 | 0.2710 | 0.8983 |
| 0.2029 | 18.0 | 13500 | 0.2625 | 0.9033 |
| 0.2035 | 19.0 | 14250 | 0.2665 | 0.9 |
| 0.1744 | 20.0 | 15000 | 0.2677 | 0.905 |
| 0.152 | 21.0 | 15750 | 0.2612 | 0.9033 |
| 0.1898 | 22.0 | 16500 | 0.2631 | 0.9 |
| 0.1983 | 23.0 | 17250 | 0.2648 | 0.9067 |
| 0.1746 | 24.0 | 18000 | 0.2651 | 0.9067 |
| 0.2045 | 25.0 | 18750 | 0.2633 | 0.9067 |
| 0.1969 | 26.0 | 19500 | 0.2578 | 0.9067 |
| 0.1227 | 27.0 | 20250 | 0.2593 | 0.91 |
| 0.1518 | 28.0 | 21000 | 0.2610 | 0.9083 |
| 0.1661 | 29.0 | 21750 | 0.2607 | 0.9067 |
| 0.1698 | 30.0 | 22500 | 0.2600 | 0.9083 |
| 0.1513 | 31.0 | 23250 | 0.2624 | 0.9067 |
| 0.1181 | 32.0 | 24000 | 0.2595 | 0.9083 |
| 0.1772 | 33.0 | 24750 | 0.2601 | 0.9083 |
| 0.1745 | 34.0 | 25500 | 0.2608 | 0.9083 |
| 0.1241 | 35.0 | 26250 | 0.2599 | 0.9083 |
| 0.1459 | 36.0 | 27000 | 0.2607 | 0.9083 |
| 0.1333 | 37.0 | 27750 | 0.2607 | 0.9083 |
| 0.1934 | 38.0 | 28500 | 0.2605 | 0.905 |
| 0.1357 | 39.0 | 29250 | 0.2594 | 0.91 |
| 0.1781 | 40.0 | 30000 | 0.2597 | 0.91 |
| 0.1473 | 41.0 | 30750 | 0.2601 | 0.91 |
| 0.1773 | 42.0 | 31500 | 0.2596 | 0.9083 |
| 0.1488 | 43.0 | 32250 | 0.2600 | 0.9083 |
| 0.1451 | 44.0 | 33000 | 0.2615 | 0.9083 |
| 0.1365 | 45.0 | 33750 | 0.2606 | 0.9083 |
| 0.1144 | 46.0 | 34500 | 0.2619 | 0.9067 |
| 0.1714 | 47.0 | 35250 | 0.2607 | 0.9083 |
| 0.1189 | 48.0 | 36000 | 0.2607 | 0.9083 |
| 0.1679 | 49.0 | 36750 | 0.2607 | 0.9083 |
| 0.1606 | 50.0 | 37500 | 0.2607 | 0.9083 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9737
- Accuracy: 0.905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2005 | 1.0 | 750 | 0.2656 | 0.9017 |
| 0.1545 | 2.0 | 1500 | 0.2591 | 0.905 |
| 0.0978 | 3.0 | 2250 | 0.4162 | 0.8983 |
| 0.0801 | 4.0 | 3000 | 0.4834 | 0.8967 |
| 0.0206 | 5.0 | 3750 | 0.5823 | 0.8983 |
| 0.0031 | 6.0 | 4500 | 0.5789 | 0.91 |
| 0.063 | 7.0 | 5250 | 0.6700 | 0.905 |
| 0.0223 | 8.0 | 6000 | 0.7780 | 0.9 |
| 0.0158 | 9.0 | 6750 | 0.7077 | 0.91 |
| 0.0007 | 10.0 | 7500 | 0.8470 | 0.9133 |
| 0.0055 | 11.0 | 8250 | 0.8576 | 0.8983 |
| 0.0 | 12.0 | 9000 | 0.8487 | 0.9117 |
| 0.0005 | 13.0 | 9750 | 1.0023 | 0.9 |
| 0.0 | 14.0 | 10500 | 0.8480 | 0.91 |
| 0.0003 | 15.0 | 11250 | 0.8518 | 0.91 |
| 0.0 | 16.0 | 12000 | 0.9913 | 0.9033 |
| 0.006 | 17.0 | 12750 | 0.9222 | 0.9033 |
| 0.0005 | 18.0 | 13500 | 0.9286 | 0.905 |
| 0.0012 | 19.0 | 14250 | 0.9239 | 0.905 |
| 0.0 | 20.0 | 15000 | 0.9679 | 0.9033 |
| 0.0004 | 21.0 | 15750 | 0.9229 | 0.9017 |
| 0.0 | 22.0 | 16500 | 0.9725 | 0.9033 |
| 0.0024 | 23.0 | 17250 | 0.9148 | 0.9133 |
| 0.0 | 24.0 | 18000 | 0.9300 | 0.9117 |
| 0.0 | 25.0 | 18750 | 0.9072 | 0.9067 |
| 0.0 | 26.0 | 19500 | 0.9306 | 0.9017 |
| 0.0 | 27.0 | 20250 | 0.8933 | 0.9117 |
| 0.0 | 28.0 | 21000 | 0.8908 | 0.91 |
| 0.0 | 29.0 | 21750 | 0.9728 | 0.9 |
| 0.0036 | 30.0 | 22500 | 0.9189 | 0.9067 |
| 0.0 | 31.0 | 23250 | 0.9100 | 0.9133 |
| 0.0 | 32.0 | 24000 | 0.9330 | 0.9067 |
| 0.0 | 33.0 | 24750 | 0.9244 | 0.9083 |
| 0.0 | 34.0 | 25500 | 0.9397 | 0.9067 |
| 0.0 | 35.0 | 26250 | 0.9397 | 0.9083 |
| 0.0 | 36.0 | 27000 | 0.9534 | 0.9067 |
| 0.0047 | 37.0 | 27750 | 0.9577 | 0.9067 |
| 0.0 | 38.0 | 28500 | 0.9431 | 0.9067 |
| 0.0 | 39.0 | 29250 | 0.9640 | 0.905 |
| 0.0 | 40.0 | 30000 | 0.9546 | 0.9067 |
| 0.0 | 41.0 | 30750 | 0.9625 | 0.9033 |
| 0.0 | 42.0 | 31500 | 0.9609 | 0.905 |
| 0.0 | 43.0 | 32250 | 0.9631 | 0.905 |
| 0.0 | 44.0 | 33000 | 0.9649 | 0.905 |
| 0.0 | 45.0 | 33750 | 0.9667 | 0.905 |
| 0.0 | 46.0 | 34500 | 0.9694 | 0.905 |
| 0.0 | 47.0 | 35250 | 0.9714 | 0.905 |
| 0.0 | 48.0 | 36000 | 0.9719 | 0.905 |
| 0.0 | 49.0 | 36750 | 0.9733 | 0.905 |
| 0.0 | 50.0 | 37500 | 0.9737 | 0.905 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
B4Z00/pokemons_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# B4Z00/pokemons_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4438
- Validation Loss: 5.8186
- Train Accuracy: 0.0451
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
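Equivalently, the optimizer dictionary above can be written in Keras as (a sketch; values taken from the configuration):
```py
import tensorflow as tf

# Constant learning rate with standard Adam defaults, as listed above
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)
```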
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 4.5854 | 5.2434 | 0.0191 | 0 |
| 3.4375 | 5.2942 | 0.0191 | 1 |
| 2.3515 | 5.3596 | 0.0246 | 2 |
| 1.4229 | 5.5273 | 0.0383 | 3 |
| 0.7900 | 5.6574 | 0.0464 | 4 |
| 0.4438 | 5.8186 | 0.0451 | 5 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"porygon",
"goldeen",
"articuno",
"clefable",
"cubone",
"marowak",
"nidorino",
"jolteon",
"muk",
"magikarp",
"slowbro",
"tauros",
"kabuto",
"seaking",
"spearow",
"sandshrew",
"eevee",
"kakuna",
"omastar",
"ekans",
"geodude",
"magmar",
"snorlax",
"meowth",
"dugtrio",
"pidgeotto",
"venusaur",
"persian",
"rhydon",
"starmie",
"charmeleon",
"lapras",
"alakazam",
"graveler",
"psyduck",
"machop",
"rapidash",
"doduo",
"magneton",
"arcanine",
"electrode",
"omanyte",
"poliwhirl",
"mew",
"alolan sandslash",
"mewtwo",
"jynx",
"weezing",
"gastly",
"victreebel",
"ivysaur",
"mrmime",
"shellder",
"scyther",
"diglett",
"primeape",
"raichu",
"oddish",
"dodrio",
"dragonair",
"weedle",
"golduck",
"hitmonlee",
"flareon",
"krabby",
"parasect",
"ninetales",
"nidoqueen",
"kabutops",
"drowzee",
"caterpie",
"jigglypuff",
"machamp",
"hitmonchan",
"clefairy",
"kangaskhan",
"dragonite",
"weepinbell",
"fearow",
"bellsprout",
"grimer",
"nidorina",
"staryu",
"horsea",
"gloom",
"electabuzz",
"dratini",
"machoke",
"magnemite",
"squirtle",
"gyarados",
"pidgeot",
"bulbasaur",
"nidoking",
"golem",
"aerodactyl",
"dewgong",
"moltres",
"zapdos",
"poliwrath",
"vulpix",
"beedrill",
"charmander",
"abra",
"zubat",
"golbat",
"mankey",
"wigglytuff",
"charizard",
"slowpoke",
"poliwag",
"tentacruel",
"rhyhorn",
"onix",
"butterfree",
"exeggcute",
"sandslash",
"seadra",
"pinsir",
"rattata",
"growlithe",
"haunter",
"pidgey",
"ditto",
"farfetchd",
"pikachu",
"raticate",
"wartortle",
"gengar",
"vaporeon",
"cloyster",
"hypno",
"arbok",
"metapod",
"tangela",
"kingler",
"exeggutor",
"kadabra",
"seel",
"venonat",
"voltorb",
"chansey",
"venomoth",
"ponyta",
"vileplume",
"koffing",
"blastoise",
"tentacool",
"lickitung",
"paras"
] |
hkivancoral/smids_10x_deit_base_adamax_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7940
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2093 | 1.0 | 750 | 0.2691 | 0.8933 |
| 0.1756 | 2.0 | 1500 | 0.2406 | 0.905 |
| 0.137 | 3.0 | 2250 | 0.2712 | 0.905 |
| 0.0854 | 4.0 | 3000 | 0.3429 | 0.9033 |
| 0.0523 | 5.0 | 3750 | 0.4008 | 0.9083 |
| 0.0156 | 6.0 | 4500 | 0.4936 | 0.9 |
| 0.0114 | 7.0 | 5250 | 0.5881 | 0.8983 |
| 0.0007 | 8.0 | 6000 | 0.5995 | 0.9017 |
| 0.0011 | 9.0 | 6750 | 0.6327 | 0.8933 |
| 0.0013 | 10.0 | 7500 | 0.6424 | 0.9033 |
| 0.0001 | 11.0 | 8250 | 0.6384 | 0.9067 |
| 0.0 | 12.0 | 9000 | 0.6782 | 0.905 |
| 0.0 | 13.0 | 9750 | 0.7151 | 0.9067 |
| 0.0 | 14.0 | 10500 | 0.7016 | 0.9017 |
| 0.0 | 15.0 | 11250 | 0.7152 | 0.9033 |
| 0.0 | 16.0 | 12000 | 0.7512 | 0.9 |
| 0.0095 | 17.0 | 12750 | 0.7354 | 0.8983 |
| 0.0 | 18.0 | 13500 | 0.7495 | 0.9033 |
| 0.0 | 19.0 | 14250 | 0.7609 | 0.9 |
| 0.0 | 20.0 | 15000 | 0.7954 | 0.895 |
| 0.0 | 21.0 | 15750 | 0.7575 | 0.8983 |
| 0.0 | 22.0 | 16500 | 0.7508 | 0.9 |
| 0.0 | 23.0 | 17250 | 0.7396 | 0.9033 |
| 0.0 | 24.0 | 18000 | 0.6948 | 0.9017 |
| 0.0 | 25.0 | 18750 | 0.7758 | 0.9017 |
| 0.0 | 26.0 | 19500 | 0.7490 | 0.905 |
| 0.0 | 27.0 | 20250 | 0.7438 | 0.905 |
| 0.0 | 28.0 | 21000 | 0.7884 | 0.9017 |
| 0.0 | 29.0 | 21750 | 0.7693 | 0.9 |
| 0.008 | 30.0 | 22500 | 0.8123 | 0.9017 |
| 0.0 | 31.0 | 23250 | 0.7714 | 0.9 |
| 0.0 | 32.0 | 24000 | 0.7832 | 0.905 |
| 0.0 | 33.0 | 24750 | 0.7708 | 0.9 |
| 0.0 | 34.0 | 25500 | 0.7992 | 0.9 |
| 0.0 | 35.0 | 26250 | 0.7850 | 0.9 |
| 0.0 | 36.0 | 27000 | 0.7791 | 0.9017 |
| 0.0028 | 37.0 | 27750 | 0.7669 | 0.9 |
| 0.0 | 38.0 | 28500 | 0.7718 | 0.9 |
| 0.0 | 39.0 | 29250 | 0.8025 | 0.9033 |
| 0.0 | 40.0 | 30000 | 0.8035 | 0.9017 |
| 0.0 | 41.0 | 30750 | 0.7913 | 0.9017 |
| 0.0 | 42.0 | 31500 | 0.7895 | 0.9033 |
| 0.0 | 43.0 | 32250 | 0.7964 | 0.9017 |
| 0.0 | 44.0 | 33000 | 0.7921 | 0.9017 |
| 0.0 | 45.0 | 33750 | 0.7925 | 0.9033 |
| 0.0 | 46.0 | 34500 | 0.7930 | 0.9033 |
| 0.0 | 47.0 | 35250 | 0.7931 | 0.9017 |
| 0.0 | 48.0 | 36000 | 0.7925 | 0.9017 |
| 0.0 | 49.0 | 36750 | 0.7920 | 0.9033 |
| 0.0 | 50.0 | 37500 | 0.7940 | 0.9017 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3178
- Accuracy: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.599 | 1.0 | 750 | 0.5727 | 0.805 |
| 0.4144 | 2.0 | 1500 | 0.4280 | 0.8367 |
| 0.3778 | 3.0 | 2250 | 0.3848 | 0.85 |
| 0.3651 | 4.0 | 3000 | 0.3623 | 0.8483 |
| 0.2724 | 5.0 | 3750 | 0.3497 | 0.8633 |
| 0.2557 | 6.0 | 4500 | 0.3423 | 0.865 |
| 0.2087 | 7.0 | 5250 | 0.3348 | 0.8617 |
| 0.2338 | 8.0 | 6000 | 0.3321 | 0.8667 |
| 0.2309 | 9.0 | 6750 | 0.3286 | 0.8683 |
| 0.222 | 10.0 | 7500 | 0.3223 | 0.8683 |
| 0.2535 | 11.0 | 8250 | 0.3209 | 0.8683 |
| 0.2943 | 12.0 | 9000 | 0.3182 | 0.865 |
| 0.2348 | 13.0 | 9750 | 0.3166 | 0.87 |
| 0.2428 | 14.0 | 10500 | 0.3176 | 0.865 |
| 0.1672 | 15.0 | 11250 | 0.3141 | 0.875 |
| 0.2041 | 16.0 | 12000 | 0.3122 | 0.8783 |
| 0.143 | 17.0 | 12750 | 0.3133 | 0.8717 |
| 0.1867 | 18.0 | 13500 | 0.3116 | 0.8767 |
| 0.2171 | 19.0 | 14250 | 0.3182 | 0.87 |
| 0.1733 | 20.0 | 15000 | 0.3144 | 0.8733 |
| 0.1721 | 21.0 | 15750 | 0.3128 | 0.875 |
| 0.1968 | 22.0 | 16500 | 0.3133 | 0.8717 |
| 0.1782 | 23.0 | 17250 | 0.3143 | 0.8717 |
| 0.2288 | 24.0 | 18000 | 0.3135 | 0.8717 |
| 0.1955 | 25.0 | 18750 | 0.3134 | 0.8733 |
| 0.1366 | 26.0 | 19500 | 0.3154 | 0.87 |
| 0.1576 | 27.0 | 20250 | 0.3131 | 0.8783 |
| 0.1394 | 28.0 | 21000 | 0.3139 | 0.8733 |
| 0.1334 | 29.0 | 21750 | 0.3147 | 0.8733 |
| 0.1696 | 30.0 | 22500 | 0.3135 | 0.8767 |
| 0.1131 | 31.0 | 23250 | 0.3147 | 0.8767 |
| 0.1384 | 32.0 | 24000 | 0.3141 | 0.8733 |
| 0.1516 | 33.0 | 24750 | 0.3150 | 0.875 |
| 0.19 | 34.0 | 25500 | 0.3153 | 0.8733 |
| 0.1122 | 35.0 | 26250 | 0.3150 | 0.8733 |
| 0.164 | 36.0 | 27000 | 0.3158 | 0.8717 |
| 0.1263 | 37.0 | 27750 | 0.3166 | 0.875 |
| 0.1818 | 38.0 | 28500 | 0.3156 | 0.8733 |
| 0.1588 | 39.0 | 29250 | 0.3163 | 0.8717 |
| 0.1761 | 40.0 | 30000 | 0.3172 | 0.8733 |
| 0.1577 | 41.0 | 30750 | 0.3157 | 0.8733 |
| 0.1328 | 42.0 | 31500 | 0.3172 | 0.875 |
| 0.1699 | 43.0 | 32250 | 0.3163 | 0.8733 |
| 0.1505 | 44.0 | 33000 | 0.3182 | 0.875 |
| 0.1595 | 45.0 | 33750 | 0.3176 | 0.8733 |
| 0.1562 | 46.0 | 34500 | 0.3182 | 0.8733 |
| 0.1053 | 47.0 | 35250 | 0.3176 | 0.875 |
| 0.1264 | 48.0 | 36000 | 0.3177 | 0.8733 |
| 0.1357 | 49.0 | 36750 | 0.3177 | 0.8733 |
| 0.0881 | 50.0 | 37500 | 0.3178 | 0.8733 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
eryuefei/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6886
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
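As a sketch, the food101 dataset and its 101 class names can be inspected like this (the split size is an assumption, not taken from this card):
```py
from datasets import load_dataset

food = load_dataset("food101", split="train[:5000]")  # assumed subset
labels = food.features["label"].names
print(len(labels))  # 101
```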
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7489 | 0.99 | 62 | 2.6204 | 0.801 |
| 1.8652 | 2.0 | 125 | 1.8567 | 0.852 |
| 1.6314 | 2.98 | 186 | 1.6886 | 0.87 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
SalapaoSaidam/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SalapaoSaidam/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3988
- Validation Loss: 0.3758
- Train Accuracy: 0.903
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
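A sketch of the same optimizer built with the `create_optimizer` helper from 🤗 Transformers (a polynomial decay with power 1.0 is a linear decay from 3e-05 to 0 over 20000 steps, matching the configuration above):
```py
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=20000,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```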
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7964 | 1.7008 | 0.812 | 0 |
| 1.2366 | 0.8712 | 0.859 | 1 |
| 0.7196 | 0.5486 | 0.895 | 2 |
| 0.4894 | 0.4344 | 0.902 | 3 |
| 0.3988 | 0.3758 | 0.903 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
MattyB95/VIT-ASVspoof2019-Mel_Spectrogram-Synthetic-Voice-Detection
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VIT-ASVspoof2019-Mel_Spectrogram-Synthetic-Voice-Detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0649
- Accuracy: 0.7167
- F1: 0.8124
- Precision: 0.9998
- Recall: 0.6842
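A sketch of how these metrics might be computed from binary predictions (the exact evaluation code is not included in this card):
```py
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
    }
```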
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.007 | 1.0 | 3173 | 0.0108 | 0.9972 | 0.9984 | 0.9969 | 1.0 |
| 0.0015 | 2.0 | 6346 | 0.0022 | 0.9997 | 0.9998 | 0.9999 | 0.9998 |
| 0.0 | 3.0 | 9519 | 0.0025 | 0.9996 | 0.9998 | 0.9997 | 0.9999 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"bonafide",
"spoof"
] |
hkivancoral/smids_10x_deit_base_sgd_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2447
- Accuracy: 0.8967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5962 | 1.0 | 750 | 0.5861 | 0.7883 |
| 0.4051 | 2.0 | 1500 | 0.4173 | 0.8267 |
| 0.3569 | 3.0 | 2250 | 0.3612 | 0.8367 |
| 0.3447 | 4.0 | 3000 | 0.3341 | 0.85 |
| 0.3363 | 5.0 | 3750 | 0.3169 | 0.855 |
| 0.2736 | 6.0 | 4500 | 0.3026 | 0.8683 |
| 0.2339 | 7.0 | 5250 | 0.2954 | 0.87 |
| 0.2686 | 8.0 | 6000 | 0.2855 | 0.8683 |
| 0.2668 | 9.0 | 6750 | 0.2807 | 0.8733 |
| 0.247 | 10.0 | 7500 | 0.2762 | 0.8783 |
| 0.2811 | 11.0 | 8250 | 0.2739 | 0.89 |
| 0.2638 | 12.0 | 9000 | 0.2726 | 0.8833 |
| 0.2445 | 13.0 | 9750 | 0.2668 | 0.8883 |
| 0.245 | 14.0 | 10500 | 0.2627 | 0.8883 |
| 0.2557 | 15.0 | 11250 | 0.2593 | 0.8867 |
| 0.1782 | 16.0 | 12000 | 0.2589 | 0.8867 |
| 0.2171 | 17.0 | 12750 | 0.2586 | 0.8883 |
| 0.1998 | 18.0 | 13500 | 0.2548 | 0.8933 |
| 0.2462 | 19.0 | 14250 | 0.2572 | 0.8917 |
| 0.1609 | 20.0 | 15000 | 0.2549 | 0.8933 |
| 0.1833 | 21.0 | 15750 | 0.2494 | 0.895 |
| 0.2212 | 22.0 | 16500 | 0.2509 | 0.895 |
| 0.2078 | 23.0 | 17250 | 0.2493 | 0.895 |
| 0.1922 | 24.0 | 18000 | 0.2508 | 0.8983 |
| 0.2035 | 25.0 | 18750 | 0.2506 | 0.8933 |
| 0.1816 | 26.0 | 19500 | 0.2465 | 0.8967 |
| 0.1488 | 27.0 | 20250 | 0.2466 | 0.8983 |
| 0.1736 | 28.0 | 21000 | 0.2478 | 0.8967 |
| 0.1851 | 29.0 | 21750 | 0.2450 | 0.8967 |
| 0.2091 | 30.0 | 22500 | 0.2502 | 0.8933 |
| 0.1735 | 31.0 | 23250 | 0.2445 | 0.8983 |
| 0.1511 | 32.0 | 24000 | 0.2473 | 0.895 |
| 0.1917 | 33.0 | 24750 | 0.2450 | 0.895 |
| 0.1536 | 34.0 | 25500 | 0.2464 | 0.8983 |
| 0.1399 | 35.0 | 26250 | 0.2436 | 0.895 |
| 0.1867 | 36.0 | 27000 | 0.2448 | 0.8983 |
| 0.1193 | 37.0 | 27750 | 0.2459 | 0.8967 |
| 0.1456 | 38.0 | 28500 | 0.2448 | 0.9017 |
| 0.1489 | 39.0 | 29250 | 0.2453 | 0.8967 |
| 0.1393 | 40.0 | 30000 | 0.2452 | 0.8983 |
| 0.1841 | 41.0 | 30750 | 0.2456 | 0.8967 |
| 0.1682 | 42.0 | 31500 | 0.2446 | 0.8967 |
| 0.1428 | 43.0 | 32250 | 0.2457 | 0.9 |
| 0.1636 | 44.0 | 33000 | 0.2455 | 0.8967 |
| 0.1783 | 45.0 | 33750 | 0.2453 | 0.8983 |
| 0.1167 | 46.0 | 34500 | 0.2456 | 0.8967 |
| 0.1786 | 47.0 | 35250 | 0.2449 | 0.8967 |
| 0.1666 | 48.0 | 36000 | 0.2447 | 0.8983 |
| 0.1479 | 49.0 | 36750 | 0.2447 | 0.8967 |
| 0.099 | 50.0 | 37500 | 0.2447 | 0.8967 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_adamax_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8545
- Accuracy: 0.9115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2768 | 1.0 | 751 | 0.3271 | 0.8681 |
| 0.2559 | 2.0 | 1502 | 0.2686 | 0.8932 |
| 0.1723 | 3.0 | 2253 | 0.2752 | 0.8932 |
| 0.1343 | 4.0 | 3004 | 0.2784 | 0.8898 |
| 0.1389 | 5.0 | 3755 | 0.2896 | 0.8965 |
| 0.1143 | 6.0 | 4506 | 0.3456 | 0.8881 |
| 0.0797 | 7.0 | 5257 | 0.3441 | 0.8865 |
| 0.0583 | 8.0 | 6008 | 0.3964 | 0.8998 |
| 0.0472 | 9.0 | 6759 | 0.4458 | 0.8915 |
| 0.0757 | 10.0 | 7510 | 0.4767 | 0.8982 |
| 0.0191 | 11.0 | 8261 | 0.5147 | 0.8915 |
| 0.031 | 12.0 | 9012 | 0.5873 | 0.8898 |
| 0.0022 | 13.0 | 9763 | 0.6291 | 0.8982 |
| 0.0003 | 14.0 | 10514 | 0.6449 | 0.9048 |
| 0.0014 | 15.0 | 11265 | 0.6651 | 0.8982 |
| 0.0237 | 16.0 | 12016 | 0.7228 | 0.9015 |
| 0.0016 | 17.0 | 12767 | 0.7272 | 0.8948 |
| 0.0001 | 18.0 | 13518 | 0.7560 | 0.9032 |
| 0.0001 | 19.0 | 14269 | 0.7571 | 0.8982 |
| 0.0 | 20.0 | 15020 | 0.7689 | 0.9048 |
| 0.0 | 21.0 | 15771 | 0.7584 | 0.9048 |
| 0.0 | 22.0 | 16522 | 0.7967 | 0.9032 |
| 0.0 | 23.0 | 17273 | 0.7987 | 0.9065 |
| 0.0001 | 24.0 | 18024 | 0.8298 | 0.9065 |
| 0.0 | 25.0 | 18775 | 0.8022 | 0.9098 |
| 0.0 | 26.0 | 19526 | 0.8054 | 0.9098 |
| 0.0 | 27.0 | 20277 | 0.8124 | 0.9065 |
| 0.0 | 28.0 | 21028 | 0.8128 | 0.9082 |
| 0.0194 | 29.0 | 21779 | 0.8361 | 0.9015 |
| 0.0 | 30.0 | 22530 | 0.8316 | 0.9065 |
| 0.0 | 31.0 | 23281 | 0.8255 | 0.9132 |
| 0.0 | 32.0 | 24032 | 0.8225 | 0.9115 |
| 0.0 | 33.0 | 24783 | 0.8294 | 0.9098 |
| 0.0 | 34.0 | 25534 | 0.8377 | 0.9082 |
| 0.0 | 35.0 | 26285 | 0.8477 | 0.9032 |
| 0.0 | 36.0 | 27036 | 0.8439 | 0.9115 |
| 0.0 | 37.0 | 27787 | 0.8492 | 0.9065 |
| 0.0 | 38.0 | 28538 | 0.8435 | 0.9098 |
| 0.0 | 39.0 | 29289 | 0.8490 | 0.9098 |
| 0.0 | 40.0 | 30040 | 0.8482 | 0.9115 |
| 0.0 | 41.0 | 30791 | 0.8506 | 0.9065 |
| 0.0 | 42.0 | 31542 | 0.8515 | 0.9082 |
| 0.0 | 43.0 | 32293 | 0.8517 | 0.9115 |
| 0.0 | 44.0 | 33044 | 0.8525 | 0.9115 |
| 0.0 | 45.0 | 33795 | 0.8550 | 0.9048 |
| 0.0 | 46.0 | 34546 | 0.8557 | 0.9082 |
| 0.0 | 47.0 | 35297 | 0.8547 | 0.9115 |
| 0.0 | 48.0 | 36048 | 0.8545 | 0.9115 |
| 0.0 | 49.0 | 36799 | 0.8544 | 0.9115 |
| 0.0 | 50.0 | 37550 | 0.8545 | 0.9115 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4637
- Accuracy: 0.8197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0699 | 1.0 | 751 | 1.0865 | 0.3372 |
| 1.0294 | 2.0 | 1502 | 1.0518 | 0.4207 |
| 0.9671 | 3.0 | 2253 | 1.0151 | 0.4992 |
| 0.8907 | 4.0 | 3004 | 0.9742 | 0.5526 |
| 0.8635 | 5.0 | 3755 | 0.9306 | 0.6043 |
| 0.8115 | 6.0 | 4506 | 0.8888 | 0.6461 |
| 0.7615 | 7.0 | 5257 | 0.8493 | 0.6694 |
| 0.7205 | 8.0 | 6008 | 0.8123 | 0.6861 |
| 0.6698 | 9.0 | 6759 | 0.7780 | 0.7078 |
| 0.6144 | 10.0 | 7510 | 0.7466 | 0.7195 |
| 0.6353 | 11.0 | 8261 | 0.7180 | 0.7295 |
| 0.5516 | 12.0 | 9012 | 0.6921 | 0.7396 |
| 0.5555 | 13.0 | 9763 | 0.6685 | 0.7446 |
| 0.5272 | 14.0 | 10514 | 0.6476 | 0.7546 |
| 0.5021 | 15.0 | 11265 | 0.6291 | 0.7596 |
| 0.4905 | 16.0 | 12016 | 0.6125 | 0.7713 |
| 0.4771 | 17.0 | 12767 | 0.5975 | 0.7696 |
| 0.4486 | 18.0 | 13518 | 0.5841 | 0.7730 |
| 0.4932 | 19.0 | 14269 | 0.5723 | 0.7713 |
| 0.4372 | 20.0 | 15020 | 0.5615 | 0.7746 |
| 0.3961 | 21.0 | 15771 | 0.5521 | 0.7780 |
| 0.455 | 22.0 | 16522 | 0.5434 | 0.7796 |
| 0.4028 | 23.0 | 17273 | 0.5355 | 0.7896 |
| 0.4361 | 24.0 | 18024 | 0.5285 | 0.7913 |
| 0.4568 | 25.0 | 18775 | 0.5220 | 0.7930 |
| 0.5129 | 26.0 | 19526 | 0.5160 | 0.7947 |
| 0.4225 | 27.0 | 20277 | 0.5106 | 0.7997 |
| 0.4409 | 28.0 | 21028 | 0.5056 | 0.7997 |
| 0.4065 | 29.0 | 21779 | 0.5010 | 0.7997 |
| 0.4201 | 30.0 | 22530 | 0.4969 | 0.8013 |
| 0.3612 | 31.0 | 23281 | 0.4930 | 0.8047 |
| 0.5003 | 32.0 | 24032 | 0.4895 | 0.8063 |
| 0.4064 | 33.0 | 24783 | 0.4864 | 0.8080 |
| 0.4 | 34.0 | 25534 | 0.4834 | 0.8097 |
| 0.4159 | 35.0 | 26285 | 0.4807 | 0.8097 |
| 0.4048 | 36.0 | 27036 | 0.4783 | 0.8130 |
| 0.4636 | 37.0 | 27787 | 0.4761 | 0.8147 |
| 0.3893 | 38.0 | 28538 | 0.4740 | 0.8147 |
| 0.3469 | 39.0 | 29289 | 0.4723 | 0.8147 |
| 0.4394 | 40.0 | 30040 | 0.4706 | 0.8164 |
| 0.4132 | 41.0 | 30791 | 0.4692 | 0.8180 |
| 0.3775 | 42.0 | 31542 | 0.4680 | 0.8197 |
| 0.3957 | 43.0 | 32293 | 0.4669 | 0.8197 |
| 0.4064 | 44.0 | 33044 | 0.4660 | 0.8197 |
| 0.4124 | 45.0 | 33795 | 0.4652 | 0.8197 |
| 0.3879 | 46.0 | 34546 | 0.4646 | 0.8197 |
| 0.3807 | 47.0 | 35297 | 0.4642 | 0.8197 |
| 0.4327 | 48.0 | 36048 | 0.4639 | 0.8197 |
| 0.3727 | 49.0 | 36799 | 0.4638 | 0.8197 |
| 0.3716 | 50.0 | 37550 | 0.4637 | 0.8197 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9942
- Accuracy: 0.5776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1101 | 1.0 | 751 | 1.0912 | 0.3940 |
| 1.1137 | 2.0 | 1502 | 1.0874 | 0.3990 |
| 1.0977 | 3.0 | 2253 | 1.0838 | 0.4057 |
| 1.0963 | 4.0 | 3004 | 1.0803 | 0.4157 |
| 1.0734 | 5.0 | 3755 | 1.0769 | 0.4307 |
| 1.0814 | 6.0 | 4506 | 1.0737 | 0.4424 |
| 1.0905 | 7.0 | 5257 | 1.0705 | 0.4558 |
| 1.0682 | 8.0 | 6008 | 1.0673 | 0.4508 |
| 1.0698 | 9.0 | 6759 | 1.0643 | 0.4524 |
| 1.066 | 10.0 | 7510 | 1.0613 | 0.4608 |
| 1.0809 | 11.0 | 8261 | 1.0584 | 0.4691 |
| 1.0569 | 12.0 | 9012 | 1.0556 | 0.4708 |
| 1.0524 | 13.0 | 9763 | 1.0527 | 0.4908 |
| 1.051 | 14.0 | 10514 | 1.0500 | 0.5042 |
| 1.0693 | 15.0 | 11265 | 1.0473 | 0.5092 |
| 1.0517 | 16.0 | 12016 | 1.0446 | 0.5175 |
| 1.0588 | 17.0 | 12767 | 1.0419 | 0.5159 |
| 1.0399 | 18.0 | 13518 | 1.0394 | 0.5192 |
| 1.0367 | 19.0 | 14269 | 1.0368 | 0.5225 |
| 1.0366 | 20.0 | 15020 | 1.0343 | 0.5259 |
| 1.0281 | 21.0 | 15771 | 1.0318 | 0.5326 |
| 1.036 | 22.0 | 16522 | 1.0294 | 0.5342 |
| 1.0334 | 23.0 | 17273 | 1.0270 | 0.5359 |
| 1.0262 | 24.0 | 18024 | 1.0247 | 0.5376 |
| 1.0153 | 25.0 | 18775 | 1.0224 | 0.5392 |
| 1.0269 | 26.0 | 19526 | 1.0202 | 0.5409 |
| 1.0163 | 27.0 | 20277 | 1.0180 | 0.5459 |
| 1.0112 | 28.0 | 21028 | 1.0159 | 0.5459 |
| 1.0292 | 29.0 | 21779 | 1.0139 | 0.5509 |
| 1.0159 | 30.0 | 22530 | 1.0120 | 0.5526 |
| 1.0063 | 31.0 | 23281 | 1.0102 | 0.5526 |
| 1.0238 | 32.0 | 24032 | 1.0085 | 0.5543 |
| 0.9962 | 33.0 | 24783 | 1.0069 | 0.5543 |
| 1.0016 | 34.0 | 25534 | 1.0053 | 0.5593 |
| 1.0116 | 35.0 | 26285 | 1.0039 | 0.5593 |
| 1.0035 | 36.0 | 27036 | 1.0026 | 0.5626 |
| 1.0094 | 37.0 | 27787 | 1.0013 | 0.5659 |
| 0.9975 | 38.0 | 28538 | 1.0002 | 0.5676 |
| 1.0065 | 39.0 | 29289 | 0.9992 | 0.5743 |
| 0.9968 | 40.0 | 30040 | 0.9982 | 0.5760 |
| 0.9969 | 41.0 | 30791 | 0.9974 | 0.5760 |
| 0.9821 | 42.0 | 31542 | 0.9967 | 0.5776 |
| 0.9849 | 43.0 | 32293 | 0.9961 | 0.5776 |
| 1.002 | 44.0 | 33044 | 0.9955 | 0.5776 |
| 1.0006 | 45.0 | 33795 | 0.9951 | 0.5776 |
| 1.0018 | 46.0 | 34546 | 0.9948 | 0.5776 |
| 0.9926 | 47.0 | 35297 | 0.9945 | 0.5776 |
| 1.0075 | 48.0 | 36048 | 0.9943 | 0.5776 |
| 0.9883 | 49.0 | 36799 | 0.9942 | 0.5776 |
| 0.9793 | 50.0 | 37550 | 0.9942 | 0.5776 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
Bliu3/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6482
- Accuracy: 0.877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7341 | 0.99 | 62 | 2.5760 | 0.801 |
| 1.8694 | 2.0 | 125 | 1.8133 | 0.861 |
| 1.588 | 2.98 | 186 | 1.6482 | 0.877 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
imatag/stable-signature-bzh-detector-resnet18
|
# BZH watermark detector (demo)
You can use this classifier to detect watermarks generated with our [SDXL-turbo watermarking demo](https://huggingface.co/spaces/imatag/stable-signature-bzh).
## Usage
```py
from transformers import AutoModelForImageClassification, BlipImageProcessor
from PIL import Image
import sys

# Load the detector and its image preprocessor
image_processor = BlipImageProcessor.from_pretrained("imatag/stable-signature-bzh-detector-resnet18")
model = AutoModelForImageClassification.from_pretrained("imatag/stable-signature-bzh-detector-resnet18")
model.eval()

# Preprocess the image passed on the command line
img = Image.open(sys.argv[1]).convert("RGB")
inputs = image_processor(img, return_tensors="pt")

# The model flags a watermark when the first logit is negative
p = model(**inputs).logits[0, 0] < 0
print(f"watermarked: {p}")
```
## Purpose
This model is an approximate version of [IMATAG](https://www.imatag.com/)'s BZH decoder for the watermark embedded in our [SDXL-turbo watermarking demo](https://huggingface.co/spaces/imatag/stable-signature-bzh).
It works on this watermark only and cannot be used to decode other watermarks.
It will catch most altered versions of a watermarked image while making roughly one mistake in one thousand on non-watermarked images.
Alternatively, it can produce an approximate p-value measuring the risk of mistakenly detecting a watermark on a benign (non-watermarked) image by recalibrating the output as in [this script](https://huggingface.co/imatag/stable-signature-bzh-detector-resnet18/resolve/main/detect_demo_pvalue.py).
To get an exact p-value and for improved robustness, please use the [API](https://huggingface.co/spaces/imatag/stable-signature-bzh/resolve/main/detect_api.py) instead.
For more details on this watermarking technique, check out our [announcement](https://www.imatag.com/blog/unlocking-the-future-of-content-authentication-imatags-breakthrough-in-ai-generated-image-watermarking) and our lab's [blog post](https://imatag-lab.medium.com/stable-signature-meets-bzh-53ad0ba13691).
For watermarked models with a different key, support for payload, other perceptual compromises, robustness to other attacks, or faster detection, please [contact IMATAG](https://pages.imatag.com/contact-us-imatag).
|
[
"no watermark detected",
"watermarked"
] |
hkivancoral/smids_10x_deit_tiny_adamax_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9769
- Accuracy: 0.8935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto the Trainer API follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
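For reference, a minimal sketch (not the authors' actual training script) of how these hyperparameters might map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and the Adam betas and epsilon listed above are the `TrainingArguments` defaults:
```python
from transformers import TrainingArguments

# hypothetical mapping of the listed hyperparameters onto the Trainer API
training_args = TrainingArguments(
    output_dir="smids_10x_deit_tiny_adamax_00001_fold2",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```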
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2848 | 1.0 | 750 | 0.3201 | 0.8702 |
| 0.1766 | 2.0 | 1500 | 0.2882 | 0.8869 |
| 0.1715 | 3.0 | 2250 | 0.2793 | 0.8918 |
| 0.1528 | 4.0 | 3000 | 0.2928 | 0.8918 |
| 0.1474 | 5.0 | 3750 | 0.3209 | 0.8902 |
| 0.0904 | 6.0 | 4500 | 0.3386 | 0.8835 |
| 0.0506 | 7.0 | 5250 | 0.4033 | 0.8918 |
| 0.0528 | 8.0 | 6000 | 0.4656 | 0.8985 |
| 0.0576 | 9.0 | 6750 | 0.4872 | 0.8852 |
| 0.0247 | 10.0 | 7500 | 0.5636 | 0.8968 |
| 0.0283 | 11.0 | 8250 | 0.5957 | 0.8918 |
| 0.017 | 12.0 | 9000 | 0.6715 | 0.8952 |
| 0.0013 | 13.0 | 9750 | 0.7105 | 0.8885 |
| 0.0004 | 14.0 | 10500 | 0.7432 | 0.8819 |
| 0.0011 | 15.0 | 11250 | 0.7702 | 0.8968 |
| 0.0001 | 16.0 | 12000 | 0.8390 | 0.8935 |
| 0.0001 | 17.0 | 12750 | 0.8598 | 0.8918 |
| 0.0002 | 18.0 | 13500 | 0.8606 | 0.8852 |
| 0.0 | 19.0 | 14250 | 0.8893 | 0.8902 |
| 0.0001 | 20.0 | 15000 | 0.9092 | 0.8869 |
| 0.0 | 21.0 | 15750 | 0.9243 | 0.8935 |
| 0.0414 | 22.0 | 16500 | 0.8856 | 0.8935 |
| 0.0196 | 23.0 | 17250 | 0.9361 | 0.8902 |
| 0.0 | 24.0 | 18000 | 0.9456 | 0.8952 |
| 0.0 | 25.0 | 18750 | 0.9417 | 0.8935 |
| 0.0 | 26.0 | 19500 | 0.9178 | 0.8968 |
| 0.0233 | 27.0 | 20250 | 0.9491 | 0.8902 |
| 0.0 | 28.0 | 21000 | 0.9447 | 0.9002 |
| 0.0 | 29.0 | 21750 | 0.9458 | 0.8952 |
| 0.0 | 30.0 | 22500 | 0.9429 | 0.8935 |
| 0.0 | 31.0 | 23250 | 0.9485 | 0.8902 |
| 0.0 | 32.0 | 24000 | 0.9592 | 0.8918 |
| 0.0 | 33.0 | 24750 | 0.9720 | 0.8935 |
| 0.0 | 34.0 | 25500 | 0.9605 | 0.8918 |
| 0.0 | 35.0 | 26250 | 0.9711 | 0.8952 |
| 0.0 | 36.0 | 27000 | 0.9779 | 0.8902 |
| 0.0248 | 37.0 | 27750 | 0.9824 | 0.8918 |
| 0.0 | 38.0 | 28500 | 0.9776 | 0.8968 |
| 0.0 | 39.0 | 29250 | 0.9729 | 0.8968 |
| 0.0 | 40.0 | 30000 | 0.9687 | 0.8952 |
| 0.0 | 41.0 | 30750 | 0.9781 | 0.8952 |
| 0.0 | 42.0 | 31500 | 0.9860 | 0.8918 |
| 0.0 | 43.0 | 32250 | 0.9836 | 0.8918 |
| 0.0 | 44.0 | 33000 | 0.9765 | 0.8952 |
| 0.0004 | 45.0 | 33750 | 0.9803 | 0.8935 |
| 0.0 | 46.0 | 34500 | 0.9754 | 0.8952 |
| 0.0 | 47.0 | 35250 | 0.9749 | 0.8952 |
| 0.0 | 48.0 | 36000 | 0.9757 | 0.8952 |
| 0.0 | 49.0 | 36750 | 0.9765 | 0.8935 |
| 0.0 | 50.0 | 37500 | 0.9769 | 0.8935 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4033
- Accuracy: 0.8386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0726 | 1.0 | 750 | 1.0697 | 0.4193 |
| 0.9885 | 2.0 | 1500 | 1.0230 | 0.5324 |
| 0.9161 | 3.0 | 2250 | 0.9631 | 0.6057 |
| 0.8922 | 4.0 | 3000 | 0.8962 | 0.6389 |
| 0.7847 | 5.0 | 3750 | 0.8318 | 0.6922 |
| 0.7649 | 6.0 | 4500 | 0.7745 | 0.7221 |
| 0.6927 | 7.0 | 5250 | 0.7242 | 0.7454 |
| 0.6518 | 8.0 | 6000 | 0.6810 | 0.7521 |
| 0.6316 | 9.0 | 6750 | 0.6449 | 0.7587 |
| 0.5913 | 10.0 | 7500 | 0.6145 | 0.7720 |
| 0.5908 | 11.0 | 8250 | 0.5890 | 0.7770 |
| 0.5365 | 12.0 | 9000 | 0.5673 | 0.7804 |
| 0.5035 | 13.0 | 9750 | 0.5486 | 0.7870 |
| 0.4588 | 14.0 | 10500 | 0.5325 | 0.7937 |
| 0.4827 | 15.0 | 11250 | 0.5186 | 0.7953 |
| 0.4605 | 16.0 | 12000 | 0.5064 | 0.8020 |
| 0.3999 | 17.0 | 12750 | 0.4957 | 0.8037 |
| 0.4114 | 18.0 | 13500 | 0.4861 | 0.8070 |
| 0.4361 | 19.0 | 14250 | 0.4777 | 0.8070 |
| 0.4491 | 20.0 | 15000 | 0.4702 | 0.8136 |
| 0.4325 | 21.0 | 15750 | 0.4634 | 0.8136 |
| 0.3856 | 22.0 | 16500 | 0.4573 | 0.8136 |
| 0.3915 | 23.0 | 17250 | 0.4519 | 0.8136 |
| 0.3628 | 24.0 | 18000 | 0.4469 | 0.8170 |
| 0.4391 | 25.0 | 18750 | 0.4425 | 0.8170 |
| 0.426 | 26.0 | 19500 | 0.4384 | 0.8153 |
| 0.3848 | 27.0 | 20250 | 0.4347 | 0.8170 |
| 0.4075 | 28.0 | 21000 | 0.4313 | 0.8236 |
| 0.367 | 29.0 | 21750 | 0.4282 | 0.8286 |
| 0.3873 | 30.0 | 22500 | 0.4254 | 0.8303 |
| 0.3615 | 31.0 | 23250 | 0.4227 | 0.8319 |
| 0.4147 | 32.0 | 24000 | 0.4203 | 0.8319 |
| 0.3687 | 33.0 | 24750 | 0.4182 | 0.8319 |
| 0.3673 | 34.0 | 25500 | 0.4163 | 0.8353 |
| 0.3984 | 35.0 | 26250 | 0.4145 | 0.8353 |
| 0.4014 | 36.0 | 27000 | 0.4129 | 0.8353 |
| 0.3649 | 37.0 | 27750 | 0.4114 | 0.8353 |
| 0.3739 | 38.0 | 28500 | 0.4101 | 0.8353 |
| 0.3864 | 39.0 | 29250 | 0.4088 | 0.8353 |
| 0.4645 | 40.0 | 30000 | 0.4079 | 0.8369 |
| 0.3819 | 41.0 | 30750 | 0.4069 | 0.8369 |
| 0.3624 | 42.0 | 31500 | 0.4061 | 0.8369 |
| 0.331 | 43.0 | 32250 | 0.4054 | 0.8369 |
| 0.3186 | 44.0 | 33000 | 0.4047 | 0.8369 |
| 0.3184 | 45.0 | 33750 | 0.4043 | 0.8369 |
| 0.3966 | 46.0 | 34500 | 0.4039 | 0.8386 |
| 0.364 | 47.0 | 35250 | 0.4036 | 0.8386 |
| 0.3848 | 48.0 | 36000 | 0.4034 | 0.8386 |
| 0.3506 | 49.0 | 36750 | 0.4033 | 0.8386 |
| 0.3494 | 50.0 | 37500 | 0.4033 | 0.8386 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0091
- Accuracy: 0.5541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1301 | 1.0 | 750 | 1.1044 | 0.3361 |
| 1.0956 | 2.0 | 1500 | 1.1008 | 0.3394 |
| 1.0885 | 3.0 | 2250 | 1.0973 | 0.3527 |
| 1.0813 | 4.0 | 3000 | 1.0939 | 0.3677 |
| 1.0897 | 5.0 | 3750 | 1.0907 | 0.3794 |
| 1.0999 | 6.0 | 4500 | 1.0875 | 0.3794 |
| 1.0837 | 7.0 | 5250 | 1.0844 | 0.3810 |
| 1.0633 | 8.0 | 6000 | 1.0813 | 0.3860 |
| 1.0777 | 9.0 | 6750 | 1.0784 | 0.3977 |
| 1.0672 | 10.0 | 7500 | 1.0755 | 0.4027 |
| 1.0688 | 11.0 | 8250 | 1.0726 | 0.4143 |
| 1.0524 | 12.0 | 9000 | 1.0698 | 0.4193 |
| 1.0588 | 13.0 | 9750 | 1.0670 | 0.4276 |
| 1.0355 | 14.0 | 10500 | 1.0642 | 0.4443 |
| 1.0421 | 15.0 | 11250 | 1.0615 | 0.4526 |
| 1.0517 | 16.0 | 12000 | 1.0588 | 0.4526 |
| 1.0228 | 17.0 | 12750 | 1.0561 | 0.4609 |
| 1.052 | 18.0 | 13500 | 1.0535 | 0.4626 |
| 1.0453 | 19.0 | 14250 | 1.0509 | 0.4742 |
| 1.0307 | 20.0 | 15000 | 1.0483 | 0.4842 |
| 1.0308 | 21.0 | 15750 | 1.0459 | 0.4892 |
| 1.0369 | 22.0 | 16500 | 1.0434 | 0.5008 |
| 1.0173 | 23.0 | 17250 | 1.0411 | 0.5008 |
| 1.0178 | 24.0 | 18000 | 1.0388 | 0.5058 |
| 1.021 | 25.0 | 18750 | 1.0366 | 0.5075 |
| 1.0167 | 26.0 | 19500 | 1.0344 | 0.5075 |
| 1.0247 | 27.0 | 20250 | 1.0323 | 0.5125 |
| 1.0234 | 28.0 | 21000 | 1.0303 | 0.5208 |
| 1.003 | 29.0 | 21750 | 1.0283 | 0.5258 |
| 1.008 | 30.0 | 22500 | 1.0265 | 0.5308 |
| 1.0192 | 31.0 | 23250 | 1.0247 | 0.5341 |
| 1.0098 | 32.0 | 24000 | 1.0231 | 0.5374 |
| 0.9987 | 33.0 | 24750 | 1.0215 | 0.5408 |
| 0.9994 | 34.0 | 25500 | 1.0200 | 0.5408 |
| 1.0171 | 35.0 | 26250 | 1.0186 | 0.5408 |
| 1.013 | 36.0 | 27000 | 1.0173 | 0.5441 |
| 0.988 | 37.0 | 27750 | 1.0161 | 0.5441 |
| 0.9905 | 38.0 | 28500 | 1.0150 | 0.5491 |
| 0.9967 | 39.0 | 29250 | 1.0140 | 0.5491 |
| 0.9901 | 40.0 | 30000 | 1.0131 | 0.5557 |
| 0.9977 | 41.0 | 30750 | 1.0123 | 0.5557 |
| 1.0045 | 42.0 | 31500 | 1.0116 | 0.5557 |
| 0.9983 | 43.0 | 32250 | 1.0109 | 0.5541 |
| 0.9818 | 44.0 | 33000 | 1.0104 | 0.5541 |
| 0.9768 | 45.0 | 33750 | 1.0100 | 0.5541 |
| 0.9827 | 46.0 | 34500 | 1.0096 | 0.5541 |
| 0.9904 | 47.0 | 35250 | 1.0094 | 0.5541 |
| 0.9884 | 48.0 | 36000 | 1.0092 | 0.5541 |
| 0.9851 | 49.0 | 36750 | 1.0091 | 0.5541 |
| 0.9904 | 50.0 | 37500 | 1.0091 | 0.5541 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_adamax_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8454
- Accuracy: 0.9117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3334 | 1.0 | 750 | 0.3137 | 0.8867 |
| 0.2212 | 2.0 | 1500 | 0.2847 | 0.8933 |
| 0.1557 | 3.0 | 2250 | 0.2655 | 0.9017 |
| 0.1618 | 4.0 | 3000 | 0.2547 | 0.9083 |
| 0.1458 | 5.0 | 3750 | 0.2827 | 0.9117 |
| 0.0827 | 6.0 | 4500 | 0.3021 | 0.9067 |
| 0.0585 | 7.0 | 5250 | 0.3302 | 0.915 |
| 0.038 | 8.0 | 6000 | 0.3826 | 0.9117 |
| 0.0072 | 9.0 | 6750 | 0.4421 | 0.9033 |
| 0.0278 | 10.0 | 7500 | 0.4604 | 0.9117 |
| 0.024 | 11.0 | 8250 | 0.5200 | 0.9083 |
| 0.0385 | 12.0 | 9000 | 0.5950 | 0.905 |
| 0.0036 | 13.0 | 9750 | 0.6062 | 0.9167 |
| 0.0266 | 14.0 | 10500 | 0.6478 | 0.9083 |
| 0.0002 | 15.0 | 11250 | 0.6832 | 0.905 |
| 0.0008 | 16.0 | 12000 | 0.6882 | 0.9083 |
| 0.0001 | 17.0 | 12750 | 0.7251 | 0.9033 |
| 0.0001 | 18.0 | 13500 | 0.7412 | 0.9 |
| 0.0 | 19.0 | 14250 | 0.7303 | 0.905 |
| 0.0 | 20.0 | 15000 | 0.7516 | 0.9083 |
| 0.0 | 21.0 | 15750 | 0.7572 | 0.905 |
| 0.0 | 22.0 | 16500 | 0.7738 | 0.9033 |
| 0.0 | 23.0 | 17250 | 0.7783 | 0.905 |
| 0.0 | 24.0 | 18000 | 0.7887 | 0.9083 |
| 0.0 | 25.0 | 18750 | 0.8047 | 0.9067 |
| 0.0 | 26.0 | 19500 | 0.7947 | 0.905 |
| 0.0 | 27.0 | 20250 | 0.8074 | 0.9017 |
| 0.0 | 28.0 | 21000 | 0.8188 | 0.9067 |
| 0.0 | 29.0 | 21750 | 0.8171 | 0.9083 |
| 0.0 | 30.0 | 22500 | 0.8254 | 0.91 |
| 0.0 | 31.0 | 23250 | 0.8136 | 0.91 |
| 0.0 | 32.0 | 24000 | 0.8161 | 0.9117 |
| 0.0 | 33.0 | 24750 | 0.8510 | 0.9083 |
| 0.0 | 34.0 | 25500 | 0.8503 | 0.905 |
| 0.0 | 35.0 | 26250 | 0.8125 | 0.9083 |
| 0.0 | 36.0 | 27000 | 0.8406 | 0.9083 |
| 0.0 | 37.0 | 27750 | 0.8651 | 0.9033 |
| 0.0 | 38.0 | 28500 | 0.8378 | 0.9083 |
| 0.0 | 39.0 | 29250 | 0.8214 | 0.905 |
| 0.0 | 40.0 | 30000 | 0.8342 | 0.91 |
| 0.0 | 41.0 | 30750 | 0.8303 | 0.905 |
| 0.0 | 42.0 | 31500 | 0.8436 | 0.9117 |
| 0.0 | 43.0 | 32250 | 0.8482 | 0.9083 |
| 0.0 | 44.0 | 33000 | 0.8466 | 0.9117 |
| 0.0 | 45.0 | 33750 | 0.8396 | 0.9083 |
| 0.0 | 46.0 | 34500 | 0.8440 | 0.9117 |
| 0.0 | 47.0 | 35250 | 0.8439 | 0.9117 |
| 0.0 | 48.0 | 36000 | 0.8441 | 0.91 |
| 0.0 | 49.0 | 36750 | 0.8457 | 0.9117 |
| 0.0 | 50.0 | 37500 | 0.8454 | 0.9117 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
naveentnj/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1140
- Accuracy: 0.9728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0528 | 5.38 | 500 | 0.0848 | 0.9809 |
| 0.0283 | 10.75 | 1000 | 0.1140 | 0.9728 |
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
hkivancoral/smids_10x_deit_base_sgd_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0056
- Accuracy: 0.5583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1044 | 1.0 | 750 | 1.1039 | 0.36 |
| 1.0936 | 2.0 | 1500 | 1.1001 | 0.3683 |
| 1.0833 | 3.0 | 2250 | 1.0965 | 0.385 |
| 1.0922 | 4.0 | 3000 | 1.0929 | 0.3867 |
| 1.0937 | 5.0 | 3750 | 1.0895 | 0.3933 |
| 1.0793 | 6.0 | 4500 | 1.0862 | 0.4083 |
| 1.0808 | 7.0 | 5250 | 1.0829 | 0.4133 |
| 1.0795 | 8.0 | 6000 | 1.0798 | 0.42 |
| 1.0735 | 9.0 | 6750 | 1.0767 | 0.4283 |
| 1.0874 | 10.0 | 7500 | 1.0736 | 0.435 |
| 1.0661 | 11.0 | 8250 | 1.0706 | 0.4483 |
| 1.038 | 12.0 | 9000 | 1.0677 | 0.445 |
| 1.0511 | 13.0 | 9750 | 1.0648 | 0.45 |
| 1.0539 | 14.0 | 10500 | 1.0620 | 0.4617 |
| 1.0408 | 15.0 | 11250 | 1.0592 | 0.46 |
| 1.0259 | 16.0 | 12000 | 1.0564 | 0.4617 |
| 1.0339 | 17.0 | 12750 | 1.0537 | 0.47 |
| 1.0416 | 18.0 | 13500 | 1.0511 | 0.4683 |
| 1.0423 | 19.0 | 14250 | 1.0484 | 0.4733 |
| 1.0333 | 20.0 | 15000 | 1.0459 | 0.4817 |
| 1.0189 | 21.0 | 15750 | 1.0434 | 0.485 |
| 1.0248 | 22.0 | 16500 | 1.0409 | 0.4917 |
| 1.0226 | 23.0 | 17250 | 1.0385 | 0.495 |
| 1.0228 | 24.0 | 18000 | 1.0362 | 0.5 |
| 1.0292 | 25.0 | 18750 | 1.0339 | 0.4983 |
| 1.02 | 26.0 | 19500 | 1.0317 | 0.5 |
| 1.0227 | 27.0 | 20250 | 1.0296 | 0.5017 |
| 1.0121 | 28.0 | 21000 | 1.0275 | 0.5083 |
| 1.0052 | 29.0 | 21750 | 1.0255 | 0.5183 |
| 0.9924 | 30.0 | 22500 | 1.0236 | 0.52 |
| 1.0074 | 31.0 | 23250 | 1.0218 | 0.5267 |
| 0.9858 | 32.0 | 24000 | 1.0200 | 0.5267 |
| 1.0028 | 33.0 | 24750 | 1.0184 | 0.5283 |
| 0.9845 | 34.0 | 25500 | 1.0169 | 0.5317 |
| 0.992 | 35.0 | 26250 | 1.0154 | 0.535 |
| 0.9814 | 36.0 | 27000 | 1.0141 | 0.535 |
| 1.0022 | 37.0 | 27750 | 1.0128 | 0.535 |
| 0.9973 | 38.0 | 28500 | 1.0117 | 0.5367 |
| 0.9952 | 39.0 | 29250 | 1.0106 | 0.54 |
| 0.9809 | 40.0 | 30000 | 1.0097 | 0.5417 |
| 0.9816 | 41.0 | 30750 | 1.0089 | 0.5467 |
| 1.0061 | 42.0 | 31500 | 1.0081 | 0.5533 |
| 0.9998 | 43.0 | 32250 | 1.0075 | 0.555 |
| 0.992 | 44.0 | 33000 | 1.0070 | 0.555 |
| 0.9773 | 45.0 | 33750 | 1.0065 | 0.5567 |
| 0.963 | 46.0 | 34500 | 1.0062 | 0.5567 |
| 0.9849 | 47.0 | 35250 | 1.0059 | 0.5567 |
| 0.9816 | 48.0 | 36000 | 1.0057 | 0.5583 |
| 0.999 | 49.0 | 36750 | 1.0056 | 0.5583 |
| 0.9748 | 50.0 | 37500 | 1.0056 | 0.5583 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3897
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0499 | 1.0 | 750 | 1.0678 | 0.4483 |
| 1.0069 | 2.0 | 1500 | 1.0201 | 0.5333 |
| 0.9342 | 3.0 | 2250 | 0.9596 | 0.605 |
| 0.8668 | 4.0 | 3000 | 0.8944 | 0.65 |
| 0.8215 | 5.0 | 3750 | 0.8304 | 0.6833 |
| 0.7386 | 6.0 | 4500 | 0.7725 | 0.7233 |
| 0.7073 | 7.0 | 5250 | 0.7219 | 0.75 |
| 0.6732 | 8.0 | 6000 | 0.6781 | 0.7667 |
| 0.6382 | 9.0 | 6750 | 0.6410 | 0.7817 |
| 0.5894 | 10.0 | 7500 | 0.6100 | 0.795 |
| 0.5595 | 11.0 | 8250 | 0.5840 | 0.7967 |
| 0.5115 | 12.0 | 9000 | 0.5616 | 0.81 |
| 0.5204 | 13.0 | 9750 | 0.5423 | 0.8133 |
| 0.5584 | 14.0 | 10500 | 0.5257 | 0.8183 |
| 0.4766 | 15.0 | 11250 | 0.5110 | 0.8233 |
| 0.4665 | 16.0 | 12000 | 0.4984 | 0.825 |
| 0.4224 | 17.0 | 12750 | 0.4872 | 0.8267 |
| 0.4651 | 18.0 | 13500 | 0.4771 | 0.8283 |
| 0.4708 | 19.0 | 14250 | 0.4682 | 0.8333 |
| 0.4353 | 20.0 | 15000 | 0.4602 | 0.8367 |
| 0.3692 | 21.0 | 15750 | 0.4528 | 0.8417 |
| 0.4104 | 22.0 | 16500 | 0.4465 | 0.8433 |
| 0.4734 | 23.0 | 17250 | 0.4407 | 0.845 |
| 0.4505 | 24.0 | 18000 | 0.4355 | 0.8467 |
| 0.4228 | 25.0 | 18750 | 0.4307 | 0.85 |
| 0.4446 | 26.0 | 19500 | 0.4264 | 0.85 |
| 0.3547 | 27.0 | 20250 | 0.4224 | 0.85 |
| 0.3891 | 28.0 | 21000 | 0.4188 | 0.8517 |
| 0.4084 | 29.0 | 21750 | 0.4156 | 0.8533 |
| 0.3837 | 30.0 | 22500 | 0.4127 | 0.855 |
| 0.3728 | 31.0 | 23250 | 0.4101 | 0.8583 |
| 0.3097 | 32.0 | 24000 | 0.4075 | 0.8583 |
| 0.4329 | 33.0 | 24750 | 0.4053 | 0.8583 |
| 0.4029 | 34.0 | 25500 | 0.4033 | 0.86 |
| 0.3546 | 35.0 | 26250 | 0.4014 | 0.8617 |
| 0.3774 | 36.0 | 27000 | 0.3998 | 0.8633 |
| 0.3804 | 37.0 | 27750 | 0.3982 | 0.8633 |
| 0.3895 | 38.0 | 28500 | 0.3968 | 0.865 |
| 0.3722 | 39.0 | 29250 | 0.3955 | 0.8667 |
| 0.4041 | 40.0 | 30000 | 0.3944 | 0.8683 |
| 0.3565 | 41.0 | 30750 | 0.3935 | 0.87 |
| 0.4081 | 42.0 | 31500 | 0.3926 | 0.87 |
| 0.3738 | 43.0 | 32250 | 0.3919 | 0.87 |
| 0.3617 | 44.0 | 33000 | 0.3913 | 0.87 |
| 0.3447 | 45.0 | 33750 | 0.3908 | 0.87 |
| 0.3297 | 46.0 | 34500 | 0.3904 | 0.87 |
| 0.3716 | 47.0 | 35250 | 0.3901 | 0.87 |
| 0.3734 | 48.0 | 36000 | 0.3899 | 0.87 |
| 0.3771 | 49.0 | 36750 | 0.3898 | 0.87 |
| 0.3572 | 50.0 | 37500 | 0.3897 | 0.87 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
juns/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6533
- Accuracy: 0.888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
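With gradient_accumulation_steps: 4, gradients from four consecutive batches of 16 are accumulated before each optimizer step, which is where the effective total_train_batch_size of 16 × 4 = 64 comes from.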
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6213 | 0.99 | 62 | 1.6533 | 0.888 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
hkivancoral/smids_10x_deit_tiny_adamax_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3154
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2882 | 1.0 | 750 | 0.3671 | 0.8467 |
| 0.2123 | 2.0 | 1500 | 0.3694 | 0.8633 |
| 0.189 | 3.0 | 2250 | 0.3920 | 0.8617 |
| 0.1612 | 4.0 | 3000 | 0.4004 | 0.8733 |
| 0.1046 | 5.0 | 3750 | 0.4710 | 0.85 |
| 0.0761 | 6.0 | 4500 | 0.4693 | 0.86 |
| 0.0433 | 7.0 | 5250 | 0.5028 | 0.8633 |
| 0.0491 | 8.0 | 6000 | 0.5468 | 0.865 |
| 0.0675 | 9.0 | 6750 | 0.5733 | 0.8717 |
| 0.039 | 10.0 | 7500 | 0.6491 | 0.875 |
| 0.0538 | 11.0 | 8250 | 0.7887 | 0.8567 |
| 0.012 | 12.0 | 9000 | 0.7915 | 0.87 |
| 0.0116 | 13.0 | 9750 | 0.8692 | 0.87 |
| 0.0189 | 14.0 | 10500 | 0.8744 | 0.8717 |
| 0.0002 | 15.0 | 11250 | 0.9425 | 0.8683 |
| 0.024 | 16.0 | 12000 | 0.9688 | 0.8633 |
| 0.0001 | 17.0 | 12750 | 0.9969 | 0.87 |
| 0.0001 | 18.0 | 13500 | 1.0452 | 0.865 |
| 0.0001 | 19.0 | 14250 | 1.0740 | 0.87 |
| 0.0 | 20.0 | 15000 | 1.0493 | 0.8733 |
| 0.0 | 21.0 | 15750 | 1.0980 | 0.87 |
| 0.0 | 22.0 | 16500 | 1.1024 | 0.8683 |
| 0.0 | 23.0 | 17250 | 1.1005 | 0.8733 |
| 0.0 | 24.0 | 18000 | 1.1084 | 0.875 |
| 0.0 | 25.0 | 18750 | 1.1253 | 0.865 |
| 0.0 | 26.0 | 19500 | 1.1772 | 0.87 |
| 0.0 | 27.0 | 20250 | 1.1989 | 0.87 |
| 0.0 | 28.0 | 21000 | 1.2090 | 0.87 |
| 0.0 | 29.0 | 21750 | 1.2004 | 0.8683 |
| 0.0 | 30.0 | 22500 | 1.2111 | 0.87 |
| 0.0 | 31.0 | 23250 | 1.2258 | 0.87 |
| 0.0 | 32.0 | 24000 | 1.2322 | 0.87 |
| 0.0 | 33.0 | 24750 | 1.2293 | 0.87 |
| 0.0 | 34.0 | 25500 | 1.2340 | 0.8683 |
| 0.0 | 35.0 | 26250 | 1.2422 | 0.8667 |
| 0.0 | 36.0 | 27000 | 1.2597 | 0.8667 |
| 0.0 | 37.0 | 27750 | 1.2593 | 0.8683 |
| 0.0 | 38.0 | 28500 | 1.2651 | 0.87 |
| 0.0 | 39.0 | 29250 | 1.2897 | 0.8667 |
| 0.0 | 40.0 | 30000 | 1.2906 | 0.87 |
| 0.0 | 41.0 | 30750 | 1.2961 | 0.87 |
| 0.0 | 42.0 | 31500 | 1.2976 | 0.8683 |
| 0.0 | 43.0 | 32250 | 1.2961 | 0.87 |
| 0.0 | 44.0 | 33000 | 1.3012 | 0.8683 |
| 0.0 | 45.0 | 33750 | 1.3074 | 0.87 |
| 0.0 | 46.0 | 34500 | 1.3114 | 0.8683 |
| 0.0 | 47.0 | 35250 | 1.3127 | 0.87 |
| 0.0 | 48.0 | 36000 | 1.3140 | 0.87 |
| 0.0 | 49.0 | 36750 | 1.3150 | 0.87 |
| 0.0 | 50.0 | 37500 | 1.3154 | 0.87 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
rdsmaia/pokemon_class_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pokemon_class_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7799
- Accuracy: 0.8439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.871 | 1.0 | 61 | 4.8286 | 0.1129 |
| 4.4362 | 2.0 | 122 | 4.3949 | 0.5626 |
| 3.9543 | 3.0 | 183 | 3.9551 | 0.7238 |
| 3.5859 | 4.0 | 244 | 3.6081 | 0.7772 |
| 3.2793 | 5.0 | 305 | 3.3454 | 0.8049 |
| 3.0146 | 6.0 | 366 | 3.1411 | 0.8152 |
| 2.8492 | 7.0 | 427 | 2.9854 | 0.8347 |
| 2.6706 | 8.0 | 488 | 2.8625 | 0.8501 |
| 2.5676 | 9.0 | 549 | 2.8014 | 0.8337 |
| 2.6059 | 10.0 | 610 | 2.7799 | 0.8439 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"porygon",
"goldeen",
"articuno",
"clefable",
"cubone",
"marowak",
"nidorino",
"jolteon",
"muk",
"magikarp",
"slowbro",
"tauros",
"kabuto",
"seaking",
"spearow",
"sandshrew",
"eevee",
"kakuna",
"omastar",
"ekans",
"geodude",
"magmar",
"snorlax",
"meowth",
"dugtrio",
"pidgeotto",
"venusaur",
"persian",
"rhydon",
"starmie",
"charmeleon",
"lapras",
"alakazam",
"graveler",
"psyduck",
"machop",
"rapidash",
"doduo",
"magneton",
"arcanine",
"electrode",
"omanyte",
"poliwhirl",
"mew",
"alolan sandslash",
"mewtwo",
"jynx",
"weezing",
"gastly",
"victreebel",
"ivysaur",
"mrmime",
"shellder",
"scyther",
"diglett",
"primeape",
"raichu",
"oddish",
"dodrio",
"dragonair",
"weedle",
"golduck",
"hitmonlee",
"flareon",
"krabby",
"parasect",
"ninetales",
"nidoqueen",
"kabutops",
"drowzee",
"caterpie",
"jigglypuff",
"machamp",
"hitmonchan",
"clefairy",
"kangaskhan",
"dragonite",
"weepinbell",
"fearow",
"bellsprout",
"grimer",
"nidorina",
"staryu",
"horsea",
"gloom",
"electabuzz",
"dratini",
"machoke",
"magnemite",
"squirtle",
"gyarados",
"pidgeot",
"bulbasaur",
"nidoking",
"golem",
"aerodactyl",
"dewgong",
"moltres",
"zapdos",
"poliwrath",
"vulpix",
"beedrill",
"charmander",
"abra",
"zubat",
"golbat",
"mankey",
"wigglytuff",
"charizard",
"slowpoke",
"poliwag",
"tentacruel",
"rhyhorn",
"onix",
"butterfree",
"exeggcute",
"sandslash",
"seadra",
"pinsir",
"rattata",
"growlithe",
"haunter",
"pidgey",
"ditto",
"farfetchd",
"pikachu",
"raticate",
"wartortle",
"gengar",
"vaporeon",
"cloyster",
"hypno",
"arbok",
"metapod",
"tangela",
"kingler",
"exeggutor",
"kadabra",
"seel",
"venonat",
"voltorb",
"chansey",
"venomoth",
"ponyta",
"vileplume",
"koffing",
"blastoise",
"tentacool",
"lickitung",
"paras"
] |
hkivancoral/smids_10x_deit_base_sgd_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0053
- Accuracy: 0.5833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1184 | 1.0 | 750 | 1.1087 | 0.3417 |
| 1.0954 | 2.0 | 1500 | 1.1049 | 0.3567 |
| 1.0943 | 3.0 | 2250 | 1.1012 | 0.3733 |
| 1.0965 | 4.0 | 3000 | 1.0976 | 0.3883 |
| 1.097 | 5.0 | 3750 | 1.0941 | 0.3983 |
| 1.0859 | 6.0 | 4500 | 1.0907 | 0.41 |
| 1.0772 | 7.0 | 5250 | 1.0874 | 0.4133 |
| 1.0913 | 8.0 | 6000 | 1.0841 | 0.43 |
| 1.0722 | 9.0 | 6750 | 1.0809 | 0.4383 |
| 1.0755 | 10.0 | 7500 | 1.0778 | 0.445 |
| 1.0725 | 11.0 | 8250 | 1.0746 | 0.4467 |
| 1.0546 | 12.0 | 9000 | 1.0716 | 0.4517 |
| 1.0625 | 13.0 | 9750 | 1.0686 | 0.46 |
| 1.0581 | 14.0 | 10500 | 1.0656 | 0.4683 |
| 1.047 | 15.0 | 11250 | 1.0626 | 0.4733 |
| 1.0434 | 16.0 | 12000 | 1.0597 | 0.48 |
| 1.0251 | 17.0 | 12750 | 1.0568 | 0.4917 |
| 1.0453 | 18.0 | 13500 | 1.0540 | 0.4933 |
| 1.0513 | 19.0 | 14250 | 1.0512 | 0.5 |
| 1.0388 | 20.0 | 15000 | 1.0484 | 0.51 |
| 1.0333 | 21.0 | 15750 | 1.0457 | 0.52 |
| 1.0252 | 22.0 | 16500 | 1.0431 | 0.5267 |
| 1.0295 | 23.0 | 17250 | 1.0405 | 0.5283 |
| 1.0271 | 24.0 | 18000 | 1.0379 | 0.5317 |
| 1.0317 | 25.0 | 18750 | 1.0355 | 0.5333 |
| 1.0277 | 26.0 | 19500 | 1.0331 | 0.5367 |
| 1.0061 | 27.0 | 20250 | 1.0308 | 0.5467 |
| 1.0064 | 28.0 | 21000 | 1.0286 | 0.555 |
| 1.0287 | 29.0 | 21750 | 1.0265 | 0.56 |
| 1.0101 | 30.0 | 22500 | 1.0245 | 0.5633 |
| 1.0026 | 31.0 | 23250 | 1.0225 | 0.5667 |
| 1.0037 | 32.0 | 24000 | 1.0207 | 0.5667 |
| 1.0192 | 33.0 | 24750 | 1.0189 | 0.57 |
| 1.0129 | 34.0 | 25500 | 1.0173 | 0.57 |
| 1.007 | 35.0 | 26250 | 1.0157 | 0.57 |
| 0.9958 | 36.0 | 27000 | 1.0143 | 0.5717 |
| 1.0225 | 37.0 | 27750 | 1.0130 | 0.5733 |
| 1.0064 | 38.0 | 28500 | 1.0118 | 0.575 |
| 0.99 | 39.0 | 29250 | 1.0107 | 0.5733 |
| 0.9987 | 40.0 | 30000 | 1.0097 | 0.5767 |
| 1.0146 | 41.0 | 30750 | 1.0088 | 0.5817 |
| 0.9734 | 42.0 | 31500 | 1.0080 | 0.5817 |
| 1.0086 | 43.0 | 32250 | 1.0073 | 0.5817 |
| 0.9898 | 44.0 | 33000 | 1.0068 | 0.5833 |
| 0.9877 | 45.0 | 33750 | 1.0063 | 0.5833 |
| 0.9974 | 46.0 | 34500 | 1.0059 | 0.5833 |
| 0.9888 | 47.0 | 35250 | 1.0057 | 0.5833 |
| 0.9987 | 48.0 | 36000 | 1.0055 | 0.5833 |
| 1.01 | 49.0 | 36750 | 1.0054 | 0.5833 |
| 0.9819 | 50.0 | 37500 | 1.0053 | 0.5833 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3973
- Accuracy: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0764 | 1.0 | 750 | 1.0715 | 0.4467 |
| 1.0016 | 2.0 | 1500 | 1.0200 | 0.565 |
| 0.9544 | 3.0 | 2250 | 0.9543 | 0.625 |
| 0.8923 | 4.0 | 3000 | 0.8833 | 0.665 |
| 0.8291 | 5.0 | 3750 | 0.8138 | 0.7317 |
| 0.7577 | 6.0 | 4500 | 0.7525 | 0.7483 |
| 0.6942 | 7.0 | 5250 | 0.6996 | 0.775 |
| 0.6967 | 8.0 | 6000 | 0.6552 | 0.785 |
| 0.6475 | 9.0 | 6750 | 0.6183 | 0.7967 |
| 0.5601 | 10.0 | 7500 | 0.5880 | 0.8017 |
| 0.5611 | 11.0 | 8250 | 0.5631 | 0.8033 |
| 0.5838 | 12.0 | 9000 | 0.5423 | 0.8117 |
| 0.4987 | 13.0 | 9750 | 0.5248 | 0.8133 |
| 0.5348 | 14.0 | 10500 | 0.5101 | 0.815 |
| 0.4457 | 15.0 | 11250 | 0.4972 | 0.8167 |
| 0.4244 | 16.0 | 12000 | 0.4863 | 0.82 |
| 0.394 | 17.0 | 12750 | 0.4768 | 0.8217 |
| 0.4347 | 18.0 | 13500 | 0.4683 | 0.8267 |
| 0.455 | 19.0 | 14250 | 0.4609 | 0.8283 |
| 0.4228 | 20.0 | 15000 | 0.4543 | 0.8317 |
| 0.3783 | 21.0 | 15750 | 0.4484 | 0.8333 |
| 0.4279 | 22.0 | 16500 | 0.4432 | 0.835 |
| 0.4396 | 23.0 | 17250 | 0.4385 | 0.8333 |
| 0.4808 | 24.0 | 18000 | 0.4342 | 0.8367 |
| 0.4208 | 25.0 | 18750 | 0.4304 | 0.8367 |
| 0.3784 | 26.0 | 19500 | 0.4269 | 0.8367 |
| 0.3843 | 27.0 | 20250 | 0.4238 | 0.8367 |
| 0.3813 | 28.0 | 21000 | 0.4210 | 0.8333 |
| 0.3843 | 29.0 | 21750 | 0.4183 | 0.835 |
| 0.3906 | 30.0 | 22500 | 0.4160 | 0.835 |
| 0.3475 | 31.0 | 23250 | 0.4138 | 0.8367 |
| 0.3808 | 32.0 | 24000 | 0.4118 | 0.8383 |
| 0.373 | 33.0 | 24750 | 0.4100 | 0.8383 |
| 0.4094 | 34.0 | 25500 | 0.4083 | 0.84 |
| 0.3294 | 35.0 | 26250 | 0.4068 | 0.84 |
| 0.3714 | 36.0 | 27000 | 0.4054 | 0.84 |
| 0.3219 | 37.0 | 27750 | 0.4042 | 0.84 |
| 0.3856 | 38.0 | 28500 | 0.4031 | 0.84 |
| 0.3967 | 39.0 | 29250 | 0.4021 | 0.84 |
| 0.3872 | 40.0 | 30000 | 0.4012 | 0.8417 |
| 0.3755 | 41.0 | 30750 | 0.4004 | 0.8417 |
| 0.3647 | 42.0 | 31500 | 0.3997 | 0.8417 |
| 0.4013 | 43.0 | 32250 | 0.3991 | 0.8417 |
| 0.3537 | 44.0 | 33000 | 0.3986 | 0.8417 |
| 0.3786 | 45.0 | 33750 | 0.3982 | 0.8417 |
| 0.344 | 46.0 | 34500 | 0.3978 | 0.8417 |
| 0.2971 | 47.0 | 35250 | 0.3976 | 0.8417 |
| 0.3345 | 48.0 | 36000 | 0.3974 | 0.8417 |
| 0.3788 | 49.0 | 36750 | 0.3974 | 0.8417 |
| 0.2837 | 50.0 | 37500 | 0.3973 | 0.8417 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_adamax_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8518
- Accuracy: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.292 | 1.0 | 750 | 0.2930 | 0.885 |
| 0.2303 | 2.0 | 1500 | 0.2459 | 0.9033 |
| 0.1782 | 3.0 | 2250 | 0.2507 | 0.9017 |
| 0.1592 | 4.0 | 3000 | 0.2451 | 0.9 |
| 0.1917 | 5.0 | 3750 | 0.2847 | 0.8967 |
| 0.0682 | 6.0 | 4500 | 0.3017 | 0.895 |
| 0.0567 | 7.0 | 5250 | 0.3497 | 0.8983 |
| 0.0573 | 8.0 | 6000 | 0.3610 | 0.9 |
| 0.1186 | 9.0 | 6750 | 0.4077 | 0.9067 |
| 0.0339 | 10.0 | 7500 | 0.4570 | 0.9017 |
| 0.0496 | 11.0 | 8250 | 0.5695 | 0.8933 |
| 0.0132 | 12.0 | 9000 | 0.6133 | 0.8983 |
| 0.0021 | 13.0 | 9750 | 0.5988 | 0.8933 |
| 0.0118 | 14.0 | 10500 | 0.6072 | 0.8933 |
| 0.0074 | 15.0 | 11250 | 0.6482 | 0.8967 |
| 0.0005 | 16.0 | 12000 | 0.7207 | 0.8933 |
| 0.0021 | 17.0 | 12750 | 0.7385 | 0.885 |
| 0.0 | 18.0 | 13500 | 0.7467 | 0.895 |
| 0.0005 | 19.0 | 14250 | 0.7555 | 0.8967 |
| 0.0 | 20.0 | 15000 | 0.7776 | 0.9 |
| 0.0 | 21.0 | 15750 | 0.8136 | 0.8933 |
| 0.0 | 22.0 | 16500 | 0.7839 | 0.8983 |
| 0.0002 | 23.0 | 17250 | 0.7795 | 0.8983 |
| 0.0 | 24.0 | 18000 | 0.7219 | 0.8967 |
| 0.0 | 25.0 | 18750 | 0.7978 | 0.9 |
| 0.0 | 26.0 | 19500 | 0.7801 | 0.8983 |
| 0.0 | 27.0 | 20250 | 0.8617 | 0.9 |
| 0.0 | 28.0 | 21000 | 0.7908 | 0.8967 |
| 0.0 | 29.0 | 21750 | 0.7838 | 0.9 |
| 0.0154 | 30.0 | 22500 | 0.8352 | 0.895 |
| 0.0 | 31.0 | 23250 | 0.8276 | 0.8967 |
| 0.0 | 32.0 | 24000 | 0.8168 | 0.8983 |
| 0.0 | 33.0 | 24750 | 0.8030 | 0.8933 |
| 0.0 | 34.0 | 25500 | 0.8162 | 0.895 |
| 0.0 | 35.0 | 26250 | 0.8097 | 0.895 |
| 0.0 | 36.0 | 27000 | 0.8226 | 0.8967 |
| 0.0047 | 37.0 | 27750 | 0.8297 | 0.895 |
| 0.0 | 38.0 | 28500 | 0.8250 | 0.8933 |
| 0.0 | 39.0 | 29250 | 0.8392 | 0.8983 |
| 0.0 | 40.0 | 30000 | 0.8326 | 0.9 |
| 0.0 | 41.0 | 30750 | 0.8399 | 0.895 |
| 0.0 | 42.0 | 31500 | 0.8505 | 0.895 |
| 0.0 | 43.0 | 32250 | 0.8292 | 0.8967 |
| 0.0 | 44.0 | 33000 | 0.8390 | 0.895 |
| 0.0 | 45.0 | 33750 | 0.8458 | 0.895 |
| 0.0 | 46.0 | 34500 | 0.8491 | 0.8933 |
| 0.0 | 47.0 | 35250 | 0.8489 | 0.8933 |
| 0.0 | 48.0 | 36000 | 0.8502 | 0.8933 |
| 0.0 | 49.0 | 36750 | 0.8534 | 0.8933 |
| 0.0 | 50.0 | 37500 | 0.8518 | 0.8933 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
samihaafaf/test
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.6925497055053711
f1: 0.0
precision: 0.0
recall: 0.0
auc: 0.5622222222222223
accuracy: 0.5
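(With precision, recall, and F1 all at zero and accuracy at 0.5, the classifier appears to predict a single class for every validation example.)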
|
[
"pvp",
"non-pvp"
] |
EdwarV/computer_vision_example
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# computer_vision_example
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0117
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.137 | 3.85 | 500 | 0.0117 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
samihaafaf/test2
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.6926904320716858
f1: 0.6644144144144143
precision: 0.5017006802721088
recall: 0.9833333333333333
auc: 0.5401888888888888
accuracy: 0.5033333333333333
|
[
"pvp",
"non-pvp"
] |
hkivancoral/smids_10x_deit_base_sgd_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0147
- Accuracy: 0.53
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1118 | 1.0 | 750 | 1.1081 | 0.3483 |
| 1.0989 | 2.0 | 1500 | 1.1045 | 0.3467 |
| 1.098 | 3.0 | 2250 | 1.1011 | 0.3483 |
| 1.0956 | 4.0 | 3000 | 1.0977 | 0.35 |
| 1.0908 | 5.0 | 3750 | 1.0945 | 0.3533 |
| 1.0728 | 6.0 | 4500 | 1.0914 | 0.36 |
| 1.0863 | 7.0 | 5250 | 1.0883 | 0.3783 |
| 1.0893 | 8.0 | 6000 | 1.0853 | 0.3833 |
| 1.0669 | 9.0 | 6750 | 1.0824 | 0.3883 |
| 1.0847 | 10.0 | 7500 | 1.0795 | 0.3933 |
| 1.062 | 11.0 | 8250 | 1.0767 | 0.4033 |
| 1.0612 | 12.0 | 9000 | 1.0739 | 0.4183 |
| 1.0546 | 13.0 | 9750 | 1.0712 | 0.4267 |
| 1.0486 | 14.0 | 10500 | 1.0685 | 0.43 |
| 1.0488 | 15.0 | 11250 | 1.0659 | 0.4367 |
| 1.0504 | 16.0 | 12000 | 1.0633 | 0.4367 |
| 1.0424 | 17.0 | 12750 | 1.0607 | 0.4417 |
| 1.0426 | 18.0 | 13500 | 1.0582 | 0.4533 |
| 1.0273 | 19.0 | 14250 | 1.0557 | 0.4533 |
| 1.0365 | 20.0 | 15000 | 1.0533 | 0.4633 |
| 1.0042 | 21.0 | 15750 | 1.0509 | 0.4683 |
| 1.0296 | 22.0 | 16500 | 1.0486 | 0.4717 |
| 1.0417 | 23.0 | 17250 | 1.0462 | 0.475 |
| 1.0189 | 24.0 | 18000 | 1.0440 | 0.4817 |
| 1.0029 | 25.0 | 18750 | 1.0418 | 0.4833 |
| 1.0163 | 26.0 | 19500 | 1.0396 | 0.4883 |
| 1.0097 | 27.0 | 20250 | 1.0376 | 0.4867 |
| 1.0284 | 28.0 | 21000 | 1.0356 | 0.49 |
| 1.0024 | 29.0 | 21750 | 1.0337 | 0.4967 |
| 1.0018 | 30.0 | 22500 | 1.0318 | 0.505 |
| 1.0059 | 31.0 | 23250 | 1.0301 | 0.5083 |
| 1.0064 | 32.0 | 24000 | 1.0284 | 0.5133 |
| 1.0048 | 33.0 | 24750 | 1.0269 | 0.5133 |
| 0.9972 | 34.0 | 25500 | 1.0254 | 0.5217 |
| 0.9969 | 35.0 | 26250 | 1.0240 | 0.5217 |
| 1.0105 | 36.0 | 27000 | 1.0228 | 0.5217 |
| 0.9858 | 37.0 | 27750 | 1.0216 | 0.525 |
| 1.0046 | 38.0 | 28500 | 1.0205 | 0.525 |
| 0.9725 | 39.0 | 29250 | 1.0195 | 0.5267 |
| 0.9836 | 40.0 | 30000 | 1.0186 | 0.5267 |
| 0.9756 | 41.0 | 30750 | 1.0178 | 0.5283 |
| 0.9954 | 42.0 | 31500 | 1.0171 | 0.5283 |
| 1.0089 | 43.0 | 32250 | 1.0165 | 0.5283 |
| 0.9937 | 44.0 | 33000 | 1.0160 | 0.5267 |
| 0.9919 | 45.0 | 33750 | 1.0156 | 0.53 |
| 0.9943 | 46.0 | 34500 | 1.0153 | 0.53 |
| 0.9683 | 47.0 | 35250 | 1.0150 | 0.53 |
| 0.9833 | 48.0 | 36000 | 1.0148 | 0.53 |
| 0.9815 | 49.0 | 36750 | 1.0147 | 0.53 |
| 0.9749 | 50.0 | 37500 | 1.0147 | 0.53 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_base_sgd_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3787
- Accuracy: 0.835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0636 | 1.0 | 750 | 1.0738 | 0.4167 |
| 0.9993 | 2.0 | 1500 | 1.0277 | 0.5117 |
| 0.9435 | 3.0 | 2250 | 0.9677 | 0.5817 |
| 0.8744 | 4.0 | 3000 | 0.8995 | 0.6383 |
| 0.8256 | 5.0 | 3750 | 0.8300 | 0.705 |
| 0.7543 | 6.0 | 4500 | 0.7676 | 0.73 |
| 0.673 | 7.0 | 5250 | 0.7129 | 0.76 |
| 0.6641 | 8.0 | 6000 | 0.6669 | 0.7717 |
| 0.6662 | 9.0 | 6750 | 0.6286 | 0.7817 |
| 0.612 | 10.0 | 7500 | 0.5968 | 0.79 |
| 0.5807 | 11.0 | 8250 | 0.5703 | 0.7933 |
| 0.5436 | 12.0 | 9000 | 0.5477 | 0.8 |
| 0.5472 | 13.0 | 9750 | 0.5286 | 0.805 |
| 0.5287 | 14.0 | 10500 | 0.5121 | 0.8083 |
| 0.4928 | 15.0 | 11250 | 0.4975 | 0.8083 |
| 0.4187 | 16.0 | 12000 | 0.4850 | 0.8133 |
| 0.4686 | 17.0 | 12750 | 0.4741 | 0.8167 |
| 0.4337 | 18.0 | 13500 | 0.4641 | 0.82 |
| 0.4835 | 19.0 | 14250 | 0.4556 | 0.8217 |
| 0.4689 | 20.0 | 15000 | 0.4478 | 0.825 |
| 0.4014 | 21.0 | 15750 | 0.4407 | 0.8233 |
| 0.4524 | 22.0 | 16500 | 0.4346 | 0.8233 |
| 0.4363 | 23.0 | 17250 | 0.4289 | 0.825 |
| 0.4366 | 24.0 | 18000 | 0.4239 | 0.825 |
| 0.419 | 25.0 | 18750 | 0.4192 | 0.8267 |
| 0.4041 | 26.0 | 19500 | 0.4151 | 0.8283 |
| 0.334 | 27.0 | 20250 | 0.4112 | 0.8317 |
| 0.3991 | 28.0 | 21000 | 0.4077 | 0.8317 |
| 0.4162 | 29.0 | 21750 | 0.4044 | 0.83 |
| 0.3862 | 30.0 | 22500 | 0.4016 | 0.8317 |
| 0.3788 | 31.0 | 23250 | 0.3990 | 0.8333 |
| 0.3692 | 32.0 | 24000 | 0.3965 | 0.8333 |
| 0.3919 | 33.0 | 24750 | 0.3944 | 0.8333 |
| 0.3436 | 34.0 | 25500 | 0.3923 | 0.8317 |
| 0.385 | 35.0 | 26250 | 0.3904 | 0.8317 |
| 0.4009 | 36.0 | 27000 | 0.3887 | 0.835 |
| 0.3069 | 37.0 | 27750 | 0.3872 | 0.835 |
| 0.3924 | 38.0 | 28500 | 0.3858 | 0.835 |
| 0.3366 | 39.0 | 29250 | 0.3846 | 0.835 |
| 0.3431 | 40.0 | 30000 | 0.3835 | 0.8333 |
| 0.3539 | 41.0 | 30750 | 0.3825 | 0.8333 |
| 0.3975 | 42.0 | 31500 | 0.3816 | 0.8333 |
| 0.3795 | 43.0 | 32250 | 0.3809 | 0.835 |
| 0.36 | 44.0 | 33000 | 0.3803 | 0.835 |
| 0.3923 | 45.0 | 33750 | 0.3798 | 0.835 |
| 0.3388 | 46.0 | 34500 | 0.3793 | 0.835 |
| 0.3751 | 47.0 | 35250 | 0.3790 | 0.835 |
| 0.3764 | 48.0 | 36000 | 0.3788 | 0.835 |
| 0.3502 | 49.0 | 36750 | 0.3787 | 0.835 |
| 0.28 | 50.0 | 37500 | 0.3787 | 0.835 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3411
- Accuracy: 0.8715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5089 | 1.0 | 751 | 0.6112 | 0.7429 |
| 0.3967 | 2.0 | 1502 | 0.4763 | 0.7930 |
| 0.2926 | 3.0 | 2253 | 0.3983 | 0.8314 |
| 0.2707 | 4.0 | 3004 | 0.3655 | 0.8431 |
| 0.3017 | 5.0 | 3755 | 0.3388 | 0.8598 |
| 0.2883 | 6.0 | 4506 | 0.3239 | 0.8581 |
| 0.2607 | 7.0 | 5257 | 0.3147 | 0.8681 |
| 0.2345 | 8.0 | 6008 | 0.3166 | 0.8564 |
| 0.2342 | 9.0 | 6759 | 0.3080 | 0.8765 |
| 0.2448 | 10.0 | 7510 | 0.3026 | 0.8715 |
| 0.2489 | 11.0 | 8261 | 0.3046 | 0.8664 |
| 0.2253 | 12.0 | 9012 | 0.3048 | 0.8731 |
| 0.1846 | 13.0 | 9763 | 0.3033 | 0.8681 |
| 0.1475 | 14.0 | 10514 | 0.3005 | 0.8698 |
| 0.1911 | 15.0 | 11265 | 0.2987 | 0.8731 |
| 0.1765 | 16.0 | 12016 | 0.3037 | 0.8664 |
| 0.2371 | 17.0 | 12767 | 0.3023 | 0.8698 |
| 0.1758 | 18.0 | 13518 | 0.3012 | 0.8731 |
| 0.2258 | 19.0 | 14269 | 0.3050 | 0.8698 |
| 0.1902 | 20.0 | 15020 | 0.3050 | 0.8681 |
| 0.1077 | 21.0 | 15771 | 0.3133 | 0.8698 |
| 0.1647 | 22.0 | 16522 | 0.3182 | 0.8715 |
| 0.1389 | 23.0 | 17273 | 0.3149 | 0.8731 |
| 0.1858 | 24.0 | 18024 | 0.3136 | 0.8664 |
| 0.1711 | 25.0 | 18775 | 0.3139 | 0.8631 |
| 0.191 | 26.0 | 19526 | 0.3138 | 0.8648 |
| 0.181 | 27.0 | 20277 | 0.3209 | 0.8715 |
| 0.2185 | 28.0 | 21028 | 0.3215 | 0.8748 |
| 0.1776 | 29.0 | 21779 | 0.3185 | 0.8715 |
| 0.135 | 30.0 | 22530 | 0.3255 | 0.8681 |
| 0.1282 | 31.0 | 23281 | 0.3235 | 0.8715 |
| 0.1784 | 32.0 | 24032 | 0.3244 | 0.8698 |
| 0.2006 | 33.0 | 24783 | 0.3311 | 0.8581 |
| 0.106 | 34.0 | 25534 | 0.3280 | 0.8715 |
| 0.1426 | 35.0 | 26285 | 0.3297 | 0.8681 |
| 0.1309 | 36.0 | 27036 | 0.3304 | 0.8614 |
| 0.2379 | 37.0 | 27787 | 0.3292 | 0.8748 |
| 0.1297 | 38.0 | 28538 | 0.3352 | 0.8698 |
| 0.0635 | 39.0 | 29289 | 0.3365 | 0.8664 |
| 0.1936 | 40.0 | 30040 | 0.3376 | 0.8698 |
| 0.1143 | 41.0 | 30791 | 0.3368 | 0.8731 |
| 0.1251 | 42.0 | 31542 | 0.3385 | 0.8664 |
| 0.1338 | 43.0 | 32293 | 0.3411 | 0.8681 |
| 0.1093 | 44.0 | 33044 | 0.3395 | 0.8681 |
| 0.1208 | 45.0 | 33795 | 0.3395 | 0.8715 |
| 0.1467 | 46.0 | 34546 | 0.3416 | 0.8698 |
| 0.0894 | 47.0 | 35297 | 0.3413 | 0.8698 |
| 0.1318 | 48.0 | 36048 | 0.3412 | 0.8681 |
| 0.0983 | 49.0 | 36799 | 0.3410 | 0.8715 |
| 0.1226 | 50.0 | 37550 | 0.3411 | 0.8715 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
A2H0H0R1/swin-tiny-patch4-window7-224-plant-diseases
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0075
- Accuracy: 0.9974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1159 | 1.0 | 494 | 0.0371 | 0.9863 |
| 0.0765 | 2.0 | 989 | 0.0130 | 0.9959 |
| 0.0346 | 3.0 | 1482 | 0.0075 | 0.9974 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple___apple_scab",
"apple___black_rot",
"apple___cedar_apple_rust",
"apple___healthy",
"blueberry___healthy",
"cherry_(including_sour)___powdery_mildew",
"cherry_(including_sour)___healthy",
"corn_(maize)___cercospora_leaf_spot gray_leaf_spot",
"corn_(maize)___common_rust_",
"corn_(maize)___northern_leaf_blight",
"corn_(maize)___healthy",
"grape___black_rot",
"grape___esca_(black_measles)",
"grape___leaf_blight_(isariopsis_leaf_spot)",
"grape___healthy",
"orange___haunglongbing_(citrus_greening)",
"peach___bacterial_spot",
"peach___healthy",
"pepper,_bell___bacterial_spot",
"pepper,_bell___healthy",
"potato___early_blight",
"potato___late_blight",
"potato___healthy",
"raspberry___healthy",
"soybean___healthy",
"squash___powdery_mildew",
"strawberry___leaf_scorch",
"strawberry___healthy",
"tomato___bacterial_spot",
"tomato___early_blight",
"tomato___late_blight",
"tomato___leaf_mold",
"tomato___septoria_leaf_spot",
"tomato___spider_mites two-spotted_spider_mite",
"tomato___target_spot",
"tomato___tomato_yellow_leaf_curl_virus",
"tomato___tomato_mosaic_virus",
"tomato___healthy"
] |
samihaafaf/test3
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.4435860514640808
f1: 0.823211781206171
precision: 0.8121065375302663
recall: 0.8346249555634554
auc: 0.8712395645629174
accuracy: 0.7952819846522392
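A minimal, self-contained sketch of how metrics like these are typically computed with scikit-learn (the labels and scores below are toy values for illustration, not the model's data):
```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score

# toy binary labels and scores, purely illustrative
y_true  = [1, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.3]               # predicted probability of class 1
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("auc      :", roc_auc_score(y_true, y_score))
```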
|
[
"pvp",
"non-pvp"
] |
arnabdhar/Swin-V2-base-Food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Swin-V2-base-Food
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the ItsNotRohit/Food121-224 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7099
- Accuracy: 0.8160
- Recall: 0.8160
- Precision: 0.8168
- F1: 0.8159
## Model description
Swin v2 is a powerful vision model based on Transformers, achieving top-notch accuracy in image classification tasks. It excels thanks to:
- __Hierarchical architecture__: Efficiently captures features at different scales, like CNNs.
- __Shifted windows__: Improves information flow and reduces computational cost.
- __Large model capacity__: Enables accurate and generalizable predictions.
Swin v2 sets new records on ImageNet while needing up to 40x less labelled data and training time than comparable models. It's also versatile, tackling various vision tasks and handling large images.
The model was fine-tuned on 120 categories of food images.
To use the model use the following code snippet:
```python
from transformers import pipeline
from PIL import Image
# init image classification pipeline
classifier = pipeline("image-classification", "arnabdhar/Swin-V2-base-Food")
# run inference on a local image ("food_image.jpg" is a placeholder path)
image = Image.open("food_image.jpg")
results = classifier(image)
print(results)
```
## Intended uses
The model can be used for the following tasks:
- __Food Image Classification__: Use this model to classify food images using the Transformers `pipeline` module.
- __Base Model for Fine Tuning__: If you want to adapt this model to your own custom dataset, you can treat it as a base model and fine-tune it, as sketched below.
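A minimal sketch of that second use, assuming the standard `transformers` API; `num_labels=10` is a stand-in for your own class count, and `ignore_mismatched_sizes` discards the 120-way head so a fresh one is initialized:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "arnabdhar/Swin-V2-base-Food"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=10,                  # hypothetical: your dataset's class count
    ignore_mismatched_sizes=True,   # re-initialize the classification head
)
# `model` can now be fine-tuned on your own dataset with the Trainer API.
```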
## Training procedure
Fine-tuning was done on Google Colab with an NVIDIA T4 GPU (15 GB of VRAM). The model was trained for 20,000 steps, which took ~5.5 hours including periodic evaluation of the model.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 17769929
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 20000
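A sketch of the cosine schedule these settings imply: a `warmup_ratio` of 0.01 over 20,000 steps gives 200 warmup steps (the one-parameter optimizer below is a stand-in for the real model's parameters):
```python
import torch
from transformers import get_cosine_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]   # stand-in for model parameters
optimizer = torch.optim.Adam(params, lr=5e-5)

# warmup_ratio 0.01 of 20,000 steps => 200 warmup steps
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=200, num_training_steps=20_000
)
```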
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.5169 | 0.33 | 2000 | 1.2680 | 0.6746 | 0.6746 | 0.7019 | 0.6737 |
| 1.2362 | 0.66 | 4000 | 1.0759 | 0.7169 | 0.7169 | 0.7411 | 0.7178 |
| 1.1076 | 0.99 | 6000 | 0.9757 | 0.7437 | 0.7437 | 0.7593 | 0.7430 |
| 0.9163 | 1.32 | 8000 | 0.9123 | 0.7623 | 0.7623 | 0.7737 | 0.7628 |
| 0.8291 | 1.65 | 10000 | 0.8397 | 0.7807 | 0.7807 | 0.7874 | 0.7796 |
| 0.7949 | 1.98 | 12000 | 0.7724 | 0.7965 | 0.7965 | 0.8014 | 0.7965 |
| 0.6455 | 2.31 | 14000 | 0.7458 | 0.8030 | 0.8030 | 0.8069 | 0.8031 |
| 0.6332 | 2.64 | 16000 | 0.7222 | 0.8110 | 0.8110 | 0.8122 | 0.8106 |
| 0.6132 | 2.98 | 18000 | 0.7021 | 0.8154 | 0.8154 | 0.8170 | 0.8155 |
| 0.57 | 3.31 | 20000 | 0.7099 | 0.8160 | 0.8160 | 0.8168 | 0.8159 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"biryani",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"chai",
"chapati",
"cheese_plate",
"cheesecake",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"chole_bhature",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"dabeli",
"dal",
"deviled_eggs",
"dhokla",
"donuts",
"dosa",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"idli",
"jalebi",
"kathi_rolls",
"kofta",
"kulfi",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"momos",
"mussels",
"naan",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pakoda",
"pancakes",
"pani_puri",
"panna_cotta",
"panner_butter_masala",
"pav_bhaji",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"vadapav",
"waffles"
] |
hkivancoral/smids_10x_deit_tiny_sgd_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- Accuracy: 0.8902
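The card gives no usage example; a minimal inference sketch, assuming the usual `pipeline` API (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_tiny_sgd_001_fold2",
)
# placeholder path; labels are abnormal_sperm / non-sperm / normal_sperm
print(classifier("cell_image.png"))
```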
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5172 | 1.0 | 750 | 0.5682 | 0.7687 |
| 0.3347 | 2.0 | 1500 | 0.4469 | 0.8153 |
| 0.3177 | 3.0 | 2250 | 0.3953 | 0.8469 |
| 0.3776 | 4.0 | 3000 | 0.3688 | 0.8502 |
| 0.2886 | 5.0 | 3750 | 0.3556 | 0.8519 |
| 0.2396 | 6.0 | 4500 | 0.3328 | 0.8502 |
| 0.2545 | 7.0 | 5250 | 0.3237 | 0.8586 |
| 0.2435 | 8.0 | 6000 | 0.3188 | 0.8569 |
| 0.2366 | 9.0 | 6750 | 0.3065 | 0.8686 |
| 0.232 | 10.0 | 7500 | 0.3041 | 0.8652 |
| 0.2399 | 11.0 | 8250 | 0.2971 | 0.8785 |
| 0.2717 | 12.0 | 9000 | 0.2941 | 0.8769 |
| 0.2579 | 13.0 | 9750 | 0.2863 | 0.8852 |
| 0.1661 | 14.0 | 10500 | 0.2895 | 0.8802 |
| 0.1655 | 15.0 | 11250 | 0.2865 | 0.8785 |
| 0.1921 | 16.0 | 12000 | 0.2897 | 0.8802 |
| 0.1525 | 17.0 | 12750 | 0.2854 | 0.8835 |
| 0.1653 | 18.0 | 13500 | 0.2861 | 0.8819 |
| 0.1849 | 19.0 | 14250 | 0.2939 | 0.8702 |
| 0.1923 | 20.0 | 15000 | 0.2850 | 0.8835 |
| 0.1967 | 21.0 | 15750 | 0.2874 | 0.8802 |
| 0.1373 | 22.0 | 16500 | 0.2916 | 0.8802 |
| 0.1229 | 23.0 | 17250 | 0.2891 | 0.8869 |
| 0.1054 | 24.0 | 18000 | 0.2911 | 0.8802 |
| 0.1456 | 25.0 | 18750 | 0.2869 | 0.8869 |
| 0.2052 | 26.0 | 19500 | 0.2987 | 0.8835 |
| 0.1723 | 27.0 | 20250 | 0.2918 | 0.8835 |
| 0.1277 | 28.0 | 21000 | 0.2937 | 0.8902 |
| 0.1569 | 29.0 | 21750 | 0.2956 | 0.8902 |
| 0.1514 | 30.0 | 22500 | 0.2954 | 0.8885 |
| 0.1603 | 31.0 | 23250 | 0.2954 | 0.8902 |
| 0.1428 | 32.0 | 24000 | 0.2940 | 0.8918 |
| 0.1564 | 33.0 | 24750 | 0.3002 | 0.8835 |
| 0.1386 | 34.0 | 25500 | 0.3023 | 0.8852 |
| 0.1564 | 35.0 | 26250 | 0.2982 | 0.8869 |
| 0.183 | 36.0 | 27000 | 0.3004 | 0.8885 |
| 0.1456 | 37.0 | 27750 | 0.3058 | 0.8869 |
| 0.1394 | 38.0 | 28500 | 0.3047 | 0.8869 |
| 0.121 | 39.0 | 29250 | 0.3021 | 0.8902 |
| 0.1192 | 40.0 | 30000 | 0.3035 | 0.8852 |
| 0.1706 | 41.0 | 30750 | 0.3048 | 0.8918 |
| 0.1421 | 42.0 | 31500 | 0.3036 | 0.8885 |
| 0.1223 | 43.0 | 32250 | 0.3066 | 0.8852 |
| 0.1116 | 44.0 | 33000 | 0.3060 | 0.8885 |
| 0.1122 | 45.0 | 33750 | 0.3075 | 0.8885 |
| 0.1411 | 46.0 | 34500 | 0.3066 | 0.8885 |
| 0.1644 | 47.0 | 35250 | 0.3072 | 0.8885 |
| 0.0953 | 48.0 | 36000 | 0.3070 | 0.8902 |
| 0.1109 | 49.0 | 36750 | 0.3072 | 0.8902 |
| 0.1061 | 50.0 | 37500 | 0.3073 | 0.8902 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
MattyB95/VIT-ASVspoof2019-ConstantQ-Synthetic-Voice-Detection
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VIT-ASVspoof2019-ConstantQ-Synthetic-Voice-Detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2115
- Accuracy: 0.9560
- F1: 0.9750
- Precision: 0.9950
- Recall: 0.9557
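No usage example is given; a minimal inference sketch, assuming the model consumes Constant-Q spectrograms rendered as images (which the ViT backbone and imagefolder dataset suggest; the file name is a placeholder):
```python
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="MattyB95/VIT-ASVspoof2019-ConstantQ-Synthetic-Voice-Detection",
)
# placeholder: a Constant-Q spectrogram of one utterance, saved as an image
print(detector("utterance_cqt.png"))   # labels: bonafide / spoof
```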
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0383 | 1.0 | 3173 | 0.1192 | 0.9753 | 0.9864 | 0.9734 | 0.9997 |
| 0.0158 | 2.0 | 6346 | 0.0505 | 0.9888 | 0.9938 | 0.9911 | 0.9965 |
| 0.0021 | 3.0 | 9519 | 0.1042 | 0.9849 | 0.9917 | 0.9836 | 0.9998 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"bonafide",
"spoof"
] |
hkivancoral/smids_10x_deit_tiny_sgd_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2982
- Accuracy: 0.895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5867 | 1.0 | 750 | 0.5827 | 0.7783 |
| 0.3913 | 2.0 | 1500 | 0.4291 | 0.83 |
| 0.3437 | 3.0 | 2250 | 0.3734 | 0.86 |
| 0.3224 | 4.0 | 3000 | 0.3340 | 0.8633 |
| 0.3802 | 5.0 | 3750 | 0.3192 | 0.875 |
| 0.3066 | 6.0 | 4500 | 0.3104 | 0.88 |
| 0.2589 | 7.0 | 5250 | 0.2967 | 0.8867 |
| 0.2794 | 8.0 | 6000 | 0.2987 | 0.8867 |
| 0.1833 | 9.0 | 6750 | 0.2867 | 0.8933 |
| 0.2023 | 10.0 | 7500 | 0.2817 | 0.9 |
| 0.2616 | 11.0 | 8250 | 0.2809 | 0.8883 |
| 0.2286 | 12.0 | 9000 | 0.2812 | 0.8983 |
| 0.191 | 13.0 | 9750 | 0.2821 | 0.895 |
| 0.2573 | 14.0 | 10500 | 0.2824 | 0.895 |
| 0.233 | 15.0 | 11250 | 0.2788 | 0.9033 |
| 0.227 | 16.0 | 12000 | 0.2755 | 0.9133 |
| 0.2065 | 17.0 | 12750 | 0.2819 | 0.8933 |
| 0.1957 | 18.0 | 13500 | 0.2734 | 0.9033 |
| 0.1915 | 19.0 | 14250 | 0.2738 | 0.9017 |
| 0.1774 | 20.0 | 15000 | 0.2840 | 0.8967 |
| 0.1639 | 21.0 | 15750 | 0.2800 | 0.9 |
| 0.18 | 22.0 | 16500 | 0.2722 | 0.9033 |
| 0.1754 | 23.0 | 17250 | 0.2797 | 0.8983 |
| 0.1721 | 24.0 | 18000 | 0.2818 | 0.8967 |
| 0.2322 | 25.0 | 18750 | 0.2867 | 0.8933 |
| 0.1833 | 26.0 | 19500 | 0.2854 | 0.8933 |
| 0.0838 | 27.0 | 20250 | 0.2833 | 0.9083 |
| 0.1291 | 28.0 | 21000 | 0.2872 | 0.8883 |
| 0.1475 | 29.0 | 21750 | 0.2853 | 0.8933 |
| 0.1339 | 30.0 | 22500 | 0.2879 | 0.8917 |
| 0.0869 | 31.0 | 23250 | 0.2884 | 0.895 |
| 0.1341 | 32.0 | 24000 | 0.2859 | 0.89 |
| 0.1322 | 33.0 | 24750 | 0.2895 | 0.8933 |
| 0.1482 | 34.0 | 25500 | 0.2910 | 0.8933 |
| 0.1123 | 35.0 | 26250 | 0.2921 | 0.8933 |
| 0.1145 | 36.0 | 27000 | 0.2928 | 0.8933 |
| 0.1372 | 37.0 | 27750 | 0.2965 | 0.8917 |
| 0.1907 | 38.0 | 28500 | 0.2941 | 0.8917 |
| 0.1101 | 39.0 | 29250 | 0.2932 | 0.89 |
| 0.1502 | 40.0 | 30000 | 0.2921 | 0.895 |
| 0.1006 | 41.0 | 30750 | 0.2941 | 0.8983 |
| 0.1237 | 42.0 | 31500 | 0.2961 | 0.8967 |
| 0.0943 | 43.0 | 32250 | 0.2963 | 0.895 |
| 0.1038 | 44.0 | 33000 | 0.2980 | 0.8983 |
| 0.1286 | 45.0 | 33750 | 0.2956 | 0.8917 |
| 0.0851 | 46.0 | 34500 | 0.2954 | 0.8917 |
| 0.1551 | 47.0 | 35250 | 0.2984 | 0.8917 |
| 0.0707 | 48.0 | 36000 | 0.2985 | 0.8967 |
| 0.143 | 49.0 | 36750 | 0.2982 | 0.8967 |
| 0.1125 | 50.0 | 37500 | 0.2982 | 0.895 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4530
- Accuracy: 0.8114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0181 | 1.0 | 751 | 0.9693 | 0.5359 |
| 0.81 | 2.0 | 1502 | 0.8850 | 0.5993 |
| 0.7699 | 3.0 | 2253 | 0.8246 | 0.6377 |
| 0.6601 | 4.0 | 3004 | 0.7789 | 0.6578 |
| 0.653 | 5.0 | 3755 | 0.7391 | 0.6745 |
| 0.6463 | 6.0 | 4506 | 0.7047 | 0.6912 |
| 0.5744 | 7.0 | 5257 | 0.6756 | 0.7028 |
| 0.4963 | 8.0 | 6008 | 0.6490 | 0.7129 |
| 0.5329 | 9.0 | 6759 | 0.6286 | 0.7195 |
| 0.5165 | 10.0 | 7510 | 0.6094 | 0.7295 |
| 0.5717 | 11.0 | 8261 | 0.5949 | 0.7279 |
| 0.4844 | 12.0 | 9012 | 0.5809 | 0.7396 |
| 0.4587 | 13.0 | 9763 | 0.5699 | 0.7446 |
| 0.4195 | 14.0 | 10514 | 0.5589 | 0.7496 |
| 0.4521 | 15.0 | 11265 | 0.5504 | 0.7579 |
| 0.4327 | 16.0 | 12016 | 0.5411 | 0.7596 |
| 0.4611 | 17.0 | 12767 | 0.5341 | 0.7663 |
| 0.4248 | 18.0 | 13518 | 0.5294 | 0.7746 |
| 0.4694 | 19.0 | 14269 | 0.5215 | 0.7780 |
| 0.395 | 20.0 | 15020 | 0.5170 | 0.7880 |
| 0.3437 | 21.0 | 15771 | 0.5117 | 0.7880 |
| 0.4367 | 22.0 | 16522 | 0.5057 | 0.7947 |
| 0.3451 | 23.0 | 17273 | 0.5010 | 0.7930 |
| 0.4413 | 24.0 | 18024 | 0.4962 | 0.7930 |
| 0.3908 | 25.0 | 18775 | 0.4929 | 0.7930 |
| 0.4631 | 26.0 | 19526 | 0.4899 | 0.7930 |
| 0.3779 | 27.0 | 20277 | 0.4860 | 0.7930 |
| 0.4436 | 28.0 | 21028 | 0.4829 | 0.7963 |
| 0.3794 | 29.0 | 21779 | 0.4792 | 0.7997 |
| 0.3732 | 30.0 | 22530 | 0.4775 | 0.7963 |
| 0.3411 | 31.0 | 23281 | 0.4746 | 0.7980 |
| 0.4745 | 32.0 | 24032 | 0.4718 | 0.7980 |
| 0.4263 | 33.0 | 24783 | 0.4692 | 0.7997 |
| 0.3711 | 34.0 | 25534 | 0.4676 | 0.8030 |
| 0.3951 | 35.0 | 26285 | 0.4656 | 0.8047 |
| 0.4026 | 36.0 | 27036 | 0.4635 | 0.8047 |
| 0.4811 | 37.0 | 27787 | 0.4621 | 0.8063 |
| 0.3816 | 38.0 | 28538 | 0.4609 | 0.8063 |
| 0.2904 | 39.0 | 29289 | 0.4596 | 0.8047 |
| 0.4708 | 40.0 | 30040 | 0.4586 | 0.8097 |
| 0.3633 | 41.0 | 30791 | 0.4575 | 0.8080 |
| 0.367 | 42.0 | 31542 | 0.4565 | 0.8080 |
| 0.4048 | 43.0 | 32293 | 0.4557 | 0.8080 |
| 0.3531 | 44.0 | 33044 | 0.4549 | 0.8080 |
| 0.3608 | 45.0 | 33795 | 0.4542 | 0.8097 |
| 0.3794 | 46.0 | 34546 | 0.4538 | 0.8097 |
| 0.3429 | 47.0 | 35297 | 0.4534 | 0.8114 |
| 0.395 | 48.0 | 36048 | 0.4532 | 0.8114 |
| 0.3682 | 49.0 | 36799 | 0.4531 | 0.8114 |
| 0.3927 | 50.0 | 37550 | 0.4530 | 0.8114 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8572
- Accuracy: 0.6210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1945 | 1.0 | 751 | 1.0811 | 0.4240 |
| 1.055 | 2.0 | 1502 | 1.0655 | 0.4407 |
| 1.0679 | 3.0 | 2253 | 1.0517 | 0.4608 |
| 1.0096 | 4.0 | 3004 | 1.0390 | 0.4808 |
| 1.0116 | 5.0 | 3755 | 1.0270 | 0.4908 |
| 1.0321 | 6.0 | 4506 | 1.0157 | 0.4975 |
| 1.0032 | 7.0 | 5257 | 1.0051 | 0.5175 |
| 0.974 | 8.0 | 6008 | 0.9949 | 0.5225 |
| 1.0026 | 9.0 | 6759 | 0.9855 | 0.5275 |
| 0.9625 | 10.0 | 7510 | 0.9766 | 0.5309 |
| 1.0189 | 11.0 | 8261 | 0.9684 | 0.5359 |
| 0.9634 | 12.0 | 9012 | 0.9606 | 0.5459 |
| 0.9198 | 13.0 | 9763 | 0.9533 | 0.5492 |
| 0.8927 | 14.0 | 10514 | 0.9463 | 0.5543 |
| 0.913 | 15.0 | 11265 | 0.9398 | 0.5626 |
| 0.8696 | 16.0 | 12016 | 0.9337 | 0.5659 |
| 0.9094 | 17.0 | 12767 | 0.9279 | 0.5710 |
| 0.8979 | 18.0 | 13518 | 0.9225 | 0.5826 |
| 0.9196 | 19.0 | 14269 | 0.9174 | 0.5860 |
| 0.8982 | 20.0 | 15020 | 0.9126 | 0.5876 |
| 0.8408 | 21.0 | 15771 | 0.9081 | 0.5893 |
| 0.8801 | 22.0 | 16522 | 0.9038 | 0.5927 |
| 0.8134 | 23.0 | 17273 | 0.8998 | 0.5927 |
| 0.8902 | 24.0 | 18024 | 0.8960 | 0.5943 |
| 0.7916 | 25.0 | 18775 | 0.8925 | 0.5977 |
| 0.9125 | 26.0 | 19526 | 0.8892 | 0.5977 |
| 0.8433 | 27.0 | 20277 | 0.8861 | 0.5977 |
| 0.8267 | 28.0 | 21028 | 0.8831 | 0.6010 |
| 0.8301 | 29.0 | 21779 | 0.8804 | 0.6027 |
| 0.8483 | 30.0 | 22530 | 0.8778 | 0.6043 |
| 0.8001 | 31.0 | 23281 | 0.8755 | 0.6043 |
| 0.8847 | 32.0 | 24032 | 0.8733 | 0.6043 |
| 0.8518 | 33.0 | 24783 | 0.8712 | 0.6043 |
| 0.8101 | 34.0 | 25534 | 0.8693 | 0.6060 |
| 0.8163 | 35.0 | 26285 | 0.8676 | 0.6060 |
| 0.793 | 36.0 | 27036 | 0.8660 | 0.6110 |
| 0.8627 | 37.0 | 27787 | 0.8646 | 0.6127 |
| 0.845 | 38.0 | 28538 | 0.8633 | 0.6160 |
| 0.8148 | 39.0 | 29289 | 0.8622 | 0.6177 |
| 0.8547 | 40.0 | 30040 | 0.8611 | 0.6177 |
| 0.8088 | 41.0 | 30791 | 0.8603 | 0.6177 |
| 0.7818 | 42.0 | 31542 | 0.8595 | 0.6194 |
| 0.8609 | 43.0 | 32293 | 0.8589 | 0.6210 |
| 0.7903 | 44.0 | 33044 | 0.8583 | 0.6210 |
| 0.8473 | 45.0 | 33795 | 0.8579 | 0.6210 |
| 0.8128 | 46.0 | 34546 | 0.8576 | 0.6210 |
| 0.7919 | 47.0 | 35297 | 0.8574 | 0.6210 |
| 0.8252 | 48.0 | 36048 | 0.8573 | 0.6210 |
| 0.8055 | 49.0 | 36799 | 0.8572 | 0.6210 |
| 0.8385 | 50.0 | 37550 | 0.8572 | 0.6210 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4154
- Accuracy: 0.8386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0614 | 1.0 | 750 | 1.0774 | 0.4276 |
| 0.9244 | 2.0 | 1500 | 0.9849 | 0.5008 |
| 0.8719 | 3.0 | 2250 | 0.9059 | 0.5474 |
| 0.8364 | 4.0 | 3000 | 0.8357 | 0.6140 |
| 0.7154 | 5.0 | 3750 | 0.7698 | 0.6589 |
| 0.7009 | 6.0 | 4500 | 0.7160 | 0.7038 |
| 0.6226 | 7.0 | 5250 | 0.6676 | 0.7321 |
| 0.5568 | 8.0 | 6000 | 0.6320 | 0.7521 |
| 0.5746 | 9.0 | 6750 | 0.6030 | 0.7604 |
| 0.5421 | 10.0 | 7500 | 0.5805 | 0.7671 |
| 0.5134 | 11.0 | 8250 | 0.5611 | 0.7737 |
| 0.5557 | 12.0 | 9000 | 0.5444 | 0.7770 |
| 0.5053 | 13.0 | 9750 | 0.5297 | 0.7804 |
| 0.4226 | 14.0 | 10500 | 0.5183 | 0.7887 |
| 0.4645 | 15.0 | 11250 | 0.5092 | 0.7903 |
| 0.4059 | 16.0 | 12000 | 0.5013 | 0.7920 |
| 0.421 | 17.0 | 12750 | 0.4951 | 0.7987 |
| 0.4242 | 18.0 | 13500 | 0.4876 | 0.7970 |
| 0.4439 | 19.0 | 14250 | 0.4811 | 0.7970 |
| 0.4437 | 20.0 | 15000 | 0.4767 | 0.7987 |
| 0.4454 | 21.0 | 15750 | 0.4711 | 0.8037 |
| 0.3749 | 22.0 | 16500 | 0.4658 | 0.8037 |
| 0.3717 | 23.0 | 17250 | 0.4614 | 0.8053 |
| 0.3725 | 24.0 | 18000 | 0.4568 | 0.8053 |
| 0.4228 | 25.0 | 18750 | 0.4527 | 0.8136 |
| 0.4364 | 26.0 | 19500 | 0.4498 | 0.8103 |
| 0.4024 | 27.0 | 20250 | 0.4458 | 0.8203 |
| 0.3741 | 28.0 | 21000 | 0.4427 | 0.8220 |
| 0.38 | 29.0 | 21750 | 0.4402 | 0.8203 |
| 0.3796 | 30.0 | 22500 | 0.4372 | 0.8236 |
| 0.3538 | 31.0 | 23250 | 0.4351 | 0.8253 |
| 0.3869 | 32.0 | 24000 | 0.4332 | 0.8253 |
| 0.3759 | 33.0 | 24750 | 0.4310 | 0.8286 |
| 0.394 | 34.0 | 25500 | 0.4290 | 0.8270 |
| 0.3753 | 35.0 | 26250 | 0.4274 | 0.8270 |
| 0.4036 | 36.0 | 27000 | 0.4252 | 0.8303 |
| 0.3883 | 37.0 | 27750 | 0.4241 | 0.8336 |
| 0.3856 | 38.0 | 28500 | 0.4227 | 0.8336 |
| 0.3479 | 39.0 | 29250 | 0.4214 | 0.8336 |
| 0.4431 | 40.0 | 30000 | 0.4201 | 0.8336 |
| 0.391 | 41.0 | 30750 | 0.4193 | 0.8336 |
| 0.3751 | 42.0 | 31500 | 0.4184 | 0.8336 |
| 0.3523 | 43.0 | 32250 | 0.4178 | 0.8336 |
| 0.3279 | 44.0 | 33000 | 0.4171 | 0.8336 |
| 0.341 | 45.0 | 33750 | 0.4165 | 0.8353 |
| 0.3735 | 46.0 | 34500 | 0.4161 | 0.8353 |
| 0.3807 | 47.0 | 35250 | 0.4158 | 0.8353 |
| 0.373 | 48.0 | 36000 | 0.4155 | 0.8386 |
| 0.3296 | 49.0 | 36750 | 0.4154 | 0.8386 |
| 0.3593 | 50.0 | 37500 | 0.4154 | 0.8386 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9487
- Accuracy: 0.5075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3101 | 1.0 | 750 | 1.2862 | 0.3394 |
| 1.2651 | 2.0 | 1500 | 1.2316 | 0.3494 |
| 1.2534 | 3.0 | 2250 | 1.1915 | 0.3461 |
| 1.2051 | 4.0 | 3000 | 1.1627 | 0.3627 |
| 1.1704 | 5.0 | 3750 | 1.1422 | 0.3644 |
| 1.1559 | 6.0 | 4500 | 1.1266 | 0.3727 |
| 1.1314 | 7.0 | 5250 | 1.1140 | 0.3827 |
| 1.0917 | 8.0 | 6000 | 1.1032 | 0.3844 |
| 1.0985 | 9.0 | 6750 | 1.0936 | 0.4193 |
| 1.0734 | 10.0 | 7500 | 1.0847 | 0.4243 |
| 1.0397 | 11.0 | 8250 | 1.0764 | 0.4293 |
| 1.0584 | 12.0 | 9000 | 1.0685 | 0.4409 |
| 1.0692 | 13.0 | 9750 | 1.0611 | 0.4426 |
| 1.0127 | 14.0 | 10500 | 1.0540 | 0.4542 |
| 1.0605 | 15.0 | 11250 | 1.0471 | 0.4542 |
| 1.0197 | 16.0 | 12000 | 1.0406 | 0.4626 |
| 1.0472 | 17.0 | 12750 | 1.0344 | 0.4659 |
| 0.9868 | 18.0 | 13500 | 1.0285 | 0.4709 |
| 1.0498 | 19.0 | 14250 | 1.0228 | 0.4725 |
| 0.9916 | 20.0 | 15000 | 1.0174 | 0.4742 |
| 1.0032 | 21.0 | 15750 | 1.0122 | 0.4792 |
| 1.0262 | 22.0 | 16500 | 1.0073 | 0.4792 |
| 0.9732 | 23.0 | 17250 | 1.0026 | 0.4792 |
| 0.9627 | 24.0 | 18000 | 0.9981 | 0.4875 |
| 0.9933 | 25.0 | 18750 | 0.9939 | 0.4892 |
| 0.9645 | 26.0 | 19500 | 0.9898 | 0.4942 |
| 0.9413 | 27.0 | 20250 | 0.9860 | 0.4942 |
| 0.9502 | 28.0 | 21000 | 0.9824 | 0.4925 |
| 0.9622 | 29.0 | 21750 | 0.9790 | 0.4958 |
| 0.9399 | 30.0 | 22500 | 0.9758 | 0.4975 |
| 0.9259 | 31.0 | 23250 | 0.9728 | 0.4992 |
| 0.9425 | 32.0 | 24000 | 0.9700 | 0.5008 |
| 0.9657 | 33.0 | 24750 | 0.9673 | 0.5042 |
| 0.9537 | 34.0 | 25500 | 0.9649 | 0.5058 |
| 0.9361 | 35.0 | 26250 | 0.9627 | 0.5075 |
| 0.934 | 36.0 | 27000 | 0.9606 | 0.5092 |
| 0.927 | 37.0 | 27750 | 0.9587 | 0.5092 |
| 0.9435 | 38.0 | 28500 | 0.9570 | 0.5092 |
| 0.9139 | 39.0 | 29250 | 0.9555 | 0.5092 |
| 0.9394 | 40.0 | 30000 | 0.9541 | 0.5075 |
| 0.9635 | 41.0 | 30750 | 0.9529 | 0.5075 |
| 0.9447 | 42.0 | 31500 | 0.9519 | 0.5075 |
| 0.9124 | 43.0 | 32250 | 0.9510 | 0.5075 |
| 0.9404 | 44.0 | 33000 | 0.9503 | 0.5075 |
| 0.9374 | 45.0 | 33750 | 0.9497 | 0.5075 |
| 0.9103 | 46.0 | 34500 | 0.9493 | 0.5075 |
| 0.9609 | 47.0 | 35250 | 0.9490 | 0.5075 |
| 0.9309 | 48.0 | 36000 | 0.9488 | 0.5075 |
| 0.9307 | 49.0 | 36750 | 0.9487 | 0.5075 |
| 0.9119 | 50.0 | 37500 | 0.9487 | 0.5075 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3938
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5453 | 1.0 | 750 | 0.5623 | 0.7633 |
| 0.3882 | 2.0 | 1500 | 0.4483 | 0.8183 |
| 0.3799 | 3.0 | 2250 | 0.4088 | 0.8317 |
| 0.3643 | 4.0 | 3000 | 0.3893 | 0.8383 |
| 0.2628 | 5.0 | 3750 | 0.3770 | 0.8467 |
| 0.2344 | 6.0 | 4500 | 0.3757 | 0.8467 |
| 0.2158 | 7.0 | 5250 | 0.3640 | 0.8583 |
| 0.2518 | 8.0 | 6000 | 0.3700 | 0.86 |
| 0.2784 | 9.0 | 6750 | 0.3645 | 0.8617 |
| 0.2124 | 10.0 | 7500 | 0.3619 | 0.86 |
| 0.2508 | 11.0 | 8250 | 0.3628 | 0.8583 |
| 0.2963 | 12.0 | 9000 | 0.3717 | 0.86 |
| 0.2464 | 13.0 | 9750 | 0.3675 | 0.86 |
| 0.2153 | 14.0 | 10500 | 0.3661 | 0.8633 |
| 0.1783 | 15.0 | 11250 | 0.3637 | 0.8633 |
| 0.1889 | 16.0 | 12000 | 0.3675 | 0.865 |
| 0.1615 | 17.0 | 12750 | 0.3615 | 0.8633 |
| 0.1602 | 18.0 | 13500 | 0.3665 | 0.8683 |
| 0.2382 | 19.0 | 14250 | 0.3640 | 0.8633 |
| 0.1431 | 20.0 | 15000 | 0.3640 | 0.8667 |
| 0.1246 | 21.0 | 15750 | 0.3698 | 0.865 |
| 0.1642 | 22.0 | 16500 | 0.3698 | 0.8617 |
| 0.1435 | 23.0 | 17250 | 0.3719 | 0.8617 |
| 0.184 | 24.0 | 18000 | 0.3745 | 0.865 |
| 0.1543 | 25.0 | 18750 | 0.3749 | 0.8617 |
| 0.1463 | 26.0 | 19500 | 0.3762 | 0.8633 |
| 0.1225 | 27.0 | 20250 | 0.3737 | 0.8667 |
| 0.1542 | 28.0 | 21000 | 0.3785 | 0.865 |
| 0.1065 | 29.0 | 21750 | 0.3788 | 0.87 |
| 0.1351 | 30.0 | 22500 | 0.3799 | 0.8667 |
| 0.1281 | 31.0 | 23250 | 0.3825 | 0.8667 |
| 0.1337 | 32.0 | 24000 | 0.3866 | 0.8633 |
| 0.1066 | 33.0 | 24750 | 0.3848 | 0.8667 |
| 0.1503 | 34.0 | 25500 | 0.3856 | 0.87 |
| 0.0933 | 35.0 | 26250 | 0.3837 | 0.8717 |
| 0.1119 | 36.0 | 27000 | 0.3871 | 0.87 |
| 0.0916 | 37.0 | 27750 | 0.3845 | 0.87 |
| 0.1419 | 38.0 | 28500 | 0.3888 | 0.8683 |
| 0.1831 | 39.0 | 29250 | 0.3865 | 0.87 |
| 0.1443 | 40.0 | 30000 | 0.3886 | 0.8683 |
| 0.1089 | 41.0 | 30750 | 0.3938 | 0.865 |
| 0.0931 | 42.0 | 31500 | 0.3903 | 0.8683 |
| 0.1349 | 43.0 | 32250 | 0.3917 | 0.8683 |
| 0.1005 | 44.0 | 33000 | 0.3917 | 0.8667 |
| 0.12 | 45.0 | 33750 | 0.3918 | 0.8667 |
| 0.1354 | 46.0 | 34500 | 0.3924 | 0.8667 |
| 0.0817 | 47.0 | 35250 | 0.3922 | 0.8667 |
| 0.0828 | 48.0 | 36000 | 0.3931 | 0.8667 |
| 0.0941 | 49.0 | 36750 | 0.3938 | 0.8667 |
| 0.0837 | 50.0 | 37500 | 0.3938 | 0.8667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3822
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0766 | 1.0 | 750 | 1.0931 | 0.4283 |
| 0.9667 | 2.0 | 1500 | 1.0054 | 0.5083 |
| 0.9193 | 3.0 | 2250 | 0.9314 | 0.5533 |
| 0.7955 | 4.0 | 3000 | 0.8645 | 0.595 |
| 0.78 | 5.0 | 3750 | 0.7998 | 0.6283 |
| 0.6828 | 6.0 | 4500 | 0.7419 | 0.6833 |
| 0.6256 | 7.0 | 5250 | 0.6910 | 0.715 |
| 0.6533 | 8.0 | 6000 | 0.6501 | 0.7417 |
| 0.5725 | 9.0 | 6750 | 0.6153 | 0.76 |
| 0.5585 | 10.0 | 7500 | 0.5875 | 0.7733 |
| 0.5119 | 11.0 | 8250 | 0.5647 | 0.7917 |
| 0.4751 | 12.0 | 9000 | 0.5443 | 0.8033 |
| 0.4748 | 13.0 | 9750 | 0.5264 | 0.815 |
| 0.543 | 14.0 | 10500 | 0.5129 | 0.8167 |
| 0.4792 | 15.0 | 11250 | 0.4995 | 0.8233 |
| 0.4638 | 16.0 | 12000 | 0.4900 | 0.8267 |
| 0.4373 | 17.0 | 12750 | 0.4791 | 0.8267 |
| 0.4624 | 18.0 | 13500 | 0.4695 | 0.8333 |
| 0.4581 | 19.0 | 14250 | 0.4625 | 0.8317 |
| 0.4031 | 20.0 | 15000 | 0.4549 | 0.8283 |
| 0.3798 | 21.0 | 15750 | 0.4466 | 0.8367 |
| 0.4127 | 22.0 | 16500 | 0.4410 | 0.84 |
| 0.4351 | 23.0 | 17250 | 0.4350 | 0.8433 |
| 0.4464 | 24.0 | 18000 | 0.4308 | 0.835 |
| 0.4334 | 25.0 | 18750 | 0.4260 | 0.8333 |
| 0.4382 | 26.0 | 19500 | 0.4212 | 0.8367 |
| 0.3088 | 27.0 | 20250 | 0.4169 | 0.8367 |
| 0.3982 | 28.0 | 21000 | 0.4136 | 0.8367 |
| 0.4061 | 29.0 | 21750 | 0.4101 | 0.8367 |
| 0.4007 | 30.0 | 22500 | 0.4079 | 0.84 |
| 0.3333 | 31.0 | 23250 | 0.4046 | 0.8417 |
| 0.3804 | 32.0 | 24000 | 0.4012 | 0.84 |
| 0.4007 | 33.0 | 24750 | 0.3989 | 0.8417 |
| 0.4048 | 34.0 | 25500 | 0.3970 | 0.8417 |
| 0.3319 | 35.0 | 26250 | 0.3948 | 0.8433 |
| 0.3736 | 36.0 | 27000 | 0.3932 | 0.8467 |
| 0.3994 | 37.0 | 27750 | 0.3918 | 0.8483 |
| 0.3998 | 38.0 | 28500 | 0.3900 | 0.8483 |
| 0.3526 | 39.0 | 29250 | 0.3888 | 0.85 |
| 0.4438 | 40.0 | 30000 | 0.3872 | 0.8483 |
| 0.3369 | 41.0 | 30750 | 0.3860 | 0.8517 |
| 0.3716 | 42.0 | 31500 | 0.3855 | 0.8517 |
| 0.3661 | 43.0 | 32250 | 0.3844 | 0.8517 |
| 0.3454 | 44.0 | 33000 | 0.3840 | 0.8517 |
| 0.3872 | 45.0 | 33750 | 0.3835 | 0.8517 |
| 0.3238 | 46.0 | 34500 | 0.3830 | 0.8517 |
| 0.404 | 47.0 | 35250 | 0.3826 | 0.8517 |
| 0.3607 | 48.0 | 36000 | 0.3823 | 0.8517 |
| 0.3506 | 49.0 | 36750 | 0.3823 | 0.8517 |
| 0.3442 | 50.0 | 37500 | 0.3822 | 0.85 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9716
- Accuracy: 0.5233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3912 | 1.0 | 750 | 1.3088 | 0.3483 |
| 1.282 | 2.0 | 1500 | 1.2523 | 0.3583 |
| 1.2674 | 3.0 | 2250 | 1.2106 | 0.375 |
| 1.2413 | 4.0 | 3000 | 1.1803 | 0.3833 |
| 1.1441 | 5.0 | 3750 | 1.1586 | 0.39 |
| 1.1769 | 6.0 | 4500 | 1.1422 | 0.3967 |
| 1.1041 | 7.0 | 5250 | 1.1291 | 0.395 |
| 1.1052 | 8.0 | 6000 | 1.1182 | 0.4083 |
| 1.1033 | 9.0 | 6750 | 1.1088 | 0.4133 |
| 1.0691 | 10.0 | 7500 | 1.1002 | 0.42 |
| 1.0707 | 11.0 | 8250 | 1.0922 | 0.4333 |
| 1.055 | 12.0 | 9000 | 1.0847 | 0.4417 |
| 1.0312 | 13.0 | 9750 | 1.0777 | 0.4433 |
| 1.0496 | 14.0 | 10500 | 1.0710 | 0.4417 |
| 1.0476 | 15.0 | 11250 | 1.0646 | 0.45 |
| 1.0102 | 16.0 | 12000 | 1.0585 | 0.455 |
| 0.9929 | 17.0 | 12750 | 1.0527 | 0.4617 |
| 1.009 | 18.0 | 13500 | 1.0471 | 0.47 |
| 0.9872 | 19.0 | 14250 | 1.0418 | 0.4767 |
| 0.9829 | 20.0 | 15000 | 1.0367 | 0.485 |
| 0.9531 | 21.0 | 15750 | 1.0318 | 0.4867 |
| 0.9943 | 22.0 | 16500 | 1.0272 | 0.4917 |
| 0.9746 | 23.0 | 17250 | 1.0227 | 0.4933 |
| 0.9878 | 24.0 | 18000 | 1.0185 | 0.4933 |
| 0.9395 | 25.0 | 18750 | 1.0145 | 0.5017 |
| 0.9881 | 26.0 | 19500 | 1.0107 | 0.505 |
| 0.9509 | 27.0 | 20250 | 1.0071 | 0.51 |
| 0.9714 | 28.0 | 21000 | 1.0038 | 0.5117 |
| 0.9762 | 29.0 | 21750 | 1.0005 | 0.5133 |
| 0.9566 | 30.0 | 22500 | 0.9975 | 0.5133 |
| 0.9136 | 31.0 | 23250 | 0.9946 | 0.5133 |
| 0.9478 | 32.0 | 24000 | 0.9920 | 0.5167 |
| 0.9511 | 33.0 | 24750 | 0.9895 | 0.5167 |
| 0.9608 | 34.0 | 25500 | 0.9872 | 0.5183 |
| 0.9345 | 35.0 | 26250 | 0.9850 | 0.5183 |
| 0.941 | 36.0 | 27000 | 0.9831 | 0.5217 |
| 0.9373 | 37.0 | 27750 | 0.9813 | 0.5217 |
| 0.9836 | 38.0 | 28500 | 0.9796 | 0.5233 |
| 0.9294 | 39.0 | 29250 | 0.9782 | 0.5217 |
| 0.9394 | 40.0 | 30000 | 0.9769 | 0.5233 |
| 0.9196 | 41.0 | 30750 | 0.9757 | 0.5233 |
| 0.941 | 42.0 | 31500 | 0.9747 | 0.5233 |
| 0.9468 | 43.0 | 32250 | 0.9739 | 0.5233 |
| 0.9204 | 44.0 | 33000 | 0.9732 | 0.5233 |
| 0.9194 | 45.0 | 33750 | 0.9727 | 0.5233 |
| 0.9493 | 46.0 | 34500 | 0.9722 | 0.5233 |
| 0.9121 | 47.0 | 35250 | 0.9719 | 0.5233 |
| 0.9108 | 48.0 | 36000 | 0.9717 | 0.5233 |
| 0.8965 | 49.0 | 36750 | 0.9716 | 0.5233 |
| 0.896 | 50.0 | 37500 | 0.9716 | 0.5233 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
Sai1212/swin-finetuned-food101
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-finetuned-food101
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7046
- Accuracy: 0.4167
- F1: 0.5882
- Precision: 0.4167
- Recall: 1.0
- Auc: 0.5742
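For reference, these numbers are internally consistent: F1 = 2 x Precision x Recall / (Precision + Recall) = 2 x 0.4167 x 1.0 / 1.4167 ≈ 0.5882, and a recall of 1.0 with precision equal to accuracy is what one would see if the model predicted the positive class for every example.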
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.6978 | 1.0 | 14 | 0.6847 | 0.5833 | 0.0 | 0.0 | 0.0 | 0.5717 |
| 0.7025 | 2.0 | 28 | 0.7120 | 0.4167 | 0.5882 | 0.4167 | 1.0 | 0.5570 |
| 0.6946 | 3.0 | 42 | 0.6955 | 0.4167 | 0.5882 | 0.4167 | 1.0 | 0.5662 |
| 0.6935 | 4.0 | 56 | 0.7047 | 0.4167 | 0.5882 | 0.4167 | 1.0 | 0.5644 |
| 0.6935 | 5.0 | 70 | 0.7046 | 0.4167 | 0.5882 | 0.4167 | 1.0 | 0.5742 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"mi",
"non_mi"
] |
MattyB95/VIT-ASVspoof2019-MFCC-Synthetic-Voice-Detection
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VIT-ASVspoof2019-MFCC-Synthetic-Voice-Detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1213
- Accuracy: 0.9804
- F1: 0.9892
- Precision: 0.9788
- Recall: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0283 | 1.0 | 3173 | 0.0958 | 0.9797 | 0.9888 | 0.9782 | 0.9996 |
| 0.0227 | 2.0 | 6346 | 0.0597 | 0.9874 | 0.9930 | 0.9890 | 0.9971 |
| 0.0036 | 3.0 | 9519 | 0.1213 | 0.9804 | 0.9892 | 0.9788 | 0.9999 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"bonafide",
"spoof"
] |
hkivancoral/smids_10x_deit_tiny_sgd_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4161
- Accuracy: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0688 | 1.0 | 750 | 1.0895 | 0.3983 |
| 0.9867 | 2.0 | 1500 | 0.9975 | 0.4833 |
| 0.9278 | 3.0 | 2250 | 0.9175 | 0.54 |
| 0.8528 | 4.0 | 3000 | 0.8474 | 0.5783 |
| 0.714 | 5.0 | 3750 | 0.7808 | 0.635 |
| 0.6866 | 6.0 | 4500 | 0.7206 | 0.6917 |
| 0.6101 | 7.0 | 5250 | 0.6718 | 0.7167 |
| 0.6452 | 8.0 | 6000 | 0.6299 | 0.7333 |
| 0.6113 | 9.0 | 6750 | 0.5989 | 0.7467 |
| 0.5055 | 10.0 | 7500 | 0.5743 | 0.7633 |
| 0.4983 | 11.0 | 8250 | 0.5553 | 0.7717 |
| 0.5538 | 12.0 | 9000 | 0.5377 | 0.7733 |
| 0.4959 | 13.0 | 9750 | 0.5236 | 0.7833 |
| 0.4737 | 14.0 | 10500 | 0.5129 | 0.7867 |
| 0.4376 | 15.0 | 11250 | 0.5024 | 0.7967 |
| 0.3926 | 16.0 | 12000 | 0.4941 | 0.8033 |
| 0.397 | 17.0 | 12750 | 0.4866 | 0.805 |
| 0.4304 | 18.0 | 13500 | 0.4793 | 0.8067 |
| 0.4526 | 19.0 | 14250 | 0.4737 | 0.81 |
| 0.4267 | 20.0 | 15000 | 0.4680 | 0.81 |
| 0.3746 | 21.0 | 15750 | 0.4626 | 0.8183 |
| 0.4237 | 22.0 | 16500 | 0.4581 | 0.815 |
| 0.4022 | 23.0 | 17250 | 0.4540 | 0.82 |
| 0.465 | 24.0 | 18000 | 0.4503 | 0.8233 |
| 0.3585 | 25.0 | 18750 | 0.4464 | 0.8267 |
| 0.3671 | 26.0 | 19500 | 0.4431 | 0.8267 |
| 0.3889 | 27.0 | 20250 | 0.4400 | 0.8283 |
| 0.3836 | 28.0 | 21000 | 0.4372 | 0.83 |
| 0.3751 | 29.0 | 21750 | 0.4351 | 0.83 |
| 0.3772 | 30.0 | 22500 | 0.4334 | 0.8333 |
| 0.3959 | 31.0 | 23250 | 0.4312 | 0.8333 |
| 0.3701 | 32.0 | 24000 | 0.4290 | 0.8317 |
| 0.3441 | 33.0 | 24750 | 0.4274 | 0.8317 |
| 0.371 | 34.0 | 25500 | 0.4262 | 0.8333 |
| 0.327 | 35.0 | 26250 | 0.4246 | 0.8333 |
| 0.3799 | 36.0 | 27000 | 0.4233 | 0.8367 |
| 0.3186 | 37.0 | 27750 | 0.4226 | 0.835 |
| 0.3955 | 38.0 | 28500 | 0.4215 | 0.835 |
| 0.4171 | 39.0 | 29250 | 0.4206 | 0.835 |
| 0.4116 | 40.0 | 30000 | 0.4196 | 0.8367 |
| 0.369 | 41.0 | 30750 | 0.4189 | 0.8383 |
| 0.3461 | 42.0 | 31500 | 0.4184 | 0.8383 |
| 0.3837 | 43.0 | 32250 | 0.4178 | 0.8417 |
| 0.3565 | 44.0 | 33000 | 0.4174 | 0.8417 |
| 0.3745 | 45.0 | 33750 | 0.4170 | 0.8417 |
| 0.3413 | 46.0 | 34500 | 0.4167 | 0.8417 |
| 0.301 | 47.0 | 35250 | 0.4164 | 0.8417 |
| 0.3105 | 48.0 | 36000 | 0.4162 | 0.8417 |
| 0.3511 | 49.0 | 36750 | 0.4161 | 0.8417 |
| 0.3221 | 50.0 | 37500 | 0.4161 | 0.8417 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9601
- Accuracy: 0.51
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4156 | 1.0 | 750 | 1.2978 | 0.34 |
| 1.3315 | 2.0 | 1500 | 1.2425 | 0.3483 |
| 1.2993 | 3.0 | 2250 | 1.2021 | 0.37 |
| 1.2642 | 4.0 | 3000 | 1.1733 | 0.3717 |
| 1.1084 | 5.0 | 3750 | 1.1526 | 0.375 |
| 1.1915 | 6.0 | 4500 | 1.1373 | 0.3733 |
| 1.1121 | 7.0 | 5250 | 1.1248 | 0.3817 |
| 1.1023 | 8.0 | 6000 | 1.1144 | 0.39 |
| 1.0611 | 9.0 | 6750 | 1.1051 | 0.3867 |
| 1.0698 | 10.0 | 7500 | 1.0965 | 0.39 |
| 1.0512 | 11.0 | 8250 | 1.0884 | 0.4017 |
| 1.0962 | 12.0 | 9000 | 1.0808 | 0.405 |
| 1.0873 | 13.0 | 9750 | 1.0735 | 0.4117 |
| 1.0536 | 14.0 | 10500 | 1.0664 | 0.4183 |
| 1.0525 | 15.0 | 11250 | 1.0596 | 0.4283 |
| 1.026 | 16.0 | 12000 | 1.0532 | 0.4317 |
| 1.0131 | 17.0 | 12750 | 1.0470 | 0.44 |
| 0.9786 | 18.0 | 13500 | 1.0410 | 0.4433 |
| 0.9869 | 19.0 | 14250 | 1.0353 | 0.4467 |
| 0.9996 | 20.0 | 15000 | 1.0299 | 0.4517 |
| 1.0078 | 21.0 | 15750 | 1.0247 | 0.4533 |
| 0.9709 | 22.0 | 16500 | 1.0197 | 0.4617 |
| 1.009 | 23.0 | 17250 | 1.0149 | 0.4633 |
| 1.0068 | 24.0 | 18000 | 1.0104 | 0.4633 |
| 0.9737 | 25.0 | 18750 | 1.0061 | 0.47 |
| 0.9634 | 26.0 | 19500 | 1.0021 | 0.4767 |
| 0.9648 | 27.0 | 20250 | 0.9982 | 0.4783 |
| 0.931 | 28.0 | 21000 | 0.9946 | 0.485 |
| 0.993 | 29.0 | 21750 | 0.9911 | 0.4867 |
| 0.9852 | 30.0 | 22500 | 0.9879 | 0.49 |
| 0.9579 | 31.0 | 23250 | 0.9848 | 0.49 |
| 0.9747 | 32.0 | 24000 | 0.9819 | 0.4933 |
| 0.9501 | 33.0 | 24750 | 0.9793 | 0.5017 |
| 0.9432 | 34.0 | 25500 | 0.9768 | 0.5033 |
| 0.9384 | 35.0 | 26250 | 0.9745 | 0.505 |
| 0.9356 | 36.0 | 27000 | 0.9724 | 0.505 |
| 0.9023 | 37.0 | 27750 | 0.9705 | 0.5067 |
| 0.9257 | 38.0 | 28500 | 0.9687 | 0.5083 |
| 0.9635 | 39.0 | 29250 | 0.9672 | 0.5083 |
| 0.9335 | 40.0 | 30000 | 0.9658 | 0.51 |
| 0.8943 | 41.0 | 30750 | 0.9645 | 0.51 |
| 0.9485 | 42.0 | 31500 | 0.9635 | 0.51 |
| 0.976 | 43.0 | 32250 | 0.9626 | 0.51 |
| 0.9386 | 44.0 | 33000 | 0.9619 | 0.51 |
| 0.9526 | 45.0 | 33750 | 0.9613 | 0.51 |
| 0.9016 | 46.0 | 34500 | 0.9608 | 0.51 |
| 0.9008 | 47.0 | 35250 | 0.9605 | 0.51 |
| 0.9525 | 48.0 | 36000 | 0.9603 | 0.51 |
| 0.8965 | 49.0 | 36750 | 0.9602 | 0.51 |
| 0.8897 | 50.0 | 37500 | 0.9601 | 0.51 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3767
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.081 | 1.0 | 750 | 1.0837 | 0.4183 |
| 0.9664 | 2.0 | 1500 | 0.9960 | 0.4867 |
| 0.9109 | 3.0 | 2250 | 0.9172 | 0.5567 |
| 0.8062 | 4.0 | 3000 | 0.8454 | 0.6017 |
| 0.7721 | 5.0 | 3750 | 0.7755 | 0.6483 |
| 0.6595 | 6.0 | 4500 | 0.7101 | 0.6883 |
| 0.5526 | 7.0 | 5250 | 0.6552 | 0.715 |
| 0.6103 | 8.0 | 6000 | 0.6092 | 0.7533 |
| 0.6584 | 9.0 | 6750 | 0.5725 | 0.7633 |
| 0.5611 | 10.0 | 7500 | 0.5452 | 0.7983 |
| 0.5536 | 11.0 | 8250 | 0.5225 | 0.8067 |
| 0.522 | 12.0 | 9000 | 0.5053 | 0.8117 |
| 0.5314 | 13.0 | 9750 | 0.4913 | 0.8167 |
| 0.4936 | 14.0 | 10500 | 0.4799 | 0.8267 |
| 0.5195 | 15.0 | 11250 | 0.4692 | 0.8283 |
| 0.4075 | 16.0 | 12000 | 0.4606 | 0.8283 |
| 0.4566 | 17.0 | 12750 | 0.4529 | 0.8317 |
| 0.4172 | 18.0 | 13500 | 0.4445 | 0.8367 |
| 0.4556 | 19.0 | 14250 | 0.4390 | 0.83 |
| 0.4667 | 20.0 | 15000 | 0.4334 | 0.8333 |
| 0.3932 | 21.0 | 15750 | 0.4273 | 0.8333 |
| 0.4625 | 22.0 | 16500 | 0.4229 | 0.8367 |
| 0.418 | 23.0 | 17250 | 0.4180 | 0.8383 |
| 0.3957 | 24.0 | 18000 | 0.4143 | 0.8367 |
| 0.4114 | 25.0 | 18750 | 0.4106 | 0.8367 |
| 0.4039 | 26.0 | 19500 | 0.4070 | 0.8367 |
| 0.3652 | 27.0 | 20250 | 0.4039 | 0.84 |
| 0.3862 | 28.0 | 21000 | 0.4011 | 0.84 |
| 0.4364 | 29.0 | 21750 | 0.3984 | 0.84 |
| 0.3781 | 30.0 | 22500 | 0.3966 | 0.8417 |
| 0.3636 | 31.0 | 23250 | 0.3941 | 0.8383 |
| 0.3588 | 32.0 | 24000 | 0.3920 | 0.84 |
| 0.4007 | 33.0 | 24750 | 0.3903 | 0.8417 |
| 0.3328 | 34.0 | 25500 | 0.3888 | 0.84 |
| 0.3699 | 35.0 | 26250 | 0.3865 | 0.8417 |
| 0.3686 | 36.0 | 27000 | 0.3852 | 0.8433 |
| 0.315 | 37.0 | 27750 | 0.3840 | 0.8467 |
| 0.3799 | 38.0 | 28500 | 0.3828 | 0.8467 |
| 0.3659 | 39.0 | 29250 | 0.3817 | 0.8467 |
| 0.3715 | 40.0 | 30000 | 0.3806 | 0.8467 |
| 0.3582 | 41.0 | 30750 | 0.3798 | 0.8467 |
| 0.4093 | 42.0 | 31500 | 0.3792 | 0.8483 |
| 0.3651 | 43.0 | 32250 | 0.3786 | 0.8483 |
| 0.3713 | 44.0 | 33000 | 0.3782 | 0.85 |
| 0.3675 | 45.0 | 33750 | 0.3776 | 0.85 |
| 0.3336 | 46.0 | 34500 | 0.3773 | 0.85 |
| 0.4464 | 47.0 | 35250 | 0.3769 | 0.85 |
| 0.3703 | 48.0 | 36000 | 0.3768 | 0.85 |
| 0.366 | 49.0 | 36750 | 0.3767 | 0.85 |
| 0.311 | 50.0 | 37500 | 0.3767 | 0.85 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9599
- Accuracy: 0.5233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3812 | 1.0 | 750 | 1.2940 | 0.36 |
| 1.3175 | 2.0 | 1500 | 1.2389 | 0.3683 |
| 1.2691 | 3.0 | 2250 | 1.1984 | 0.3733 |
| 1.221 | 4.0 | 3000 | 1.1690 | 0.385 |
| 1.1712 | 5.0 | 3750 | 1.1479 | 0.3967 |
| 1.1485 | 6.0 | 4500 | 1.1321 | 0.4067 |
| 1.0967 | 7.0 | 5250 | 1.1194 | 0.3917 |
| 1.0947 | 8.0 | 6000 | 1.1088 | 0.4017 |
| 1.1331 | 9.0 | 6750 | 1.0995 | 0.405 |
| 1.0758 | 10.0 | 7500 | 1.0911 | 0.4167 |
| 1.0859 | 11.0 | 8250 | 1.0832 | 0.4117 |
| 1.074 | 12.0 | 9000 | 1.0758 | 0.4217 |
| 1.0354 | 13.0 | 9750 | 1.0688 | 0.4233 |
| 1.0611 | 14.0 | 10500 | 1.0620 | 0.4217 |
| 1.0504 | 15.0 | 11250 | 1.0556 | 0.425 |
| 1.0195 | 16.0 | 12000 | 1.0495 | 0.4367 |
| 1.0374 | 17.0 | 12750 | 1.0437 | 0.4383 |
| 1.0062 | 18.0 | 13500 | 1.0380 | 0.4433 |
| 1.0602 | 19.0 | 14250 | 1.0326 | 0.45 |
| 1.024 | 20.0 | 15000 | 1.0275 | 0.4533 |
| 0.9853 | 21.0 | 15750 | 1.0225 | 0.4567 |
| 1.024 | 22.0 | 16500 | 1.0178 | 0.46 |
| 1.0062 | 23.0 | 17250 | 1.0132 | 0.4617 |
| 0.9775 | 24.0 | 18000 | 1.0089 | 0.4683 |
| 0.9615 | 25.0 | 18750 | 1.0048 | 0.4733 |
| 0.9865 | 26.0 | 19500 | 1.0008 | 0.4783 |
| 0.9677 | 27.0 | 20250 | 0.9971 | 0.4867 |
| 0.9698 | 28.0 | 21000 | 0.9935 | 0.4867 |
| 0.9829 | 29.0 | 21750 | 0.9901 | 0.49 |
| 0.9556 | 30.0 | 22500 | 0.9870 | 0.49 |
| 0.963 | 31.0 | 23250 | 0.9840 | 0.4917 |
| 0.9489 | 32.0 | 24000 | 0.9813 | 0.495 |
| 0.9694 | 33.0 | 24750 | 0.9787 | 0.4967 |
| 0.9392 | 34.0 | 25500 | 0.9762 | 0.4967 |
| 0.9586 | 35.0 | 26250 | 0.9740 | 0.5 |
| 0.9291 | 36.0 | 27000 | 0.9720 | 0.5083 |
| 0.9064 | 37.0 | 27750 | 0.9701 | 0.5117 |
| 0.9352 | 38.0 | 28500 | 0.9684 | 0.5117 |
| 0.9164 | 39.0 | 29250 | 0.9668 | 0.5133 |
| 0.9501 | 40.0 | 30000 | 0.9654 | 0.515 |
| 0.8967 | 41.0 | 30750 | 0.9642 | 0.5167 |
| 0.9489 | 42.0 | 31500 | 0.9632 | 0.5167 |
| 0.9594 | 43.0 | 32250 | 0.9623 | 0.52 |
| 0.9042 | 44.0 | 33000 | 0.9616 | 0.5217 |
| 0.9218 | 45.0 | 33750 | 0.9610 | 0.5217 |
| 0.9234 | 46.0 | 34500 | 0.9605 | 0.5217 |
| 0.9392 | 47.0 | 35250 | 0.9602 | 0.5217 |
| 0.9497 | 48.0 | 36000 | 0.9600 | 0.525 |
| 0.9139 | 49.0 | 36750 | 0.9599 | 0.5233 |
| 0.8915 | 50.0 | 37500 | 0.9599 | 0.5233 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_sgd_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2805
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5873 | 1.0 | 750 | 0.5427 | 0.8017 |
| 0.4134 | 2.0 | 1500 | 0.4078 | 0.8383 |
| 0.4003 | 3.0 | 2250 | 0.3567 | 0.8583 |
| 0.322 | 4.0 | 3000 | 0.3309 | 0.8733 |
| 0.3592 | 5.0 | 3750 | 0.3090 | 0.8767 |
| 0.2384 | 6.0 | 4500 | 0.3021 | 0.8717 |
| 0.2287 | 7.0 | 5250 | 0.2872 | 0.8833 |
| 0.2763 | 8.0 | 6000 | 0.2770 | 0.8883 |
| 0.301 | 9.0 | 6750 | 0.2801 | 0.89 |
| 0.2498 | 10.0 | 7500 | 0.2717 | 0.8933 |
| 0.2639 | 11.0 | 8250 | 0.2693 | 0.8967 |
| 0.2576 | 12.0 | 9000 | 0.2726 | 0.8967 |
| 0.2998 | 13.0 | 9750 | 0.2655 | 0.905 |
| 0.2222 | 14.0 | 10500 | 0.2676 | 0.8933 |
| 0.2757 | 15.0 | 11250 | 0.2607 | 0.8933 |
| 0.1644 | 16.0 | 12000 | 0.2662 | 0.91 |
| 0.2069 | 17.0 | 12750 | 0.2656 | 0.9033 |
| 0.2175 | 18.0 | 13500 | 0.2618 | 0.9067 |
| 0.2174 | 19.0 | 14250 | 0.2668 | 0.9 |
| 0.1626 | 20.0 | 15000 | 0.2708 | 0.8983 |
| 0.1772 | 21.0 | 15750 | 0.2632 | 0.9017 |
| 0.1739 | 22.0 | 16500 | 0.2644 | 0.9017 |
| 0.2129 | 23.0 | 17250 | 0.2644 | 0.8983 |
| 0.1768 | 24.0 | 18000 | 0.2642 | 0.8983 |
| 0.1436 | 25.0 | 18750 | 0.2692 | 0.8933 |
| 0.1864 | 26.0 | 19500 | 0.2647 | 0.8983 |
| 0.13 | 27.0 | 20250 | 0.2627 | 0.8967 |
| 0.1786 | 28.0 | 21000 | 0.2674 | 0.8967 |
| 0.1885 | 29.0 | 21750 | 0.2653 | 0.895 |
| 0.1896 | 30.0 | 22500 | 0.2757 | 0.8867 |
| 0.1887 | 31.0 | 23250 | 0.2629 | 0.8983 |
| 0.1377 | 32.0 | 24000 | 0.2703 | 0.89 |
| 0.1805 | 33.0 | 24750 | 0.2693 | 0.8917 |
| 0.1524 | 34.0 | 25500 | 0.2706 | 0.89 |
| 0.1113 | 35.0 | 26250 | 0.2737 | 0.8883 |
| 0.153 | 36.0 | 27000 | 0.2742 | 0.8867 |
| 0.1281 | 37.0 | 27750 | 0.2787 | 0.8817 |
| 0.112 | 38.0 | 28500 | 0.2764 | 0.885 |
| 0.1149 | 39.0 | 29250 | 0.2767 | 0.885 |
| 0.136 | 40.0 | 30000 | 0.2752 | 0.8833 |
| 0.1297 | 41.0 | 30750 | 0.2749 | 0.8867 |
| 0.1614 | 42.0 | 31500 | 0.2776 | 0.8833 |
| 0.1176 | 43.0 | 32250 | 0.2769 | 0.8817 |
| 0.1355 | 44.0 | 33000 | 0.2814 | 0.8817 |
| 0.1418 | 45.0 | 33750 | 0.2806 | 0.8833 |
| 0.1165 | 46.0 | 34500 | 0.2801 | 0.8817 |
| 0.1556 | 47.0 | 35250 | 0.2815 | 0.88 |
| 0.1322 | 48.0 | 36000 | 0.2803 | 0.8817 |
| 0.1369 | 49.0 | 36750 | 0.2803 | 0.8833 |
| 0.1026 | 50.0 | 37500 | 0.2805 | 0.88 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
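A minimal inference sketch for a checkpoint like this one, assuming the repo is public on the Hub; the image filename is a placeholder:
```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned checkpoint; the repo id
# matches this card, the image path is a hypothetical local file.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_tiny_sgd_001_fold5",
)
preds = classifier("cell.jpg", top_k=3)
# Each entry is {"label": ..., "score": ...}, with labels such as "normal_sperm".
print(preds)
```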
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
A2H0H0R1/mobilenet_v2_1.0_224-plant-disease
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2_1.0_224-plant-disease
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3579
- Accuracy: 0.9330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0369 | 1.0 | 158 | 0.9116 | 0.8417 |
| 0.4523 | 2.0 | 316 | 0.4556 | 0.9038 |
| 0.3848 | 3.0 | 474 | 0.3579 | 0.9330 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
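For finer control than a pipeline, the processor and model can be driven directly. A sketch, assuming the checkpoint is public; the image path is hypothetical:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "A2H0H0R1/mobilenet_v2_1.0_224-plant-disease"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("leaf.jpg")  # hypothetical local photo of a single leaf
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label comes from the checkpoint config, e.g. "potato___early_blight".
print(model.config.id2label[logits.argmax(-1).item()])
```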
|
[
"apple___apple_scab",
"apple___black_rot",
"apple___cedar_apple_rust",
"apple___healthy",
"blueberry___healthy",
"cherry_(including_sour)___powdery_mildew",
"cherry_(including_sour)___healthy",
"corn_(maize)___cercospora_leaf_spot gray_leaf_spot",
"corn_(maize)___common_rust_",
"corn_(maize)___northern_leaf_blight",
"corn_(maize)___healthy",
"grape___black_rot",
"grape___esca_(black_measles)",
"grape___leaf_blight_(isariopsis_leaf_spot)",
"grape___healthy",
"orange___haunglongbing_(citrus_greening)",
"peach___bacterial_spot",
"peach___healthy",
"pepper,_bell___bacterial_spot",
"pepper,_bell___healthy",
"potato___early_blight",
"potato___late_blight",
"potato___healthy",
"raspberry___healthy",
"soybean___healthy",
"squash___powdery_mildew",
"strawberry___leaf_scorch",
"strawberry___healthy",
"tomato___bacterial_spot",
"tomato___early_blight",
"tomato___late_blight",
"tomato___leaf_mold",
"tomato___septoria_leaf_spot",
"tomato___spider_mites two-spotted_spider_mite",
"tomato___target_spot",
"tomato___tomato_yellow_leaf_curl_virus",
"tomato___tomato_mosaic_virus",
"tomato___healthy"
] |
khaled44/vit-large-beans-demo-v5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-beans-demo-v5
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6497
- Accuracy: 0.7335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 35
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.581 | 1.25 | 1000 | 0.6497 | 0.7335 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
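A sketch of the `TrainingArguments` implied by the hyperparameters above; `output_dir` is a placeholder, and `fp16=True` is the switch behind "Native AMP":
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in this card, nothing more.
args = TrainingArguments(
    output_dir="vit-large-beans-demo-v5",
    learning_rate=1e-4,
    per_device_train_batch_size=35,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # mixed_precision_training: Native AMP
)
```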
|
[
"dr",
"g",
"nd",
"wd",
"other"
] |
A2H0H0R1/mobilenet_v2_1.0_224-plant-disease2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2_1.0_224-plant-disease2
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1510
- Accuracy: 0.9639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2171 | 1.0 | 158 | 1.0595 | 0.8188 |
| 0.4082 | 2.0 | 316 | 0.3154 | 0.9387 |
| 0.295 | 3.0 | 474 | 0.2191 | 0.9555 |
| 0.2266 | 4.0 | 633 | 0.1747 | 0.9595 |
| 0.2168 | 5.0 | 791 | 0.2135 | 0.9499 |
| 0.2091 | 5.99 | 948 | 0.1510 | 0.9639 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple___apple_scab",
"apple___black_rot",
"apple___cedar_apple_rust",
"apple___healthy",
"blueberry___healthy",
"cherry_(including_sour)___powdery_mildew",
"cherry_(including_sour)___healthy",
"corn_(maize)___cercospora_leaf_spot gray_leaf_spot",
"corn_(maize)___common_rust_",
"corn_(maize)___northern_leaf_blight",
"corn_(maize)___healthy",
"grape___black_rot",
"grape___esca_(black_measles)",
"grape___leaf_blight_(isariopsis_leaf_spot)",
"grape___healthy",
"orange___haunglongbing_(citrus_greening)",
"peach___bacterial_spot",
"peach___healthy",
"pepper,_bell___bacterial_spot",
"pepper,_bell___healthy",
"potato___early_blight",
"potato___late_blight",
"potato___healthy",
"raspberry___healthy",
"soybean___healthy",
"squash___powdery_mildew",
"strawberry___leaf_scorch",
"strawberry___healthy",
"tomato___bacterial_spot",
"tomato___early_blight",
"tomato___late_blight",
"tomato___leaf_mold",
"tomato___septoria_leaf_spot",
"tomato___spider_mites two-spotted_spider_mite",
"tomato___target_spot",
"tomato___tomato_yellow_leaf_curl_virus",
"tomato___tomato_mosaic_virus",
"tomato___healthy"
] |
hkivancoral/smids_10x_deit_tiny_rms_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8993
- Accuracy: 0.8047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8539 | 1.0 | 751 | 0.7969 | 0.6144 |
| 0.672 | 2.0 | 1502 | 0.7985 | 0.6194 |
| 0.6698 | 3.0 | 2253 | 0.6650 | 0.6962 |
| 0.6307 | 4.0 | 3004 | 0.6193 | 0.7379 |
| 0.6465 | 5.0 | 3755 | 0.8387 | 0.6244 |
| 0.6942 | 6.0 | 4506 | 0.5940 | 0.7329 |
| 0.5576 | 7.0 | 5257 | 0.5999 | 0.7312 |
| 0.5549 | 8.0 | 6008 | 0.6970 | 0.6795 |
| 0.5488 | 9.0 | 6759 | 0.5984 | 0.7229 |
| 0.5437 | 10.0 | 7510 | 0.5960 | 0.7412 |
| 0.5332 | 11.0 | 8261 | 0.5776 | 0.7362 |
| 0.5215 | 12.0 | 9012 | 0.5476 | 0.7880 |
| 0.5333 | 13.0 | 9763 | 0.5233 | 0.7713 |
| 0.4806 | 14.0 | 10514 | 0.5633 | 0.7629 |
| 0.4615 | 15.0 | 11265 | 0.5133 | 0.7830 |
| 0.535 | 16.0 | 12016 | 0.5400 | 0.7362 |
| 0.4592 | 17.0 | 12767 | 0.5660 | 0.7546 |
| 0.4186 | 18.0 | 13518 | 0.5267 | 0.7913 |
| 0.4986 | 19.0 | 14269 | 0.5193 | 0.7846 |
| 0.4777 | 20.0 | 15020 | 0.5053 | 0.7880 |
| 0.4305 | 21.0 | 15771 | 0.5032 | 0.7846 |
| 0.4268 | 22.0 | 16522 | 0.5151 | 0.7830 |
| 0.3969 | 23.0 | 17273 | 0.5356 | 0.7846 |
| 0.4042 | 24.0 | 18024 | 0.5574 | 0.7780 |
| 0.3889 | 25.0 | 18775 | 0.5262 | 0.7679 |
| 0.4127 | 26.0 | 19526 | 0.5352 | 0.7696 |
| 0.3984 | 27.0 | 20277 | 0.5102 | 0.7796 |
| 0.4155 | 28.0 | 21028 | 0.5170 | 0.8030 |
| 0.2889 | 29.0 | 21779 | 0.5194 | 0.8080 |
| 0.2798 | 30.0 | 22530 | 0.5140 | 0.8063 |
| 0.3345 | 31.0 | 23281 | 0.5542 | 0.8097 |
| 0.3266 | 32.0 | 24032 | 0.5467 | 0.7763 |
| 0.3139 | 33.0 | 24783 | 0.5408 | 0.8013 |
| 0.2727 | 34.0 | 25534 | 0.6157 | 0.7997 |
| 0.2326 | 35.0 | 26285 | 0.6453 | 0.8080 |
| 0.2862 | 36.0 | 27036 | 0.6184 | 0.8080 |
| 0.2315 | 37.0 | 27787 | 0.5988 | 0.8097 |
| 0.2292 | 38.0 | 28538 | 0.5807 | 0.8164 |
| 0.1347 | 39.0 | 29289 | 0.6678 | 0.8247 |
| 0.2469 | 40.0 | 30040 | 0.7259 | 0.8080 |
| 0.1108 | 41.0 | 30791 | 0.8996 | 0.7813 |
| 0.1298 | 42.0 | 31542 | 0.8857 | 0.8247 |
| 0.1746 | 43.0 | 32293 | 0.9061 | 0.8197 |
| 0.0873 | 44.0 | 33044 | 1.0379 | 0.8230 |
| 0.087 | 45.0 | 33795 | 1.2137 | 0.8130 |
| 0.1277 | 46.0 | 34546 | 1.3333 | 0.8197 |
| 0.069 | 47.0 | 35297 | 1.4653 | 0.8164 |
| 0.0206 | 48.0 | 36048 | 1.6979 | 0.8063 |
| 0.0279 | 49.0 | 36799 | 1.8534 | 0.8063 |
| 0.02 | 50.0 | 37550 | 1.8993 | 0.8047 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
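These cards all train "on the imagefolder dataset". A sketch of how such a dataset is typically loaded; the directory layout below is an assumption for illustration, not this card's actual paths:
```python
from datasets import load_dataset

# "imagefolder" infers class labels from directory names, e.g.:
#   data/train/abnormal_sperm/xxx.png
#   data/train/non-sperm/yyy.png
#   data/train/normal_sperm/zzz.png
dataset = load_dataset("imagefolder", data_dir="data")
print(dataset["train"].features["label"].names)
```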
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_rms_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1068
- Accuracy: 0.8020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8332 | 1.0 | 750 | 0.7567 | 0.6090 |
| 0.7395 | 2.0 | 1500 | 0.7599 | 0.6123 |
| 0.682 | 3.0 | 2250 | 0.6859 | 0.6905 |
| 0.725 | 4.0 | 3000 | 0.6463 | 0.7171 |
| 0.6632 | 5.0 | 3750 | 0.6560 | 0.7238 |
| 0.5777 | 6.0 | 4500 | 0.6347 | 0.7072 |
| 0.6357 | 7.0 | 5250 | 0.6141 | 0.7321 |
| 0.595 | 8.0 | 6000 | 0.6313 | 0.7121 |
| 0.5551 | 9.0 | 6750 | 0.6406 | 0.6955 |
| 0.5544 | 10.0 | 7500 | 0.5482 | 0.7720 |
| 0.5611 | 11.0 | 8250 | 0.5288 | 0.7704 |
| 0.6632 | 12.0 | 9000 | 0.5868 | 0.7537 |
| 0.5709 | 13.0 | 9750 | 0.6149 | 0.7288 |
| 0.4511 | 14.0 | 10500 | 0.4977 | 0.8020 |
| 0.4295 | 15.0 | 11250 | 0.5625 | 0.7770 |
| 0.4618 | 16.0 | 12000 | 0.5273 | 0.7837 |
| 0.4342 | 17.0 | 12750 | 0.5207 | 0.7804 |
| 0.4253 | 18.0 | 13500 | 0.5301 | 0.7720 |
| 0.4352 | 19.0 | 14250 | 0.5236 | 0.7754 |
| 0.418 | 20.0 | 15000 | 0.5318 | 0.7804 |
| 0.4496 | 21.0 | 15750 | 0.5216 | 0.7970 |
| 0.4003 | 22.0 | 16500 | 0.5391 | 0.7720 |
| 0.4411 | 23.0 | 17250 | 0.4904 | 0.8003 |
| 0.3266 | 24.0 | 18000 | 0.5436 | 0.7854 |
| 0.3733 | 25.0 | 18750 | 0.6780 | 0.7521 |
| 0.3536 | 26.0 | 19500 | 0.5100 | 0.8003 |
| 0.4154 | 27.0 | 20250 | 0.5545 | 0.8020 |
| 0.414 | 28.0 | 21000 | 0.5841 | 0.7937 |
| 0.3146 | 29.0 | 21750 | 0.5867 | 0.7887 |
| 0.3401 | 30.0 | 22500 | 0.5923 | 0.7987 |
| 0.2331 | 31.0 | 23250 | 0.6367 | 0.7837 |
| 0.238 | 32.0 | 24000 | 0.6276 | 0.8070 |
| 0.209 | 33.0 | 24750 | 0.6337 | 0.8070 |
| 0.2121 | 34.0 | 25500 | 0.6961 | 0.7854 |
| 0.2544 | 35.0 | 26250 | 0.7936 | 0.7870 |
| 0.2442 | 36.0 | 27000 | 0.7270 | 0.7970 |
| 0.2459 | 37.0 | 27750 | 0.7553 | 0.8020 |
| 0.1428 | 38.0 | 28500 | 0.8600 | 0.7987 |
| 0.0788 | 39.0 | 29250 | 0.9727 | 0.7937 |
| 0.1811 | 40.0 | 30000 | 1.0324 | 0.7937 |
| 0.1405 | 41.0 | 30750 | 1.0037 | 0.8103 |
| 0.1282 | 42.0 | 31500 | 1.1830 | 0.7937 |
| 0.0664 | 43.0 | 32250 | 1.2624 | 0.7970 |
| 0.04 | 44.0 | 33000 | 1.4942 | 0.7987 |
| 0.0582 | 45.0 | 33750 | 1.4631 | 0.8103 |
| 0.0738 | 46.0 | 34500 | 1.6687 | 0.8120 |
| 0.0282 | 47.0 | 35250 | 1.8321 | 0.8087 |
| 0.0021 | 48.0 | 36000 | 1.9181 | 0.8087 |
| 0.01 | 49.0 | 36750 | 2.0036 | 0.8037 |
| 0.0004 | 50.0 | 37500 | 2.1068 | 0.8020 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_rms_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5645
- Accuracy: 0.8183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8687 | 1.0 | 750 | 0.8350 | 0.5417 |
| 0.7645 | 2.0 | 1500 | 0.8377 | 0.545 |
| 0.8147 | 3.0 | 2250 | 0.8321 | 0.5633 |
| 0.707 | 4.0 | 3000 | 0.8069 | 0.5617 |
| 0.7456 | 5.0 | 3750 | 0.7498 | 0.66 |
| 0.6732 | 6.0 | 4500 | 0.6947 | 0.7067 |
| 0.7173 | 7.0 | 5250 | 0.6562 | 0.7233 |
| 0.6807 | 8.0 | 6000 | 0.6396 | 0.735 |
| 0.5542 | 9.0 | 6750 | 0.6404 | 0.725 |
| 0.5945 | 10.0 | 7500 | 0.6253 | 0.715 |
| 0.5981 | 11.0 | 8250 | 0.6007 | 0.7333 |
| 0.6124 | 12.0 | 9000 | 0.5926 | 0.7467 |
| 0.5651 | 13.0 | 9750 | 0.6373 | 0.725 |
| 0.5876 | 14.0 | 10500 | 0.6106 | 0.735 |
| 0.595 | 15.0 | 11250 | 0.5814 | 0.7417 |
| 0.6259 | 16.0 | 12000 | 0.6014 | 0.755 |
| 0.5932 | 17.0 | 12750 | 0.6177 | 0.7433 |
| 0.5894 | 18.0 | 13500 | 0.7384 | 0.68 |
| 0.605 | 19.0 | 14250 | 0.6249 | 0.715 |
| 0.5663 | 20.0 | 15000 | 0.6124 | 0.7367 |
| 0.5134 | 21.0 | 15750 | 0.5785 | 0.7433 |
| 0.6186 | 22.0 | 16500 | 0.5747 | 0.7533 |
| 0.5238 | 23.0 | 17250 | 0.5818 | 0.76 |
| 0.5431 | 24.0 | 18000 | 0.5901 | 0.73 |
| 0.5802 | 25.0 | 18750 | 0.5751 | 0.7583 |
| 0.532 | 26.0 | 19500 | 0.6079 | 0.745 |
| 0.4391 | 27.0 | 20250 | 0.5654 | 0.7683 |
| 0.5546 | 28.0 | 21000 | 0.5837 | 0.7717 |
| 0.5308 | 29.0 | 21750 | 0.5546 | 0.76 |
| 0.5138 | 30.0 | 22500 | 0.5584 | 0.7633 |
| 0.4508 | 31.0 | 23250 | 0.5616 | 0.78 |
| 0.4928 | 32.0 | 24000 | 0.5495 | 0.7683 |
| 0.5015 | 33.0 | 24750 | 0.5514 | 0.7717 |
| 0.4951 | 34.0 | 25500 | 0.5352 | 0.7683 |
| 0.47 | 35.0 | 26250 | 0.5246 | 0.7767 |
| 0.4942 | 36.0 | 27000 | 0.5348 | 0.7833 |
| 0.4733 | 37.0 | 27750 | 0.5546 | 0.7833 |
| 0.4787 | 38.0 | 28500 | 0.5356 | 0.7883 |
| 0.4477 | 39.0 | 29250 | 0.5284 | 0.795 |
| 0.5359 | 40.0 | 30000 | 0.5502 | 0.8 |
| 0.4568 | 41.0 | 30750 | 0.5425 | 0.7883 |
| 0.4376 | 42.0 | 31500 | 0.5402 | 0.79 |
| 0.4262 | 43.0 | 32250 | 0.5808 | 0.7617 |
| 0.4405 | 44.0 | 33000 | 0.5553 | 0.7983 |
| 0.3884 | 45.0 | 33750 | 0.5497 | 0.7783 |
| 0.37 | 46.0 | 34500 | 0.5855 | 0.8067 |
| 0.413 | 47.0 | 35250 | 0.5591 | 0.815 |
| 0.3776 | 48.0 | 36000 | 0.5614 | 0.8067 |
| 0.3505 | 49.0 | 36750 | 0.5713 | 0.805 |
| 0.3537 | 50.0 | 37500 | 0.5645 | 0.8183 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_rms_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8355
- Accuracy: 0.8117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7719 | 1.0 | 750 | 0.7663 | 0.6417 |
| 0.7164 | 2.0 | 1500 | 0.6422 | 0.7167 |
| 0.6166 | 3.0 | 2250 | 0.5992 | 0.7233 |
| 0.5878 | 4.0 | 3000 | 0.6512 | 0.6783 |
| 0.5907 | 5.0 | 3750 | 0.5898 | 0.7183 |
| 0.5259 | 6.0 | 4500 | 0.5668 | 0.7617 |
| 0.4934 | 7.0 | 5250 | 0.5092 | 0.7817 |
| 0.5606 | 8.0 | 6000 | 0.5230 | 0.76 |
| 0.5352 | 9.0 | 6750 | 0.5089 | 0.7733 |
| 0.4475 | 10.0 | 7500 | 0.5880 | 0.745 |
| 0.4921 | 11.0 | 8250 | 0.5315 | 0.765 |
| 0.5457 | 12.0 | 9000 | 0.5773 | 0.7583 |
| 0.4353 | 13.0 | 9750 | 0.5700 | 0.7533 |
| 0.4266 | 14.0 | 10500 | 0.5929 | 0.7633 |
| 0.4011 | 15.0 | 11250 | 0.5510 | 0.7883 |
| 0.4243 | 16.0 | 12000 | 0.5772 | 0.7633 |
| 0.2788 | 17.0 | 12750 | 0.5913 | 0.7817 |
| 0.36 | 18.0 | 13500 | 0.5472 | 0.7767 |
| 0.3727 | 19.0 | 14250 | 0.5501 | 0.7867 |
| 0.2873 | 20.0 | 15000 | 0.6706 | 0.77 |
| 0.3441 | 21.0 | 15750 | 0.5563 | 0.8083 |
| 0.3741 | 22.0 | 16500 | 0.5905 | 0.7767 |
| 0.2914 | 23.0 | 17250 | 0.6313 | 0.785 |
| 0.3593 | 24.0 | 18000 | 0.5992 | 0.7983 |
| 0.2548 | 25.0 | 18750 | 0.6167 | 0.8 |
| 0.2172 | 26.0 | 19500 | 0.6453 | 0.785 |
| 0.1957 | 27.0 | 20250 | 0.6311 | 0.815 |
| 0.2482 | 28.0 | 21000 | 0.7520 | 0.8067 |
| 0.1858 | 29.0 | 21750 | 0.7460 | 0.7917 |
| 0.1724 | 30.0 | 22500 | 0.6735 | 0.8183 |
| 0.1536 | 31.0 | 23250 | 0.8260 | 0.7933 |
| 0.1432 | 32.0 | 24000 | 0.9327 | 0.765 |
| 0.1489 | 33.0 | 24750 | 0.8695 | 0.7967 |
| 0.1215 | 34.0 | 25500 | 0.8392 | 0.8167 |
| 0.1302 | 35.0 | 26250 | 1.0000 | 0.8133 |
| 0.0677 | 36.0 | 27000 | 1.0715 | 0.8083 |
| 0.0727 | 37.0 | 27750 | 1.1501 | 0.7983 |
| 0.1221 | 38.0 | 28500 | 1.3342 | 0.7883 |
| 0.0469 | 39.0 | 29250 | 1.3213 | 0.8 |
| 0.068 | 40.0 | 30000 | 1.4945 | 0.8 |
| 0.0607 | 41.0 | 30750 | 1.4763 | 0.8133 |
| 0.0293 | 42.0 | 31500 | 1.8072 | 0.79 |
| 0.0304 | 43.0 | 32250 | 2.0290 | 0.7817 |
| 0.0187 | 44.0 | 33000 | 2.2554 | 0.7867 |
| 0.0136 | 45.0 | 33750 | 2.3220 | 0.8 |
| 0.0034 | 46.0 | 34500 | 2.4619 | 0.8033 |
| 0.0045 | 47.0 | 35250 | 2.5490 | 0.8 |
| 0.0159 | 48.0 | 36000 | 2.5993 | 0.825 |
| 0.0004 | 49.0 | 36750 | 2.7895 | 0.8083 |
| 0.0001 | 50.0 | 37500 | 2.8355 | 0.8117 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
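The table above shows validation loss bottoming out near epoch 7 (0.5092) and climbing to 2.84 by epoch 50 while accuracy plateaus. A sketch of a common mitigation, checkpoint selection plus early stopping; this is an assumption about what one could do, not what this run used:
```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Keep the epoch with the lowest validation loss instead of the final one.
args = TrainingArguments(
    output_dir="smids_10x_deit_tiny_rms_001_fold4",  # placeholder
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
callbacks = [EarlyStoppingCallback(early_stopping_patience=5)]
```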
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
A2H0H0R1/resnet-50-plant-disease
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-plant-disease
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3609
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4023 | 1.0 | 158 | 3.2949 | 0.4071 |
| 1.9184 | 2.0 | 316 | 1.5580 | 0.7788 |
| 0.94 | 3.0 | 474 | 0.7401 | 0.8761 |
| 0.6491 | 4.0 | 633 | 0.4772 | 0.9118 |
| 0.5516 | 5.0 | 791 | 0.3857 | 0.9242 |
| 0.5164 | 5.99 | 948 | 0.3609 | 0.9286 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
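A sketch of how the accuracy column above is typically produced with a `compute_metrics` function in the form `Trainer` expects; this card does not show its own code, so the snippet is illustrative:
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```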
|
[
"apple apple scab",
"apple black rot",
"apple cedar apple rust",
"apple healthy",
"blueberry healthy",
"cherry (including sour) powdery mildew",
"cherry (including sour) healthy",
"corn (maize) cercospora leaf spot gray leaf spot",
"corn (maize) common rust ",
"corn (maize) northern leaf blight",
"corn (maize) healthy",
"grape black rot",
"grape esca (black measles)",
"grape leaf blight (isariopsis leaf spot)",
"grape healthy",
"orange haunglongbing (citrus greening)",
"peach bacterial spot",
"peach healthy",
"pepper, bell bacterial spot",
"pepper, bell healthy",
"potato early blight",
"potato late blight",
"potato healthy",
"raspberry healthy",
"soybean healthy",
"squash powdery mildew",
"strawberry leaf scorch",
"strawberry healthy",
"tomato bacterial spot",
"tomato early blight",
"tomato late blight",
"tomato leaf mold",
"tomato septoria leaf spot",
"tomato spider mites two-spotted spider mite",
"tomato target spot",
"tomato tomato yellow leaf curl virus",
"tomato tomato mosaic virus",
"tomato healthy"
] |
hkivancoral/smids_10x_deit_tiny_rms_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1358
- Accuracy: 0.77
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.842 | 1.0 | 750 | 0.7944 | 0.635 |
| 0.7921 | 2.0 | 1500 | 0.7461 | 0.68 |
| 0.7457 | 3.0 | 2250 | 0.7489 | 0.6567 |
| 0.7198 | 4.0 | 3000 | 0.6696 | 0.7017 |
| 0.7308 | 5.0 | 3750 | 0.6733 | 0.7117 |
| 0.6476 | 6.0 | 4500 | 0.6584 | 0.7183 |
| 0.6495 | 7.0 | 5250 | 0.6399 | 0.72 |
| 0.6634 | 8.0 | 6000 | 0.6560 | 0.6933 |
| 0.7106 | 9.0 | 6750 | 0.6143 | 0.7217 |
| 0.6252 | 10.0 | 7500 | 0.6122 | 0.7117 |
| 0.622 | 11.0 | 8250 | 0.5967 | 0.7217 |
| 0.5747 | 12.0 | 9000 | 0.6620 | 0.6833 |
| 0.5895 | 13.0 | 9750 | 0.5480 | 0.7533 |
| 0.5822 | 14.0 | 10500 | 0.5552 | 0.7517 |
| 0.5153 | 15.0 | 11250 | 0.5659 | 0.7583 |
| 0.6055 | 16.0 | 12000 | 0.6107 | 0.7233 |
| 0.575 | 17.0 | 12750 | 0.5677 | 0.7617 |
| 0.5736 | 18.0 | 13500 | 0.5602 | 0.7667 |
| 0.5782 | 19.0 | 14250 | 0.5634 | 0.76 |
| 0.6129 | 20.0 | 15000 | 0.5635 | 0.745 |
| 0.5336 | 21.0 | 15750 | 0.5596 | 0.755 |
| 0.506 | 22.0 | 16500 | 0.5757 | 0.76 |
| 0.524 | 23.0 | 17250 | 0.5491 | 0.7817 |
| 0.4616 | 24.0 | 18000 | 0.5444 | 0.775 |
| 0.5681 | 25.0 | 18750 | 0.5513 | 0.775 |
| 0.5138 | 26.0 | 19500 | 0.5393 | 0.77 |
| 0.3668 | 27.0 | 20250 | 0.5531 | 0.7683 |
| 0.4576 | 28.0 | 21000 | 0.5461 | 0.7833 |
| 0.4869 | 29.0 | 21750 | 0.5490 | 0.7817 |
| 0.4448 | 30.0 | 22500 | 0.5673 | 0.7817 |
| 0.4739 | 31.0 | 23250 | 0.5856 | 0.7717 |
| 0.3935 | 32.0 | 24000 | 0.5695 | 0.7983 |
| 0.4839 | 33.0 | 24750 | 0.5444 | 0.7983 |
| 0.3678 | 34.0 | 25500 | 0.5927 | 0.77 |
| 0.3843 | 35.0 | 26250 | 0.5986 | 0.7833 |
| 0.4018 | 36.0 | 27000 | 0.6231 | 0.7783 |
| 0.3249 | 37.0 | 27750 | 0.6467 | 0.7483 |
| 0.3738 | 38.0 | 28500 | 0.7366 | 0.76 |
| 0.3927 | 39.0 | 29250 | 0.6338 | 0.7633 |
| 0.315 | 40.0 | 30000 | 0.6392 | 0.78 |
| 0.2962 | 41.0 | 30750 | 0.7177 | 0.775 |
| 0.2563 | 42.0 | 31500 | 0.7289 | 0.7717 |
| 0.2899 | 43.0 | 32250 | 0.7576 | 0.7733 |
| 0.2733 | 44.0 | 33000 | 0.7845 | 0.7717 |
| 0.2911 | 45.0 | 33750 | 0.8279 | 0.77 |
| 0.2308 | 46.0 | 34500 | 0.8639 | 0.7767 |
| 0.2511 | 47.0 | 35250 | 0.9705 | 0.7667 |
| 0.1763 | 48.0 | 36000 | 1.0471 | 0.7633 |
| 0.1753 | 49.0 | 36750 | 1.1025 | 0.775 |
| 0.1437 | 50.0 | 37500 | 1.1358 | 0.77 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
A2H0H0R1/mobilenet_v2_1.0_224-plant-disease-new
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2_1.0_224-plant-disease-new
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1287
- Accuracy: 0.9600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5043 | 1.0 | 366 | 0.4476 | 0.8886 |
| 0.2492 | 2.0 | 733 | 0.2550 | 0.9281 |
| 0.2069 | 3.0 | 1100 | 0.2332 | 0.9247 |
| 0.1716 | 4.0 | 1467 | 0.3329 | 0.8960 |
| 0.1602 | 5.0 | 1833 | 0.1999 | 0.9388 |
| 0.1633 | 5.99 | 2196 | 0.1287 | 0.9600 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"apple apple scab",
"apple black rot",
"apple cedar apple rust",
"apple healthy",
"blueberry healthy",
"cherry (including sour) powdery mildew",
"cherry (including sour) healthy",
"corn (maize) cercospora leaf spot gray leaf spot",
"corn (maize) common rust ",
"corn (maize) northern leaf blight",
"corn (maize) healthy",
"grape black rot",
"grape esca (black measles)",
"grape leaf blight (isariopsis leaf spot)",
"grape healthy",
"orange haunglongbing (citrus greening)",
"peach bacterial spot",
"peach healthy",
"pepper, bell bacterial spot",
"pepper, bell healthy",
"potato early blight",
"potato late blight",
"potato healthy",
"raspberry healthy",
"soybean healthy",
"squash powdery mildew",
"strawberry leaf scorch",
"strawberry healthy",
"tomato bacterial spot",
"tomato early blight",
"tomato late blight",
"tomato leaf mold",
"tomato septoria leaf spot",
"tomato spider mites two-spotted spider mite",
"tomato target spot",
"tomato tomato yellow leaf curl virus",
"tomato tomato mosaic virus",
"tomato healthy"
] |
codewithaman/swin-tiny-patch4-window7-224-finetuned-brain-ich
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-brain-ich
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0123
- Accuracy: 0.9961
- F1: 0.9961
- Recall: 0.9961
- Precision: 0.9961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1234 | 1.0 | 180 | 0.0450 | 0.9840 | 0.9840 | 0.9840 | 0.9840 |
| 0.0837 | 2.0 | 360 | 0.0198 | 0.9926 | 0.9926 | 0.9926 | 0.9926 |
| 0.0373 | 3.0 | 540 | 0.0123 | 0.9961 | 0.9961 | 0.9961 | 0.9961 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
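This card reports accuracy, F1, recall, and precision. A sketch of a `compute_metrics` that yields those four columns; the averaging mode is an assumption ("weighted"), since the card does not say which was used:
```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "recall": recall,
        "precision": precision,
    }
```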
|
[
"gliomas tumor",
"meningiomas tumor",
"pituitary tumor"
] |
codewithaman/vit-base-patch16-224-in21k-finetuned-brain-ich
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# winwithaman/vit-base-patch16-224-in21k-finetuned-brain-ich
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on a brain hemorrhage dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2848
- Train Accuracy: 0.9969
- Train Top-3-accuracy: 0.9992
- Validation Loss: 0.3786
- Validation Accuracy: 0.9590
- Validation Top-3-accuracy: 0.9892
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1230, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 2.2199 | 0.4215 | 0.6564 | 1.8634 | 0.5702 | 0.8099 | 0 |
| 1.5448 | 0.6976 | 0.8797 | 1.3110 | 0.7603 | 0.9028 | 1 |
| 1.0494 | 0.8694 | 0.9519 | 0.9507 | 0.8855 | 0.9590 | 2 |
| 0.7408 | 0.9381 | 0.9824 | 0.7499 | 0.9114 | 0.9806 | 3 |
| 0.5428 | 0.9756 | 0.9939 | 0.5831 | 0.9460 | 0.9849 | 4 |
| 0.4169 | 0.9901 | 0.9977 | 0.4895 | 0.9525 | 0.9914 | 5 |
| 0.3371 | 0.9947 | 0.9977 | 0.4194 | 0.9611 | 0.9892 | 6 |
| 0.2848 | 0.9969 | 0.9992 | 0.3786 | 0.9590 | 0.9892 | 7 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
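The optimizer blob above serializes an `AdamWeightDecay` with a linear (`power=1.0`) `PolynomialDecay` from 3e-05 to 0 over 1230 steps and weight decay 0.01. A sketch of rebuilding the same configuration with the transformers TF helper:
```python
from transformers import create_optimizer

# Reconstruction of the reported schedule; beta_1/beta_2/epsilon match the
# helper's defaults (0.9, 0.999, 1e-08), so they need not be passed.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=1230,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```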
|
[
"no",
"yes"
] |
DazMashaly/swin-finetuned-food101
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-finetuned-food101
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-large-patch4-window7-224-in22k) on the zindi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5697
- Accuracy: 0.7666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7373 | 1.0 | 173 | 0.6503 | 0.7366 |
| 0.6106 | 2.0 | 347 | 0.5950 | 0.7503 |
| 0.5135 | 2.99 | 519 | 0.5697 | 0.7666 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"dr",
"g",
"nd",
"wd",
"other"
] |
A2H0H0R1/swin-tiny-patch4-window7-224-plant-disease-new
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-plant-disease-new
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0163
- Accuracy: 0.9945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1728 | 1.0 | 366 | 0.0762 | 0.9750 |
| 0.1022 | 2.0 | 733 | 0.0430 | 0.9855 |
| 0.0806 | 3.0 | 1100 | 0.0265 | 0.9912 |
| 0.0623 | 4.0 | 1467 | 0.0229 | 0.9931 |
| 0.046 | 5.0 | 1833 | 0.0189 | 0.9937 |
| 0.0471 | 5.99 | 2196 | 0.0163 | 0.9945 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"apple apple scab",
"apple black rot",
"apple cedar apple rust",
"apple healthy",
"blueberry healthy",
"cherry (including sour) powdery mildew",
"cherry (including sour) healthy",
"corn (maize) cercospora leaf spot gray leaf spot",
"corn (maize) common rust ",
"corn (maize) northern leaf blight",
"corn (maize) healthy",
"grape black rot",
"grape esca (black measles)",
"grape leaf blight (isariopsis leaf spot)",
"grape healthy",
"orange haunglongbing (citrus greening)",
"peach bacterial spot",
"peach healthy",
"pepper, bell bacterial spot",
"pepper, bell healthy",
"potato early blight",
"potato late blight",
"potato healthy",
"raspberry healthy",
"soybean healthy",
"squash powdery mildew",
"strawberry leaf scorch",
"strawberry healthy",
"tomato bacterial spot",
"tomato early blight",
"tomato late blight",
"tomato leaf mold",
"tomato septoria leaf spot",
"tomato spider mites two-spotted spider mite",
"tomato target spot",
"tomato tomato yellow leaf curl virus",
"tomato tomato mosaic virus",
"tomato healthy"
] |
hkivancoral/smids_10x_deit_tiny_rms_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5641
- Accuracy: 0.7746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9582 | 1.0 | 751 | 0.8489 | 0.5409 |
| 0.6559 | 2.0 | 1502 | 0.7002 | 0.6861 |
| 0.7114 | 3.0 | 2253 | 0.6777 | 0.6761 |
| 0.6817 | 4.0 | 3004 | 0.6465 | 0.6928 |
| 0.7123 | 5.0 | 3755 | 0.7746 | 0.6277 |
| 0.7345 | 6.0 | 4506 | 0.6493 | 0.7062 |
| 0.5686 | 7.0 | 5257 | 0.6822 | 0.7145 |
| 0.6113 | 8.0 | 6008 | 0.7258 | 0.6594 |
| 0.5841 | 9.0 | 6759 | 0.7856 | 0.6694 |
| 0.5988 | 10.0 | 7510 | 0.5594 | 0.7746 |
| 0.5433 | 11.0 | 8261 | 0.5624 | 0.7696 |
| 0.5465 | 12.0 | 9012 | 0.6273 | 0.7396 |
| 0.5648 | 13.0 | 9763 | 0.5471 | 0.7646 |
| 0.5274 | 14.0 | 10514 | 0.5762 | 0.7596 |
| 0.5506 | 15.0 | 11265 | 0.6408 | 0.7279 |
| 0.5782 | 16.0 | 12016 | 0.6203 | 0.7346 |
| 0.6505 | 17.0 | 12767 | 0.6410 | 0.7429 |
| 0.5965 | 18.0 | 13518 | 0.6045 | 0.7245 |
| 0.5759 | 19.0 | 14269 | 0.6441 | 0.7229 |
| 0.6397 | 20.0 | 15020 | 0.5941 | 0.7529 |
| 0.5964 | 21.0 | 15771 | 0.5925 | 0.7496 |
| 0.5874 | 22.0 | 16522 | 0.5857 | 0.7462 |
| 0.4809 | 23.0 | 17273 | 0.5684 | 0.7596 |
| 0.5688 | 24.0 | 18024 | 0.5817 | 0.7663 |
| 0.5465 | 25.0 | 18775 | 0.5474 | 0.7663 |
| 0.5665 | 26.0 | 19526 | 0.7830 | 0.6611 |
| 0.5896 | 27.0 | 20277 | 0.5680 | 0.7513 |
| 0.6236 | 28.0 | 21028 | 0.6568 | 0.6811 |
| 0.5166 | 29.0 | 21779 | 0.6086 | 0.7446 |
| 0.452 | 30.0 | 22530 | 0.5634 | 0.7579 |
| 0.4919 | 31.0 | 23281 | 0.5616 | 0.7780 |
| 0.5586 | 32.0 | 24032 | 0.5217 | 0.7679 |
| 0.4558 | 33.0 | 24783 | 0.5662 | 0.7513 |
| 0.4855 | 34.0 | 25534 | 0.5721 | 0.7613 |
| 0.4522 | 35.0 | 26285 | 0.5872 | 0.7663 |
| 0.527 | 36.0 | 27036 | 0.5260 | 0.7696 |
| 0.5143 | 37.0 | 27787 | 0.5855 | 0.7429 |
| 0.4104 | 38.0 | 28538 | 0.5552 | 0.7713 |
| 0.3918 | 39.0 | 29289 | 0.5587 | 0.7763 |
| 0.4719 | 40.0 | 30040 | 0.5354 | 0.7863 |
| 0.4591 | 41.0 | 30791 | 0.5453 | 0.7629 |
| 0.3895 | 42.0 | 31542 | 0.5592 | 0.7746 |
| 0.4622 | 43.0 | 32293 | 0.5533 | 0.7796 |
| 0.3949 | 44.0 | 33044 | 0.5492 | 0.7947 |
| 0.4199 | 45.0 | 33795 | 0.5523 | 0.7780 |
| 0.4316 | 46.0 | 34546 | 0.5332 | 0.7896 |
| 0.3892 | 47.0 | 35297 | 0.5380 | 0.7846 |
| 0.3721 | 48.0 | 36048 | 0.5481 | 0.7880 |
| 0.3964 | 49.0 | 36799 | 0.5559 | 0.7846 |
| 0.3446 | 50.0 | 37550 | 0.5641 | 0.7746 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_rms_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9553
- Accuracy: 0.9115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2396 | 1.0 | 751 | 0.2956 | 0.8932 |
| 0.1563 | 2.0 | 1502 | 0.3511 | 0.8748 |
| 0.1378 | 3.0 | 2253 | 0.3209 | 0.9032 |
| 0.0749 | 4.0 | 3004 | 0.4152 | 0.8965 |
| 0.0902 | 5.0 | 3755 | 0.4990 | 0.8998 |
| 0.0311 | 6.0 | 4506 | 0.6972 | 0.8948 |
| 0.0511 | 7.0 | 5257 | 0.6707 | 0.8915 |
| 0.0692 | 8.0 | 6008 | 0.7791 | 0.8948 |
| 0.0655 | 9.0 | 6759 | 0.7801 | 0.8965 |
| 0.0191 | 10.0 | 7510 | 0.8995 | 0.8948 |
| 0.0314 | 11.0 | 8261 | 0.8069 | 0.8965 |
| 0.0092 | 12.0 | 9012 | 0.8789 | 0.8881 |
| 0.0207 | 13.0 | 9763 | 0.9080 | 0.8915 |
| 0.0001 | 14.0 | 10514 | 0.9075 | 0.8982 |
| 0.0137 | 15.0 | 11265 | 1.1216 | 0.8915 |
| 0.0289 | 16.0 | 12016 | 1.0268 | 0.8932 |
| 0.0749 | 17.0 | 12767 | 0.9684 | 0.8965 |
| 0.0004 | 18.0 | 13518 | 0.9374 | 0.8948 |
| 0.0016 | 19.0 | 14269 | 0.9146 | 0.8998 |
| 0.0 | 20.0 | 15020 | 0.8660 | 0.9115 |
| 0.0 | 21.0 | 15771 | 0.8768 | 0.9132 |
| 0.0205 | 22.0 | 16522 | 0.9737 | 0.8932 |
| 0.0 | 23.0 | 17273 | 0.8857 | 0.9065 |
| 0.0 | 24.0 | 18024 | 0.9206 | 0.9015 |
| 0.0 | 25.0 | 18775 | 0.9882 | 0.9032 |
| 0.0051 | 26.0 | 19526 | 0.9311 | 0.9048 |
| 0.0005 | 27.0 | 20277 | 0.9916 | 0.8831 |
| 0.0 | 28.0 | 21028 | 0.8978 | 0.9065 |
| 0.0147 | 29.0 | 21779 | 0.9817 | 0.8998 |
| 0.0 | 30.0 | 22530 | 0.9072 | 0.9132 |
| 0.0 | 31.0 | 23281 | 0.9000 | 0.9032 |
| 0.0 | 32.0 | 24032 | 0.9908 | 0.9048 |
| 0.0 | 33.0 | 24783 | 0.9477 | 0.8998 |
| 0.0001 | 34.0 | 25534 | 0.9361 | 0.8998 |
| 0.0 | 35.0 | 26285 | 0.9285 | 0.9048 |
| 0.0 | 36.0 | 27036 | 0.9622 | 0.9048 |
| 0.0 | 37.0 | 27787 | 0.9080 | 0.9082 |
| 0.0 | 38.0 | 28538 | 1.0318 | 0.9065 |
| 0.0 | 39.0 | 29289 | 0.8954 | 0.9115 |
| 0.0 | 40.0 | 30040 | 0.9047 | 0.9098 |
| 0.0 | 41.0 | 30791 | 0.9568 | 0.9098 |
| 0.0 | 42.0 | 31542 | 0.9648 | 0.9082 |
| 0.0 | 43.0 | 32293 | 0.9575 | 0.9098 |
| 0.0 | 44.0 | 33044 | 0.9498 | 0.9132 |
| 0.0 | 45.0 | 33795 | 0.9583 | 0.9098 |
| 0.0 | 46.0 | 34546 | 0.9523 | 0.9115 |
| 0.0 | 47.0 | 35297 | 0.9525 | 0.9115 |
| 0.0 | 48.0 | 36048 | 0.9545 | 0.9115 |
| 0.0 | 49.0 | 36799 | 0.9543 | 0.9115 |
| 0.0 | 50.0 | 37550 | 0.9553 | 0.9115 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/hushem_40x_deit_tiny_adamax_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_adamax_001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2164
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1419 | 1.0 | 215 | 1.1495 | 0.6667 |
| 0.2513 | 2.0 | 430 | 1.6987 | 0.6667 |
| 0.0735 | 3.0 | 645 | 1.5261 | 0.7556 |
| 0.0999 | 4.0 | 860 | 1.1841 | 0.7556 |
| 0.0224 | 5.0 | 1075 | 1.9099 | 0.7556 |
| 0.0347 | 6.0 | 1290 | 1.5424 | 0.7778 |
| 0.0153 | 7.0 | 1505 | 2.1022 | 0.7111 |
| 0.0692 | 8.0 | 1720 | 2.1279 | 0.6667 |
| 0.032 | 9.0 | 1935 | 1.4685 | 0.8 |
| 0.012 | 10.0 | 2150 | 1.8478 | 0.7333 |
| 0.0122 | 11.0 | 2365 | 2.0002 | 0.7333 |
| 0.0062 | 12.0 | 2580 | 1.8733 | 0.8 |
| 0.0592 | 13.0 | 2795 | 1.7430 | 0.7778 |
| 0.0 | 14.0 | 3010 | 1.4594 | 0.8 |
| 0.0388 | 15.0 | 3225 | 2.5836 | 0.6889 |
| 0.0003 | 16.0 | 3440 | 2.4594 | 0.6889 |
| 0.0001 | 17.0 | 3655 | 1.4002 | 0.8222 |
| 0.0004 | 18.0 | 3870 | 2.8500 | 0.6667 |
| 0.0028 | 19.0 | 4085 | 2.8669 | 0.6889 |
| 0.0 | 20.0 | 4300 | 1.9979 | 0.7556 |
| 0.0 | 21.0 | 4515 | 1.9762 | 0.7778 |
| 0.0 | 22.0 | 4730 | 1.9694 | 0.7778 |
| 0.0 | 23.0 | 4945 | 1.9654 | 0.7778 |
| 0.0 | 24.0 | 5160 | 1.9624 | 0.7778 |
| 0.0 | 25.0 | 5375 | 1.9616 | 0.7778 |
| 0.0 | 26.0 | 5590 | 1.9605 | 0.8 |
| 0.0 | 27.0 | 5805 | 1.9607 | 0.8 |
| 0.0 | 28.0 | 6020 | 1.9631 | 0.7778 |
| 0.0 | 29.0 | 6235 | 1.9640 | 0.7778 |
| 0.0 | 30.0 | 6450 | 1.9713 | 0.7778 |
| 0.0 | 31.0 | 6665 | 1.9756 | 0.7778 |
| 0.0 | 32.0 | 6880 | 1.9854 | 0.7778 |
| 0.0 | 33.0 | 7095 | 1.9940 | 0.7778 |
| 0.0 | 34.0 | 7310 | 2.0033 | 0.7778 |
| 0.0 | 35.0 | 7525 | 2.0146 | 0.7778 |
| 0.0 | 36.0 | 7740 | 2.0257 | 0.7778 |
| 0.0 | 37.0 | 7955 | 2.0418 | 0.7778 |
| 0.0 | 38.0 | 8170 | 2.0556 | 0.7778 |
| 0.0 | 39.0 | 8385 | 2.0718 | 0.7778 |
| 0.0 | 40.0 | 8600 | 2.0881 | 0.7778 |
| 0.0 | 41.0 | 8815 | 2.1047 | 0.7778 |
| 0.0 | 42.0 | 9030 | 2.1211 | 0.7778 |
| 0.0 | 43.0 | 9245 | 2.1375 | 0.7778 |
| 0.0 | 44.0 | 9460 | 2.1536 | 0.7778 |
| 0.0 | 45.0 | 9675 | 2.1694 | 0.7778 |
| 0.0 | 46.0 | 9890 | 2.1838 | 0.7778 |
| 0.0 | 47.0 | 10105 | 2.1973 | 0.7778 |
| 0.0 | 48.0 | 10320 | 2.2074 | 0.7778 |
| 0.0 | 49.0 | 10535 | 2.2141 | 0.7778 |
| 0.0 | 50.0 | 10750 | 2.2164 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
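A sketch of how a base checkpoint like this one is adapted to the card's four classes before fine-tuning; the label wiring is illustrative, taken from the label list below:
```python
from transformers import AutoModelForImageClassification

labels = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]
model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-tiny-patch16-224",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # re-initializes the classifier head for 4 classes
)
```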
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/smids_10x_deit_tiny_rms_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7512
- Accuracy: 0.8469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7916 | 1.0 | 750 | 0.7607 | 0.5990 |
| 0.7137 | 2.0 | 1500 | 0.7292 | 0.6922 |
| 0.7389 | 3.0 | 2250 | 0.6487 | 0.6905 |
| 0.641 | 4.0 | 3000 | 0.6309 | 0.7055 |
| 0.6314 | 5.0 | 3750 | 0.6856 | 0.6905 |
| 0.5224 | 6.0 | 4500 | 0.5750 | 0.7554 |
| 0.5522 | 7.0 | 5250 | 0.5498 | 0.7537 |
| 0.5542 | 8.0 | 6000 | 0.5375 | 0.7587 |
| 0.5077 | 9.0 | 6750 | 0.5514 | 0.7604 |
| 0.4953 | 10.0 | 7500 | 0.5328 | 0.7870 |
| 0.4839 | 11.0 | 8250 | 0.5735 | 0.7454 |
| 0.5867 | 12.0 | 9000 | 0.5344 | 0.7820 |
| 0.5019 | 13.0 | 9750 | 0.5724 | 0.7720 |
| 0.4158 | 14.0 | 10500 | 0.5203 | 0.7870 |
| 0.338 | 15.0 | 11250 | 0.5452 | 0.7903 |
| 0.3872 | 16.0 | 12000 | 0.4927 | 0.8070 |
| 0.3271 | 17.0 | 12750 | 0.5792 | 0.8070 |
| 0.3473 | 18.0 | 13500 | 0.4754 | 0.8203 |
| 0.3721 | 19.0 | 14250 | 0.4949 | 0.7887 |
| 0.3629 | 20.0 | 15000 | 0.4870 | 0.8136 |
| 0.3126 | 21.0 | 15750 | 0.4639 | 0.8303 |
| 0.2547 | 22.0 | 16500 | 0.5296 | 0.8236 |
| 0.3002 | 23.0 | 17250 | 0.5506 | 0.8003 |
| 0.2834 | 24.0 | 18000 | 0.5134 | 0.8186 |
| 0.2447 | 25.0 | 18750 | 0.6771 | 0.8053 |
| 0.211 | 26.0 | 19500 | 0.5840 | 0.8120 |
| 0.2265 | 27.0 | 20250 | 0.6139 | 0.8369 |
| 0.2018 | 28.0 | 21000 | 0.7828 | 0.8203 |
| 0.1385 | 29.0 | 21750 | 0.7229 | 0.8186 |
| 0.1626 | 30.0 | 22500 | 0.6948 | 0.8369 |
| 0.1046 | 31.0 | 23250 | 0.7987 | 0.8419 |
| 0.0986 | 32.0 | 24000 | 0.6857 | 0.8552 |
| 0.1014 | 33.0 | 24750 | 0.9042 | 0.8303 |
| 0.0734 | 34.0 | 25500 | 0.9163 | 0.8403 |
| 0.03 | 35.0 | 26250 | 0.9836 | 0.8353 |
| 0.0303 | 36.0 | 27000 | 0.9935 | 0.8436 |
| 0.0569 | 37.0 | 27750 | 1.1088 | 0.8286 |
| 0.0188 | 38.0 | 28500 | 1.2743 | 0.8319 |
| 0.0192 | 39.0 | 29250 | 1.3350 | 0.8386 |
| 0.032 | 40.0 | 30000 | 1.3569 | 0.8419 |
| 0.01 | 41.0 | 30750 | 1.2223 | 0.8353 |
| 0.0249 | 42.0 | 31500 | 1.4712 | 0.8436 |
| 0.0031 | 43.0 | 32250 | 1.5420 | 0.8403 |
| 0.0008 | 44.0 | 33000 | 1.5564 | 0.8419 |
| 0.026 | 45.0 | 33750 | 1.5884 | 0.8502 |
| 0.0017 | 46.0 | 34500 | 1.7455 | 0.8386 |
| 0.0001 | 47.0 | 35250 | 1.7636 | 0.8353 |
| 0.0001 | 48.0 | 36000 | 1.7269 | 0.8453 |
| 0.0 | 49.0 | 36750 | 1.7457 | 0.8453 |
| 0.0 | 50.0 | 37500 | 1.7512 | 0.8469 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_rms_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2287
- Accuracy: 0.8869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2512 | 1.0 | 750 | 0.3089 | 0.8735 |
| 0.118 | 2.0 | 1500 | 0.3336 | 0.8819 |
| 0.0986 | 3.0 | 2250 | 0.3751 | 0.8735 |
| 0.1049 | 4.0 | 3000 | 0.4900 | 0.8819 |
| 0.0487 | 5.0 | 3750 | 0.5969 | 0.8835 |
| 0.0234 | 6.0 | 4500 | 0.6491 | 0.8835 |
| 0.036 | 7.0 | 5250 | 0.7457 | 0.8869 |
| 0.001 | 8.0 | 6000 | 0.8757 | 0.8719 |
| 0.0148 | 9.0 | 6750 | 0.8581 | 0.8869 |
| 0.0298 | 10.0 | 7500 | 1.0727 | 0.8719 |
| 0.0019 | 11.0 | 8250 | 1.0234 | 0.8719 |
| 0.0146 | 12.0 | 9000 | 1.1199 | 0.8702 |
| 0.0069 | 13.0 | 9750 | 1.0417 | 0.8785 |
| 0.0001 | 14.0 | 10500 | 1.0745 | 0.8819 |
| 0.0 | 15.0 | 11250 | 1.0294 | 0.8802 |
| 0.0091 | 16.0 | 12000 | 1.1025 | 0.8802 |
| 0.0186 | 17.0 | 12750 | 1.0736 | 0.8835 |
| 0.0 | 18.0 | 13500 | 1.0297 | 0.8769 |
| 0.047 | 19.0 | 14250 | 1.1126 | 0.8819 |
| 0.0 | 20.0 | 15000 | 1.1842 | 0.8785 |
| 0.0 | 21.0 | 15750 | 1.2771 | 0.8686 |
| 0.0198 | 22.0 | 16500 | 1.1311 | 0.8869 |
| 0.0357 | 23.0 | 17250 | 1.1425 | 0.8869 |
| 0.0 | 24.0 | 18000 | 1.1413 | 0.8885 |
| 0.0 | 25.0 | 18750 | 1.1558 | 0.8852 |
| 0.0 | 26.0 | 19500 | 1.1246 | 0.8869 |
| 0.0271 | 27.0 | 20250 | 1.2507 | 0.8752 |
| 0.0 | 28.0 | 21000 | 1.1107 | 0.8902 |
| 0.0 | 29.0 | 21750 | 1.1979 | 0.8852 |
| 0.0 | 30.0 | 22500 | 1.2404 | 0.8869 |
| 0.0 | 31.0 | 23250 | 1.2332 | 0.8819 |
| 0.0 | 32.0 | 24000 | 1.3008 | 0.8819 |
| 0.0 | 33.0 | 24750 | 1.3101 | 0.8819 |
| 0.0 | 34.0 | 25500 | 1.3030 | 0.8869 |
| 0.0 | 35.0 | 26250 | 1.1931 | 0.8835 |
| 0.0 | 36.0 | 27000 | 1.2127 | 0.8802 |
| 0.0203 | 37.0 | 27750 | 1.1903 | 0.8802 |
| 0.0 | 38.0 | 28500 | 1.2617 | 0.8869 |
| 0.0 | 39.0 | 29250 | 1.1890 | 0.8935 |
| 0.0 | 40.0 | 30000 | 1.2276 | 0.8869 |
| 0.0 | 41.0 | 30750 | 1.1665 | 0.8885 |
| 0.0 | 42.0 | 31500 | 1.1610 | 0.8852 |
| 0.0 | 43.0 | 32250 | 1.2105 | 0.8902 |
| 0.0 | 44.0 | 33000 | 1.2243 | 0.8885 |
| 0.0031 | 45.0 | 33750 | 1.2267 | 0.8902 |
| 0.0 | 46.0 | 34500 | 1.2227 | 0.8885 |
| 0.0 | 47.0 | 35250 | 1.2254 | 0.8869 |
| 0.0 | 48.0 | 36000 | 1.2244 | 0.8852 |
| 0.0 | 49.0 | 36750 | 1.2292 | 0.8869 |
| 0.0 | 50.0 | 37500 | 1.2287 | 0.8869 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_rms_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2831
- Accuracy: 0.745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
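
If the checkpoint is published under the repo id in this card's title, inference could look like the sketch below; the image path is hypothetical.

```python
# A minimal inference sketch; the repo id comes from this card's title
# and "slide.png" is a hypothetical input image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_tiny_rms_0001_fold3",
)
print(classifier("slide.png"))
# e.g. [{'label': 'normal_sperm', 'score': 0.91}, ...]
```
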
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8666 | 1.0 | 750 | 0.8178 | 0.5867 |
| 0.793 | 2.0 | 1500 | 0.8394 | 0.5383 |
| 0.7315 | 3.0 | 2250 | 0.8051 | 0.6133 |
| 0.6465 | 4.0 | 3000 | 0.7374 | 0.65 |
| 0.703 | 5.0 | 3750 | 0.7241 | 0.6517 |
| 0.6607 | 6.0 | 4500 | 0.6935 | 0.6617 |
| 0.6906 | 7.0 | 5250 | 0.6781 | 0.675 |
| 0.673 | 8.0 | 6000 | 0.6701 | 0.7 |
| 0.5876 | 9.0 | 6750 | 0.6156 | 0.715 |
| 0.5761 | 10.0 | 7500 | 0.6686 | 0.6883 |
| 0.695 | 11.0 | 8250 | 0.6673 | 0.675 |
| 0.5527 | 12.0 | 9000 | 0.6193 | 0.7183 |
| 0.5532 | 13.0 | 9750 | 0.6407 | 0.6983 |
| 0.6398 | 14.0 | 10500 | 0.6327 | 0.7267 |
| 0.5686 | 15.0 | 11250 | 0.6250 | 0.71 |
| 0.6507 | 16.0 | 12000 | 0.6131 | 0.7183 |
| 0.586 | 17.0 | 12750 | 0.5959 | 0.7367 |
| 0.6263 | 18.0 | 13500 | 0.6433 | 0.7083 |
| 0.5943 | 19.0 | 14250 | 0.5766 | 0.7467 |
| 0.6095 | 20.0 | 15000 | 0.5801 | 0.7383 |
| 0.4915 | 21.0 | 15750 | 0.5843 | 0.7467 |
| 0.5994 | 22.0 | 16500 | 0.5711 | 0.74 |
| 0.4915 | 23.0 | 17250 | 0.5881 | 0.7367 |
| 0.5455 | 24.0 | 18000 | 0.5829 | 0.73 |
| 0.5646 | 25.0 | 18750 | 0.6056 | 0.73 |
| 0.4802 | 26.0 | 19500 | 0.5993 | 0.73 |
| 0.4066 | 27.0 | 20250 | 0.5797 | 0.7617 |
| 0.5295 | 28.0 | 21000 | 0.6131 | 0.7433 |
| 0.4838 | 29.0 | 21750 | 0.5976 | 0.7533 |
| 0.454 | 30.0 | 22500 | 0.5851 | 0.755 |
| 0.3428 | 31.0 | 23250 | 0.6240 | 0.745 |
| 0.3934 | 32.0 | 24000 | 0.6108 | 0.755 |
| 0.3564 | 33.0 | 24750 | 0.6563 | 0.755 |
| 0.4234 | 34.0 | 25500 | 0.6360 | 0.7633 |
| 0.3741 | 35.0 | 26250 | 0.6145 | 0.765 |
| 0.3785 | 36.0 | 27000 | 0.6637 | 0.7583 |
| 0.3282 | 37.0 | 27750 | 0.6548 | 0.7817 |
| 0.3768 | 38.0 | 28500 | 0.7250 | 0.7483 |
| 0.3263 | 39.0 | 29250 | 0.6603 | 0.7633 |
| 0.3862 | 40.0 | 30000 | 0.6936 | 0.7617 |
| 0.2705 | 41.0 | 30750 | 0.7486 | 0.7733 |
| 0.2694 | 42.0 | 31500 | 0.8322 | 0.7683 |
| 0.2855 | 43.0 | 32250 | 0.8068 | 0.7733 |
| 0.2669 | 44.0 | 33000 | 0.9199 | 0.755 |
| 0.2143 | 45.0 | 33750 | 0.9335 | 0.7667 |
| 0.1925 | 46.0 | 34500 | 1.0133 | 0.76 |
| 0.1987 | 47.0 | 35250 | 1.0665 | 0.745 |
| 0.1978 | 48.0 | 36000 | 1.1590 | 0.75 |
| 0.1441 | 49.0 | 36750 | 1.2474 | 0.7517 |
| 0.1372 | 50.0 | 37500 | 1.2831 | 0.745 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_10x_deit_tiny_rms_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1108
- Accuracy: 0.8983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
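
As a worked example of the linear schedule with `warmup_ratio` 0.1: the results table below logs 750 optimizer steps per epoch, so this 50-epoch run has 37500 steps in total and the warmup covers the first 3750 (through epoch 5). A sketch of the resulting learning-rate curve, assuming the standard linear-with-warmup behavior:

```python
# Worked example of the linear schedule with warmup_ratio = 0.1, assuming
# 750 optimizer steps per epoch (as logged in the results table below).
steps_per_epoch = 750
num_epochs = 50
total_steps = steps_per_epoch * num_epochs   # 37500
warmup_steps = int(0.1 * total_steps)        # 3750, i.e. the end of epoch 5

def lr_at(step: int, base_lr: float = 1e-5) -> float:
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(lr_at(warmup_steps))   # 1e-05 (peak learning rate)
print(lr_at(total_steps))    # 0.0 (final step)
```
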
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2467 | 1.0 | 750 | 0.2589 | 0.9033 |
| 0.1838 | 2.0 | 1500 | 0.3381 | 0.8833 |
| 0.1037 | 3.0 | 2250 | 0.3939 | 0.8817 |
| 0.0799 | 4.0 | 3000 | 0.4142 | 0.895 |
| 0.053 | 5.0 | 3750 | 0.4932 | 0.9 |
| 0.0314 | 6.0 | 4500 | 0.5727 | 0.8883 |
| 0.0421 | 7.0 | 5250 | 0.7156 | 0.8967 |
| 0.002 | 8.0 | 6000 | 0.7860 | 0.8983 |
| 0.001 | 9.0 | 6750 | 0.8016 | 0.9017 |
| 0.0004 | 10.0 | 7500 | 0.8581 | 0.8983 |
| 0.0041 | 11.0 | 8250 | 0.9068 | 0.8783 |
| 0.0193 | 12.0 | 9000 | 1.1367 | 0.875 |
| 0.0008 | 13.0 | 9750 | 0.9798 | 0.8917 |
| 0.0002 | 14.0 | 10500 | 1.0132 | 0.885 |
| 0.0001 | 15.0 | 11250 | 0.9596 | 0.9017 |
| 0.0 | 16.0 | 12000 | 0.9560 | 0.8967 |
| 0.0155 | 17.0 | 12750 | 0.9136 | 0.8967 |
| 0.0 | 18.0 | 13500 | 0.9761 | 0.8967 |
| 0.0 | 19.0 | 14250 | 1.1111 | 0.8983 |
| 0.0001 | 20.0 | 15000 | 1.0728 | 0.8933 |
| 0.0212 | 21.0 | 15750 | 0.9842 | 0.8917 |
| 0.0 | 22.0 | 16500 | 1.1318 | 0.88 |
| 0.0001 | 23.0 | 17250 | 0.9825 | 0.9 |
| 0.0 | 24.0 | 18000 | 1.1665 | 0.8917 |
| 0.0 | 25.0 | 18750 | 1.0643 | 0.8933 |
| 0.0 | 26.0 | 19500 | 1.0722 | 0.885 |
| 0.0 | 27.0 | 20250 | 1.1414 | 0.8867 |
| 0.0 | 28.0 | 21000 | 1.0530 | 0.89 |
| 0.0 | 29.0 | 21750 | 1.0157 | 0.89 |
| 0.0 | 30.0 | 22500 | 1.0643 | 0.8833 |
| 0.0001 | 31.0 | 23250 | 1.1945 | 0.88 |
| 0.0 | 32.0 | 24000 | 1.0688 | 0.885 |
| 0.0 | 33.0 | 24750 | 1.1471 | 0.8933 |
| 0.0001 | 34.0 | 25500 | 1.0005 | 0.8883 |
| 0.0 | 35.0 | 26250 | 1.0647 | 0.89 |
| 0.0 | 36.0 | 27000 | 1.0647 | 0.9 |
| 0.0 | 37.0 | 27750 | 1.1102 | 0.9 |
| 0.0 | 38.0 | 28500 | 1.0870 | 0.8967 |
| 0.0 | 39.0 | 29250 | 1.1068 | 0.89 |
| 0.0 | 40.0 | 30000 | 1.0975 | 0.9 |
| 0.0 | 41.0 | 30750 | 1.1055 | 0.895 |
| 0.0 | 42.0 | 31500 | 1.1130 | 0.9 |
| 0.0 | 43.0 | 32250 | 1.1059 | 0.8983 |
| 0.0 | 44.0 | 33000 | 1.1016 | 0.8967 |
| 0.0 | 45.0 | 33750 | 1.1080 | 0.8967 |
| 0.0 | 46.0 | 34500 | 1.1091 | 0.8967 |
| 0.0 | 47.0 | 35250 | 1.1121 | 0.8967 |
| 0.0 | 48.0 | 36000 | 1.1138 | 0.8983 |
| 0.0 | 49.0 | 36750 | 1.1129 | 0.8967 |
| 0.0 | 50.0 | 37500 | 1.1108 | 0.8983 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
SaladSlayer00/image_classification_resnet
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SaladSlayer00/image_classification_resnet
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2581
- Validation Loss: 1.6399
- Validation Accuracy: 0.5823
- Epoch: 11
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
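
A minimal sketch (not the original notebook) of recreating the optimizer configuration above with the TensorFlow utilities shipped in transformers; it requires TensorFlow to be installed.

```python
# A minimal sketch recreating the AdamWeightDecay configuration listed
# above; transformers' TF optimizer classes require TensorFlow.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=5e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```
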
### Training results
| Train Loss | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:-----:|
| 7.0750 | 4.8746 | 0.0090 | 0 |
| 4.6468 | 4.5229 | 0.0538 | 1 |
| 4.3211 | 4.1033 | 0.1209 | 2 |
| 3.8784 | 3.6736 | 0.1859 | 3 |
| 3.4274 | 3.2193 | 0.2419 | 4 |
| 3.0071 | 2.8524 | 0.3012 | 5 |
| 2.6239 | 2.5632 | 0.3651 | 6 |
| 2.2925 | 2.2959 | 0.4233 | 7 |
| 1.9792 | 2.1138 | 0.4882 | 8 |
| 1.7199 | 1.9271 | 0.5174 | 9 |
| 1.4845 | 1.7643 | 0.5666 | 10 |
| 1.2581 | 1.6399 | 0.5823 | 11 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"ben_affleck",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"tom_ellis",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"danielle_panabaker",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"madelaine_petsch",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"katharine_mcphee",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"camila_mendes",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"melissa_fumero",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"anthony_mackie",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"natalie_portman",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"cristiano_ronaldo",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"tom_hiddleston",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"logan_lerman",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"lili_reinhart",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"elon_musk",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"bobby_morley",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"brie_larson",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"josh_radnor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"eliza_taylor",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"alexandra_daddario",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"krysten_ritter",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"zendaya",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"jeff_bezos",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"gal_gadot",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"zoe_saldana",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"shakira_isabel_mebarak",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"mark_zuckerberg",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"marie_avgeropoulos",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"neil_patrick_harris",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"chris_hemsworth",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"elizabeth_lail",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"richard_harmon",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"chris_evans",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"kiernen_shipka",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"natalie_dormer",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"alvaro_morte",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"stephen_amell",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"alex_lawther",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"irina_shayk",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"amanda_crew",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"wentworth_miller",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"katherine_langford",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"penn_badgley",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"barack_obama",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"christian_bale",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"nadia_hilker",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"morena_baccarin",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"chris_pratt",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"anne_hathaway",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"emma_stone",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"ellen_page",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"robert_de_niro",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"tom_holland",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"sarah_wayne_callies",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"inbar_lavi",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"scarlett_johansson",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"tom_hardy",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"megan_fox",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"pedro_alonso",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"brenton_thwaites",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"keanu_reeves",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"andy_samberg",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"rebecca_ferguson",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"alycia_dabnem_carey",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"dwayne_johnson",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"rihanna",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"miley_cyrus",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"zac_efron",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"amber_heard",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"robert_downey_jr",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"leonardo_dicaprio",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"selena_gomez",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"barbara_palvin",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"emilia_clarke",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"morgan_freeman",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"gwyneth_paltrow",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"maria_pedraza",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"jeremy_renner",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"tom_cruise",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"jimmy_fallon",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"hugh_jackman",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"sophie_turner",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"tuppence_middleton",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jessica_barden",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"jennifer_lawrence",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"millie_bobby_brown",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"ursula_corbero",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"bill_gates",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"mark_ruffalo",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"avril_lavigne",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"maisie_williams",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"margot_robbie",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"elizabeth_olsen",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"brian_j._smith",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"grant_gustin",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"rami_malek",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"taylor_swift",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"emma_watson",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"jake_mcdorman",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"adriana_lima",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"henry_cavil",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"lindsey_morgan",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"dominic_purcell",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"jason_momoa",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"johnny_depp",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi",
"lionel_messi"
] |
yuanhuaisen/autotrain-5dt5f-vyqcu
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
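A minimal inference sketch, assuming this AutoTrain checkpoint is a standard image-classification model loadable via the `transformers` pipeline (the model id is taken from this card; the image path is hypothetical):

```python
# Hedged sketch: assumes the checkpoint works with the standard
# transformers image-classification pipeline API.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="yuanhuaisen/autotrain-5dt5f-vyqcu",
)
preds = classifier("example.jpg")  # hypothetical input image path
print(preds)  # e.g. [{"label": "...", "score": ...}, ...]
```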
## Validation Metrics
loss: 1.0895181894302368
f1_macro: 0.16666666666666666
f1_micro: 0.3333333333333333
f1_weighted: 0.16666666666666666
precision_macro: 0.1111111111111111
precision_micro: 0.3333333333333333
precision_weighted: 0.1111111111111111
recall_macro: 0.3333333333333333
recall_micro: 0.3333333333333333
recall_weighted: 0.3333333333333333
accuracy: 0.3333333333333333
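A minimal sketch (an interpretation, not stated in the card): these exact numbers arise when a model predicts one single class for every sample of a balanced 3-class validation set, which suggests the classifier collapsed to a constant prediction.

```python
# Hedged sketch: reproduces the reported metrics with a degenerate
# predictor that always outputs class 0 on a balanced 3-class toy set.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)

y_true = [0, 1, 2]  # one sample per class (balanced toy set)
y_pred = [0, 0, 0]  # constant predictor: always class 0

print(f1_score(y_true, y_pred, average="macro", zero_division=0))   # 0.1667 (f1_macro)
print(f1_score(y_true, y_pred, average="micro", zero_division=0))   # 0.3333 (f1_micro)
print(precision_score(y_true, y_pred, average="macro", zero_division=0))  # 0.1111
print(recall_score(y_true, y_pred, average="macro", zero_division=0))     # 0.3333
print(accuracy_score(y_true, y_pred))                                     # 0.3333
```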
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |