Columns: model_id — string (7–105 chars) · model_card — string (1–130k chars) · model_labels — list (2–80k items)
huggingspark/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.1.0+cu121 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
hkivancoral/smids_10x_deit_tiny_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_tiny_rms_0001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.3327 - Accuracy: 0.8367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7568 | 1.0 | 750 | 0.7218 | 0.6717 | | 0.6894 | 2.0 | 1500 | 0.6565 | 0.6583 | | 0.6167 | 3.0 | 2250 | 0.6175 | 0.7017 | | 0.6218 | 4.0 | 3000 | 0.6221 | 0.705 | | 0.5637 | 5.0 | 3750 | 0.5565 | 0.7667 | | 0.5037 | 6.0 | 4500 | 0.5481 | 0.7533 | | 0.441 | 7.0 | 5250 | 0.5584 | 0.7667 | | 0.5186 | 8.0 | 6000 | 0.5150 | 0.78 | | 0.4475 | 9.0 | 6750 | 0.5478 | 0.775 | | 0.4117 | 10.0 | 7500 | 0.5408 | 0.7867 | | 0.4227 | 11.0 | 8250 | 0.4928 | 0.8017 | | 0.4314 | 12.0 | 9000 | 0.5195 | 0.785 | | 0.3606 | 13.0 | 9750 | 0.4998 | 0.8033 | | 0.3206 | 14.0 | 10500 | 0.5323 | 0.8083 | | 0.2628 | 15.0 | 11250 | 0.5344 | 0.79 | | 0.2274 | 16.0 | 12000 | 0.5778 | 0.81 | | 0.1649 | 17.0 | 12750 | 0.5559 | 0.8283 | | 0.2197 | 18.0 | 13500 | 0.5838 | 0.8017 | | 0.242 | 19.0 | 14250 | 0.5757 | 0.83 | | 0.178 | 20.0 | 15000 | 0.6143 | 0.8183 | | 0.1596 | 21.0 | 15750 | 0.6500 | 0.8267 | | 0.1681 | 22.0 | 16500 | 0.7191 | 0.83 | | 0.1356 | 23.0 | 17250 | 0.6652 | 0.83 | | 0.154 | 24.0 | 18000 | 0.7572 | 0.835 | | 0.0973 | 25.0 | 18750 | 0.7886 | 0.8283 | | 0.1206 | 26.0 | 19500 | 0.9030 | 0.8033 | | 0.0856 | 27.0 | 20250 | 1.0266 | 0.8083 | | 0.0629 | 28.0 | 21000 | 0.8154 | 0.8333 | | 0.0803 | 29.0 | 21750 | 1.0582 | 0.8133 | | 0.0608 | 30.0 | 22500 | 1.1240 | 0.8317 | | 0.0468 | 31.0 | 23250 | 1.1197 | 0.8183 | | 0.0343 | 32.0 | 24000 | 1.2322 | 0.8217 | | 0.0156 | 33.0 | 24750 | 1.3344 | 0.8367 | | 0.0192 | 34.0 | 25500 | 1.3961 | 0.8133 | | 0.0219 | 35.0 | 26250 | 1.5315 | 0.8033 | | 0.0147 | 36.0 | 27000 | 1.5425 | 0.8233 | | 0.0123 | 37.0 | 27750 | 1.6413 | 0.835 | | 0.0089 | 38.0 | 28500 | 1.7045 | 0.8167 | | 0.0003 | 39.0 | 29250 | 1.6054 | 0.8183 | | 0.0102 | 40.0 | 30000 | 1.6942 | 0.825 | | 0.0008 | 41.0 | 30750 | 1.7260 | 0.84 | | 0.0077 | 42.0 | 31500 | 1.9643 | 0.8217 | | 0.0048 | 43.0 | 32250 | 2.0335 | 0.825 | | 0.0015 | 44.0 | 33000 | 2.2512 | 0.8367 | | 0.0015 | 45.0 | 33750 | 2.1796 | 0.8333 | | 0.0001 | 46.0 | 34500 | 2.2799 | 0.83 | | 0.0 | 47.0 | 35250 | 2.2493 | 0.8317 | | 0.0 | 48.0 | 36000 | 2.3177 | 0.8417 | | 0.0 | 49.0 | 36750 | 2.3130 | 0.8317 | | 0.0 | 50.0 | 37500 | 2.3327 | 0.8367 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_tiny_rms_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_tiny_rms_00001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3970 - Accuracy: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2502 | 1.0 | 750 | 0.3491 | 0.8717 | | 0.1602 | 2.0 | 1500 | 0.3787 | 0.8617 | | 0.0975 | 3.0 | 2250 | 0.4137 | 0.8717 | | 0.0908 | 4.0 | 3000 | 0.4782 | 0.8767 | | 0.061 | 5.0 | 3750 | 0.6214 | 0.88 | | 0.0396 | 6.0 | 4500 | 0.7566 | 0.885 | | 0.062 | 7.0 | 5250 | 0.8425 | 0.8683 | | 0.0007 | 8.0 | 6000 | 0.8436 | 0.8817 | | 0.0025 | 9.0 | 6750 | 0.9779 | 0.88 | | 0.0255 | 10.0 | 7500 | 1.1344 | 0.8683 | | 0.0352 | 11.0 | 8250 | 1.1423 | 0.8633 | | 0.0318 | 12.0 | 9000 | 1.0833 | 0.865 | | 0.0002 | 13.0 | 9750 | 1.1038 | 0.8767 | | 0.0084 | 14.0 | 10500 | 1.0965 | 0.87 | | 0.0351 | 15.0 | 11250 | 1.1367 | 0.8717 | | 0.0252 | 16.0 | 12000 | 1.2686 | 0.875 | | 0.0093 | 17.0 | 12750 | 1.0965 | 0.8833 | | 0.0 | 18.0 | 13500 | 1.2326 | 0.8683 | | 0.0001 | 19.0 | 14250 | 1.3027 | 0.8683 | | 0.0 | 20.0 | 15000 | 1.1789 | 0.8783 | | 0.0 | 21.0 | 15750 | 1.2125 | 0.875 | | 0.0 | 22.0 | 16500 | 1.1748 | 0.8767 | | 0.0001 | 23.0 | 17250 | 1.1790 | 0.87 | | 0.0 | 24.0 | 18000 | 1.3742 | 0.8733 | | 0.0 | 25.0 | 18750 | 1.2377 | 0.8783 | | 0.0003 | 26.0 | 19500 | 1.3192 | 0.8783 | | 0.0 | 27.0 | 20250 | 1.2703 | 0.8817 | | 0.0001 | 28.0 | 21000 | 1.2600 | 0.8833 | | 0.0 | 29.0 | 21750 | 1.2702 | 0.8867 | | 0.0 | 30.0 | 22500 | 1.2442 | 0.8917 | | 0.0 | 31.0 | 23250 | 1.2963 | 0.8817 | | 0.0 | 32.0 | 24000 | 1.4012 | 0.88 | | 0.0 | 33.0 | 24750 | 1.4514 | 0.8767 | | 0.0 | 34.0 | 25500 | 1.4381 | 0.8733 | | 0.0 | 35.0 | 26250 | 1.3220 | 0.8783 | | 0.0 | 36.0 | 27000 | 1.3859 | 0.8767 | | 0.0 | 37.0 | 27750 | 1.2544 | 0.8817 | | 0.0 | 38.0 | 28500 | 1.2536 | 0.8817 | | 0.0 | 39.0 | 29250 | 1.3812 | 0.88 | | 0.0 | 40.0 | 30000 | 1.3350 | 0.8733 | | 0.0 | 41.0 | 30750 | 1.4121 | 0.8733 | | 0.0 | 42.0 | 31500 | 1.4110 | 0.8733 | | 0.0 | 43.0 | 32250 | 1.4115 | 0.875 | | 0.0 | 44.0 | 33000 | 1.3934 | 0.8783 | | 0.0 | 45.0 | 33750 | 1.3917 | 0.88 | | 0.0 | 46.0 | 34500 | 1.3899 | 0.88 | | 0.0 | 47.0 | 35250 | 1.3925 | 0.88 | | 0.0 | 48.0 | 36000 | 1.3926 | 0.88 | | 0.0 | 49.0 | 36750 | 1.3955 | 0.875 | | 0.0 | 50.0 | 37500 | 1.3970 | 0.875 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_tiny_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_tiny_rms_0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4754 - Accuracy: 0.83 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8611 | 1.0 | 750 | 0.8101 | 0.59 | | 0.7337 | 2.0 | 1500 | 0.7328 | 0.5933 | | 0.6092 | 3.0 | 2250 | 0.7436 | 0.6133 | | 0.4788 | 4.0 | 3000 | 0.6067 | 0.7117 | | 0.5206 | 5.0 | 3750 | 0.5059 | 0.79 | | 0.4534 | 6.0 | 4500 | 0.5164 | 0.7733 | | 0.4278 | 7.0 | 5250 | 0.5196 | 0.7883 | | 0.421 | 8.0 | 6000 | 0.4508 | 0.82 | | 0.3903 | 9.0 | 6750 | 0.4527 | 0.8067 | | 0.3955 | 10.0 | 7500 | 0.4845 | 0.7867 | | 0.4005 | 11.0 | 8250 | 0.4149 | 0.835 | | 0.3221 | 12.0 | 9000 | 0.4996 | 0.815 | | 0.3587 | 13.0 | 9750 | 0.4119 | 0.8433 | | 0.2736 | 14.0 | 10500 | 0.4909 | 0.85 | | 0.2907 | 15.0 | 11250 | 0.4508 | 0.8217 | | 0.2288 | 16.0 | 12000 | 0.5192 | 0.825 | | 0.2378 | 17.0 | 12750 | 0.4692 | 0.8383 | | 0.2297 | 18.0 | 13500 | 0.5883 | 0.8217 | | 0.2065 | 19.0 | 14250 | 0.4741 | 0.8483 | | 0.1668 | 20.0 | 15000 | 0.5264 | 0.82 | | 0.1867 | 21.0 | 15750 | 0.5865 | 0.8217 | | 0.1397 | 22.0 | 16500 | 0.5812 | 0.8283 | | 0.15 | 23.0 | 17250 | 0.7165 | 0.83 | | 0.083 | 24.0 | 18000 | 0.6760 | 0.8267 | | 0.3309 | 25.0 | 18750 | 0.8204 | 0.79 | | 0.6907 | 26.0 | 19500 | 0.6426 | 0.7217 | | 0.4092 | 27.0 | 20250 | 0.7365 | 0.72 | | 0.4562 | 28.0 | 21000 | 0.4704 | 0.7817 | | 0.4588 | 29.0 | 21750 | 0.4367 | 0.7983 | | 0.4089 | 30.0 | 22500 | 0.5011 | 0.7883 | | 0.4005 | 31.0 | 23250 | 0.5056 | 0.805 | | 0.2986 | 32.0 | 24000 | 0.4328 | 0.83 | | 0.3319 | 33.0 | 24750 | 0.5242 | 0.805 | | 0.2453 | 34.0 | 25500 | 0.4883 | 0.8383 | | 0.2459 | 35.0 | 26250 | 0.4886 | 0.8233 | | 0.2873 | 36.0 | 27000 | 0.5046 | 0.8217 | | 0.1814 | 37.0 | 27750 | 0.5539 | 0.8133 | | 0.2073 | 38.0 | 28500 | 0.5379 | 0.8233 | | 0.2284 | 39.0 | 29250 | 0.5765 | 0.815 | | 0.1577 | 40.0 | 30000 | 0.6100 | 0.8333 | | 0.1325 | 41.0 | 30750 | 0.6730 | 0.8333 | | 0.1239 | 42.0 | 31500 | 0.7561 | 0.8333 | | 0.0741 | 43.0 | 32250 | 0.8428 | 0.8217 | | 0.0999 | 44.0 | 33000 | 0.9214 | 0.8167 | | 0.0863 | 45.0 | 33750 | 0.9203 | 0.8367 | | 0.0855 | 46.0 | 34500 | 1.0656 | 0.83 | | 0.064 | 47.0 | 35250 | 1.2523 | 0.815 | | 0.0407 | 48.0 | 36000 | 1.2659 | 0.83 | | 0.0522 | 49.0 | 36750 | 1.4174 | 0.835 | | 0.0025 | 50.0 | 37500 | 1.4754 | 0.83 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_tiny_rms_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_tiny_rms_00001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0880 - Accuracy: 0.9017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2267 | 1.0 | 750 | 0.2310 | 0.9017 | | 0.1688 | 2.0 | 1500 | 0.3133 | 0.8767 | | 0.1207 | 3.0 | 2250 | 0.3307 | 0.8917 | | 0.0544 | 4.0 | 3000 | 0.3393 | 0.8933 | | 0.0685 | 5.0 | 3750 | 0.4794 | 0.8817 | | 0.0086 | 6.0 | 4500 | 0.5790 | 0.8967 | | 0.0247 | 7.0 | 5250 | 0.6674 | 0.895 | | 0.0176 | 8.0 | 6000 | 0.8292 | 0.89 | | 0.0446 | 9.0 | 6750 | 0.8047 | 0.8933 | | 0.0267 | 10.0 | 7500 | 0.8263 | 0.8983 | | 0.0376 | 11.0 | 8250 | 0.8873 | 0.9 | | 0.0007 | 12.0 | 9000 | 0.9745 | 0.8883 | | 0.0033 | 13.0 | 9750 | 0.8924 | 0.9033 | | 0.0025 | 14.0 | 10500 | 0.8544 | 0.905 | | 0.0004 | 15.0 | 11250 | 1.0444 | 0.8967 | | 0.0002 | 16.0 | 12000 | 1.0224 | 0.8933 | | 0.0105 | 17.0 | 12750 | 1.0131 | 0.8883 | | 0.014 | 18.0 | 13500 | 0.9525 | 0.9033 | | 0.0 | 19.0 | 14250 | 0.9850 | 0.8983 | | 0.0 | 20.0 | 15000 | 1.1461 | 0.89 | | 0.003 | 21.0 | 15750 | 0.9659 | 0.9033 | | 0.0004 | 22.0 | 16500 | 1.0728 | 0.8967 | | 0.0257 | 23.0 | 17250 | 1.0751 | 0.9 | | 0.006 | 24.0 | 18000 | 1.1593 | 0.8933 | | 0.0242 | 25.0 | 18750 | 1.1220 | 0.8917 | | 0.0 | 26.0 | 19500 | 1.0931 | 0.9033 | | 0.0 | 27.0 | 20250 | 1.0693 | 0.9017 | | 0.0 | 28.0 | 21000 | 1.2173 | 0.88 | | 0.0 | 29.0 | 21750 | 1.0569 | 0.905 | | 0.0118 | 30.0 | 22500 | 1.1864 | 0.8917 | | 0.0 | 31.0 | 23250 | 1.2141 | 0.8967 | | 0.0 | 32.0 | 24000 | 1.1888 | 0.8933 | | 0.0 | 33.0 | 24750 | 1.1513 | 0.8983 | | 0.0 | 34.0 | 25500 | 1.2676 | 0.89 | | 0.0 | 35.0 | 26250 | 1.1568 | 0.8983 | | 0.0 | 36.0 | 27000 | 1.1800 | 0.89 | | 0.0068 | 37.0 | 27750 | 1.1482 | 0.9033 | | 0.0 | 38.0 | 28500 | 1.0989 | 0.9 | | 0.0 | 39.0 | 29250 | 1.1226 | 0.9 | | 0.0 | 40.0 | 30000 | 1.1146 | 0.8983 | | 0.0 | 41.0 | 30750 | 1.0950 | 0.9 | | 0.0 | 42.0 | 31500 | 1.0906 | 0.9 | | 0.0 | 43.0 | 32250 | 1.0973 | 0.9 | | 0.0 | 44.0 | 33000 | 1.0952 | 0.9 | | 0.0 | 45.0 | 33750 | 1.0896 | 0.8983 | | 0.0 | 46.0 | 34500 | 1.0935 | 0.8983 | | 0.0 | 47.0 | 35250 | 1.0921 | 0.8983 | | 0.0 | 48.0 | 36000 | 1.0899 | 0.8983 | | 0.0 | 49.0 | 36750 | 1.0869 | 0.9017 | | 0.0 | 50.0 | 37500 | 1.0880 | 0.9017 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/hushem_40x_deit_tiny_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.2299 - Accuracy: 0.7333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2388 | 1.0 | 215 | 1.5575 | 0.6444 | | 0.1132 | 2.0 | 430 | 1.5074 | 0.7111 | | 0.0448 | 3.0 | 645 | 1.5221 | 0.7111 | | 0.0294 | 4.0 | 860 | 2.1715 | 0.6444 | | 0.0359 | 5.0 | 1075 | 2.5882 | 0.6667 | | 0.041 | 6.0 | 1290 | 3.4111 | 0.6667 | | 0.0762 | 7.0 | 1505 | 2.4705 | 0.6444 | | 0.014 | 8.0 | 1720 | 2.7547 | 0.6222 | | 0.0243 | 9.0 | 1935 | 2.3966 | 0.6889 | | 0.0008 | 10.0 | 2150 | 2.0479 | 0.6889 | | 0.0216 | 11.0 | 2365 | 2.7964 | 0.6889 | | 0.0033 | 12.0 | 2580 | 1.0097 | 0.8 | | 0.0068 | 13.0 | 2795 | 2.2109 | 0.7556 | | 0.0003 | 14.0 | 3010 | 2.4149 | 0.6889 | | 0.0104 | 15.0 | 3225 | 2.3252 | 0.6889 | | 0.0005 | 16.0 | 3440 | 1.9451 | 0.8 | | 0.0158 | 17.0 | 3655 | 2.3698 | 0.7333 | | 0.0 | 18.0 | 3870 | 2.5005 | 0.6889 | | 0.0001 | 19.0 | 4085 | 2.8562 | 0.7333 | | 0.0002 | 20.0 | 4300 | 2.0511 | 0.7778 | | 0.0 | 21.0 | 4515 | 2.4064 | 0.6889 | | 0.0 | 22.0 | 4730 | 2.3086 | 0.7111 | | 0.0 | 23.0 | 4945 | 2.2846 | 0.7111 | | 0.0 | 24.0 | 5160 | 2.2636 | 0.7111 | | 0.0 | 25.0 | 5375 | 2.2498 | 0.7111 | | 0.0 | 26.0 | 5590 | 2.2376 | 0.7111 | | 0.0 | 27.0 | 5805 | 2.2259 | 0.7111 | | 0.0 | 28.0 | 6020 | 2.2168 | 0.7111 | | 0.0 | 29.0 | 6235 | 2.2093 | 0.7111 | | 0.0 | 30.0 | 6450 | 2.2030 | 0.7111 | | 0.0 | 31.0 | 6665 | 2.1968 | 0.7111 | | 0.0 | 32.0 | 6880 | 2.1940 | 0.7111 | | 0.0 | 33.0 | 7095 | 2.1890 | 0.7111 | | 0.0 | 34.0 | 7310 | 2.1860 | 0.7111 | | 0.0 | 35.0 | 7525 | 2.1850 | 0.7111 | | 0.0 | 36.0 | 7740 | 2.1848 | 0.7333 | | 0.0 | 37.0 | 7955 | 2.1855 | 0.7333 | | 0.0 | 38.0 | 8170 | 2.1862 | 0.7333 | | 0.0 | 39.0 | 8385 | 2.1884 | 0.7333 | | 0.0 | 40.0 | 8600 | 2.1906 | 0.7333 | | 0.0 | 41.0 | 8815 | 2.1950 | 0.7333 | | 0.0 | 42.0 | 9030 | 2.1985 | 0.7333 | | 0.0 | 43.0 | 9245 | 2.2034 | 0.7333 | | 0.0 | 44.0 | 9460 | 2.2083 | 0.7333 | | 0.0 | 45.0 | 9675 | 2.2137 | 0.7333 | | 0.0 | 46.0 | 9890 | 2.2189 | 0.7333 | | 0.0 | 47.0 | 10105 | 2.2227 | 0.7333 | | 0.0 | 48.0 | 10320 | 2.2266 | 0.7333 | | 0.0 | 49.0 | 10535 | 2.2294 | 0.7333 | | 0.0 | 50.0 | 10750 | 2.2299 | 0.7333 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_0001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6972 - Accuracy: 0.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0059 | 1.0 | 215 | 1.4884 | 0.6667 | | 0.0007 | 2.0 | 430 | 0.8474 | 0.7556 | | 0.0002 | 3.0 | 645 | 1.0073 | 0.8 | | 0.0 | 4.0 | 860 | 0.9250 | 0.8 | | 0.0 | 5.0 | 1075 | 0.9207 | 0.8 | | 0.0 | 6.0 | 1290 | 0.9264 | 0.8 | | 0.0 | 7.0 | 1505 | 0.9356 | 0.8 | | 0.0 | 8.0 | 1720 | 0.9374 | 0.8 | | 0.0 | 9.0 | 1935 | 0.9494 | 0.8 | | 0.0 | 10.0 | 2150 | 0.9562 | 0.8 | | 0.0 | 11.0 | 2365 | 0.9850 | 0.8 | | 0.0 | 12.0 | 2580 | 0.9946 | 0.8 | | 0.0 | 13.0 | 2795 | 1.0078 | 0.8 | | 0.0 | 14.0 | 3010 | 1.0232 | 0.8 | | 0.0 | 15.0 | 3225 | 1.0338 | 0.8 | | 0.0 | 16.0 | 3440 | 1.0587 | 0.8 | | 0.0 | 17.0 | 3655 | 1.0729 | 0.8 | | 0.0 | 18.0 | 3870 | 1.0955 | 0.8 | | 0.0 | 19.0 | 4085 | 1.1141 | 0.8 | | 0.0 | 20.0 | 4300 | 1.1377 | 0.8 | | 0.0 | 21.0 | 4515 | 1.1576 | 0.8 | | 0.0 | 22.0 | 4730 | 1.1774 | 0.8 | | 0.0 | 23.0 | 4945 | 1.1940 | 0.8 | | 0.0 | 24.0 | 5160 | 1.2165 | 0.8 | | 0.0 | 25.0 | 5375 | 1.2327 | 0.8 | | 0.0 | 26.0 | 5590 | 1.2591 | 0.8 | | 0.0 | 27.0 | 5805 | 1.2789 | 0.8 | | 0.0 | 28.0 | 6020 | 1.3041 | 0.8 | | 0.0 | 29.0 | 6235 | 1.3242 | 0.8 | | 0.0 | 30.0 | 6450 | 1.3543 | 0.8 | | 0.0 | 31.0 | 6665 | 1.3823 | 0.8 | | 0.0 | 32.0 | 6880 | 1.4030 | 0.8 | | 0.0 | 33.0 | 7095 | 1.4211 | 0.8 | | 0.0 | 34.0 | 7310 | 1.4411 | 0.8 | | 0.0 | 35.0 | 7525 | 1.4760 | 0.8 | | 0.0 | 36.0 | 7740 | 1.4941 | 0.8 | | 0.0 | 37.0 | 7955 | 1.5110 | 0.8 | | 0.0 | 38.0 | 8170 | 1.5338 | 0.8 | | 0.0 | 39.0 | 8385 | 1.5520 | 0.8 | | 0.0 | 40.0 | 8600 | 1.5687 | 0.8 | | 0.0 | 41.0 | 8815 | 1.5911 | 0.8 | | 0.0 | 42.0 | 9030 | 1.6052 | 0.8 | | 0.0 | 43.0 | 9245 | 1.6245 | 0.8 | | 0.0 | 44.0 | 9460 | 1.6414 | 0.8 | | 0.0 | 45.0 | 9675 | 1.6584 | 0.8 | | 0.0 | 46.0 | 9890 | 1.6706 | 0.8 | | 0.0 | 47.0 | 10105 | 1.6833 | 0.8 | | 0.0 | 48.0 | 10320 | 1.6907 | 0.8 | | 0.0 | 49.0 | 10535 | 1.6958 | 0.8 | | 0.0 | 50.0 | 10750 | 1.6972 | 0.8 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7272 - Accuracy: 0.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1818 | 1.0 | 215 | 1.1690 | 0.6667 | | 0.0471 | 2.0 | 430 | 1.2084 | 0.7556 | | 0.0323 | 3.0 | 645 | 1.4622 | 0.7556 | | 0.0035 | 4.0 | 860 | 1.8637 | 0.6667 | | 0.0576 | 5.0 | 1075 | 1.1848 | 0.7778 | | 0.0306 | 6.0 | 1290 | 2.1315 | 0.6667 | | 0.0267 | 7.0 | 1505 | 1.9833 | 0.6889 | | 0.0299 | 8.0 | 1720 | 2.0731 | 0.6444 | | 0.0 | 9.0 | 1935 | 1.0026 | 0.7778 | | 0.0 | 10.0 | 2150 | 1.0394 | 0.7778 | | 0.0 | 11.0 | 2365 | 1.0442 | 0.8 | | 0.0 | 12.0 | 2580 | 1.0674 | 0.8 | | 0.0 | 13.0 | 2795 | 1.0839 | 0.8 | | 0.0 | 14.0 | 3010 | 1.1001 | 0.8 | | 0.0 | 15.0 | 3225 | 1.1157 | 0.8 | | 0.0 | 16.0 | 3440 | 1.1301 | 0.8 | | 0.0 | 17.0 | 3655 | 1.1460 | 0.8 | | 0.0 | 18.0 | 3870 | 1.1598 | 0.8 | | 0.0 | 19.0 | 4085 | 1.1755 | 0.8 | | 0.0 | 20.0 | 4300 | 1.1926 | 0.8 | | 0.0 | 21.0 | 4515 | 1.2103 | 0.8 | | 0.0 | 22.0 | 4730 | 1.2273 | 0.7778 | | 0.0 | 23.0 | 4945 | 1.2450 | 0.7778 | | 0.0 | 24.0 | 5160 | 1.2642 | 0.7778 | | 0.0 | 25.0 | 5375 | 1.2848 | 0.7778 | | 0.0 | 26.0 | 5590 | 1.3072 | 0.7778 | | 0.0 | 27.0 | 5805 | 1.3310 | 0.7778 | | 0.0 | 28.0 | 6020 | 1.3521 | 0.7778 | | 0.0 | 29.0 | 6235 | 1.3737 | 0.7778 | | 0.0 | 30.0 | 6450 | 1.3961 | 0.7778 | | 0.0 | 31.0 | 6665 | 1.4197 | 0.7778 | | 0.0 | 32.0 | 6880 | 1.4440 | 0.7778 | | 0.0 | 33.0 | 7095 | 1.4721 | 0.7778 | | 0.0 | 34.0 | 7310 | 1.5015 | 0.7778 | | 0.0 | 35.0 | 7525 | 1.5290 | 0.7778 | | 0.0 | 36.0 | 7740 | 1.5563 | 0.7778 | | 0.0 | 37.0 | 7955 | 1.5831 | 0.7778 | | 0.0 | 38.0 | 8170 | 1.6075 | 0.7778 | | 0.0 | 39.0 | 8385 | 1.6265 | 0.8 | | 0.0 | 40.0 | 8600 | 1.6444 | 0.8 | | 0.0 | 41.0 | 8815 | 1.6599 | 0.8 | | 0.0 | 42.0 | 9030 | 1.6734 | 0.8 | | 0.0 | 43.0 | 9245 | 1.6847 | 0.8 | | 0.0 | 44.0 | 9460 | 1.6936 | 0.8 | | 0.0 | 45.0 | 9675 | 1.7031 | 0.8 | | 0.0 | 46.0 | 9890 | 1.7109 | 0.8 | | 0.0 | 47.0 | 10105 | 1.7178 | 0.8 | | 0.0 | 48.0 | 10320 | 1.7232 | 0.8 | | 0.0 | 49.0 | 10535 | 1.7265 | 0.8 | | 0.0 | 50.0 | 10750 | 1.7272 | 0.8 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2217 - Accuracy: 0.8837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3354 | 1.0 | 217 | 0.9126 | 0.6744 | | 0.1561 | 2.0 | 434 | 1.0121 | 0.6977 | | 0.0656 | 3.0 | 651 | 1.0046 | 0.7442 | | 0.07 | 4.0 | 868 | 0.8401 | 0.7674 | | 0.0448 | 5.0 | 1085 | 1.1419 | 0.7907 | | 0.0667 | 6.0 | 1302 | 1.2686 | 0.8372 | | 0.0198 | 7.0 | 1519 | 1.1351 | 0.8372 | | 0.009 | 8.0 | 1736 | 1.6617 | 0.8140 | | 0.0113 | 9.0 | 1953 | 1.1646 | 0.8372 | | 0.0284 | 10.0 | 2170 | 0.8325 | 0.9302 | | 0.0093 | 11.0 | 2387 | 1.2543 | 0.8372 | | 0.0217 | 12.0 | 2604 | 1.5838 | 0.8140 | | 0.0059 | 13.0 | 2821 | 1.3486 | 0.8605 | | 0.0271 | 14.0 | 3038 | 0.9639 | 0.8837 | | 0.0039 | 15.0 | 3255 | 0.9754 | 0.9070 | | 0.0 | 16.0 | 3472 | 1.4140 | 0.8140 | | 0.0 | 17.0 | 3689 | 1.2390 | 0.8837 | | 0.0011 | 18.0 | 3906 | 1.3250 | 0.8605 | | 0.0004 | 19.0 | 4123 | 1.2826 | 0.8372 | | 0.0002 | 20.0 | 4340 | 1.1795 | 0.8605 | | 0.0 | 21.0 | 4557 | 0.9932 | 0.8837 | | 0.0 | 22.0 | 4774 | 0.9999 | 0.8837 | | 0.0 | 23.0 | 4991 | 1.0055 | 0.8837 | | 0.0 | 24.0 | 5208 | 1.0099 | 0.8837 | | 0.0 | 25.0 | 5425 | 1.0179 | 0.8837 | | 0.0 | 26.0 | 5642 | 1.0241 | 0.8837 | | 0.0 | 27.0 | 5859 | 1.0302 | 0.8837 | | 0.0 | 28.0 | 6076 | 1.0378 | 0.8837 | | 0.0 | 29.0 | 6293 | 1.0445 | 0.8837 | | 0.0 | 30.0 | 6510 | 1.0519 | 0.8837 | | 0.0 | 31.0 | 6727 | 1.0608 | 0.8837 | | 0.0 | 32.0 | 6944 | 1.0697 | 0.8837 | | 0.0 | 33.0 | 7161 | 1.0787 | 0.8837 | | 0.0 | 34.0 | 7378 | 1.0879 | 0.8837 | | 0.0 | 35.0 | 7595 | 1.0974 | 0.8837 | | 0.0 | 36.0 | 7812 | 1.1074 | 0.8837 | | 0.0 | 37.0 | 8029 | 1.1171 | 0.8837 | | 0.0 | 38.0 | 8246 | 1.1279 | 0.8837 | | 0.0 | 39.0 | 8463 | 1.1383 | 0.8837 | | 0.0 | 40.0 | 8680 | 1.1481 | 0.8837 | | 0.0 | 41.0 | 8897 | 1.1586 | 0.8837 | | 0.0 | 42.0 | 9114 | 1.1687 | 0.8837 | | 0.0 | 43.0 | 9331 | 1.1785 | 0.8837 | | 0.0 | 44.0 | 9548 | 1.1883 | 0.8837 | | 0.0 | 45.0 | 9765 | 1.1969 | 0.8837 | | 0.0 | 46.0 | 9982 | 1.2048 | 0.8837 | | 0.0 | 47.0 | 10199 | 1.2119 | 0.8837 | | 0.0 | 48.0 | 10416 | 1.2174 | 0.8837 | | 0.0 | 49.0 | 10633 | 1.2210 | 0.8837 | | 0.0 | 50.0 | 10850 | 1.2217 | 0.8837 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_0001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.0670 - Accuracy: 0.7556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.002 | 1.0 | 215 | 1.2697 | 0.7333 | | 0.0002 | 2.0 | 430 | 0.9917 | 0.8444 | | 0.0001 | 3.0 | 645 | 1.1998 | 0.7556 | | 0.0 | 4.0 | 860 | 1.2338 | 0.7778 | | 0.0 | 5.0 | 1075 | 1.2738 | 0.7778 | | 0.0 | 6.0 | 1290 | 1.2969 | 0.7556 | | 0.0 | 7.0 | 1505 | 1.3149 | 0.7556 | | 0.0 | 8.0 | 1720 | 1.3423 | 0.7333 | | 0.0 | 9.0 | 1935 | 1.3583 | 0.7333 | | 0.0 | 10.0 | 2150 | 1.3912 | 0.7333 | | 0.0 | 11.0 | 2365 | 1.4063 | 0.7333 | | 0.0 | 12.0 | 2580 | 1.4145 | 0.7333 | | 0.0 | 13.0 | 2795 | 1.4372 | 0.7333 | | 0.0 | 14.0 | 3010 | 1.4564 | 0.7333 | | 0.0 | 15.0 | 3225 | 1.4726 | 0.7556 | | 0.0 | 16.0 | 3440 | 1.4921 | 0.7556 | | 0.0 | 17.0 | 3655 | 1.5141 | 0.7556 | | 0.0 | 18.0 | 3870 | 1.5335 | 0.7556 | | 0.0 | 19.0 | 4085 | 1.5550 | 0.7556 | | 0.0 | 20.0 | 4300 | 1.5712 | 0.7556 | | 0.0 | 21.0 | 4515 | 1.5913 | 0.7556 | | 0.0 | 22.0 | 4730 | 1.6117 | 0.7556 | | 0.0 | 23.0 | 4945 | 1.6330 | 0.7556 | | 0.0 | 24.0 | 5160 | 1.6556 | 0.7556 | | 0.0 | 25.0 | 5375 | 1.6731 | 0.7556 | | 0.0 | 26.0 | 5590 | 1.6917 | 0.7333 | | 0.0 | 27.0 | 5805 | 1.7181 | 0.7556 | | 0.0 | 28.0 | 6020 | 1.7381 | 0.7556 | | 0.0 | 29.0 | 6235 | 1.7621 | 0.7333 | | 0.0 | 30.0 | 6450 | 1.7829 | 0.7556 | | 0.0 | 31.0 | 6665 | 1.8067 | 0.7556 | | 0.0 | 32.0 | 6880 | 1.8347 | 0.7556 | | 0.0 | 33.0 | 7095 | 1.8539 | 0.7556 | | 0.0 | 34.0 | 7310 | 1.8794 | 0.7556 | | 0.0 | 35.0 | 7525 | 1.9029 | 0.7556 | | 0.0 | 36.0 | 7740 | 1.9298 | 0.7556 | | 0.0 | 37.0 | 7955 | 1.9525 | 0.7556 | | 0.0 | 38.0 | 8170 | 1.9656 | 0.7556 | | 0.0 | 39.0 | 8385 | 1.9838 | 0.7556 | | 0.0 | 40.0 | 8600 | 2.0019 | 0.7556 | | 0.0 | 41.0 | 8815 | 2.0209 | 0.7556 | | 0.0 | 42.0 | 9030 | 2.0377 | 0.7556 | | 0.0 | 43.0 | 9245 | 2.0436 | 0.7556 | | 0.0 | 44.0 | 9460 | 2.0515 | 0.7556 | | 0.0 | 45.0 | 9675 | 2.0554 | 0.7556 | | 0.0 | 46.0 | 9890 | 2.0579 | 0.7556 | | 0.0 | 47.0 | 10105 | 2.0613 | 0.7556 | | 0.0 | 48.0 | 10320 | 2.0650 | 0.7556 | | 0.0 | 49.0 | 10535 | 2.0662 | 0.7556 | | 0.0 | 50.0 | 10750 | 2.0670 | 0.7556 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.0511 - Accuracy: 0.7778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.091 | 1.0 | 215 | 1.8850 | 0.7111 | | 0.0695 | 2.0 | 430 | 1.9470 | 0.6444 | | 0.0617 | 3.0 | 645 | 1.5137 | 0.7556 | | 0.0146 | 4.0 | 860 | 1.8770 | 0.7333 | | 0.0437 | 5.0 | 1075 | 2.0442 | 0.7333 | | 0.0116 | 6.0 | 1290 | 1.4759 | 0.8 | | 0.0035 | 7.0 | 1505 | 2.8704 | 0.6444 | | 0.0344 | 8.0 | 1720 | 2.7808 | 0.6444 | | 0.0121 | 9.0 | 1935 | 2.7705 | 0.6889 | | 0.0003 | 10.0 | 2150 | 2.2137 | 0.7556 | | 0.0626 | 11.0 | 2365 | 2.2191 | 0.7111 | | 0.0001 | 12.0 | 2580 | 2.1839 | 0.7111 | | 0.0106 | 13.0 | 2795 | 2.7238 | 0.6444 | | 0.0001 | 14.0 | 3010 | 3.0158 | 0.6444 | | 0.0001 | 15.0 | 3225 | 2.3783 | 0.7556 | | 0.0 | 16.0 | 3440 | 2.3314 | 0.7556 | | 0.0 | 17.0 | 3655 | 2.3547 | 0.7556 | | 0.0 | 18.0 | 3870 | 2.3759 | 0.7556 | | 0.0 | 19.0 | 4085 | 2.3984 | 0.7556 | | 0.0 | 20.0 | 4300 | 2.4203 | 0.7556 | | 0.0 | 21.0 | 4515 | 2.4425 | 0.7556 | | 0.0 | 22.0 | 4730 | 2.4655 | 0.7556 | | 0.0 | 23.0 | 4945 | 2.4901 | 0.7556 | | 0.0 | 24.0 | 5160 | 2.5187 | 0.7556 | | 0.0 | 25.0 | 5375 | 2.5510 | 0.7556 | | 0.0 | 26.0 | 5590 | 2.5849 | 0.7556 | | 0.0 | 27.0 | 5805 | 2.6187 | 0.7556 | | 0.0 | 28.0 | 6020 | 2.6506 | 0.7556 | | 0.0 | 29.0 | 6235 | 2.6816 | 0.7556 | | 0.0 | 30.0 | 6450 | 2.7120 | 0.7556 | | 0.0 | 31.0 | 6665 | 2.7426 | 0.7778 | | 0.0 | 32.0 | 6880 | 2.7738 | 0.7778 | | 0.0 | 33.0 | 7095 | 2.8047 | 0.7778 | | 0.0 | 34.0 | 7310 | 2.8362 | 0.7778 | | 0.0 | 35.0 | 7525 | 2.8656 | 0.7778 | | 0.0 | 36.0 | 7740 | 2.8925 | 0.7778 | | 0.0 | 37.0 | 7955 | 2.9162 | 0.7778 | | 0.0 | 38.0 | 8170 | 2.9371 | 0.7778 | | 0.0 | 39.0 | 8385 | 2.9556 | 0.7778 | | 0.0 | 40.0 | 8600 | 2.9717 | 0.7778 | | 0.0 | 41.0 | 8815 | 2.9862 | 0.7778 | | 0.0 | 42.0 | 9030 | 2.9991 | 0.7778 | | 0.0 | 43.0 | 9245 | 3.0106 | 0.7778 | | 0.0 | 44.0 | 9460 | 3.0207 | 0.7778 | | 0.0 | 45.0 | 9675 | 3.0295 | 0.7778 | | 0.0 | 46.0 | 9890 | 3.0369 | 0.7778 | | 0.0 | 47.0 | 10105 | 3.0430 | 0.7778 | | 0.0 | 48.0 | 10320 | 3.0475 | 0.7778 | | 0.0 | 49.0 | 10535 | 3.0503 | 0.7778 | | 0.0 | 50.0 | 10750 | 3.0511 | 0.7778 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5391 - Accuracy: 0.9524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2113 | 1.0 | 219 | 0.5669 | 0.8571 | | 0.1247 | 2.0 | 438 | 0.3852 | 0.8571 | | 0.0797 | 3.0 | 657 | 0.3243 | 0.8810 | | 0.079 | 4.0 | 876 | 0.2935 | 0.9286 | | 0.1755 | 5.0 | 1095 | 0.3153 | 0.8810 | | 0.1228 | 6.0 | 1314 | 0.4983 | 0.9048 | | 0.047 | 7.0 | 1533 | 0.4737 | 0.9048 | | 0.0236 | 8.0 | 1752 | 0.2530 | 0.9286 | | 0.0027 | 9.0 | 1971 | 0.9366 | 0.8810 | | 0.0257 | 10.0 | 2190 | 0.8815 | 0.8810 | | 0.032 | 11.0 | 2409 | 0.7642 | 0.9048 | | 0.0025 | 12.0 | 2628 | 0.6321 | 0.9286 | | 0.0 | 13.0 | 2847 | 0.4805 | 0.9048 | | 0.0406 | 14.0 | 3066 | 0.7911 | 0.9286 | | 0.0286 | 15.0 | 3285 | 0.2463 | 0.9048 | | 0.0029 | 16.0 | 3504 | 0.0537 | 0.9762 | | 0.0065 | 17.0 | 3723 | 0.3008 | 0.9286 | | 0.0001 | 18.0 | 3942 | 0.8021 | 0.8810 | | 0.0 | 19.0 | 4161 | 0.3160 | 0.9762 | | 0.0084 | 20.0 | 4380 | 1.2037 | 0.8333 | | 0.0 | 21.0 | 4599 | 0.5426 | 0.9286 | | 0.0001 | 22.0 | 4818 | 0.3468 | 0.9524 | | 0.0204 | 23.0 | 5037 | 0.7324 | 0.9286 | | 0.0 | 24.0 | 5256 | 0.8099 | 0.9048 | | 0.0 | 25.0 | 5475 | 1.1998 | 0.8810 | | 0.0 | 26.0 | 5694 | 0.5294 | 0.9524 | | 0.0 | 27.0 | 5913 | 0.5383 | 0.9524 | | 0.0 | 28.0 | 6132 | 0.5204 | 0.9524 | | 0.0 | 29.0 | 6351 | 0.5193 | 0.9524 | | 0.0 | 30.0 | 6570 | 0.5189 | 0.9524 | | 0.0 | 31.0 | 6789 | 0.5187 | 0.9524 | | 0.0 | 32.0 | 7008 | 0.5190 | 0.9524 | | 0.0 | 33.0 | 7227 | 0.5187 | 0.9524 | | 0.0 | 34.0 | 7446 | 0.5193 | 0.9524 | | 0.0 | 35.0 | 7665 | 0.5201 | 0.9524 | | 0.0 | 36.0 | 7884 | 0.5213 | 0.9524 | | 0.0 | 37.0 | 8103 | 0.5225 | 0.9524 | | 0.0 | 38.0 | 8322 | 0.5239 | 0.9524 | | 0.0 | 39.0 | 8541 | 0.5256 | 0.9524 | | 0.0 | 40.0 | 8760 | 0.5271 | 0.9524 | | 0.0 | 41.0 | 8979 | 0.5287 | 0.9524 | | 0.0 | 42.0 | 9198 | 0.5302 | 0.9524 | | 0.0 | 43.0 | 9417 | 0.5318 | 0.9524 | | 0.0 | 44.0 | 9636 | 0.5333 | 0.9524 | | 0.0 | 45.0 | 9855 | 0.5348 | 0.9524 | | 0.0 | 46.0 | 10074 | 0.5359 | 0.9524 | | 0.0 | 47.0 | 10293 | 0.5372 | 0.9524 | | 0.0 | 48.0 | 10512 | 0.5381 | 0.9524 | | 0.0 | 49.0 | 10731 | 0.5389 | 0.9524 | | 0.0 | 50.0 | 10950 | 0.5391 | 0.9524 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_0001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0266 - Accuracy: 0.9070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0054 | 1.0 | 217 | 0.4033 | 0.9070 | | 0.0006 | 2.0 | 434 | 0.7216 | 0.8837 | | 0.0001 | 3.0 | 651 | 0.6016 | 0.9070 | | 0.0001 | 4.0 | 868 | 0.5981 | 0.9070 | | 0.0 | 5.0 | 1085 | 0.6065 | 0.9070 | | 0.0 | 6.0 | 1302 | 0.6108 | 0.9070 | | 0.0 | 7.0 | 1519 | 0.6197 | 0.9070 | | 0.0 | 8.0 | 1736 | 0.6284 | 0.9070 | | 0.0 | 9.0 | 1953 | 0.6384 | 0.9070 | | 0.0 | 10.0 | 2170 | 0.6478 | 0.9070 | | 0.0 | 11.0 | 2387 | 0.6570 | 0.9070 | | 0.0 | 12.0 | 2604 | 0.6667 | 0.9070 | | 0.0 | 13.0 | 2821 | 0.6782 | 0.9070 | | 0.0 | 14.0 | 3038 | 0.6908 | 0.9070 | | 0.0 | 15.0 | 3255 | 0.6989 | 0.9070 | | 0.0 | 16.0 | 3472 | 0.7116 | 0.9070 | | 0.0 | 17.0 | 3689 | 0.7199 | 0.9070 | | 0.0 | 18.0 | 3906 | 0.7341 | 0.9070 | | 0.0 | 19.0 | 4123 | 0.7428 | 0.9070 | | 0.0 | 20.0 | 4340 | 0.7587 | 0.9070 | | 0.0 | 21.0 | 4557 | 0.7688 | 0.9070 | | 0.0 | 22.0 | 4774 | 0.7786 | 0.9070 | | 0.0 | 23.0 | 4991 | 0.7937 | 0.9070 | | 0.0 | 24.0 | 5208 | 0.8013 | 0.9070 | | 0.0 | 25.0 | 5425 | 0.8180 | 0.9070 | | 0.0 | 26.0 | 5642 | 0.8297 | 0.9070 | | 0.0 | 27.0 | 5859 | 0.8396 | 0.9070 | | 0.0 | 28.0 | 6076 | 0.8494 | 0.9070 | | 0.0 | 29.0 | 6293 | 0.8602 | 0.9070 | | 0.0 | 30.0 | 6510 | 0.8755 | 0.9070 | | 0.0 | 31.0 | 6727 | 0.8899 | 0.9070 | | 0.0 | 32.0 | 6944 | 0.8970 | 0.9070 | | 0.0 | 33.0 | 7161 | 0.9066 | 0.9070 | | 0.0 | 34.0 | 7378 | 0.9206 | 0.9070 | | 0.0 | 35.0 | 7595 | 0.9267 | 0.9070 | | 0.0 | 36.0 | 7812 | 0.9397 | 0.9070 | | 0.0 | 37.0 | 8029 | 0.9495 | 0.9070 | | 0.0 | 38.0 | 8246 | 0.9575 | 0.9070 | | 0.0 | 39.0 | 8463 | 0.9677 | 0.9070 | | 0.0 | 40.0 | 8680 | 0.9799 | 0.9070 | | 0.0 | 41.0 | 8897 | 0.9884 | 0.9070 | | 0.0 | 42.0 | 9114 | 0.9997 | 0.9070 | | 0.0 | 43.0 | 9331 | 1.0060 | 0.9070 | | 0.0 | 44.0 | 9548 | 1.0089 | 0.9070 | | 0.0 | 45.0 | 9765 | 1.0138 | 0.9070 | | 0.0 | 46.0 | 9982 | 1.0186 | 0.9070 | | 0.0 | 47.0 | 10199 | 1.0225 | 0.9070 | | 0.0 | 48.0 | 10416 | 1.0236 | 0.9070 | | 0.0 | 49.0 | 10633 | 1.0261 | 0.9070 | | 0.0 | 50.0 | 10850 | 1.0266 | 0.9070 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.0078 - Accuracy: 0.8372 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1615 | 1.0 | 217 | 0.8891 | 0.7674 | | 0.0732 | 2.0 | 434 | 0.4751 | 0.8372 | | 0.0924 | 3.0 | 651 | 1.0550 | 0.8140 | | 0.0215 | 4.0 | 868 | 0.8222 | 0.8140 | | 0.0108 | 5.0 | 1085 | 1.0431 | 0.7907 | | 0.0556 | 6.0 | 1302 | 1.0073 | 0.7907 | | 0.004 | 7.0 | 1519 | 1.0088 | 0.8372 | | 0.0126 | 8.0 | 1736 | 0.8896 | 0.8605 | | 0.0277 | 9.0 | 1953 | 1.5634 | 0.8140 | | 0.0001 | 10.0 | 2170 | 1.2793 | 0.8605 | | 0.0292 | 11.0 | 2387 | 1.1442 | 0.7674 | | 0.0544 | 12.0 | 2604 | 1.8210 | 0.7674 | | 0.0066 | 13.0 | 2821 | 1.1859 | 0.7907 | | 0.0012 | 14.0 | 3038 | 1.1578 | 0.7907 | | 0.0282 | 15.0 | 3255 | 1.6928 | 0.7442 | | 0.0001 | 16.0 | 3472 | 1.3989 | 0.8372 | | 0.0 | 17.0 | 3689 | 1.4295 | 0.8372 | | 0.0 | 18.0 | 3906 | 1.4591 | 0.8372 | | 0.0 | 19.0 | 4123 | 1.4845 | 0.8372 | | 0.0 | 20.0 | 4340 | 1.5077 | 0.8372 | | 0.0 | 21.0 | 4557 | 1.5294 | 0.8372 | | 0.0 | 22.0 | 4774 | 1.5502 | 0.8372 | | 0.0 | 23.0 | 4991 | 1.5707 | 0.8372 | | 0.0 | 24.0 | 5208 | 1.5907 | 0.8372 | | 0.0 | 25.0 | 5425 | 1.6108 | 0.8372 | | 0.0 | 26.0 | 5642 | 1.6313 | 0.8372 | | 0.0 | 27.0 | 5859 | 1.6525 | 0.8372 | | 0.0 | 28.0 | 6076 | 1.6750 | 0.8372 | | 0.0 | 29.0 | 6293 | 1.6980 | 0.8372 | | 0.0 | 30.0 | 6510 | 1.7211 | 0.8372 | | 0.0 | 31.0 | 6727 | 1.7443 | 0.8372 | | 0.0 | 32.0 | 6944 | 1.7671 | 0.8372 | | 0.0 | 33.0 | 7161 | 1.7884 | 0.8372 | | 0.0 | 34.0 | 7378 | 1.8089 | 0.8372 | | 0.0 | 35.0 | 7595 | 1.8281 | 0.8372 | | 0.0 | 36.0 | 7812 | 1.8470 | 0.8372 | | 0.0 | 37.0 | 8029 | 1.8657 | 0.8372 | | 0.0 | 38.0 | 8246 | 1.8832 | 0.8372 | | 0.0 | 39.0 | 8463 | 1.9003 | 0.8372 | | 0.0 | 40.0 | 8680 | 1.9168 | 0.8372 | | 0.0 | 41.0 | 8897 | 1.9322 | 0.8372 | | 0.0 | 42.0 | 9114 | 1.9463 | 0.8372 | | 0.0 | 43.0 | 9331 | 1.9591 | 0.8372 | | 0.0 | 44.0 | 9548 | 1.9706 | 0.8372 | | 0.0 | 45.0 | 9765 | 1.9809 | 0.8372 | | 0.0 | 46.0 | 9982 | 1.9901 | 0.8372 | | 0.0 | 47.0 | 10199 | 1.9976 | 0.8372 | | 0.0 | 48.0 | 10416 | 2.0033 | 0.8372 | | 0.0 | 49.0 | 10633 | 2.0069 | 0.8372 | | 0.0 | 50.0 | 10850 | 2.0078 | 0.8372 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0843 - Accuracy: 0.8293 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2361 | 1.0 | 220 | 0.7775 | 0.7805 | | 0.2436 | 2.0 | 440 | 1.3306 | 0.7561 | | 0.1462 | 3.0 | 660 | 0.9347 | 0.7805 | | 0.071 | 4.0 | 880 | 0.5663 | 0.8537 | | 0.1006 | 5.0 | 1100 | 0.2859 | 0.8780 | | 0.1238 | 6.0 | 1320 | 0.9526 | 0.8293 | | 0.0201 | 7.0 | 1540 | 1.1774 | 0.8049 | | 0.0017 | 8.0 | 1760 | 1.4587 | 0.7561 | | 0.0368 | 9.0 | 1980 | 0.7868 | 0.8537 | | 0.0002 | 10.0 | 2200 | 0.8716 | 0.8780 | | 0.0421 | 11.0 | 2420 | 0.9525 | 0.8049 | | 0.0055 | 12.0 | 2640 | 1.5979 | 0.7805 | | 0.0103 | 13.0 | 2860 | 0.4608 | 0.9024 | | 0.0 | 14.0 | 3080 | 1.1806 | 0.8049 | | 0.0 | 15.0 | 3300 | 1.1203 | 0.8293 | | 0.0 | 16.0 | 3520 | 1.1285 | 0.8293 | | 0.0 | 17.0 | 3740 | 1.1228 | 0.8293 | | 0.0 | 18.0 | 3960 | 1.1188 | 0.8293 | | 0.0 | 19.0 | 4180 | 1.1166 | 0.8293 | | 0.0 | 20.0 | 4400 | 1.1122 | 0.8293 | | 0.0 | 21.0 | 4620 | 1.1096 | 0.8293 | | 0.0 | 22.0 | 4840 | 1.1085 | 0.8293 | | 0.0 | 23.0 | 5060 | 1.1063 | 0.8293 | | 0.0 | 24.0 | 5280 | 1.1040 | 0.8293 | | 0.0 | 25.0 | 5500 | 1.1028 | 0.8293 | | 0.0 | 26.0 | 5720 | 1.0996 | 0.8293 | | 0.0 | 27.0 | 5940 | 1.0984 | 0.8293 | | 0.0 | 28.0 | 6160 | 1.0966 | 0.8293 | | 0.0 | 29.0 | 6380 | 1.0939 | 0.8293 | | 0.0 | 30.0 | 6600 | 1.0930 | 0.8293 | | 0.0 | 31.0 | 6820 | 1.0903 | 0.8293 | | 0.0 | 32.0 | 7040 | 1.0890 | 0.8293 | | 0.0 | 33.0 | 7260 | 1.0876 | 0.8293 | | 0.0 | 34.0 | 7480 | 1.0855 | 0.8293 | | 0.0 | 35.0 | 7700 | 1.0853 | 0.8293 | | 0.0 | 36.0 | 7920 | 1.0829 | 0.8293 | | 0.0 | 37.0 | 8140 | 1.0834 | 0.8293 | | 0.0 | 38.0 | 8360 | 1.0821 | 0.8293 | | 0.0 | 39.0 | 8580 | 1.0819 | 0.8293 | | 0.0 | 40.0 | 8800 | 1.0819 | 0.8293 | | 0.0 | 41.0 | 9020 | 1.0821 | 0.8293 | | 0.0 | 42.0 | 9240 | 1.0825 | 0.8293 | | 0.0 | 43.0 | 9460 | 1.0825 | 0.8293 | | 0.0 | 44.0 | 9680 | 1.0818 | 0.8293 | | 0.0 | 45.0 | 9900 | 1.0822 | 0.8293 | | 0.0 | 46.0 | 10120 | 1.0832 | 0.8293 | | 0.0 | 47.0 | 10340 | 1.0843 | 0.8293 | | 0.0 | 48.0 | 10560 | 1.0840 | 0.8293 | | 0.0 | 49.0 | 10780 | 1.0843 | 0.8293 | | 0.0 | 50.0 | 11000 | 1.0843 | 0.8293 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
dima806/hand_gestures_image_detection
Returns hand gesture based on image with about 96% accuracy. See https://www.kaggle.com/code/dima806/hand-gestures-image-detection-vit for more details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/hGABRUvyao5roojmQY79K.png) ``` Classification report: precision recall f1-score support call 0.9256 0.9752 0.9498 11825 dislike 0.9784 0.9862 0.9823 11826 fist 0.9833 0.9870 0.9851 11826 four 0.9140 0.9357 0.9247 11826 like 0.9761 0.9101 0.9420 11825 mute 0.9831 0.9964 0.9897 11826 ok 0.9586 0.9658 0.9622 11825 one 0.9708 0.9453 0.9579 11826 palm 0.9764 0.9637 0.9700 11826 peace 0.9187 0.9367 0.9276 11825 peace_inverted 0.9784 0.9748 0.9766 11826 rock 0.9439 0.9361 0.9400 11825 stop 0.9502 0.9723 0.9611 11825 stop_inverted 0.9828 0.9546 0.9685 11826 three 0.9135 0.9068 0.9101 11826 three2 0.9799 0.9670 0.9734 11826 two_up 0.9570 0.9766 0.9667 11826 two_up_inverted 0.9754 0.9703 0.9729 11825 accuracy 0.9589 212861 macro avg 0.9592 0.9589 0.9589 212861 weighted avg 0.9592 0.9589 0.9589 212861 ```
[ "call", "dislike", "fist", "four", "like", "mute", "ok", "one", "palm", "peace", "peace_inverted", "rock", "stop", "stop_inverted", "three", "three2", "two_up", "two_up_inverted" ]
hkivancoral/hushem_40x_deit_base_adamax_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_0001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0988 - Accuracy: 0.9762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0682 | 1.0 | 219 | 0.3355 | 0.9286 | | 0.0006 | 2.0 | 438 | 0.0818 | 0.9762 | | 0.0026 | 3.0 | 657 | 0.4435 | 0.8810 | | 0.0001 | 4.0 | 876 | 0.1410 | 0.9762 | | 0.0 | 5.0 | 1095 | 0.1325 | 0.9524 | | 0.0 | 6.0 | 1314 | 0.1311 | 0.9524 | | 0.0 | 7.0 | 1533 | 0.1285 | 0.9524 | | 0.0 | 8.0 | 1752 | 0.1304 | 0.9524 | | 0.0 | 9.0 | 1971 | 0.1305 | 0.9524 | | 0.0 | 10.0 | 2190 | 0.1306 | 0.9762 | | 0.0 | 11.0 | 2409 | 0.1299 | 0.9762 | | 0.0 | 12.0 | 2628 | 0.1308 | 0.9762 | | 0.0 | 13.0 | 2847 | 0.1297 | 0.9762 | | 0.0 | 14.0 | 3066 | 0.1299 | 0.9762 | | 0.0 | 15.0 | 3285 | 0.1283 | 0.9762 | | 0.0 | 16.0 | 3504 | 0.1307 | 0.9762 | | 0.0 | 17.0 | 3723 | 0.1314 | 0.9762 | | 0.0 | 18.0 | 3942 | 0.1318 | 0.9762 | | 0.0 | 19.0 | 4161 | 0.1316 | 0.9762 | | 0.0 | 20.0 | 4380 | 0.1321 | 0.9762 | | 0.0 | 21.0 | 4599 | 0.1311 | 0.9762 | | 0.0 | 22.0 | 4818 | 0.1309 | 0.9762 | | 0.0 | 23.0 | 5037 | 0.1314 | 0.9762 | | 0.0 | 24.0 | 5256 | 0.1307 | 0.9762 | | 0.0 | 25.0 | 5475 | 0.1305 | 0.9762 | | 0.0 | 26.0 | 5694 | 0.1328 | 0.9762 | | 0.0 | 27.0 | 5913 | 0.1314 | 0.9762 | | 0.0 | 28.0 | 6132 | 0.1318 | 0.9762 | | 0.0 | 29.0 | 6351 | 0.1318 | 0.9762 | | 0.0 | 30.0 | 6570 | 0.1304 | 0.9762 | | 0.0 | 31.0 | 6789 | 0.1298 | 0.9762 | | 0.0 | 32.0 | 7008 | 0.1302 | 0.9762 | | 0.0 | 33.0 | 7227 | 0.1299 | 0.9762 | | 0.0 | 34.0 | 7446 | 0.1307 | 0.9762 | | 0.0 | 35.0 | 7665 | 0.1289 | 0.9762 | | 0.0 | 36.0 | 7884 | 0.1272 | 0.9762 | | 0.0 | 37.0 | 8103 | 0.1257 | 0.9762 | | 0.0 | 38.0 | 8322 | 0.1265 | 0.9762 | | 0.0 | 39.0 | 8541 | 0.1244 | 0.9762 | | 0.0 | 40.0 | 8760 | 0.1224 | 0.9762 | | 0.0 | 41.0 | 8979 | 0.1200 | 0.9762 | | 0.0 | 42.0 | 9198 | 0.1172 | 0.9762 | | 0.0 | 43.0 | 9417 | 0.1170 | 0.9762 | | 0.0 | 44.0 | 9636 | 0.1123 | 0.9762 | | 0.0 | 45.0 | 9855 | 0.1089 | 0.9762 | | 0.0 | 46.0 | 10074 | 0.1076 | 0.9762 | | 0.0 | 47.0 | 10293 | 0.1032 | 0.9762 | | 0.0 | 48.0 | 10512 | 0.0999 | 0.9762 | | 0.0 | 49.0 | 10731 | 0.0983 | 0.9762 | | 0.0 | 50.0 | 10950 | 0.0988 | 0.9762 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4788 - Accuracy: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1955 | 1.0 | 219 | 0.5166 | 0.8571 | | 0.0615 | 2.0 | 438 | 0.3282 | 0.8810 | | 0.1238 | 3.0 | 657 | 0.0965 | 0.9524 | | 0.008 | 4.0 | 876 | 0.3113 | 0.9524 | | 0.0154 | 5.0 | 1095 | 0.8754 | 0.8810 | | 0.0116 | 6.0 | 1314 | 0.5560 | 0.9048 | | 0.0123 | 7.0 | 1533 | 0.5006 | 0.8333 | | 0.0073 | 8.0 | 1752 | 1.1863 | 0.7857 | | 0.0145 | 9.0 | 1971 | 0.2536 | 0.9048 | | 0.0007 | 10.0 | 2190 | 0.3826 | 0.9286 | | 0.0006 | 11.0 | 2409 | 0.1332 | 0.9762 | | 0.0002 | 12.0 | 2628 | 0.7313 | 0.8810 | | 0.0 | 13.0 | 2847 | 0.5030 | 0.9048 | | 0.0 | 14.0 | 3066 | 0.4838 | 0.9048 | | 0.0 | 15.0 | 3285 | 0.4726 | 0.9286 | | 0.0 | 16.0 | 3504 | 0.4645 | 0.9286 | | 0.0 | 17.0 | 3723 | 0.4599 | 0.9286 | | 0.0 | 18.0 | 3942 | 0.4563 | 0.9286 | | 0.0 | 19.0 | 4161 | 0.4541 | 0.9286 | | 0.0 | 20.0 | 4380 | 0.4522 | 0.9286 | | 0.0 | 21.0 | 4599 | 0.4527 | 0.9286 | | 0.0 | 22.0 | 4818 | 0.4513 | 0.9286 | | 0.0 | 23.0 | 5037 | 0.4519 | 0.9286 | | 0.0 | 24.0 | 5256 | 0.4525 | 0.9286 | | 0.0 | 25.0 | 5475 | 0.4545 | 0.9286 | | 0.0 | 26.0 | 5694 | 0.4558 | 0.9286 | | 0.0 | 27.0 | 5913 | 0.4548 | 0.9286 | | 0.0 | 28.0 | 6132 | 0.4567 | 0.9286 | | 0.0 | 29.0 | 6351 | 0.4592 | 0.9286 | | 0.0 | 30.0 | 6570 | 0.4613 | 0.9286 | | 0.0 | 31.0 | 6789 | 0.4633 | 0.9286 | | 0.0 | 32.0 | 7008 | 0.4657 | 0.9286 | | 0.0 | 33.0 | 7227 | 0.4670 | 0.9286 | | 0.0 | 34.0 | 7446 | 0.4694 | 0.9286 | | 0.0 | 35.0 | 7665 | 0.4721 | 0.9286 | | 0.0 | 36.0 | 7884 | 0.4739 | 0.9286 | | 0.0 | 37.0 | 8103 | 0.4760 | 0.9286 | | 0.0 | 38.0 | 8322 | 0.4771 | 0.9286 | | 0.0 | 39.0 | 8541 | 0.4783 | 0.9286 | | 0.0 | 40.0 | 8760 | 0.4790 | 0.9286 | | 0.0 | 41.0 | 8979 | 0.4792 | 0.9286 | | 0.0 | 42.0 | 9198 | 0.4797 | 0.9286 | | 0.0 | 43.0 | 9417 | 0.4798 | 0.9286 | | 0.0 | 44.0 | 9636 | 0.4799 | 0.9286 | | 0.0 | 45.0 | 9855 | 0.4797 | 0.9286 | | 0.0 | 46.0 | 10074 | 0.4797 | 0.9286 | | 0.0 | 47.0 | 10293 | 0.4794 | 0.9286 | | 0.0 | 48.0 | 10512 | 0.4789 | 0.9286 | | 0.0 | 49.0 | 10731 | 0.4785 | 0.9286 | | 0.0 | 50.0 | 10950 | 0.4788 | 0.9286 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_0001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0346 - Accuracy: 0.8780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.031 | 1.0 | 220 | 0.4305 | 0.8537 | | 0.0086 | 2.0 | 440 | 0.4798 | 0.8537 | | 0.0001 | 3.0 | 660 | 0.5506 | 0.8537 | | 0.0001 | 4.0 | 880 | 0.6022 | 0.8537 | | 0.0 | 5.0 | 1100 | 0.6090 | 0.8537 | | 0.0 | 6.0 | 1320 | 0.6235 | 0.8537 | | 0.0 | 7.0 | 1540 | 0.6332 | 0.8537 | | 0.0 | 8.0 | 1760 | 0.6512 | 0.8537 | | 0.0 | 9.0 | 1980 | 0.6655 | 0.8780 | | 0.0 | 10.0 | 2200 | 0.6776 | 0.8780 | | 0.0 | 11.0 | 2420 | 0.6885 | 0.8780 | | 0.0 | 12.0 | 2640 | 0.6981 | 0.8780 | | 0.0 | 13.0 | 2860 | 0.7088 | 0.8780 | | 0.0 | 14.0 | 3080 | 0.7235 | 0.8780 | | 0.0 | 15.0 | 3300 | 0.7311 | 0.8780 | | 0.0 | 16.0 | 3520 | 0.7440 | 0.8780 | | 0.0 | 17.0 | 3740 | 0.7529 | 0.8780 | | 0.0 | 18.0 | 3960 | 0.7675 | 0.8780 | | 0.0 | 19.0 | 4180 | 0.7773 | 0.8780 | | 0.0 | 20.0 | 4400 | 0.7907 | 0.8780 | | 0.0 | 21.0 | 4620 | 0.8020 | 0.8780 | | 0.0 | 22.0 | 4840 | 0.8114 | 0.8780 | | 0.0 | 23.0 | 5060 | 0.8261 | 0.8780 | | 0.0 | 24.0 | 5280 | 0.8349 | 0.8780 | | 0.0 | 25.0 | 5500 | 0.8489 | 0.8780 | | 0.0 | 26.0 | 5720 | 0.8635 | 0.8780 | | 0.0 | 27.0 | 5940 | 0.8766 | 0.8780 | | 0.0 | 28.0 | 6160 | 0.8818 | 0.8780 | | 0.0 | 29.0 | 6380 | 0.8949 | 0.8780 | | 0.0 | 30.0 | 6600 | 0.9083 | 0.8780 | | 0.0 | 31.0 | 6820 | 0.9225 | 0.8780 | | 0.0 | 32.0 | 7040 | 0.9291 | 0.8780 | | 0.0 | 33.0 | 7260 | 0.9474 | 0.8780 | | 0.0 | 34.0 | 7480 | 0.9571 | 0.8780 | | 0.0 | 35.0 | 7700 | 0.9707 | 0.8780 | | 0.0 | 36.0 | 7920 | 0.9772 | 0.8780 | | 0.0 | 37.0 | 8140 | 0.9933 | 0.8780 | | 0.0 | 38.0 | 8360 | 1.0011 | 0.8780 | | 0.0 | 39.0 | 8580 | 1.0070 | 0.8780 | | 0.0 | 40.0 | 8800 | 1.0136 | 0.8780 | | 0.0 | 41.0 | 9020 | 1.0168 | 0.8780 | | 0.0 | 42.0 | 9240 | 1.0247 | 0.8780 | | 0.0 | 43.0 | 9460 | 1.0260 | 0.8780 | | 0.0 | 44.0 | 9680 | 1.0300 | 0.8780 | | 0.0 | 45.0 | 9900 | 1.0349 | 0.8780 | | 0.0 | 46.0 | 10120 | 1.0343 | 0.8780 | | 0.0 | 47.0 | 10340 | 1.0321 | 0.8780 | | 0.0 | 48.0 | 10560 | 1.0333 | 0.8780 | | 0.0 | 49.0 | 10780 | 1.0344 | 0.8780 | | 0.0 | 50.0 | 11000 | 1.0346 | 0.8780 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4744 - Accuracy: 0.9024 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2741 | 1.0 | 220 | 0.7783 | 0.7317 | | 0.1315 | 2.0 | 440 | 0.8361 | 0.8293 | | 0.0434 | 3.0 | 660 | 1.2093 | 0.7317 | | 0.0894 | 4.0 | 880 | 0.3662 | 0.9024 | | 0.0527 | 5.0 | 1100 | 0.4869 | 0.9024 | | 0.0501 | 6.0 | 1320 | 0.3025 | 0.9512 | | 0.001 | 7.0 | 1540 | 0.5424 | 0.9024 | | 0.023 | 8.0 | 1760 | 0.6857 | 0.9024 | | 0.0256 | 9.0 | 1980 | 1.0120 | 0.8780 | | 0.0166 | 10.0 | 2200 | 0.5130 | 0.8780 | | 0.0004 | 11.0 | 2420 | 0.4559 | 0.8780 | | 0.0217 | 12.0 | 2640 | 1.2147 | 0.7805 | | 0.0002 | 13.0 | 2860 | 0.2445 | 0.9268 | | 0.0 | 14.0 | 3080 | 0.2240 | 0.9268 | | 0.0 | 15.0 | 3300 | 0.2460 | 0.9268 | | 0.0 | 16.0 | 3520 | 0.2550 | 0.9268 | | 0.0 | 17.0 | 3740 | 0.2642 | 0.9268 | | 0.0 | 18.0 | 3960 | 0.2728 | 0.9268 | | 0.0 | 19.0 | 4180 | 0.2816 | 0.9268 | | 0.0 | 20.0 | 4400 | 0.2891 | 0.9268 | | 0.0 | 21.0 | 4620 | 0.2969 | 0.9268 | | 0.0 | 22.0 | 4840 | 0.3042 | 0.9268 | | 0.0 | 23.0 | 5060 | 0.3121 | 0.9268 | | 0.0 | 24.0 | 5280 | 0.3198 | 0.9268 | | 0.0 | 25.0 | 5500 | 0.3281 | 0.9268 | | 0.0 | 26.0 | 5720 | 0.3364 | 0.9268 | | 0.0 | 27.0 | 5940 | 0.3437 | 0.9268 | | 0.0 | 28.0 | 6160 | 0.3515 | 0.9268 | | 0.0 | 29.0 | 6380 | 0.3592 | 0.9268 | | 0.0 | 30.0 | 6600 | 0.3667 | 0.9268 | | 0.0 | 31.0 | 6820 | 0.3741 | 0.9268 | | 0.0 | 32.0 | 7040 | 0.3811 | 0.9268 | | 0.0 | 33.0 | 7260 | 0.3879 | 0.9268 | | 0.0 | 34.0 | 7480 | 0.3952 | 0.9268 | | 0.0 | 35.0 | 7700 | 0.4021 | 0.9268 | | 0.0 | 36.0 | 7920 | 0.4093 | 0.9268 | | 0.0 | 37.0 | 8140 | 0.4163 | 0.9268 | | 0.0 | 38.0 | 8360 | 0.4226 | 0.9268 | | 0.0 | 39.0 | 8580 | 0.4295 | 0.9268 | | 0.0 | 40.0 | 8800 | 0.4358 | 0.9268 | | 0.0 | 41.0 | 9020 | 0.4415 | 0.9268 | | 0.0 | 42.0 | 9240 | 0.4471 | 0.9268 | | 0.0 | 43.0 | 9460 | 0.4531 | 0.9268 | | 0.0 | 44.0 | 9680 | 0.4575 | 0.9268 | | 0.0 | 45.0 | 9900 | 0.4619 | 0.9268 | | 0.0 | 46.0 | 10120 | 0.4664 | 0.9268 | | 0.0 | 47.0 | 10340 | 0.4696 | 0.9268 | | 0.0 | 48.0 | 10560 | 0.4724 | 0.9024 | | 0.0 | 49.0 | 10780 | 0.4740 | 0.9024 | | 0.0 | 50.0 | 11000 | 0.4744 | 0.9024 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_00001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7464 - Accuracy: 0.7778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.175 | 1.0 | 215 | 0.7361 | 0.7556 | | 0.0109 | 2.0 | 430 | 0.6497 | 0.8 | | 0.0023 | 3.0 | 645 | 0.7453 | 0.8222 | | 0.001 | 4.0 | 860 | 0.7854 | 0.8222 | | 0.0006 | 5.0 | 1075 | 0.8105 | 0.8222 | | 0.0004 | 6.0 | 1290 | 0.8328 | 0.8222 | | 0.0003 | 7.0 | 1505 | 0.8638 | 0.8222 | | 0.0002 | 8.0 | 1720 | 0.8701 | 0.8222 | | 0.0002 | 9.0 | 1935 | 0.9048 | 0.8222 | | 0.0001 | 10.0 | 2150 | 0.9203 | 0.8 | | 0.0001 | 11.0 | 2365 | 0.9399 | 0.8 | | 0.0001 | 12.0 | 2580 | 0.9611 | 0.8 | | 0.0001 | 13.0 | 2795 | 0.9847 | 0.8 | | 0.0001 | 14.0 | 3010 | 1.0078 | 0.8 | | 0.0 | 15.0 | 3225 | 1.0165 | 0.8 | | 0.0 | 16.0 | 3440 | 1.0509 | 0.8 | | 0.0 | 17.0 | 3655 | 1.0662 | 0.8 | | 0.0 | 18.0 | 3870 | 1.0960 | 0.8 | | 0.0 | 19.0 | 4085 | 1.1102 | 0.8 | | 0.0 | 20.0 | 4300 | 1.1333 | 0.8 | | 0.0 | 21.0 | 4515 | 1.1560 | 0.8 | | 0.0 | 22.0 | 4730 | 1.1835 | 0.8 | | 0.0 | 23.0 | 4945 | 1.2066 | 0.8 | | 0.0 | 24.0 | 5160 | 1.2238 | 0.8 | | 0.0 | 25.0 | 5375 | 1.2452 | 0.8 | | 0.0 | 26.0 | 5590 | 1.2607 | 0.8 | | 0.0 | 27.0 | 5805 | 1.2985 | 0.8 | | 0.0 | 28.0 | 6020 | 1.3142 | 0.7778 | | 0.0 | 29.0 | 6235 | 1.3455 | 0.7778 | | 0.0 | 30.0 | 6450 | 1.3849 | 0.7778 | | 0.0 | 31.0 | 6665 | 1.4087 | 0.7778 | | 0.0 | 32.0 | 6880 | 1.4316 | 0.7778 | | 0.0 | 33.0 | 7095 | 1.4372 | 0.7778 | | 0.0 | 34.0 | 7310 | 1.4578 | 0.7778 | | 0.0 | 35.0 | 7525 | 1.5115 | 0.7778 | | 0.0 | 36.0 | 7740 | 1.5151 | 0.7778 | | 0.0 | 37.0 | 7955 | 1.5376 | 0.7778 | | 0.0 | 38.0 | 8170 | 1.5694 | 0.7778 | | 0.0 | 39.0 | 8385 | 1.5967 | 0.7778 | | 0.0 | 40.0 | 8600 | 1.6099 | 0.7778 | | 0.0 | 41.0 | 8815 | 1.6278 | 0.7778 | | 0.0 | 42.0 | 9030 | 1.6372 | 0.7778 | | 0.0 | 43.0 | 9245 | 1.6697 | 0.7778 | | 0.0 | 44.0 | 9460 | 1.6889 | 0.7778 | | 0.0 | 45.0 | 9675 | 1.6985 | 0.7778 | | 0.0 | 46.0 | 9890 | 1.7202 | 0.7778 | | 0.0 | 47.0 | 10105 | 1.7225 | 0.7778 | | 0.0 | 48.0 | 10320 | 1.7406 | 0.7778 | | 0.0 | 49.0 | 10535 | 1.7437 | 0.7778 | | 0.0 | 50.0 | 10750 | 1.7464 | 0.7778 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7430 - Accuracy: 0.7556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2392 | 1.0 | 215 | 1.3895 | 0.2667 | | 1.1003 | 2.0 | 430 | 1.3294 | 0.3333 | | 1.0196 | 3.0 | 645 | 1.2624 | 0.4444 | | 0.8639 | 4.0 | 860 | 1.1946 | 0.4889 | | 0.731 | 5.0 | 1075 | 1.1313 | 0.5111 | | 0.6646 | 6.0 | 1290 | 1.0718 | 0.5556 | | 0.545 | 7.0 | 1505 | 1.0254 | 0.6 | | 0.4701 | 8.0 | 1720 | 0.9800 | 0.6444 | | 0.4065 | 9.0 | 1935 | 0.9495 | 0.6222 | | 0.3851 | 10.0 | 2150 | 0.9148 | 0.6667 | | 0.3271 | 11.0 | 2365 | 0.8947 | 0.6667 | | 0.2977 | 12.0 | 2580 | 0.8732 | 0.6889 | | 0.2671 | 13.0 | 2795 | 0.8416 | 0.7111 | | 0.2428 | 14.0 | 3010 | 0.8450 | 0.6889 | | 0.2387 | 15.0 | 3225 | 0.8270 | 0.7111 | | 0.1988 | 16.0 | 3440 | 0.8218 | 0.7111 | | 0.1804 | 17.0 | 3655 | 0.8107 | 0.7333 | | 0.1681 | 18.0 | 3870 | 0.8058 | 0.7333 | | 0.1475 | 19.0 | 4085 | 0.7968 | 0.7333 | | 0.1494 | 20.0 | 4300 | 0.7851 | 0.7556 | | 0.1288 | 21.0 | 4515 | 0.7807 | 0.7556 | | 0.1265 | 22.0 | 4730 | 0.7751 | 0.7556 | | 0.1136 | 23.0 | 4945 | 0.7744 | 0.7556 | | 0.094 | 24.0 | 5160 | 0.7654 | 0.7556 | | 0.0987 | 25.0 | 5375 | 0.7661 | 0.7556 | | 0.096 | 26.0 | 5590 | 0.7527 | 0.7556 | | 0.084 | 27.0 | 5805 | 0.7535 | 0.7556 | | 0.069 | 28.0 | 6020 | 0.7589 | 0.7556 | | 0.0764 | 29.0 | 6235 | 0.7612 | 0.7556 | | 0.067 | 30.0 | 6450 | 0.7558 | 0.7556 | | 0.0458 | 31.0 | 6665 | 0.7531 | 0.7333 | | 0.0687 | 32.0 | 6880 | 0.7463 | 0.7556 | | 0.0414 | 33.0 | 7095 | 0.7445 | 0.7556 | | 0.0522 | 34.0 | 7310 | 0.7378 | 0.7556 | | 0.0521 | 35.0 | 7525 | 0.7477 | 0.7556 | | 0.0458 | 36.0 | 7740 | 0.7370 | 0.7556 | | 0.0586 | 37.0 | 7955 | 0.7425 | 0.7556 | | 0.0551 | 38.0 | 8170 | 0.7441 | 0.7556 | | 0.0389 | 39.0 | 8385 | 0.7437 | 0.7556 | | 0.0335 | 40.0 | 8600 | 0.7446 | 0.7556 | | 0.0337 | 41.0 | 8815 | 0.7439 | 0.7556 | | 0.0431 | 42.0 | 9030 | 0.7421 | 0.7556 | | 0.0392 | 43.0 | 9245 | 0.7439 | 0.7556 | | 0.03 | 44.0 | 9460 | 0.7447 | 0.7556 | | 0.0402 | 45.0 | 9675 | 0.7426 | 0.7556 | | 0.0313 | 46.0 | 9890 | 0.7416 | 0.7556 | | 0.0341 | 47.0 | 10105 | 0.7428 | 0.7556 | | 0.0375 | 48.0 | 10320 | 0.7420 | 0.7556 | | 0.0432 | 49.0 | 10535 | 0.7428 | 0.7556 | | 0.0389 | 50.0 | 10750 | 0.7430 | 0.7556 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_0001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3786 - Accuracy: 0.8444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0587 | 1.0 | 215 | 0.8379 | 0.7778 | | 0.0029 | 2.0 | 430 | 0.8134 | 0.8222 | | 0.0061 | 3.0 | 645 | 0.6824 | 0.8667 | | 0.0003 | 4.0 | 860 | 0.8964 | 0.8444 | | 0.0004 | 5.0 | 1075 | 1.1389 | 0.8 | | 0.0069 | 6.0 | 1290 | 0.8847 | 0.8222 | | 0.0014 | 7.0 | 1505 | 0.9407 | 0.8444 | | 0.0208 | 8.0 | 1720 | 1.2665 | 0.8 | | 0.0 | 9.0 | 1935 | 0.7746 | 0.8222 | | 0.0001 | 10.0 | 2150 | 0.9541 | 0.8222 | | 0.0 | 11.0 | 2365 | 1.3297 | 0.7556 | | 0.0 | 12.0 | 2580 | 1.2887 | 0.7778 | | 0.0 | 13.0 | 2795 | 1.2405 | 0.7778 | | 0.0 | 14.0 | 3010 | 1.2098 | 0.8 | | 0.0 | 15.0 | 3225 | 1.1905 | 0.8 | | 0.0 | 16.0 | 3440 | 1.1775 | 0.8 | | 0.0 | 17.0 | 3655 | 1.1699 | 0.8 | | 0.0 | 18.0 | 3870 | 1.1668 | 0.8 | | 0.0 | 19.0 | 4085 | 1.1651 | 0.8 | | 0.0 | 20.0 | 4300 | 1.1645 | 0.8 | | 0.0 | 21.0 | 4515 | 1.1663 | 0.8 | | 0.0 | 22.0 | 4730 | 1.1709 | 0.8 | | 0.0 | 23.0 | 4945 | 1.1752 | 0.8 | | 0.0 | 24.0 | 5160 | 1.1807 | 0.8 | | 0.0 | 25.0 | 5375 | 1.1874 | 0.8222 | | 0.0 | 26.0 | 5590 | 1.1925 | 0.8222 | | 0.0 | 27.0 | 5805 | 1.1999 | 0.8222 | | 0.0 | 28.0 | 6020 | 1.2057 | 0.8222 | | 0.0 | 29.0 | 6235 | 1.2150 | 0.8222 | | 0.0 | 30.0 | 6450 | 1.2228 | 0.8222 | | 0.0 | 31.0 | 6665 | 1.2334 | 0.8222 | | 0.0 | 32.0 | 6880 | 1.2399 | 0.8222 | | 0.0 | 33.0 | 7095 | 1.2440 | 0.8222 | | 0.0 | 34.0 | 7310 | 1.2539 | 0.8222 | | 0.0 | 35.0 | 7525 | 1.2643 | 0.8222 | | 0.0 | 36.0 | 7740 | 1.2752 | 0.8222 | | 0.0 | 37.0 | 7955 | 1.2837 | 0.8222 | | 0.0 | 38.0 | 8170 | 1.2941 | 0.8222 | | 0.0 | 39.0 | 8385 | 1.3057 | 0.8444 | | 0.0 | 40.0 | 8600 | 1.3171 | 0.8444 | | 0.0 | 41.0 | 8815 | 1.3233 | 0.8444 | | 0.0 | 42.0 | 9030 | 1.3334 | 0.8444 | | 0.0 | 43.0 | 9245 | 1.3422 | 0.8444 | | 0.0 | 44.0 | 9460 | 1.3487 | 0.8444 | | 0.0 | 45.0 | 9675 | 1.3569 | 0.8444 | | 0.0 | 46.0 | 9890 | 1.3629 | 0.8444 | | 0.0 | 47.0 | 10105 | 1.3713 | 0.8444 | | 0.0 | 48.0 | 10320 | 1.3761 | 0.8444 | | 0.0 | 49.0 | 10535 | 1.3795 | 0.8444 | | 0.0 | 50.0 | 10750 | 1.3786 | 0.8444 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9714 - Accuracy: 0.7111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2707 | 1.0 | 215 | 1.3448 | 0.3333 | | 1.0768 | 2.0 | 430 | 1.2879 | 0.3333 | | 0.9642 | 3.0 | 645 | 1.2312 | 0.3556 | | 0.8173 | 4.0 | 860 | 1.1793 | 0.4222 | | 0.7029 | 5.0 | 1075 | 1.1361 | 0.4889 | | 0.6404 | 6.0 | 1290 | 1.1006 | 0.5111 | | 0.5591 | 7.0 | 1505 | 1.0646 | 0.5778 | | 0.4274 | 8.0 | 1720 | 1.0467 | 0.5778 | | 0.3944 | 9.0 | 1935 | 1.0267 | 0.6444 | | 0.3254 | 10.0 | 2150 | 1.0079 | 0.6222 | | 0.2604 | 11.0 | 2365 | 0.9958 | 0.6222 | | 0.2631 | 12.0 | 2580 | 0.9759 | 0.6222 | | 0.2337 | 13.0 | 2795 | 0.9617 | 0.6 | | 0.1789 | 14.0 | 3010 | 0.9588 | 0.6222 | | 0.1879 | 15.0 | 3225 | 0.9460 | 0.6222 | | 0.1684 | 16.0 | 3440 | 0.9372 | 0.6222 | | 0.1577 | 17.0 | 3655 | 0.9384 | 0.6444 | | 0.14 | 18.0 | 3870 | 0.9410 | 0.6444 | | 0.1197 | 19.0 | 4085 | 0.9384 | 0.6444 | | 0.1254 | 20.0 | 4300 | 0.9412 | 0.6444 | | 0.1072 | 21.0 | 4515 | 0.9296 | 0.6444 | | 0.0973 | 22.0 | 4730 | 0.9322 | 0.6444 | | 0.0821 | 23.0 | 4945 | 0.9340 | 0.6444 | | 0.0927 | 24.0 | 5160 | 0.9345 | 0.6667 | | 0.0715 | 25.0 | 5375 | 0.9358 | 0.6667 | | 0.0724 | 26.0 | 5590 | 0.9414 | 0.6889 | | 0.0815 | 27.0 | 5805 | 0.9356 | 0.6667 | | 0.0671 | 28.0 | 6020 | 0.9387 | 0.6889 | | 0.053 | 29.0 | 6235 | 0.9438 | 0.6889 | | 0.0671 | 30.0 | 6450 | 0.9381 | 0.7111 | | 0.0428 | 31.0 | 6665 | 0.9431 | 0.7111 | | 0.041 | 32.0 | 6880 | 0.9407 | 0.7111 | | 0.0371 | 33.0 | 7095 | 0.9476 | 0.7111 | | 0.0372 | 34.0 | 7310 | 0.9501 | 0.7111 | | 0.0416 | 35.0 | 7525 | 0.9484 | 0.7111 | | 0.0375 | 36.0 | 7740 | 0.9551 | 0.7111 | | 0.0443 | 37.0 | 7955 | 0.9530 | 0.7111 | | 0.031 | 38.0 | 8170 | 0.9549 | 0.7111 | | 0.0359 | 39.0 | 8385 | 0.9537 | 0.7111 | | 0.0327 | 40.0 | 8600 | 0.9553 | 0.7111 | | 0.0313 | 41.0 | 8815 | 0.9602 | 0.7111 | | 0.0312 | 42.0 | 9030 | 0.9634 | 0.7111 | | 0.0302 | 43.0 | 9245 | 0.9659 | 0.7111 | | 0.0284 | 44.0 | 9460 | 0.9687 | 0.7111 | | 0.0286 | 45.0 | 9675 | 0.9696 | 0.7111 | | 0.0307 | 46.0 | 9890 | 0.9699 | 0.7111 | | 0.0251 | 47.0 | 10105 | 0.9708 | 0.7111 | | 0.0291 | 48.0 | 10320 | 0.9714 | 0.7111 | | 0.0372 | 49.0 | 10535 | 0.9713 | 0.7111 | | 0.0296 | 50.0 | 10750 | 0.9714 | 0.7111 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_00001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.1410 - Accuracy: 0.7556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1698 | 1.0 | 215 | 0.9036 | 0.6667 | | 0.0086 | 2.0 | 430 | 0.8771 | 0.8 | | 0.0023 | 3.0 | 645 | 0.9467 | 0.8 | | 0.0009 | 4.0 | 860 | 1.0021 | 0.7778 | | 0.0006 | 5.0 | 1075 | 1.0414 | 0.7556 | | 0.0004 | 6.0 | 1290 | 1.0789 | 0.7556 | | 0.0003 | 7.0 | 1505 | 1.0927 | 0.7778 | | 0.0002 | 8.0 | 1720 | 1.1233 | 0.7778 | | 0.0002 | 9.0 | 1935 | 1.1652 | 0.7778 | | 0.0001 | 10.0 | 2150 | 1.1805 | 0.7778 | | 0.0001 | 11.0 | 2365 | 1.2046 | 0.7778 | | 0.0001 | 12.0 | 2580 | 1.2366 | 0.7778 | | 0.0001 | 13.0 | 2795 | 1.2540 | 0.7778 | | 0.0001 | 14.0 | 3010 | 1.2856 | 0.7778 | | 0.0 | 15.0 | 3225 | 1.3104 | 0.7778 | | 0.0 | 16.0 | 3440 | 1.3434 | 0.7778 | | 0.0 | 17.0 | 3655 | 1.3705 | 0.7778 | | 0.0 | 18.0 | 3870 | 1.3922 | 0.7778 | | 0.0 | 19.0 | 4085 | 1.4221 | 0.7778 | | 0.0 | 20.0 | 4300 | 1.4557 | 0.7778 | | 0.0 | 21.0 | 4515 | 1.4854 | 0.7778 | | 0.0 | 22.0 | 4730 | 1.5092 | 0.7778 | | 0.0 | 23.0 | 4945 | 1.5343 | 0.7778 | | 0.0 | 24.0 | 5160 | 1.5541 | 0.7778 | | 0.0 | 25.0 | 5375 | 1.5830 | 0.7778 | | 0.0 | 26.0 | 5590 | 1.6177 | 0.7778 | | 0.0 | 27.0 | 5805 | 1.6474 | 0.7778 | | 0.0 | 28.0 | 6020 | 1.6634 | 0.7778 | | 0.0 | 29.0 | 6235 | 1.6875 | 0.7778 | | 0.0 | 30.0 | 6450 | 1.7106 | 0.7778 | | 0.0 | 31.0 | 6665 | 1.7484 | 0.7778 | | 0.0 | 32.0 | 6880 | 1.7797 | 0.7778 | | 0.0 | 33.0 | 7095 | 1.8167 | 0.7778 | | 0.0 | 34.0 | 7310 | 1.8422 | 0.7778 | | 0.0 | 35.0 | 7525 | 1.8678 | 0.7778 | | 0.0 | 36.0 | 7740 | 1.8865 | 0.7778 | | 0.0 | 37.0 | 7955 | 1.9143 | 0.7778 | | 0.0 | 38.0 | 8170 | 1.9225 | 0.7778 | | 0.0 | 39.0 | 8385 | 1.9621 | 0.7778 | | 0.0 | 40.0 | 8600 | 1.9777 | 0.7556 | | 0.0 | 41.0 | 8815 | 2.0240 | 0.7778 | | 0.0 | 42.0 | 9030 | 2.0141 | 0.7556 | | 0.0 | 43.0 | 9245 | 2.0463 | 0.7556 | | 0.0 | 44.0 | 9460 | 2.0688 | 0.7556 | | 0.0 | 45.0 | 9675 | 2.0919 | 0.7556 | | 0.0 | 46.0 | 9890 | 2.1123 | 0.7556 | | 0.0 | 47.0 | 10105 | 2.1294 | 0.7556 | | 0.0 | 48.0 | 10320 | 2.1354 | 0.7556 | | 0.0 | 49.0 | 10535 | 2.1448 | 0.7556 | | 0.0 | 50.0 | 10750 | 2.1410 | 0.7556 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_0001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.9983 - Accuracy: 0.7556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0219 | 1.0 | 215 | 0.6349 | 0.8667 | | 0.0009 | 2.0 | 430 | 1.0022 | 0.7333 | | 0.0057 | 3.0 | 645 | 1.0734 | 0.7556 | | 0.0006 | 4.0 | 860 | 1.3398 | 0.7778 | | 0.0 | 5.0 | 1075 | 1.6890 | 0.7333 | | 0.0 | 6.0 | 1290 | 1.6522 | 0.7111 | | 0.0 | 7.0 | 1505 | 1.6220 | 0.7111 | | 0.0 | 8.0 | 1720 | 1.6021 | 0.7333 | | 0.0 | 9.0 | 1935 | 1.5870 | 0.7333 | | 0.0 | 10.0 | 2150 | 1.5842 | 0.7333 | | 0.0 | 11.0 | 2365 | 1.5782 | 0.7333 | | 0.0 | 12.0 | 2580 | 1.5625 | 0.7333 | | 0.0 | 13.0 | 2795 | 1.5601 | 0.7333 | | 0.0 | 14.0 | 3010 | 1.5521 | 0.7333 | | 0.0 | 15.0 | 3225 | 1.5637 | 0.7333 | | 0.0 | 16.0 | 3440 | 1.5652 | 0.7778 | | 0.0 | 17.0 | 3655 | 1.5622 | 0.7333 | | 0.0 | 18.0 | 3870 | 1.5700 | 0.7778 | | 0.0 | 19.0 | 4085 | 1.5813 | 0.7778 | | 0.0 | 20.0 | 4300 | 1.5874 | 0.7556 | | 0.0 | 21.0 | 4515 | 1.5931 | 0.7556 | | 0.0 | 22.0 | 4730 | 1.6081 | 0.7556 | | 0.0 | 23.0 | 4945 | 1.6167 | 0.7556 | | 0.0 | 24.0 | 5160 | 1.6398 | 0.7556 | | 0.0 | 25.0 | 5375 | 1.6448 | 0.7556 | | 0.0 | 26.0 | 5590 | 1.6610 | 0.7556 | | 0.0 | 27.0 | 5805 | 1.6849 | 0.7333 | | 0.0 | 28.0 | 6020 | 1.6982 | 0.7556 | | 0.0 | 29.0 | 6235 | 1.7059 | 0.7556 | | 0.0 | 30.0 | 6450 | 1.7216 | 0.7556 | | 0.0 | 31.0 | 6665 | 1.7579 | 0.7556 | | 0.0 | 32.0 | 6880 | 1.7634 | 0.7556 | | 0.0 | 33.0 | 7095 | 1.7775 | 0.7556 | | 0.0 | 34.0 | 7310 | 1.8193 | 0.7556 | | 0.0 | 35.0 | 7525 | 1.8288 | 0.7556 | | 0.0 | 36.0 | 7740 | 1.8617 | 0.7556 | | 0.0 | 37.0 | 7955 | 1.8992 | 0.7556 | | 0.0 | 38.0 | 8170 | 1.9097 | 0.7556 | | 0.0 | 39.0 | 8385 | 1.9200 | 0.7556 | | 0.0 | 40.0 | 8600 | 1.9431 | 0.7556 | | 0.0 | 41.0 | 8815 | 1.9378 | 0.7556 | | 0.0 | 42.0 | 9030 | 1.9739 | 0.7556 | | 0.0 | 43.0 | 9245 | 1.9777 | 0.7556 | | 0.0 | 44.0 | 9460 | 1.9924 | 0.7556 | | 0.0 | 45.0 | 9675 | 1.9923 | 0.7556 | | 0.0 | 46.0 | 9890 | 1.9872 | 0.7556 | | 0.0 | 47.0 | 10105 | 2.0011 | 0.7556 | | 0.0 | 48.0 | 10320 | 2.0002 | 0.7556 | | 0.0 | 49.0 | 10535 | 1.9945 | 0.7556 | | 0.0 | 50.0 | 10750 | 1.9983 | 0.7556 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
vit54155/vit-base-patch16-224-in21k-euroSat
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # vit54155/vit-base-patch16-224-in21k-euroSat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6316 - Train Accuracy: 0.6693 - Train Top-3-accuracy: 1.0 - Validation Loss: 0.6555 - Validation Accuracy: 0.6320 - Validation Top-3-accuracy: 1.0 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 360, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 0.6316 | 0.6693 | 1.0 | 0.6555 | 0.6320 | 1.0 | 0 | ### Framework versions - Transformers 4.36.2 - TensorFlow 2.13.0 - Datasets 2.16.0 - Tokenizers 0.15.0
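The serialized optimizer dict above is hard to read; a hedged reconstruction using `transformers.create_optimizer` (which builds the same AdamWeightDecay plus linear PolynomialDecay combination, assuming the 360 decay steps and rates from the config):

```python
from transformers import create_optimizer

# Rebuild an optimizer matching the serialized config: AdamWeightDecay
# with a linear decay (power=1.0) from 3e-05 to 0.0 over 360 steps,
# weight_decay_rate=0.01, and the default Adam betas/epsilon shown above.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,
    num_train_steps=360,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```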
[ "anormal", "normal" ]
hkivancoral/hushem_40x_deit_tiny_adamax_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_0001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8703 - Accuracy: 0.9070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0405 | 1.0 | 217 | 0.7475 | 0.8372 | | 0.0174 | 2.0 | 434 | 0.3741 | 0.8837 | | 0.0347 | 3.0 | 651 | 0.7246 | 0.8605 | | 0.0 | 4.0 | 868 | 0.4948 | 0.9070 | | 0.0001 | 5.0 | 1085 | 0.5865 | 0.8837 | | 0.0 | 6.0 | 1302 | 0.7334 | 0.8837 | | 0.0 | 7.0 | 1519 | 0.7116 | 0.8837 | | 0.0 | 8.0 | 1736 | 0.7193 | 0.8837 | | 0.0 | 9.0 | 1953 | 0.7217 | 0.8837 | | 0.0 | 10.0 | 2170 | 0.7267 | 0.9070 | | 0.0 | 11.0 | 2387 | 0.7322 | 0.9070 | | 0.0 | 12.0 | 2604 | 0.7327 | 0.9070 | | 0.0 | 13.0 | 2821 | 0.7378 | 0.9070 | | 0.0 | 14.0 | 3038 | 0.7396 | 0.9070 | | 0.0 | 15.0 | 3255 | 0.7467 | 0.9070 | | 0.0 | 16.0 | 3472 | 0.7473 | 0.9070 | | 0.0 | 17.0 | 3689 | 0.7591 | 0.9070 | | 0.0 | 18.0 | 3906 | 0.7633 | 0.9070 | | 0.0 | 19.0 | 4123 | 0.7713 | 0.9070 | | 0.0 | 20.0 | 4340 | 0.7767 | 0.9070 | | 0.0 | 21.0 | 4557 | 0.7800 | 0.9070 | | 0.0 | 22.0 | 4774 | 0.7902 | 0.9070 | | 0.0 | 23.0 | 4991 | 0.7915 | 0.9070 | | 0.0 | 24.0 | 5208 | 0.8041 | 0.9070 | | 0.0 | 25.0 | 5425 | 0.8046 | 0.9070 | | 0.0 | 26.0 | 5642 | 0.8231 | 0.8837 | | 0.0 | 27.0 | 5859 | 0.8344 | 0.8837 | | 0.0 | 28.0 | 6076 | 0.8309 | 0.8837 | | 0.0 | 29.0 | 6293 | 0.8419 | 0.8837 | | 0.0 | 30.0 | 6510 | 0.8453 | 0.8837 | | 0.0 | 31.0 | 6727 | 0.8681 | 0.8837 | | 0.0 | 32.0 | 6944 | 0.8660 | 0.8837 | | 0.0 | 33.0 | 7161 | 0.8697 | 0.8837 | | 0.0 | 34.0 | 7378 | 0.8846 | 0.8837 | | 0.0 | 35.0 | 7595 | 0.8902 | 0.8837 | | 0.0 | 36.0 | 7812 | 0.8974 | 0.8837 | | 0.0 | 37.0 | 8029 | 0.8857 | 0.8837 | | 0.0 | 38.0 | 8246 | 0.8854 | 0.8837 | | 0.0 | 39.0 | 8463 | 0.8857 | 0.8837 | | 0.0 | 40.0 | 8680 | 0.8940 | 0.8837 | | 0.0 | 41.0 | 8897 | 0.8956 | 0.8837 | | 0.0 | 42.0 | 9114 | 0.8901 | 0.8837 | | 0.0 | 43.0 | 9331 | 0.8853 | 0.9070 | | 0.0 | 44.0 | 9548 | 0.8877 | 0.8837 | | 0.0 | 45.0 | 9765 | 0.8928 | 0.9070 | | 0.0 | 46.0 | 9982 | 0.8951 | 0.9070 | | 0.0 | 47.0 | 10199 | 0.8786 | 0.9070 | | 0.0 | 48.0 | 10416 | 0.8807 | 0.9070 | | 0.0 | 49.0 | 10633 | 0.8732 | 0.9070 | | 0.0 | 50.0 | 10850 | 0.8703 | 0.9070 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4834 - Accuracy: 0.7674 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2567 | 1.0 | 217 | 1.3908 | 0.3023 | | 1.1156 | 2.0 | 434 | 1.3183 | 0.4186 | | 0.9891 | 3.0 | 651 | 1.2352 | 0.5116 | | 0.902 | 4.0 | 868 | 1.1401 | 0.5814 | | 0.7383 | 5.0 | 1085 | 1.0533 | 0.6047 | | 0.6659 | 6.0 | 1302 | 0.9783 | 0.6279 | | 0.577 | 7.0 | 1519 | 0.9088 | 0.6047 | | 0.5084 | 8.0 | 1736 | 0.8504 | 0.6512 | | 0.4618 | 9.0 | 1953 | 0.8112 | 0.6512 | | 0.3986 | 10.0 | 2170 | 0.7644 | 0.6744 | | 0.3262 | 11.0 | 2387 | 0.7405 | 0.6744 | | 0.3187 | 12.0 | 2604 | 0.7073 | 0.7442 | | 0.287 | 13.0 | 2821 | 0.6756 | 0.7442 | | 0.2667 | 14.0 | 3038 | 0.6524 | 0.7674 | | 0.2566 | 15.0 | 3255 | 0.6373 | 0.7674 | | 0.2206 | 16.0 | 3472 | 0.6121 | 0.7674 | | 0.1851 | 17.0 | 3689 | 0.6018 | 0.7674 | | 0.1802 | 18.0 | 3906 | 0.5901 | 0.7674 | | 0.1691 | 19.0 | 4123 | 0.5735 | 0.7674 | | 0.1555 | 20.0 | 4340 | 0.5642 | 0.7674 | | 0.1532 | 21.0 | 4557 | 0.5647 | 0.7907 | | 0.1287 | 22.0 | 4774 | 0.5473 | 0.7907 | | 0.1172 | 23.0 | 4991 | 0.5337 | 0.7907 | | 0.1215 | 24.0 | 5208 | 0.5344 | 0.7907 | | 0.1 | 25.0 | 5425 | 0.5177 | 0.7907 | | 0.1218 | 26.0 | 5642 | 0.5181 | 0.7907 | | 0.0935 | 27.0 | 5859 | 0.5065 | 0.7907 | | 0.0833 | 28.0 | 6076 | 0.4985 | 0.7907 | | 0.0714 | 29.0 | 6293 | 0.4998 | 0.7907 | | 0.0825 | 30.0 | 6510 | 0.4944 | 0.7907 | | 0.0754 | 31.0 | 6727 | 0.4956 | 0.7674 | | 0.0765 | 32.0 | 6944 | 0.4881 | 0.7674 | | 0.0774 | 33.0 | 7161 | 0.4958 | 0.7674 | | 0.057 | 34.0 | 7378 | 0.4894 | 0.7674 | | 0.0663 | 35.0 | 7595 | 0.4882 | 0.7674 | | 0.059 | 36.0 | 7812 | 0.4848 | 0.7674 | | 0.0537 | 37.0 | 8029 | 0.4865 | 0.7674 | | 0.0454 | 38.0 | 8246 | 0.4882 | 0.7674 | | 0.0514 | 39.0 | 8463 | 0.4854 | 0.7674 | | 0.0629 | 40.0 | 8680 | 0.4861 | 0.7674 | | 0.0453 | 41.0 | 8897 | 0.4865 | 0.7674 | | 0.0447 | 42.0 | 9114 | 0.4837 | 0.7674 | | 0.0452 | 43.0 | 9331 | 0.4805 | 0.7907 | | 0.0545 | 44.0 | 9548 | 0.4818 | 0.7907 | | 0.0444 | 45.0 | 9765 | 0.4816 | 0.7907 | | 0.0454 | 46.0 | 9982 | 0.4835 | 0.7674 | | 0.0369 | 47.0 | 10199 | 0.4841 | 0.7674 | | 0.0401 | 48.0 | 10416 | 0.4827 | 0.7907 | | 0.0524 | 49.0 | 10633 | 0.4835 | 0.7674 | | 0.0394 | 50.0 | 10850 | 0.4834 | 0.7674 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_00001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7075 - Accuracy: 0.9302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2263 | 1.0 | 217 | 0.4817 | 0.7907 | | 0.0168 | 2.0 | 434 | 0.4089 | 0.8605 | | 0.0026 | 3.0 | 651 | 0.3730 | 0.9070 | | 0.0013 | 4.0 | 868 | 0.4093 | 0.9070 | | 0.0007 | 5.0 | 1085 | 0.4236 | 0.9070 | | 0.0005 | 6.0 | 1302 | 0.4344 | 0.9070 | | 0.0004 | 7.0 | 1519 | 0.4366 | 0.9070 | | 0.0003 | 8.0 | 1736 | 0.4561 | 0.9070 | | 0.0002 | 9.0 | 1953 | 0.4646 | 0.9070 | | 0.0001 | 10.0 | 2170 | 0.4712 | 0.9070 | | 0.0001 | 11.0 | 2387 | 0.4696 | 0.9070 | | 0.0001 | 12.0 | 2604 | 0.4779 | 0.9070 | | 0.0001 | 13.0 | 2821 | 0.4883 | 0.9070 | | 0.0001 | 14.0 | 3038 | 0.4911 | 0.9070 | | 0.0 | 15.0 | 3255 | 0.4887 | 0.9070 | | 0.0 | 16.0 | 3472 | 0.5049 | 0.9070 | | 0.0 | 17.0 | 3689 | 0.5115 | 0.9070 | | 0.0 | 18.0 | 3906 | 0.5246 | 0.9070 | | 0.0 | 19.0 | 4123 | 0.5207 | 0.9070 | | 0.0 | 20.0 | 4340 | 0.5310 | 0.9070 | | 0.0 | 21.0 | 4557 | 0.5341 | 0.9070 | | 0.0 | 22.0 | 4774 | 0.5389 | 0.9070 | | 0.0 | 23.0 | 4991 | 0.5470 | 0.9070 | | 0.0 | 24.0 | 5208 | 0.5525 | 0.9070 | | 0.0 | 25.0 | 5425 | 0.5607 | 0.9070 | | 0.0 | 26.0 | 5642 | 0.5630 | 0.9070 | | 0.0 | 27.0 | 5859 | 0.5707 | 0.9302 | | 0.0 | 28.0 | 6076 | 0.5785 | 0.9302 | | 0.0 | 29.0 | 6293 | 0.5816 | 0.9302 | | 0.0 | 30.0 | 6510 | 0.5927 | 0.9302 | | 0.0 | 31.0 | 6727 | 0.6021 | 0.9302 | | 0.0 | 32.0 | 6944 | 0.6045 | 0.9302 | | 0.0 | 33.0 | 7161 | 0.6209 | 0.9302 | | 0.0 | 34.0 | 7378 | 0.6273 | 0.9302 | | 0.0 | 35.0 | 7595 | 0.6296 | 0.9302 | | 0.0 | 36.0 | 7812 | 0.6372 | 0.9302 | | 0.0 | 37.0 | 8029 | 0.6432 | 0.9302 | | 0.0 | 38.0 | 8246 | 0.6544 | 0.9302 | | 0.0 | 39.0 | 8463 | 0.6520 | 0.9302 | | 0.0 | 40.0 | 8680 | 0.6641 | 0.9302 | | 0.0 | 41.0 | 8897 | 0.6713 | 0.9302 | | 0.0 | 42.0 | 9114 | 0.6757 | 0.9302 | | 0.0 | 43.0 | 9331 | 0.6829 | 0.9302 | | 0.0 | 44.0 | 9548 | 0.6913 | 0.9302 | | 0.0 | 45.0 | 9765 | 0.6942 | 0.9302 | | 0.0 | 46.0 | 9982 | 0.7019 | 0.9302 | | 0.0 | 47.0 | 10199 | 0.7046 | 0.9302 | | 0.0 | 48.0 | 10416 | 0.7061 | 0.9302 | | 0.0 | 49.0 | 10633 | 0.7073 | 0.9302 | | 0.0 | 50.0 | 10850 | 0.7075 | 0.9302 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_0001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1224 - Accuracy: 0.9762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0999 | 1.0 | 219 | 0.2863 | 0.8810 | | 0.0106 | 2.0 | 438 | 0.0557 | 0.9524 | | 0.0032 | 3.0 | 657 | 0.1838 | 0.9762 | | 0.0003 | 4.0 | 876 | 0.0728 | 0.9762 | | 0.0109 | 5.0 | 1095 | 0.1935 | 0.9762 | | 0.0 | 6.0 | 1314 | 0.0601 | 0.9762 | | 0.0 | 7.0 | 1533 | 0.1576 | 0.9762 | | 0.0 | 8.0 | 1752 | 0.1618 | 0.9762 | | 0.0 | 9.0 | 1971 | 0.1684 | 0.9762 | | 0.0 | 10.0 | 2190 | 0.1720 | 0.9762 | | 0.0 | 11.0 | 2409 | 0.1705 | 0.9762 | | 0.0 | 12.0 | 2628 | 0.1761 | 0.9762 | | 0.0 | 13.0 | 2847 | 0.1758 | 0.9762 | | 0.0 | 14.0 | 3066 | 0.1752 | 0.9762 | | 0.0 | 15.0 | 3285 | 0.1769 | 0.9762 | | 0.0 | 16.0 | 3504 | 0.1750 | 0.9762 | | 0.0 | 17.0 | 3723 | 0.1767 | 0.9762 | | 0.0 | 18.0 | 3942 | 0.1778 | 0.9762 | | 0.0 | 19.0 | 4161 | 0.1748 | 0.9762 | | 0.0 | 20.0 | 4380 | 0.1777 | 0.9762 | | 0.0 | 21.0 | 4599 | 0.1775 | 0.9762 | | 0.0 | 22.0 | 4818 | 0.1734 | 0.9762 | | 0.0 | 23.0 | 5037 | 0.1752 | 0.9762 | | 0.0 | 24.0 | 5256 | 0.1709 | 0.9762 | | 0.0 | 25.0 | 5475 | 0.1680 | 0.9762 | | 0.0 | 26.0 | 5694 | 0.1718 | 0.9762 | | 0.0 | 27.0 | 5913 | 0.1738 | 0.9762 | | 0.0 | 28.0 | 6132 | 0.1754 | 0.9762 | | 0.0 | 29.0 | 6351 | 0.1694 | 0.9762 | | 0.0 | 30.0 | 6570 | 0.1671 | 0.9762 | | 0.0 | 31.0 | 6789 | 0.1676 | 0.9762 | | 0.0 | 32.0 | 7008 | 0.1684 | 0.9762 | | 0.0 | 33.0 | 7227 | 0.1579 | 0.9762 | | 0.0 | 34.0 | 7446 | 0.1646 | 0.9762 | | 0.0 | 35.0 | 7665 | 0.1705 | 0.9762 | | 0.0 | 36.0 | 7884 | 0.1608 | 0.9762 | | 0.0 | 37.0 | 8103 | 0.1657 | 0.9762 | | 0.0 | 38.0 | 8322 | 0.1625 | 0.9762 | | 0.0 | 39.0 | 8541 | 0.1523 | 0.9762 | | 0.0 | 40.0 | 8760 | 0.1553 | 0.9762 | | 0.0 | 41.0 | 8979 | 0.1442 | 0.9762 | | 0.0 | 42.0 | 9198 | 0.1409 | 0.9762 | | 0.0 | 43.0 | 9417 | 0.1436 | 0.9762 | | 0.0 | 44.0 | 9636 | 0.1410 | 0.9762 | | 0.0 | 45.0 | 9855 | 0.1340 | 0.9762 | | 0.0 | 46.0 | 10074 | 0.1301 | 0.9762 | | 0.0 | 47.0 | 10293 | 0.1236 | 0.9762 | | 0.0 | 48.0 | 10512 | 0.1220 | 0.9762 | | 0.0 | 49.0 | 10731 | 0.1222 | 0.9762 | | 0.0 | 50.0 | 10950 | 0.1224 | 0.9762 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2186 - Accuracy: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2742 | 1.0 | 219 | 1.3421 | 0.4286 | | 1.1364 | 2.0 | 438 | 1.2693 | 0.4048 | | 0.9912 | 3.0 | 657 | 1.1701 | 0.5238 | | 0.8292 | 4.0 | 876 | 1.0493 | 0.6429 | | 0.7771 | 5.0 | 1095 | 0.9148 | 0.7143 | | 0.6956 | 6.0 | 1314 | 0.8048 | 0.7381 | | 0.519 | 7.0 | 1533 | 0.7062 | 0.8095 | | 0.5042 | 8.0 | 1752 | 0.6401 | 0.7857 | | 0.4397 | 9.0 | 1971 | 0.5785 | 0.8333 | | 0.3933 | 10.0 | 2190 | 0.5338 | 0.8571 | | 0.341 | 11.0 | 2409 | 0.4959 | 0.8810 | | 0.3345 | 12.0 | 2628 | 0.4569 | 0.8810 | | 0.2949 | 13.0 | 2847 | 0.4265 | 0.9048 | | 0.2608 | 14.0 | 3066 | 0.3999 | 0.9286 | | 0.2368 | 15.0 | 3285 | 0.3796 | 0.9286 | | 0.2257 | 16.0 | 3504 | 0.3614 | 0.9286 | | 0.232 | 17.0 | 3723 | 0.3430 | 0.9286 | | 0.1928 | 18.0 | 3942 | 0.3249 | 0.9286 | | 0.1804 | 19.0 | 4161 | 0.3144 | 0.9286 | | 0.1542 | 20.0 | 4380 | 0.3019 | 0.9048 | | 0.1333 | 21.0 | 4599 | 0.2915 | 0.9286 | | 0.1333 | 22.0 | 4818 | 0.2894 | 0.9048 | | 0.1178 | 23.0 | 5037 | 0.2746 | 0.9286 | | 0.1098 | 24.0 | 5256 | 0.2771 | 0.9048 | | 0.1099 | 25.0 | 5475 | 0.2649 | 0.9048 | | 0.0836 | 26.0 | 5694 | 0.2732 | 0.9048 | | 0.0751 | 27.0 | 5913 | 0.2625 | 0.9048 | | 0.0745 | 28.0 | 6132 | 0.2608 | 0.9048 | | 0.0826 | 29.0 | 6351 | 0.2526 | 0.9048 | | 0.079 | 30.0 | 6570 | 0.2463 | 0.9286 | | 0.0659 | 31.0 | 6789 | 0.2439 | 0.9048 | | 0.0738 | 32.0 | 7008 | 0.2422 | 0.9286 | | 0.0683 | 33.0 | 7227 | 0.2335 | 0.9286 | | 0.0674 | 34.0 | 7446 | 0.2343 | 0.9048 | | 0.0633 | 35.0 | 7665 | 0.2311 | 0.9048 | | 0.0608 | 36.0 | 7884 | 0.2259 | 0.9286 | | 0.0543 | 37.0 | 8103 | 0.2239 | 0.9286 | | 0.0444 | 38.0 | 8322 | 0.2256 | 0.9286 | | 0.0496 | 39.0 | 8541 | 0.2255 | 0.9286 | | 0.0513 | 40.0 | 8760 | 0.2253 | 0.9286 | | 0.0449 | 41.0 | 8979 | 0.2226 | 0.9286 | | 0.0449 | 42.0 | 9198 | 0.2216 | 0.9286 | | 0.0549 | 43.0 | 9417 | 0.2202 | 0.9286 | | 0.0488 | 44.0 | 9636 | 0.2213 | 0.9286 | | 0.0437 | 45.0 | 9855 | 0.2208 | 0.9286 | | 0.0362 | 46.0 | 10074 | 0.2201 | 0.9286 | | 0.0622 | 47.0 | 10293 | 0.2188 | 0.9286 | | 0.0546 | 48.0 | 10512 | 0.2185 | 0.9286 | | 0.0472 | 49.0 | 10731 | 0.2186 | 0.9286 | | 0.0581 | 50.0 | 10950 | 0.2186 | 0.9286 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_00001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2776 - Accuracy: 0.9524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2891 | 1.0 | 219 | 0.3655 | 0.9048 | | 0.0271 | 2.0 | 438 | 0.1551 | 0.9762 | | 0.0059 | 3.0 | 657 | 0.1424 | 0.9762 | | 0.0011 | 4.0 | 876 | 0.1398 | 0.9762 | | 0.0007 | 5.0 | 1095 | 0.1496 | 0.9762 | | 0.0005 | 6.0 | 1314 | 0.1466 | 0.9762 | | 0.0003 | 7.0 | 1533 | 0.1409 | 0.9762 | | 0.0002 | 8.0 | 1752 | 0.1498 | 0.9762 | | 0.0002 | 9.0 | 1971 | 0.1564 | 0.9762 | | 0.0001 | 10.0 | 2190 | 0.1656 | 0.9524 | | 0.0001 | 11.0 | 2409 | 0.1807 | 0.9524 | | 0.0001 | 12.0 | 2628 | 0.1735 | 0.9762 | | 0.0001 | 13.0 | 2847 | 0.1728 | 0.9762 | | 0.0001 | 14.0 | 3066 | 0.1752 | 0.9762 | | 0.0 | 15.0 | 3285 | 0.1830 | 0.9524 | | 0.0 | 16.0 | 3504 | 0.1909 | 0.9762 | | 0.0 | 17.0 | 3723 | 0.1856 | 0.9762 | | 0.0 | 18.0 | 3942 | 0.1931 | 0.9762 | | 0.0 | 19.0 | 4161 | 0.1937 | 0.9762 | | 0.0 | 20.0 | 4380 | 0.2012 | 0.9762 | | 0.0 | 21.0 | 4599 | 0.1972 | 0.9762 | | 0.0 | 22.0 | 4818 | 0.2059 | 0.9762 | | 0.0 | 23.0 | 5037 | 0.2072 | 0.9762 | | 0.0 | 24.0 | 5256 | 0.2139 | 0.9762 | | 0.0 | 25.0 | 5475 | 0.2220 | 0.9524 | | 0.0 | 26.0 | 5694 | 0.2242 | 0.9762 | | 0.0 | 27.0 | 5913 | 0.2291 | 0.9524 | | 0.0 | 28.0 | 6132 | 0.2302 | 0.9524 | | 0.0 | 29.0 | 6351 | 0.2283 | 0.9524 | | 0.0 | 30.0 | 6570 | 0.2384 | 0.9524 | | 0.0 | 31.0 | 6789 | 0.2437 | 0.9524 | | 0.0 | 32.0 | 7008 | 0.2389 | 0.9762 | | 0.0 | 33.0 | 7227 | 0.2474 | 0.9524 | | 0.0 | 34.0 | 7446 | 0.2474 | 0.9524 | | 0.0 | 35.0 | 7665 | 0.2453 | 0.9524 | | 0.0 | 36.0 | 7884 | 0.2498 | 0.9524 | | 0.0 | 37.0 | 8103 | 0.2535 | 0.9524 | | 0.0 | 38.0 | 8322 | 0.2499 | 0.9762 | | 0.0 | 39.0 | 8541 | 0.2607 | 0.9524 | | 0.0 | 40.0 | 8760 | 0.2656 | 0.9524 | | 0.0 | 41.0 | 8979 | 0.2652 | 0.9524 | | 0.0 | 42.0 | 9198 | 0.2609 | 0.9524 | | 0.0 | 43.0 | 9417 | 0.2697 | 0.9524 | | 0.0 | 44.0 | 9636 | 0.2693 | 0.9524 | | 0.0 | 45.0 | 9855 | 0.2763 | 0.9524 | | 0.0 | 46.0 | 10074 | 0.2779 | 0.9524 | | 0.0 | 47.0 | 10293 | 0.2750 | 0.9524 | | 0.0 | 48.0 | 10512 | 0.2730 | 0.9524 | | 0.0 | 49.0 | 10731 | 0.2766 | 0.9524 | | 0.0 | 50.0 | 10950 | 0.2776 | 0.9524 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9183 - Accuracy: 0.8537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0769 | 1.0 | 220 | 0.5269 | 0.8293 | | 0.0227 | 2.0 | 440 | 0.7159 | 0.8293 | | 0.0008 | 3.0 | 660 | 1.1921 | 0.7805 | | 0.0031 | 4.0 | 880 | 0.8277 | 0.8537 | | 0.0006 | 5.0 | 1100 | 0.9658 | 0.8049 | | 0.0 | 6.0 | 1320 | 0.5880 | 0.8780 | | 0.0007 | 7.0 | 1540 | 0.9103 | 0.8293 | | 0.0006 | 8.0 | 1760 | 0.4548 | 0.8780 | | 0.0029 | 9.0 | 1980 | 1.0525 | 0.8293 | | 0.0 | 10.0 | 2200 | 0.8563 | 0.8293 | | 0.0 | 11.0 | 2420 | 0.7625 | 0.8537 | | 0.0 | 12.0 | 2640 | 0.7582 | 0.8537 | | 0.0 | 13.0 | 2860 | 0.7509 | 0.8537 | | 0.0 | 14.0 | 3080 | 0.7628 | 0.8537 | | 0.0 | 15.0 | 3300 | 0.7694 | 0.8537 | | 0.0 | 16.0 | 3520 | 0.7739 | 0.8537 | | 0.0 | 17.0 | 3740 | 0.7753 | 0.8537 | | 0.0 | 18.0 | 3960 | 0.7841 | 0.8780 | | 0.0 | 19.0 | 4180 | 0.7863 | 0.8780 | | 0.0 | 20.0 | 4400 | 0.8002 | 0.8537 | | 0.0 | 21.0 | 4620 | 0.7935 | 0.8537 | | 0.0 | 22.0 | 4840 | 0.8153 | 0.8537 | | 0.0 | 23.0 | 5060 | 0.8135 | 0.8537 | | 0.0 | 24.0 | 5280 | 0.8169 | 0.8537 | | 0.0 | 25.0 | 5500 | 0.8219 | 0.8537 | | 0.0 | 26.0 | 5720 | 0.8262 | 0.8537 | | 0.0 | 27.0 | 5940 | 0.8371 | 0.8537 | | 0.0 | 28.0 | 6160 | 0.8311 | 0.8537 | | 0.0 | 29.0 | 6380 | 0.8379 | 0.8537 | | 0.0 | 30.0 | 6600 | 0.8615 | 0.8537 | | 0.0 | 31.0 | 6820 | 0.8776 | 0.8537 | | 0.0 | 32.0 | 7040 | 0.8507 | 0.8537 | | 0.0 | 33.0 | 7260 | 0.8611 | 0.8537 | | 0.0 | 34.0 | 7480 | 0.8621 | 0.8537 | | 0.0 | 35.0 | 7700 | 0.8793 | 0.8537 | | 0.0 | 36.0 | 7920 | 0.8811 | 0.8537 | | 0.0 | 37.0 | 8140 | 0.8839 | 0.8537 | | 0.0 | 38.0 | 8360 | 0.8830 | 0.8537 | | 0.0 | 39.0 | 8580 | 0.8720 | 0.8537 | | 0.0 | 40.0 | 8800 | 0.8855 | 0.8537 | | 0.0 | 41.0 | 9020 | 0.8862 | 0.8537 | | 0.0 | 42.0 | 9240 | 0.9183 | 0.8537 | | 0.0 | 43.0 | 9460 | 0.8971 | 0.8537 | | 0.0 | 44.0 | 9680 | 0.9190 | 0.8537 | | 0.0 | 45.0 | 9900 | 0.9193 | 0.8537 | | 0.0 | 46.0 | 10120 | 0.9157 | 0.8537 | | 0.0 | 47.0 | 10340 | 0.9115 | 0.8537 | | 0.0 | 48.0 | 10560 | 0.9160 | 0.8537 | | 0.0 | 49.0 | 10780 | 0.9155 | 0.8537 | | 0.0 | 50.0 | 11000 | 0.9183 | 0.8537 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5235 - Accuracy: 0.8049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.2691 | 1.0 | 220 | 1.3176 | 0.3415 | | 1.1324 | 2.0 | 440 | 1.2295 | 0.4634 | | 1.003 | 3.0 | 660 | 1.1173 | 0.6341 | | 0.8718 | 4.0 | 880 | 0.9888 | 0.6585 | | 0.7662 | 5.0 | 1100 | 0.8700 | 0.6829 | | 0.6305 | 6.0 | 1320 | 0.7780 | 0.6585 | | 0.552 | 7.0 | 1540 | 0.7068 | 0.6829 | | 0.4791 | 8.0 | 1760 | 0.6670 | 0.6829 | | 0.413 | 9.0 | 1980 | 0.6302 | 0.6829 | | 0.3827 | 10.0 | 2200 | 0.6050 | 0.7073 | | 0.3215 | 11.0 | 2420 | 0.5880 | 0.7073 | | 0.2953 | 12.0 | 2640 | 0.5689 | 0.7073 | | 0.2691 | 13.0 | 2860 | 0.5551 | 0.7073 | | 0.255 | 14.0 | 3080 | 0.5391 | 0.7317 | | 0.2205 | 15.0 | 3300 | 0.5338 | 0.7561 | | 0.2031 | 16.0 | 3520 | 0.5276 | 0.8049 | | 0.1827 | 17.0 | 3740 | 0.5158 | 0.8049 | | 0.178 | 18.0 | 3960 | 0.5117 | 0.8049 | | 0.1722 | 19.0 | 4180 | 0.5070 | 0.8293 | | 0.1354 | 20.0 | 4400 | 0.5054 | 0.8293 | | 0.1154 | 21.0 | 4620 | 0.5008 | 0.8293 | | 0.1032 | 22.0 | 4840 | 0.5031 | 0.8293 | | 0.123 | 23.0 | 5060 | 0.5052 | 0.8293 | | 0.0925 | 24.0 | 5280 | 0.5012 | 0.8049 | | 0.1004 | 25.0 | 5500 | 0.5002 | 0.8293 | | 0.1106 | 26.0 | 5720 | 0.5000 | 0.8293 | | 0.0932 | 27.0 | 5940 | 0.5018 | 0.8293 | | 0.0974 | 28.0 | 6160 | 0.5069 | 0.8293 | | 0.0749 | 29.0 | 6380 | 0.5067 | 0.8293 | | 0.0626 | 30.0 | 6600 | 0.5071 | 0.8293 | | 0.058 | 31.0 | 6820 | 0.5023 | 0.8293 | | 0.0771 | 32.0 | 7040 | 0.5068 | 0.8293 | | 0.0537 | 33.0 | 7260 | 0.5089 | 0.8049 | | 0.0443 | 34.0 | 7480 | 0.5110 | 0.8049 | | 0.0529 | 35.0 | 7700 | 0.5102 | 0.8049 | | 0.056 | 36.0 | 7920 | 0.5123 | 0.8293 | | 0.0373 | 37.0 | 8140 | 0.5147 | 0.8293 | | 0.0662 | 38.0 | 8360 | 0.5122 | 0.8293 | | 0.0489 | 39.0 | 8580 | 0.5155 | 0.8293 | | 0.0389 | 40.0 | 8800 | 0.5166 | 0.8293 | | 0.0414 | 41.0 | 9020 | 0.5205 | 0.8049 | | 0.0455 | 42.0 | 9240 | 0.5225 | 0.8293 | | 0.0397 | 43.0 | 9460 | 0.5226 | 0.8049 | | 0.0345 | 44.0 | 9680 | 0.5228 | 0.8049 | | 0.0281 | 45.0 | 9900 | 0.5217 | 0.8049 | | 0.0392 | 46.0 | 10120 | 0.5231 | 0.8049 | | 0.0436 | 47.0 | 10340 | 0.5235 | 0.8293 | | 0.0347 | 48.0 | 10560 | 0.5238 | 0.8049 | | 0.0331 | 49.0 | 10780 | 0.5237 | 0.8049 | | 0.0457 | 50.0 | 11000 | 0.5235 | 0.8049 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_adamax_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_adamax_00001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3162 - Accuracy: 0.8780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2212 | 1.0 | 220 | 0.5037 | 0.7805 | | 0.0167 | 2.0 | 440 | 0.4288 | 0.8049 | | 0.0048 | 3.0 | 660 | 0.5660 | 0.8293 | | 0.001 | 4.0 | 880 | 0.5808 | 0.8049 | | 0.0006 | 5.0 | 1100 | 0.5916 | 0.8049 | | 0.0005 | 6.0 | 1320 | 0.6221 | 0.8293 | | 0.0003 | 7.0 | 1540 | 0.6354 | 0.8293 | | 0.0002 | 8.0 | 1760 | 0.6592 | 0.8293 | | 0.0002 | 9.0 | 1980 | 0.6836 | 0.8293 | | 0.0001 | 10.0 | 2200 | 0.7195 | 0.8537 | | 0.0001 | 11.0 | 2420 | 0.7292 | 0.8293 | | 0.0001 | 12.0 | 2640 | 0.7556 | 0.8537 | | 0.0001 | 13.0 | 2860 | 0.7481 | 0.8537 | | 0.0001 | 14.0 | 3080 | 0.7541 | 0.8537 | | 0.0 | 15.0 | 3300 | 0.7642 | 0.8537 | | 0.0 | 16.0 | 3520 | 0.7944 | 0.8537 | | 0.0 | 17.0 | 3740 | 0.8081 | 0.8537 | | 0.0 | 18.0 | 3960 | 0.8431 | 0.8537 | | 0.0 | 19.0 | 4180 | 0.8377 | 0.8537 | | 0.0 | 20.0 | 4400 | 0.8619 | 0.8537 | | 0.0 | 21.0 | 4620 | 0.8688 | 0.8537 | | 0.0 | 22.0 | 4840 | 0.9067 | 0.8537 | | 0.0 | 23.0 | 5060 | 0.9298 | 0.8537 | | 0.0 | 24.0 | 5280 | 0.9319 | 0.8537 | | 0.0 | 25.0 | 5500 | 0.9416 | 0.8537 | | 0.0 | 26.0 | 5720 | 0.9575 | 0.8537 | | 0.0 | 27.0 | 5940 | 0.9826 | 0.8537 | | 0.0 | 28.0 | 6160 | 0.9800 | 0.8537 | | 0.0 | 29.0 | 6380 | 0.9999 | 0.8537 | | 0.0 | 30.0 | 6600 | 1.0189 | 0.8537 | | 0.0 | 31.0 | 6820 | 1.0648 | 0.8537 | | 0.0 | 32.0 | 7040 | 1.0627 | 0.8537 | | 0.0 | 33.0 | 7260 | 1.0899 | 0.8780 | | 0.0 | 34.0 | 7480 | 1.1141 | 0.8780 | | 0.0 | 35.0 | 7700 | 1.1351 | 0.8537 | | 0.0 | 36.0 | 7920 | 1.1265 | 0.8780 | | 0.0 | 37.0 | 8140 | 1.1654 | 0.8780 | | 0.0 | 38.0 | 8360 | 1.1754 | 0.8780 | | 0.0 | 39.0 | 8580 | 1.1881 | 0.8780 | | 0.0 | 40.0 | 8800 | 1.1930 | 0.8780 | | 0.0 | 41.0 | 9020 | 1.2376 | 0.8780 | | 0.0 | 42.0 | 9240 | 1.2450 | 0.8780 | | 0.0 | 43.0 | 9460 | 1.2371 | 0.8780 | | 0.0 | 44.0 | 9680 | 1.2839 | 0.8780 | | 0.0 | 45.0 | 9900 | 1.2844 | 0.8780 | | 0.0 | 46.0 | 10120 | 1.2849 | 0.8780 | | 0.0 | 47.0 | 10340 | 1.3098 | 0.8780 | | 0.0 | 48.0 | 10560 | 1.3232 | 0.8780 | | 0.0 | 49.0 | 10780 | 1.3105 | 0.8780 | | 0.0 | 50.0 | 11000 | 1.3162 | 0.8780 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
imfarzanansari/skintelligent-acne
# Acne Severity Detection Model ## Overview This model card provides documentation for the Acne Severity Detection model checkpoint used in the Hugging Face pipeline. The model is designed to assess acne severity levels, ranging from clear skin to very severe acne. ## Model Details The checkpoint includes the following files: - **`config.json`**: Model configuration settings. - **`model.safetensors`**: Serialized model parameters and architecture. - **`optimizer.pt`**: Optimizer state capturing the current model optimization. - **`preprocessor_config.json`**: Configuration file for the preprocessor. - **`rng_state.pth`**: Random number generator state for reproducibility. - **`scheduler.pt`**: Scheduler state for controlling learning rate schedules. - **`trainer_state.json`**: Trainer state with information about the training process. - **`training_args.bin`**: Binary file storing training arguments. ## Usage To utilize the model checkpoint, follow these steps: 1. Download this repository. 2. Ensure the required dependencies are installed (`pip install -r requirements.txt`). ## Severity Levels - **Level -1**: Clear Skin - **Level 0**: Occasional Spots - **Level 1**: Mild Acne - **Level 2**: Moderate Acne - **Level 3**: Severe Acne - **Level 4**: Very Severe Acne ## Important Notes - The model card provides insight into the model's purpose, capabilities, and usage instructions. - Ensure all necessary files are present in the `checkpoint` directory for proper functionality. ## License This Acne Severity Detection model checkpoint is licensed under the [MIT License](LICENSE). Please review and adhere to the license when using or modifying the code.
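As a minimal inference sketch (assuming the checkpoint loads with the standard `transformers` image-classification pipeline; the image path is illustrative):

```python
from transformers import pipeline

# Load the acne-severity classifier from the Hub; predicted labels map to
# the severity levels documented above (clear skin through very severe).
classifier = pipeline(
    "image-classification",
    model="imfarzanansari/skintelligent-acne",
)

top = classifier("face_photo.jpg")[0]  # path, URL, or PIL.Image
print(f"Predicted severity: {top['label']} (score {top['score']:.3f})")
```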
[ "level -1", "level 0", "level 1", "level 2", "level 3" ]
yuanhuaisen/autotrain-0uz9r-dmir6
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
chanhua/autotrain-g6laz-7afl8
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
chanhua/autotrain-zzbhy-dqgkj
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 1.0984116792678833 f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
chanhua/autotrain-7p556-nc0f8
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 1.0925332307815552 f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
chanhua/autotrain-ar615-cxc9m
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
LuoFengBit/autotrain-i6iyx-bc6au
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 3.476023547841741e+16 f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
JungleWong/wong
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
lxl2023/autotrain-v7eqd-8qq
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
JungleWong/wong_autotrain
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 3.847293969571684e+27 f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
lxl2023/autotrain-lhrqo-h9twf
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
chanhua/autotrain-0uv3s-vxfry
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
lxl2023/autotrain-6l0e2-ufos2
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
JungleWong/quilt_classfication
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 1.0986123085021973 f1_macro: 0.16666666666666666 f1_micro: 0.3333333333333333 f1_weighted: 0.16666666666666666 precision_macro: 0.1111111111111111 precision_micro: 0.3333333333333333 precision_weighted: 0.1111111111111111 recall_macro: 0.3333333333333333 recall_micro: 0.3333333333333333 recall_weighted: 0.3333333333333333 accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body" ]
hkivancoral/hushem_40x_deit_tiny_adamax_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_00001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7285 - Accuracy: 0.7778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3612 | 1.0 | 215 | 0.8053 | 0.7111 | | 0.0954 | 2.0 | 430 | 0.7023 | 0.7111 | | 0.0263 | 3.0 | 645 | 0.7672 | 0.7333 | | 0.0113 | 4.0 | 860 | 0.8377 | 0.7778 | | 0.0012 | 5.0 | 1075 | 0.9748 | 0.7778 | | 0.0007 | 6.0 | 1290 | 0.9997 | 0.7778 | | 0.0004 | 7.0 | 1505 | 1.1150 | 0.7556 | | 0.0003 | 8.0 | 1720 | 1.1439 | 0.7556 | | 0.0002 | 9.0 | 1935 | 1.2019 | 0.7556 | | 0.0001 | 10.0 | 2150 | 1.2424 | 0.7556 | | 0.0001 | 11.0 | 2365 | 1.2284 | 0.7556 | | 0.0001 | 12.0 | 2580 | 1.2809 | 0.7556 | | 0.0001 | 13.0 | 2795 | 1.3071 | 0.7556 | | 0.0001 | 14.0 | 3010 | 1.3721 | 0.7556 | | 0.0 | 15.0 | 3225 | 1.3804 | 0.7556 | | 0.0 | 16.0 | 3440 | 1.3850 | 0.7556 | | 0.0 | 17.0 | 3655 | 1.4005 | 0.7556 | | 0.0 | 18.0 | 3870 | 1.4317 | 0.7556 | | 0.0 | 19.0 | 4085 | 1.4823 | 0.7556 | | 0.0 | 20.0 | 4300 | 1.4810 | 0.7556 | | 0.0 | 21.0 | 4515 | 1.4751 | 0.7556 | | 0.0 | 22.0 | 4730 | 1.5073 | 0.7556 | | 0.0 | 23.0 | 4945 | 1.5283 | 0.7333 | | 0.0 | 24.0 | 5160 | 1.5592 | 0.7556 | | 0.0 | 25.0 | 5375 | 1.5298 | 0.7556 | | 0.0 | 26.0 | 5590 | 1.5228 | 0.7778 | | 0.0 | 27.0 | 5805 | 1.5617 | 0.7556 | | 0.0 | 28.0 | 6020 | 1.5609 | 0.7778 | | 0.0 | 29.0 | 6235 | 1.5791 | 0.7556 | | 0.0 | 30.0 | 6450 | 1.6043 | 0.7778 | | 0.0 | 31.0 | 6665 | 1.6159 | 0.7556 | | 0.0 | 32.0 | 6880 | 1.6584 | 0.7556 | | 0.0 | 33.0 | 7095 | 1.6250 | 0.7778 | | 0.0 | 34.0 | 7310 | 1.6097 | 0.7778 | | 0.0 | 35.0 | 7525 | 1.6615 | 0.7778 | | 0.0 | 36.0 | 7740 | 1.6489 | 0.7778 | | 0.0 | 37.0 | 7955 | 1.6559 | 0.7778 | | 0.0 | 38.0 | 8170 | 1.6854 | 0.7778 | | 0.0 | 39.0 | 8385 | 1.6826 | 0.7778 | | 0.0 | 40.0 | 8600 | 1.7344 | 0.7333 | | 0.0 | 41.0 | 8815 | 1.7007 | 0.7778 | | 0.0 | 42.0 | 9030 | 1.6800 | 0.7778 | | 0.0 | 43.0 | 9245 | 1.7149 | 0.7778 | | 0.0 | 44.0 | 9460 | 1.7189 | 0.7556 | | 0.0 | 45.0 | 9675 | 1.7288 | 0.7778 | | 0.0 | 46.0 | 9890 | 1.7097 | 0.7778 | | 0.0 | 47.0 | 10105 | 1.7285 | 0.7778 | | 0.0 | 48.0 | 10320 | 1.7184 | 0.7778 | | 0.0 | 49.0 | 10535 | 1.7322 | 0.7778 | | 0.0 | 50.0 | 10750 | 1.7285 | 0.7778 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_0001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2920 - Accuracy: 0.3778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3635 | 1.0 | 215 | 1.4586 | 0.2889 | | 1.3564 | 2.0 | 430 | 1.4485 | 0.2889 | | 1.3735 | 3.0 | 645 | 1.4395 | 0.2889 | | 1.3415 | 4.0 | 860 | 1.4312 | 0.2889 | | 1.3033 | 5.0 | 1075 | 1.4236 | 0.2889 | | 1.3111 | 6.0 | 1290 | 1.4165 | 0.2667 | | 1.2796 | 7.0 | 1505 | 1.4098 | 0.2667 | | 1.265 | 8.0 | 1720 | 1.4035 | 0.2667 | | 1.2454 | 9.0 | 1935 | 1.3975 | 0.2667 | | 1.2437 | 10.0 | 2150 | 1.3919 | 0.2667 | | 1.2689 | 11.0 | 2365 | 1.3867 | 0.2667 | | 1.212 | 12.0 | 2580 | 1.3818 | 0.2667 | | 1.2193 | 13.0 | 2795 | 1.3771 | 0.2667 | | 1.2167 | 14.0 | 3010 | 1.3726 | 0.2667 | | 1.205 | 15.0 | 3225 | 1.3683 | 0.2667 | | 1.2084 | 16.0 | 3440 | 1.3641 | 0.2889 | | 1.1861 | 17.0 | 3655 | 1.3601 | 0.3333 | | 1.1898 | 18.0 | 3870 | 1.3563 | 0.3556 | | 1.1745 | 19.0 | 4085 | 1.3526 | 0.3556 | | 1.1602 | 20.0 | 4300 | 1.3489 | 0.3556 | | 1.1523 | 21.0 | 4515 | 1.3454 | 0.3556 | | 1.1329 | 22.0 | 4730 | 1.3420 | 0.3556 | | 1.1475 | 23.0 | 4945 | 1.3387 | 0.3556 | | 1.1333 | 24.0 | 5160 | 1.3354 | 0.3556 | | 1.1285 | 25.0 | 5375 | 1.3322 | 0.3333 | | 1.0938 | 26.0 | 5590 | 1.3292 | 0.3333 | | 1.0832 | 27.0 | 5805 | 1.3262 | 0.3333 | | 1.0889 | 28.0 | 6020 | 1.3234 | 0.3333 | | 1.0886 | 29.0 | 6235 | 1.3206 | 0.3333 | | 1.0684 | 30.0 | 6450 | 1.3180 | 0.3333 | | 1.0707 | 31.0 | 6665 | 1.3154 | 0.3333 | | 1.068 | 32.0 | 6880 | 1.3130 | 0.3333 | | 1.0647 | 33.0 | 7095 | 1.3107 | 0.3556 | | 1.0516 | 34.0 | 7310 | 1.3085 | 0.3556 | | 1.0515 | 35.0 | 7525 | 1.3064 | 0.3556 | | 1.0477 | 36.0 | 7740 | 1.3045 | 0.3556 | | 1.0685 | 37.0 | 7955 | 1.3027 | 0.3556 | | 1.0459 | 38.0 | 8170 | 1.3010 | 0.3556 | | 1.0276 | 39.0 | 8385 | 1.2995 | 0.3556 | | 1.016 | 40.0 | 8600 | 1.2981 | 0.3556 | | 1.044 | 41.0 | 8815 | 1.2969 | 0.3556 | | 1.0849 | 42.0 | 9030 | 1.2957 | 0.3556 | | 1.0504 | 43.0 | 9245 | 1.2948 | 0.3778 | | 1.0115 | 44.0 | 9460 | 1.2940 | 0.3778 | | 1.0336 | 45.0 | 9675 | 1.2933 | 0.3778 | | 1.0415 | 46.0 | 9890 | 1.2928 | 0.3778 | | 1.013 | 47.0 | 10105 | 1.2924 | 0.3778 | | 1.0207 | 48.0 | 10320 | 1.2921 | 0.3778 | | 1.054 | 49.0 | 10535 | 1.2920 | 0.3778 | | 1.0317 | 50.0 | 10750 | 1.2920 | 0.3778 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_00001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4455 - Accuracy: 0.2889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.379 | 1.0 | 215 | 1.4681 | 0.2889 | | 1.3967 | 2.0 | 430 | 1.4670 | 0.2889 | | 1.423 | 3.0 | 645 | 1.4660 | 0.2889 | | 1.4018 | 4.0 | 860 | 1.4650 | 0.2889 | | 1.3899 | 5.0 | 1075 | 1.4640 | 0.2889 | | 1.4076 | 6.0 | 1290 | 1.4631 | 0.2889 | | 1.3743 | 7.0 | 1505 | 1.4622 | 0.2889 | | 1.3724 | 8.0 | 1720 | 1.4613 | 0.2889 | | 1.3757 | 9.0 | 1935 | 1.4604 | 0.2889 | | 1.3783 | 10.0 | 2150 | 1.4596 | 0.2889 | | 1.4141 | 11.0 | 2365 | 1.4589 | 0.2889 | | 1.3702 | 12.0 | 2580 | 1.4581 | 0.2889 | | 1.3842 | 13.0 | 2795 | 1.4574 | 0.2889 | | 1.3926 | 14.0 | 3010 | 1.4567 | 0.2889 | | 1.3764 | 15.0 | 3225 | 1.4560 | 0.2889 | | 1.3955 | 16.0 | 3440 | 1.4553 | 0.2889 | | 1.3752 | 17.0 | 3655 | 1.4547 | 0.2889 | | 1.3872 | 18.0 | 3870 | 1.4541 | 0.2889 | | 1.3795 | 19.0 | 4085 | 1.4535 | 0.2889 | | 1.3768 | 20.0 | 4300 | 1.4530 | 0.2889 | | 1.3609 | 21.0 | 4515 | 1.4524 | 0.2889 | | 1.3552 | 22.0 | 4730 | 1.4519 | 0.2889 | | 1.3869 | 23.0 | 4945 | 1.4514 | 0.2889 | | 1.3741 | 24.0 | 5160 | 1.4510 | 0.2889 | | 1.3721 | 25.0 | 5375 | 1.4505 | 0.2889 | | 1.3593 | 26.0 | 5590 | 1.4501 | 0.2889 | | 1.3536 | 27.0 | 5805 | 1.4497 | 0.2889 | | 1.3543 | 28.0 | 6020 | 1.4493 | 0.2889 | | 1.3589 | 29.0 | 6235 | 1.4489 | 0.2889 | | 1.3445 | 30.0 | 6450 | 1.4486 | 0.2889 | | 1.3539 | 31.0 | 6665 | 1.4483 | 0.2889 | | 1.3535 | 32.0 | 6880 | 1.4480 | 0.2889 | | 1.3498 | 33.0 | 7095 | 1.4477 | 0.2889 | | 1.3497 | 34.0 | 7310 | 1.4474 | 0.2889 | | 1.3582 | 35.0 | 7525 | 1.4472 | 0.2889 | | 1.354 | 36.0 | 7740 | 1.4469 | 0.2889 | | 1.3681 | 37.0 | 7955 | 1.4467 | 0.2889 | | 1.346 | 38.0 | 8170 | 1.4465 | 0.2889 | | 1.3468 | 39.0 | 8385 | 1.4463 | 0.2889 | | 1.3488 | 40.0 | 8600 | 1.4462 | 0.2889 | | 1.3542 | 41.0 | 8815 | 1.4460 | 0.2889 | | 1.3813 | 42.0 | 9030 | 1.4459 | 0.2889 | | 1.3585 | 43.0 | 9245 | 1.4458 | 0.2889 | | 1.3347 | 44.0 | 9460 | 1.4457 | 0.2889 | | 1.3527 | 45.0 | 9675 | 1.4456 | 0.2889 | | 1.3601 | 46.0 | 9890 | 1.4456 | 0.2889 | | 1.3484 | 47.0 | 10105 | 1.4455 | 0.2889 | | 1.3543 | 48.0 | 10320 | 1.4455 | 0.2889 | | 1.3639 | 49.0 | 10535 | 1.4455 | 0.2889 | | 1.3697 | 50.0 | 10750 | 1.4455 | 0.2889 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_00001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.0098 - Accuracy: 0.6444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3471 | 1.0 | 215 | 1.0795 | 0.5778 | | 0.0972 | 2.0 | 430 | 0.8936 | 0.6889 | | 0.0313 | 3.0 | 645 | 0.8957 | 0.6444 | | 0.0058 | 4.0 | 860 | 1.0591 | 0.6889 | | 0.0015 | 5.0 | 1075 | 1.2340 | 0.7111 | | 0.0006 | 6.0 | 1290 | 1.2875 | 0.6889 | | 0.0004 | 7.0 | 1505 | 1.3860 | 0.6889 | | 0.0002 | 8.0 | 1720 | 1.4571 | 0.6889 | | 0.0002 | 9.0 | 1935 | 1.5144 | 0.6667 | | 0.0001 | 10.0 | 2150 | 1.5648 | 0.6889 | | 0.0001 | 11.0 | 2365 | 1.6166 | 0.6667 | | 0.0001 | 12.0 | 2580 | 1.6547 | 0.6889 | | 0.0001 | 13.0 | 2795 | 1.7064 | 0.6667 | | 0.0001 | 14.0 | 3010 | 1.7513 | 0.6667 | | 0.0 | 15.0 | 3225 | 1.7849 | 0.6667 | | 0.0 | 16.0 | 3440 | 1.8291 | 0.6667 | | 0.0 | 17.0 | 3655 | 1.8746 | 0.6667 | | 0.0 | 18.0 | 3870 | 1.9137 | 0.6667 | | 0.0 | 19.0 | 4085 | 1.9589 | 0.6667 | | 0.0 | 20.0 | 4300 | 2.0103 | 0.6667 | | 0.0 | 21.0 | 4515 | 2.0484 | 0.6667 | | 0.0 | 22.0 | 4730 | 2.0885 | 0.6667 | | 0.0 | 23.0 | 4945 | 2.1272 | 0.6667 | | 0.0 | 24.0 | 5160 | 2.1691 | 0.6667 | | 0.0 | 25.0 | 5375 | 2.2032 | 0.6667 | | 0.0 | 26.0 | 5590 | 2.2512 | 0.6667 | | 0.0 | 27.0 | 5805 | 2.2928 | 0.6667 | | 0.0 | 28.0 | 6020 | 2.3366 | 0.6667 | | 0.0 | 29.0 | 6235 | 2.3684 | 0.6667 | | 0.0 | 30.0 | 6450 | 2.4080 | 0.6667 | | 0.0 | 31.0 | 6665 | 2.4434 | 0.6667 | | 0.0 | 32.0 | 6880 | 2.4884 | 0.6667 | | 0.0 | 33.0 | 7095 | 2.5184 | 0.6667 | | 0.0 | 34.0 | 7310 | 2.5603 | 0.6667 | | 0.0 | 35.0 | 7525 | 2.6005 | 0.6667 | | 0.0 | 36.0 | 7740 | 2.6418 | 0.6444 | | 0.0 | 37.0 | 7955 | 2.6720 | 0.6444 | | 0.0 | 38.0 | 8170 | 2.7124 | 0.6444 | | 0.0 | 39.0 | 8385 | 2.7569 | 0.6444 | | 0.0 | 40.0 | 8600 | 2.7908 | 0.6444 | | 0.0 | 41.0 | 8815 | 2.8243 | 0.6444 | | 0.0 | 42.0 | 9030 | 2.8592 | 0.6444 | | 0.0 | 43.0 | 9245 | 2.8889 | 0.6444 | | 0.0 | 44.0 | 9460 | 2.9143 | 0.6444 | | 0.0 | 45.0 | 9675 | 2.9439 | 0.6444 | | 0.0 | 46.0 | 9890 | 2.9703 | 0.6444 | | 0.0 | 47.0 | 10105 | 2.9822 | 0.6444 | | 0.0 | 48.0 | 10320 | 3.0050 | 0.6444 | | 0.0 | 49.0 | 10535 | 3.0086 | 0.6444 | | 0.0 | 50.0 | 10750 | 3.0098 | 0.6444 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_00001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3898 - Accuracy: 0.3111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4168 | 1.0 | 215 | 1.4077 | 0.2444 | | 1.3843 | 2.0 | 430 | 1.4068 | 0.2444 | | 1.4045 | 3.0 | 645 | 1.4059 | 0.2444 | | 1.3944 | 4.0 | 860 | 1.4051 | 0.2444 | | 1.3979 | 5.0 | 1075 | 1.4043 | 0.2444 | | 1.4212 | 6.0 | 1290 | 1.4036 | 0.2667 | | 1.4197 | 7.0 | 1505 | 1.4029 | 0.2667 | | 1.369 | 8.0 | 1720 | 1.4022 | 0.2667 | | 1.3853 | 9.0 | 1935 | 1.4015 | 0.2667 | | 1.4053 | 10.0 | 2150 | 1.4008 | 0.2667 | | 1.3723 | 11.0 | 2365 | 1.4002 | 0.2667 | | 1.3571 | 12.0 | 2580 | 1.3996 | 0.2667 | | 1.3936 | 13.0 | 2795 | 1.3990 | 0.2667 | | 1.3779 | 14.0 | 3010 | 1.3985 | 0.2667 | | 1.3861 | 15.0 | 3225 | 1.3979 | 0.2667 | | 1.4005 | 16.0 | 3440 | 1.3974 | 0.2889 | | 1.3769 | 17.0 | 3655 | 1.3969 | 0.2889 | | 1.3909 | 18.0 | 3870 | 1.3964 | 0.2889 | | 1.3834 | 19.0 | 4085 | 1.3960 | 0.2889 | | 1.3642 | 20.0 | 4300 | 1.3956 | 0.2889 | | 1.3863 | 21.0 | 4515 | 1.3951 | 0.2889 | | 1.3863 | 22.0 | 4730 | 1.3947 | 0.2889 | | 1.3703 | 23.0 | 4945 | 1.3944 | 0.2889 | | 1.3733 | 24.0 | 5160 | 1.3940 | 0.2889 | | 1.3751 | 25.0 | 5375 | 1.3937 | 0.3111 | | 1.3799 | 26.0 | 5590 | 1.3933 | 0.3111 | | 1.3637 | 27.0 | 5805 | 1.3930 | 0.3111 | | 1.3658 | 28.0 | 6020 | 1.3927 | 0.3111 | | 1.3837 | 29.0 | 6235 | 1.3924 | 0.3111 | | 1.3573 | 30.0 | 6450 | 1.3922 | 0.3111 | | 1.3483 | 31.0 | 6665 | 1.3919 | 0.3111 | | 1.3737 | 32.0 | 6880 | 1.3917 | 0.3111 | | 1.3567 | 33.0 | 7095 | 1.3915 | 0.3111 | | 1.3764 | 34.0 | 7310 | 1.3913 | 0.3111 | | 1.3646 | 35.0 | 7525 | 1.3911 | 0.3111 | | 1.3557 | 36.0 | 7740 | 1.3909 | 0.3111 | | 1.3829 | 37.0 | 7955 | 1.3907 | 0.3111 | | 1.3713 | 38.0 | 8170 | 1.3906 | 0.3111 | | 1.3468 | 39.0 | 8385 | 1.3905 | 0.3111 | | 1.3527 | 40.0 | 8600 | 1.3903 | 0.3111 | | 1.3629 | 41.0 | 8815 | 1.3902 | 0.3111 | | 1.3464 | 42.0 | 9030 | 1.3901 | 0.3111 | | 1.3709 | 43.0 | 9245 | 1.3901 | 0.3111 | | 1.3524 | 44.0 | 9460 | 1.3900 | 0.3111 | | 1.3532 | 45.0 | 9675 | 1.3899 | 0.3111 | | 1.3657 | 46.0 | 9890 | 1.3899 | 0.3111 | | 1.3891 | 47.0 | 10105 | 1.3899 | 0.3111 | | 1.3666 | 48.0 | 10320 | 1.3898 | 0.3111 | | 1.3713 | 49.0 | 10535 | 1.3898 | 0.3111 | | 1.3614 | 50.0 | 10750 | 1.3898 | 0.3111 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_0001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2543 - Accuracy: 0.3333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3992 | 1.0 | 215 | 1.3998 | 0.2667 | | 1.3515 | 2.0 | 430 | 1.3921 | 0.3111 | | 1.3529 | 3.0 | 645 | 1.3851 | 0.3111 | | 1.3241 | 4.0 | 860 | 1.3787 | 0.3111 | | 1.3174 | 5.0 | 1075 | 1.3730 | 0.3333 | | 1.331 | 6.0 | 1290 | 1.3675 | 0.3333 | | 1.3242 | 7.0 | 1505 | 1.3623 | 0.3333 | | 1.2565 | 8.0 | 1720 | 1.3574 | 0.3111 | | 1.2719 | 9.0 | 1935 | 1.3525 | 0.3333 | | 1.2647 | 10.0 | 2150 | 1.3477 | 0.3333 | | 1.2293 | 11.0 | 2365 | 1.3430 | 0.3556 | | 1.1981 | 12.0 | 2580 | 1.3386 | 0.3556 | | 1.2258 | 13.0 | 2795 | 1.3342 | 0.3556 | | 1.1901 | 14.0 | 3010 | 1.3299 | 0.3111 | | 1.1988 | 15.0 | 3225 | 1.3257 | 0.3333 | | 1.2048 | 16.0 | 3440 | 1.3214 | 0.3333 | | 1.1848 | 17.0 | 3655 | 1.3173 | 0.3333 | | 1.1799 | 18.0 | 3870 | 1.3134 | 0.3333 | | 1.1629 | 19.0 | 4085 | 1.3095 | 0.3333 | | 1.1482 | 20.0 | 4300 | 1.3058 | 0.3333 | | 1.1478 | 21.0 | 4515 | 1.3022 | 0.3333 | | 1.1472 | 22.0 | 4730 | 1.2988 | 0.3333 | | 1.1143 | 23.0 | 4945 | 1.2955 | 0.3333 | | 1.1122 | 24.0 | 5160 | 1.2923 | 0.3333 | | 1.1 | 25.0 | 5375 | 1.2893 | 0.3111 | | 1.1089 | 26.0 | 5590 | 1.2864 | 0.3111 | | 1.1001 | 27.0 | 5805 | 1.2836 | 0.3333 | | 1.0858 | 28.0 | 6020 | 1.2810 | 0.3333 | | 1.093 | 29.0 | 6235 | 1.2786 | 0.3333 | | 1.081 | 30.0 | 6450 | 1.2761 | 0.3333 | | 1.0426 | 31.0 | 6665 | 1.2739 | 0.3333 | | 1.0757 | 32.0 | 6880 | 1.2718 | 0.3333 | | 1.0524 | 33.0 | 7095 | 1.2698 | 0.3333 | | 1.0725 | 34.0 | 7310 | 1.2680 | 0.3333 | | 1.055 | 35.0 | 7525 | 1.2662 | 0.3333 | | 1.0308 | 36.0 | 7740 | 1.2646 | 0.3333 | | 1.0978 | 37.0 | 7955 | 1.2631 | 0.3333 | | 1.0264 | 38.0 | 8170 | 1.2617 | 0.3333 | | 1.0508 | 39.0 | 8385 | 1.2604 | 0.3333 | | 1.0208 | 40.0 | 8600 | 1.2593 | 0.3333 | | 1.0505 | 41.0 | 8815 | 1.2583 | 0.3333 | | 1.0147 | 42.0 | 9030 | 1.2574 | 0.3333 | | 1.0355 | 43.0 | 9245 | 1.2566 | 0.3333 | | 1.0064 | 44.0 | 9460 | 1.2559 | 0.3333 | | 1.019 | 45.0 | 9675 | 1.2554 | 0.3333 | | 1.0292 | 46.0 | 9890 | 1.2549 | 0.3333 | | 1.0584 | 47.0 | 10105 | 1.2546 | 0.3333 | | 1.0425 | 48.0 | 10320 | 1.2544 | 0.3333 | | 1.0211 | 49.0 | 10535 | 1.2543 | 0.3333 | | 1.0221 | 50.0 | 10750 | 1.2543 | 0.3333 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_00001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8349 - Accuracy: 0.9070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.424 | 1.0 | 217 | 0.5360 | 0.8140 | | 0.1714 | 2.0 | 434 | 0.4093 | 0.8372 | | 0.024 | 3.0 | 651 | 0.3706 | 0.8372 | | 0.0076 | 4.0 | 868 | 0.3232 | 0.8605 | | 0.0117 | 5.0 | 1085 | 0.4002 | 0.8605 | | 0.0014 | 6.0 | 1302 | 0.3510 | 0.8837 | | 0.0013 | 7.0 | 1519 | 0.3890 | 0.8837 | | 0.0003 | 8.0 | 1736 | 0.4966 | 0.8837 | | 0.0002 | 9.0 | 1953 | 0.4570 | 0.8837 | | 0.0001 | 10.0 | 2170 | 0.5366 | 0.8837 | | 0.0001 | 11.0 | 2387 | 0.4687 | 0.8837 | | 0.0001 | 12.0 | 2604 | 0.5121 | 0.8837 | | 0.0001 | 13.0 | 2821 | 0.5347 | 0.8837 | | 0.0001 | 14.0 | 3038 | 0.5583 | 0.8837 | | 0.0 | 15.0 | 3255 | 0.5404 | 0.8837 | | 0.0 | 16.0 | 3472 | 0.5914 | 0.8837 | | 0.0 | 17.0 | 3689 | 0.5903 | 0.8837 | | 0.0 | 18.0 | 3906 | 0.5962 | 0.8837 | | 0.0 | 19.0 | 4123 | 0.6082 | 0.8837 | | 0.0 | 20.0 | 4340 | 0.6491 | 0.9070 | | 0.0 | 21.0 | 4557 | 0.6647 | 0.8837 | | 0.0 | 22.0 | 4774 | 0.6416 | 0.8837 | | 0.0 | 23.0 | 4991 | 0.6353 | 0.9070 | | 0.0 | 24.0 | 5208 | 0.6866 | 0.9070 | | 0.0 | 25.0 | 5425 | 0.6552 | 0.9070 | | 0.0 | 26.0 | 5642 | 0.7023 | 0.9070 | | 0.0 | 27.0 | 5859 | 0.6738 | 0.9070 | | 0.0 | 28.0 | 6076 | 0.7119 | 0.9070 | | 0.0 | 29.0 | 6293 | 0.7453 | 0.9070 | | 0.0 | 30.0 | 6510 | 0.7641 | 0.9070 | | 0.0 | 31.0 | 6727 | 0.7753 | 0.9070 | | 0.0 | 32.0 | 6944 | 0.7598 | 0.9070 | | 0.0 | 33.0 | 7161 | 0.7952 | 0.9070 | | 0.0 | 34.0 | 7378 | 0.7621 | 0.9070 | | 0.0 | 35.0 | 7595 | 0.7849 | 0.9070 | | 0.0 | 36.0 | 7812 | 0.7647 | 0.9070 | | 0.0 | 37.0 | 8029 | 0.7761 | 0.9070 | | 0.0 | 38.0 | 8246 | 0.8153 | 0.9070 | | 0.0 | 39.0 | 8463 | 0.8099 | 0.9070 | | 0.0 | 40.0 | 8680 | 0.8036 | 0.9070 | | 0.0 | 41.0 | 8897 | 0.8358 | 0.9070 | | 0.0 | 42.0 | 9114 | 0.8036 | 0.9070 | | 0.0 | 43.0 | 9331 | 0.8414 | 0.9070 | | 0.0 | 44.0 | 9548 | 0.8111 | 0.9070 | | 0.0 | 45.0 | 9765 | 0.8271 | 0.9070 | | 0.0 | 46.0 | 9982 | 0.8237 | 0.9070 | | 0.0 | 47.0 | 10199 | 0.8249 | 0.9070 | | 0.0 | 48.0 | 10416 | 0.8315 | 0.9070 | | 0.0 | 49.0 | 10633 | 0.8343 | 0.9070 | | 0.0 | 50.0 | 10850 | 0.8349 | 0.9070 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_00001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4462 - Accuracy: 0.2326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.395 | 1.0 | 217 | 1.4656 | 0.2326 | | 1.3668 | 2.0 | 434 | 1.4647 | 0.2326 | | 1.3731 | 3.0 | 651 | 1.4638 | 0.2326 | | 1.4025 | 4.0 | 868 | 1.4629 | 0.2326 | | 1.3639 | 5.0 | 1085 | 1.4620 | 0.2326 | | 1.3879 | 6.0 | 1302 | 1.4612 | 0.2326 | | 1.3778 | 7.0 | 1519 | 1.4604 | 0.2326 | | 1.3612 | 8.0 | 1736 | 1.4596 | 0.2326 | | 1.3705 | 9.0 | 1953 | 1.4589 | 0.2326 | | 1.3789 | 10.0 | 2170 | 1.4582 | 0.2326 | | 1.361 | 11.0 | 2387 | 1.4575 | 0.2326 | | 1.3859 | 12.0 | 2604 | 1.4568 | 0.2326 | | 1.3896 | 13.0 | 2821 | 1.4562 | 0.2326 | | 1.3672 | 14.0 | 3038 | 1.4556 | 0.2326 | | 1.3726 | 15.0 | 3255 | 1.4550 | 0.2326 | | 1.3714 | 16.0 | 3472 | 1.4544 | 0.2326 | | 1.3679 | 17.0 | 3689 | 1.4539 | 0.2326 | | 1.3686 | 18.0 | 3906 | 1.4533 | 0.2326 | | 1.3759 | 19.0 | 4123 | 1.4528 | 0.2326 | | 1.3715 | 20.0 | 4340 | 1.4524 | 0.2326 | | 1.3586 | 21.0 | 4557 | 1.4519 | 0.2326 | | 1.3602 | 22.0 | 4774 | 1.4515 | 0.2326 | | 1.3784 | 23.0 | 4991 | 1.4511 | 0.2326 | | 1.3361 | 24.0 | 5208 | 1.4507 | 0.2326 | | 1.3639 | 25.0 | 5425 | 1.4503 | 0.2326 | | 1.3719 | 26.0 | 5642 | 1.4499 | 0.2326 | | 1.37 | 27.0 | 5859 | 1.4496 | 0.2326 | | 1.3566 | 28.0 | 6076 | 1.4493 | 0.2326 | | 1.3167 | 29.0 | 6293 | 1.4490 | 0.2326 | | 1.3522 | 30.0 | 6510 | 1.4487 | 0.2326 | | 1.3828 | 31.0 | 6727 | 1.4484 | 0.2326 | | 1.3664 | 32.0 | 6944 | 1.4481 | 0.2326 | | 1.3407 | 33.0 | 7161 | 1.4479 | 0.2326 | | 1.3633 | 34.0 | 7378 | 1.4477 | 0.2326 | | 1.3611 | 35.0 | 7595 | 1.4475 | 0.2326 | | 1.3425 | 36.0 | 7812 | 1.4473 | 0.2326 | | 1.3418 | 37.0 | 8029 | 1.4471 | 0.2326 | | 1.3479 | 38.0 | 8246 | 1.4470 | 0.2326 | | 1.3576 | 39.0 | 8463 | 1.4468 | 0.2326 | | 1.3408 | 40.0 | 8680 | 1.4467 | 0.2326 | | 1.3433 | 41.0 | 8897 | 1.4466 | 0.2326 | | 1.3828 | 42.0 | 9114 | 1.4465 | 0.2326 | | 1.3558 | 43.0 | 9331 | 1.4464 | 0.2326 | | 1.3616 | 44.0 | 9548 | 1.4463 | 0.2326 | | 1.36 | 45.0 | 9765 | 1.4463 | 0.2326 | | 1.3465 | 46.0 | 9982 | 1.4462 | 0.2326 | | 1.3484 | 47.0 | 10199 | 1.4462 | 0.2326 | | 1.3826 | 48.0 | 10416 | 1.4462 | 0.2326 | | 1.3656 | 49.0 | 10633 | 1.4462 | 0.2326 | | 1.3363 | 50.0 | 10850 | 1.4462 | 0.2326 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_0001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2710 - Accuracy: 0.4651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3787 | 1.0 | 217 | 1.4567 | 0.2326 | | 1.3411 | 2.0 | 434 | 1.4476 | 0.2326 | | 1.3346 | 3.0 | 651 | 1.4398 | 0.2326 | | 1.3522 | 4.0 | 868 | 1.4325 | 0.2558 | | 1.295 | 5.0 | 1085 | 1.4257 | 0.2558 | | 1.3027 | 6.0 | 1302 | 1.4192 | 0.2791 | | 1.2908 | 7.0 | 1519 | 1.4129 | 0.3023 | | 1.2684 | 8.0 | 1736 | 1.4068 | 0.3023 | | 1.2597 | 9.0 | 1953 | 1.4007 | 0.3023 | | 1.2504 | 10.0 | 2170 | 1.3948 | 0.3023 | | 1.2181 | 11.0 | 2387 | 1.3891 | 0.3023 | | 1.2286 | 12.0 | 2604 | 1.3834 | 0.3023 | | 1.229 | 13.0 | 2821 | 1.3779 | 0.3023 | | 1.2118 | 14.0 | 3038 | 1.3725 | 0.3256 | | 1.1939 | 15.0 | 3255 | 1.3673 | 0.3256 | | 1.2054 | 16.0 | 3472 | 1.3622 | 0.3488 | | 1.1836 | 17.0 | 3689 | 1.3572 | 0.3721 | | 1.1754 | 18.0 | 3906 | 1.3524 | 0.3721 | | 1.1872 | 19.0 | 4123 | 1.3477 | 0.3721 | | 1.1652 | 20.0 | 4340 | 1.3431 | 0.3721 | | 1.1396 | 21.0 | 4557 | 1.3387 | 0.3721 | | 1.1373 | 22.0 | 4774 | 1.3343 | 0.3953 | | 1.1381 | 23.0 | 4991 | 1.3300 | 0.3953 | | 1.101 | 24.0 | 5208 | 1.3259 | 0.3953 | | 1.1305 | 25.0 | 5425 | 1.3219 | 0.4186 | | 1.1458 | 26.0 | 5642 | 1.3181 | 0.4186 | | 1.0969 | 27.0 | 5859 | 1.3143 | 0.4186 | | 1.092 | 28.0 | 6076 | 1.3106 | 0.4186 | | 1.0422 | 29.0 | 6293 | 1.3071 | 0.4186 | | 1.07 | 30.0 | 6510 | 1.3037 | 0.4419 | | 1.097 | 31.0 | 6727 | 1.3005 | 0.4419 | | 1.1048 | 32.0 | 6944 | 1.2974 | 0.4419 | | 1.0657 | 33.0 | 7161 | 1.2945 | 0.4419 | | 1.0841 | 34.0 | 7378 | 1.2918 | 0.4419 | | 1.0697 | 35.0 | 7595 | 1.2891 | 0.4419 | | 1.0586 | 36.0 | 7812 | 1.2867 | 0.4419 | | 1.0346 | 37.0 | 8029 | 1.2845 | 0.4419 | | 1.0364 | 38.0 | 8246 | 1.2824 | 0.4651 | | 1.055 | 39.0 | 8463 | 1.2804 | 0.4651 | | 1.0391 | 40.0 | 8680 | 1.2787 | 0.4651 | | 1.0408 | 41.0 | 8897 | 1.2771 | 0.4651 | | 1.0911 | 42.0 | 9114 | 1.2757 | 0.4651 | | 1.042 | 43.0 | 9331 | 1.2745 | 0.4651 | | 1.0562 | 44.0 | 9548 | 1.2735 | 0.4651 | | 1.0444 | 45.0 | 9765 | 1.2727 | 0.4651 | | 1.0551 | 46.0 | 9982 | 1.2720 | 0.4651 | | 1.0314 | 47.0 | 10199 | 1.2715 | 0.4651 | | 1.067 | 48.0 | 10416 | 1.2712 | 0.4651 | | 1.0573 | 49.0 | 10633 | 1.2710 | 0.4651 | | 1.0022 | 50.0 | 10850 | 1.2710 | 0.4651 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_00001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0865 - Accuracy: 0.9048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5149 | 1.0 | 219 | 0.5408 | 0.7857 | | 0.1388 | 2.0 | 438 | 0.2883 | 0.8571 | | 0.0532 | 3.0 | 657 | 0.2259 | 0.9048 | | 0.0146 | 4.0 | 876 | 0.3103 | 0.8810 | | 0.0044 | 5.0 | 1095 | 0.2128 | 0.9048 | | 0.001 | 6.0 | 1314 | 0.4066 | 0.8571 | | 0.0004 | 7.0 | 1533 | 0.5492 | 0.8571 | | 0.0003 | 8.0 | 1752 | 0.5191 | 0.8571 | | 0.0002 | 9.0 | 1971 | 0.5554 | 0.8571 | | 0.0002 | 10.0 | 2190 | 0.6021 | 0.8571 | | 0.0001 | 11.0 | 2409 | 0.6325 | 0.8571 | | 0.0001 | 12.0 | 2628 | 0.5941 | 0.8810 | | 0.0001 | 13.0 | 2847 | 0.6178 | 0.8810 | | 0.0 | 14.0 | 3066 | 0.6345 | 0.8810 | | 0.0 | 15.0 | 3285 | 0.6789 | 0.8810 | | 0.0 | 16.0 | 3504 | 0.6912 | 0.8810 | | 0.0 | 17.0 | 3723 | 0.6975 | 0.8810 | | 0.0 | 18.0 | 3942 | 0.7160 | 0.8810 | | 0.0 | 19.0 | 4161 | 0.7194 | 0.8810 | | 0.0 | 20.0 | 4380 | 0.7354 | 0.8810 | | 0.0 | 21.0 | 4599 | 0.7292 | 0.9048 | | 0.0 | 22.0 | 4818 | 0.7594 | 0.9048 | | 0.0 | 23.0 | 5037 | 0.7524 | 0.9048 | | 0.0 | 24.0 | 5256 | 0.7681 | 0.9048 | | 0.0 | 25.0 | 5475 | 0.7964 | 0.9048 | | 0.0 | 26.0 | 5694 | 0.8348 | 0.9048 | | 0.0 | 27.0 | 5913 | 0.8454 | 0.9048 | | 0.0 | 28.0 | 6132 | 0.8650 | 0.9048 | | 0.0 | 29.0 | 6351 | 0.8560 | 0.9048 | | 0.0 | 30.0 | 6570 | 0.8777 | 0.9048 | | 0.0 | 31.0 | 6789 | 0.8901 | 0.9048 | | 0.0 | 32.0 | 7008 | 0.9135 | 0.9048 | | 0.0 | 33.0 | 7227 | 0.9102 | 0.9048 | | 0.0 | 34.0 | 7446 | 0.9561 | 0.9048 | | 0.0 | 35.0 | 7665 | 0.9681 | 0.9048 | | 0.0 | 36.0 | 7884 | 0.9813 | 0.9048 | | 0.0 | 37.0 | 8103 | 0.9769 | 0.9048 | | 0.0 | 38.0 | 8322 | 1.0135 | 0.9048 | | 0.0 | 39.0 | 8541 | 1.0218 | 0.9048 | | 0.0 | 40.0 | 8760 | 1.0098 | 0.9048 | | 0.0 | 41.0 | 8979 | 1.0382 | 0.9048 | | 0.0 | 42.0 | 9198 | 1.0217 | 0.9048 | | 0.0 | 43.0 | 9417 | 1.0481 | 0.9048 | | 0.0 | 44.0 | 9636 | 1.0751 | 0.9048 | | 0.0 | 45.0 | 9855 | 1.0579 | 0.9048 | | 0.0 | 46.0 | 10074 | 1.0662 | 0.9048 | | 0.0 | 47.0 | 10293 | 1.0827 | 0.9048 | | 0.0 | 48.0 | 10512 | 1.0853 | 0.9048 | | 0.0 | 49.0 | 10731 | 1.0917 | 0.9048 | | 0.0 | 50.0 | 10950 | 1.0865 | 0.9048 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_00001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3940 - Accuracy: 0.3095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4075 | 1.0 | 219 | 1.4139 | 0.2857 | | 1.4006 | 2.0 | 438 | 1.4128 | 0.2857 | | 1.397 | 3.0 | 657 | 1.4119 | 0.2857 | | 1.4009 | 4.0 | 876 | 1.4109 | 0.2857 | | 1.4192 | 5.0 | 1095 | 1.4100 | 0.2857 | | 1.4068 | 6.0 | 1314 | 1.4091 | 0.2857 | | 1.4024 | 7.0 | 1533 | 1.4083 | 0.2857 | | 1.3965 | 8.0 | 1752 | 1.4075 | 0.2857 | | 1.3783 | 9.0 | 1971 | 1.4067 | 0.3095 | | 1.3738 | 10.0 | 2190 | 1.4060 | 0.3095 | | 1.3936 | 11.0 | 2409 | 1.4053 | 0.3095 | | 1.3746 | 12.0 | 2628 | 1.4046 | 0.3095 | | 1.3536 | 13.0 | 2847 | 1.4040 | 0.3095 | | 1.4005 | 14.0 | 3066 | 1.4033 | 0.3095 | | 1.3798 | 15.0 | 3285 | 1.4027 | 0.3095 | | 1.3748 | 16.0 | 3504 | 1.4022 | 0.3095 | | 1.3581 | 17.0 | 3723 | 1.4016 | 0.3095 | | 1.3695 | 18.0 | 3942 | 1.4011 | 0.3095 | | 1.366 | 19.0 | 4161 | 1.4006 | 0.3095 | | 1.3735 | 20.0 | 4380 | 1.4001 | 0.3095 | | 1.3732 | 21.0 | 4599 | 1.3997 | 0.3095 | | 1.3632 | 22.0 | 4818 | 1.3992 | 0.3095 | | 1.3525 | 23.0 | 5037 | 1.3988 | 0.3095 | | 1.3845 | 24.0 | 5256 | 1.3984 | 0.3095 | | 1.363 | 25.0 | 5475 | 1.3980 | 0.3095 | | 1.3693 | 26.0 | 5694 | 1.3977 | 0.3095 | | 1.3693 | 27.0 | 5913 | 1.3973 | 0.3095 | | 1.3914 | 28.0 | 6132 | 1.3970 | 0.3095 | | 1.3857 | 29.0 | 6351 | 1.3967 | 0.3095 | | 1.3681 | 30.0 | 6570 | 1.3964 | 0.3095 | | 1.3619 | 31.0 | 6789 | 1.3962 | 0.3095 | | 1.3666 | 32.0 | 7008 | 1.3959 | 0.3095 | | 1.3733 | 33.0 | 7227 | 1.3957 | 0.3095 | | 1.3572 | 34.0 | 7446 | 1.3955 | 0.3095 | | 1.3715 | 35.0 | 7665 | 1.3953 | 0.3095 | | 1.3581 | 36.0 | 7884 | 1.3951 | 0.3095 | | 1.3453 | 37.0 | 8103 | 1.3949 | 0.3095 | | 1.3666 | 38.0 | 8322 | 1.3948 | 0.3095 | | 1.3416 | 39.0 | 8541 | 1.3946 | 0.3095 | | 1.3435 | 40.0 | 8760 | 1.3945 | 0.3095 | | 1.3731 | 41.0 | 8979 | 1.3944 | 0.3095 | | 1.3652 | 42.0 | 9198 | 1.3943 | 0.3095 | | 1.3499 | 43.0 | 9417 | 1.3942 | 0.3095 | | 1.3629 | 44.0 | 9636 | 1.3941 | 0.3095 | | 1.3332 | 45.0 | 9855 | 1.3941 | 0.3095 | | 1.3535 | 46.0 | 10074 | 1.3940 | 0.3095 | | 1.3876 | 47.0 | 10293 | 1.3940 | 0.3095 | | 1.363 | 48.0 | 10512 | 1.3940 | 0.3095 | | 1.3575 | 49.0 | 10731 | 1.3940 | 0.3095 | | 1.3466 | 50.0 | 10950 | 1.3940 | 0.3095 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_0001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2151 - Accuracy: 0.4286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3918 | 1.0 | 219 | 1.4045 | 0.3095 | | 1.3704 | 2.0 | 438 | 1.3956 | 0.3095 | | 1.3491 | 3.0 | 657 | 1.3880 | 0.3333 | | 1.3369 | 4.0 | 876 | 1.3811 | 0.3333 | | 1.3406 | 5.0 | 1095 | 1.3747 | 0.3333 | | 1.3171 | 6.0 | 1314 | 1.3686 | 0.3333 | | 1.2982 | 7.0 | 1533 | 1.3628 | 0.3571 | | 1.2896 | 8.0 | 1752 | 1.3571 | 0.3571 | | 1.2549 | 9.0 | 1971 | 1.3513 | 0.3810 | | 1.2384 | 10.0 | 2190 | 1.3457 | 0.4048 | | 1.2507 | 11.0 | 2409 | 1.3401 | 0.4286 | | 1.2362 | 12.0 | 2628 | 1.3346 | 0.4286 | | 1.1966 | 13.0 | 2847 | 1.3293 | 0.4286 | | 1.2279 | 14.0 | 3066 | 1.3240 | 0.4286 | | 1.2136 | 15.0 | 3285 | 1.3188 | 0.4286 | | 1.1856 | 16.0 | 3504 | 1.3138 | 0.4286 | | 1.1941 | 17.0 | 3723 | 1.3088 | 0.4286 | | 1.1805 | 18.0 | 3942 | 1.3039 | 0.4286 | | 1.1554 | 19.0 | 4161 | 1.2991 | 0.4048 | | 1.1709 | 20.0 | 4380 | 1.2943 | 0.4048 | | 1.1523 | 21.0 | 4599 | 1.2895 | 0.4048 | | 1.138 | 22.0 | 4818 | 1.2848 | 0.4048 | | 1.0984 | 23.0 | 5037 | 1.2803 | 0.4048 | | 1.1405 | 24.0 | 5256 | 1.2759 | 0.4048 | | 1.1028 | 25.0 | 5475 | 1.2716 | 0.4286 | | 1.1236 | 26.0 | 5694 | 1.2674 | 0.4286 | | 1.0819 | 27.0 | 5913 | 1.2634 | 0.4286 | | 1.1245 | 28.0 | 6132 | 1.2595 | 0.4286 | | 1.0929 | 29.0 | 6351 | 1.2557 | 0.4286 | | 1.0861 | 30.0 | 6570 | 1.2521 | 0.4048 | | 1.082 | 31.0 | 6789 | 1.2486 | 0.4048 | | 1.0826 | 32.0 | 7008 | 1.2452 | 0.4048 | | 1.0889 | 33.0 | 7227 | 1.2420 | 0.4048 | | 1.052 | 34.0 | 7446 | 1.2390 | 0.4286 | | 1.056 | 35.0 | 7665 | 1.2361 | 0.4286 | | 1.0391 | 36.0 | 7884 | 1.2333 | 0.4286 | | 1.0236 | 37.0 | 8103 | 1.2307 | 0.4286 | | 1.0474 | 38.0 | 8322 | 1.2283 | 0.4286 | | 1.0069 | 39.0 | 8541 | 1.2261 | 0.4286 | | 1.0443 | 40.0 | 8760 | 1.2242 | 0.4286 | | 1.0711 | 41.0 | 8979 | 1.2223 | 0.4048 | | 1.053 | 42.0 | 9198 | 1.2207 | 0.4286 | | 1.0356 | 43.0 | 9417 | 1.2193 | 0.4286 | | 1.0491 | 44.0 | 9636 | 1.2181 | 0.4286 | | 0.9928 | 45.0 | 9855 | 1.2171 | 0.4286 | | 1.0402 | 46.0 | 10074 | 1.2163 | 0.4286 | | 1.0792 | 47.0 | 10293 | 1.2157 | 0.4286 | | 1.0146 | 48.0 | 10512 | 1.2153 | 0.4286 | | 1.0325 | 49.0 | 10731 | 1.2152 | 0.4286 | | 1.0249 | 50.0 | 10950 | 1.2151 | 0.4286 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_adamax_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_adamax_00001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8352 - Accuracy: 0.8537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3813 | 1.0 | 220 | 0.6093 | 0.7561 | | 0.131 | 2.0 | 440 | 0.4372 | 0.8293 | | 0.0714 | 3.0 | 660 | 0.6223 | 0.7805 | | 0.0083 | 4.0 | 880 | 0.5773 | 0.8537 | | 0.0038 | 5.0 | 1100 | 0.5967 | 0.8537 | | 0.0013 | 6.0 | 1320 | 0.7213 | 0.8537 | | 0.0005 | 7.0 | 1540 | 0.6555 | 0.8537 | | 0.0003 | 8.0 | 1760 | 0.7129 | 0.8537 | | 0.0002 | 9.0 | 1980 | 0.6903 | 0.8537 | | 0.0001 | 10.0 | 2200 | 0.7139 | 0.8537 | | 0.0001 | 11.0 | 2420 | 0.7461 | 0.8537 | | 0.0001 | 12.0 | 2640 | 0.7296 | 0.8537 | | 0.0001 | 13.0 | 2860 | 0.7461 | 0.8537 | | 0.0001 | 14.0 | 3080 | 0.7537 | 0.8537 | | 0.0 | 15.0 | 3300 | 0.7347 | 0.8537 | | 0.0 | 16.0 | 3520 | 0.7586 | 0.8537 | | 0.0 | 17.0 | 3740 | 0.7585 | 0.8537 | | 0.0 | 18.0 | 3960 | 0.7603 | 0.8537 | | 0.0 | 19.0 | 4180 | 0.7375 | 0.8537 | | 0.0 | 20.0 | 4400 | 0.7584 | 0.8537 | | 0.0 | 21.0 | 4620 | 0.7582 | 0.8537 | | 0.0 | 22.0 | 4840 | 0.7660 | 0.8537 | | 0.0 | 23.0 | 5060 | 0.7826 | 0.8537 | | 0.0 | 24.0 | 5280 | 0.7552 | 0.8537 | | 0.0 | 25.0 | 5500 | 0.7401 | 0.8537 | | 0.0 | 26.0 | 5720 | 0.7783 | 0.8537 | | 0.0 | 27.0 | 5940 | 0.7654 | 0.8537 | | 0.0 | 28.0 | 6160 | 0.7518 | 0.8537 | | 0.0 | 29.0 | 6380 | 0.7644 | 0.8537 | | 0.0 | 30.0 | 6600 | 0.7962 | 0.8537 | | 0.0 | 31.0 | 6820 | 0.8050 | 0.8537 | | 0.0 | 32.0 | 7040 | 0.7846 | 0.8537 | | 0.0 | 33.0 | 7260 | 0.7663 | 0.8537 | | 0.0 | 34.0 | 7480 | 0.7669 | 0.8780 | | 0.0 | 35.0 | 7700 | 0.7816 | 0.8780 | | 0.0 | 36.0 | 7920 | 0.7902 | 0.8537 | | 0.0 | 37.0 | 8140 | 0.7775 | 0.8537 | | 0.0 | 38.0 | 8360 | 0.8004 | 0.8537 | | 0.0 | 39.0 | 8580 | 0.7724 | 0.8537 | | 0.0 | 40.0 | 8800 | 0.7795 | 0.8780 | | 0.0 | 41.0 | 9020 | 0.8084 | 0.8537 | | 0.0 | 42.0 | 9240 | 0.8224 | 0.8537 | | 0.0 | 43.0 | 9460 | 0.8366 | 0.8293 | | 0.0 | 44.0 | 9680 | 0.8236 | 0.8780 | | 0.0 | 45.0 | 9900 | 0.8365 | 0.8293 | | 0.0 | 46.0 | 10120 | 0.8207 | 0.8537 | | 0.0 | 47.0 | 10340 | 0.8439 | 0.8293 | | 0.0 | 48.0 | 10560 | 0.8465 | 0.8537 | | 0.0 | 49.0 | 10780 | 0.8311 | 0.8537 | | 0.0 | 50.0 | 11000 | 0.8352 | 0.8537 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_00001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3600 - Accuracy: 0.2683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4054 | 1.0 | 220 | 1.3726 | 0.2683 | | 1.4058 | 2.0 | 440 | 1.3720 | 0.2683 | | 1.4165 | 3.0 | 660 | 1.3713 | 0.2683 | | 1.3927 | 4.0 | 880 | 1.3707 | 0.2683 | | 1.4116 | 5.0 | 1100 | 1.3701 | 0.2683 | | 1.3812 | 6.0 | 1320 | 1.3696 | 0.2683 | | 1.4048 | 7.0 | 1540 | 1.3690 | 0.2683 | | 1.3591 | 8.0 | 1760 | 1.3685 | 0.2683 | | 1.4052 | 9.0 | 1980 | 1.3680 | 0.2683 | | 1.3882 | 10.0 | 2200 | 1.3676 | 0.2683 | | 1.3863 | 11.0 | 2420 | 1.3671 | 0.2683 | | 1.3965 | 12.0 | 2640 | 1.3667 | 0.2683 | | 1.4035 | 13.0 | 2860 | 1.3663 | 0.2683 | | 1.3813 | 14.0 | 3080 | 1.3659 | 0.2683 | | 1.3758 | 15.0 | 3300 | 1.3655 | 0.2683 | | 1.3822 | 16.0 | 3520 | 1.3651 | 0.2683 | | 1.3951 | 17.0 | 3740 | 1.3648 | 0.2683 | | 1.3934 | 18.0 | 3960 | 1.3645 | 0.2683 | | 1.3858 | 19.0 | 4180 | 1.3642 | 0.2683 | | 1.3848 | 20.0 | 4400 | 1.3639 | 0.2683 | | 1.379 | 21.0 | 4620 | 1.3636 | 0.2683 | | 1.3756 | 22.0 | 4840 | 1.3633 | 0.2683 | | 1.3759 | 23.0 | 5060 | 1.3630 | 0.2683 | | 1.3873 | 24.0 | 5280 | 1.3628 | 0.2683 | | 1.3701 | 25.0 | 5500 | 1.3626 | 0.2683 | | 1.3779 | 26.0 | 5720 | 1.3623 | 0.2683 | | 1.3916 | 27.0 | 5940 | 1.3621 | 0.2683 | | 1.3605 | 28.0 | 6160 | 1.3619 | 0.2927 | | 1.3832 | 29.0 | 6380 | 1.3617 | 0.2927 | | 1.3641 | 30.0 | 6600 | 1.3616 | 0.2927 | | 1.3806 | 31.0 | 6820 | 1.3614 | 0.2927 | | 1.3762 | 32.0 | 7040 | 1.3612 | 0.2927 | | 1.3671 | 33.0 | 7260 | 1.3611 | 0.2683 | | 1.3772 | 34.0 | 7480 | 1.3610 | 0.2683 | | 1.3577 | 35.0 | 7700 | 1.3608 | 0.2683 | | 1.3486 | 36.0 | 7920 | 1.3607 | 0.2683 | | 1.3611 | 37.0 | 8140 | 1.3606 | 0.2683 | | 1.355 | 38.0 | 8360 | 1.3605 | 0.2683 | | 1.3682 | 39.0 | 8580 | 1.3604 | 0.2683 | | 1.3669 | 40.0 | 8800 | 1.3603 | 0.2683 | | 1.3585 | 41.0 | 9020 | 1.3603 | 0.2683 | | 1.3549 | 42.0 | 9240 | 1.3602 | 0.2683 | | 1.3669 | 43.0 | 9460 | 1.3602 | 0.2683 | | 1.353 | 44.0 | 9680 | 1.3601 | 0.2683 | | 1.3499 | 45.0 | 9900 | 1.3601 | 0.2683 | | 1.3573 | 46.0 | 10120 | 1.3600 | 0.2683 | | 1.384 | 47.0 | 10340 | 1.3600 | 0.2683 | | 1.367 | 48.0 | 10560 | 1.3600 | 0.2683 | | 1.3753 | 49.0 | 10780 | 1.3600 | 0.2683 | | 1.3687 | 50.0 | 11000 | 1.3600 | 0.2683 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_sgd_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_sgd_0001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1678 - Accuracy: 0.5122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3902 | 1.0 | 220 | 1.3668 | 0.2683 | | 1.375 | 2.0 | 440 | 1.3610 | 0.2927 | | 1.3643 | 3.0 | 660 | 1.3560 | 0.2683 | | 1.3352 | 4.0 | 880 | 1.3513 | 0.2683 | | 1.343 | 5.0 | 1100 | 1.3466 | 0.2683 | | 1.2985 | 6.0 | 1320 | 1.3416 | 0.2683 | | 1.3152 | 7.0 | 1540 | 1.3365 | 0.2927 | | 1.2618 | 8.0 | 1760 | 1.3311 | 0.3171 | | 1.2728 | 9.0 | 1980 | 1.3254 | 0.3415 | | 1.2604 | 10.0 | 2200 | 1.3195 | 0.3415 | | 1.2446 | 11.0 | 2420 | 1.3136 | 0.3415 | | 1.2322 | 12.0 | 2640 | 1.3076 | 0.3902 | | 1.2519 | 13.0 | 2860 | 1.3017 | 0.4146 | | 1.2115 | 14.0 | 3080 | 1.2958 | 0.4146 | | 1.2112 | 15.0 | 3300 | 1.2899 | 0.4390 | | 1.1892 | 16.0 | 3520 | 1.2841 | 0.4390 | | 1.1942 | 17.0 | 3740 | 1.2784 | 0.4390 | | 1.2008 | 18.0 | 3960 | 1.2727 | 0.4390 | | 1.1853 | 19.0 | 4180 | 1.2671 | 0.4390 | | 1.1573 | 20.0 | 4400 | 1.2615 | 0.4634 | | 1.1577 | 21.0 | 4620 | 1.2560 | 0.4634 | | 1.1317 | 22.0 | 4840 | 1.2506 | 0.4634 | | 1.1597 | 23.0 | 5060 | 1.2453 | 0.4878 | | 1.1283 | 24.0 | 5280 | 1.2401 | 0.4878 | | 1.1168 | 25.0 | 5500 | 1.2349 | 0.4634 | | 1.142 | 26.0 | 5720 | 1.2300 | 0.4634 | | 1.1324 | 27.0 | 5940 | 1.2251 | 0.4634 | | 1.1074 | 28.0 | 6160 | 1.2203 | 0.4634 | | 1.107 | 29.0 | 6380 | 1.2157 | 0.4634 | | 1.098 | 30.0 | 6600 | 1.2113 | 0.4634 | | 1.1034 | 31.0 | 6820 | 1.2071 | 0.4634 | | 1.0941 | 32.0 | 7040 | 1.2031 | 0.4634 | | 1.0839 | 33.0 | 7260 | 1.1993 | 0.4634 | | 1.0528 | 34.0 | 7480 | 1.1956 | 0.4634 | | 1.0292 | 35.0 | 7700 | 1.1922 | 0.4634 | | 1.0585 | 36.0 | 7920 | 1.1890 | 0.4634 | | 1.0434 | 37.0 | 8140 | 1.1859 | 0.4634 | | 1.0597 | 38.0 | 8360 | 1.1831 | 0.4634 | | 1.0626 | 39.0 | 8580 | 1.1805 | 0.4634 | | 1.0375 | 40.0 | 8800 | 1.1782 | 0.4634 | | 1.0422 | 41.0 | 9020 | 1.1761 | 0.4634 | | 1.0304 | 42.0 | 9240 | 1.1742 | 0.4634 | | 1.0373 | 43.0 | 9460 | 1.1726 | 0.4878 | | 1.0134 | 44.0 | 9680 | 1.1712 | 0.4878 | | 1.0323 | 45.0 | 9900 | 1.1701 | 0.4878 | | 1.0327 | 46.0 | 10120 | 1.1692 | 0.5122 | | 1.0599 | 47.0 | 10340 | 1.1685 | 0.5122 | | 1.0079 | 48.0 | 10560 | 1.1681 | 0.5122 | | 1.0145 | 49.0 | 10780 | 1.1679 | 0.5122 | | 1.0358 | 50.0 | 11000 | 1.1678 | 0.5122 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
clewiston/autotrain-vlxo9-2s7eh
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: 1.347914457321167
f1_macro: 0.196969696969697
f1_micro: 0.65
f1_weighted: 0.5121212121212122
precision_macro: 0.1625
precision_micro: 0.65
precision_weighted: 0.42250000000000004
recall_macro: 0.25
recall_micro: 0.65
recall_weighted: 0.65
accuracy: 0.65
[ "multiple", "none", "partial", "whole" ]
yuanhuaisen/autotrain-r6fhf-a4d7f
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: 0.4276469945907593
f1_macro: 0.8530901722391085
f1_micro: 0.875
f1_weighted: 0.878488532743852
precision_macro: 0.8621098104793757
precision_micro: 0.875
precision_weighted: 0.8893636933718457
recall_macro: 0.8544277360066833
recall_micro: 0.875
recall_weighted: 0.875
accuracy: 0.875
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
chanhua/autotrain-6uoy3-zwdlp
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: nan
f1_macro: 0.031825795644891124
f1_micro: 0.10555555555555556
f1_weighted: 0.020156337241764376
precision_macro: 0.017592592592592594
precision_micro: 0.10555555555555556
precision_weighted: 0.011141975308641975
recall_macro: 0.16666666666666666
recall_micro: 0.10555555555555556
recall_weighted: 0.10555555555555556
accuracy: 0.10555555555555556
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above", "21covered_with_a_quilt,_only_the_head_and_shoulders_exposed", "22covered_with_a_quilt,_exposed_head_and_shoulders_except_for_other_organs", "23has_nothing_to_do_with_21_and_22_above" ]
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_tiny_sgd_001_fold1

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9268
- Accuracy: 0.7333

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1534        | 1.0   | 215   | 1.3121          | 0.3778   |
| 0.8744        | 2.0   | 430   | 1.2305          | 0.5111   |
| 0.7578        | 3.0   | 645   | 1.1023          | 0.5556   |
| 0.6354        | 4.0   | 860   | 0.9584          | 0.5778   |
| 0.4832        | 5.0   | 1075  | 0.8877          | 0.6444   |
| 0.4506        | 6.0   | 1290  | 0.8214          | 0.6889   |
| 0.3619        | 7.0   | 1505  | 0.8077          | 0.6889   |
| 0.3187        | 8.0   | 1720  | 0.7845          | 0.6667   |
| 0.2423        | 9.0   | 1935  | 0.7629          | 0.7111   |
| 0.2351        | 10.0  | 2150  | 0.7464          | 0.7333   |
| 0.2043        | 11.0  | 2365  | 0.7249          | 0.6889   |
| 0.1712        | 12.0  | 2580  | 0.7297          | 0.7111   |
| 0.1294        | 13.0  | 2795  | 0.7280          | 0.7333   |
| 0.1185        | 14.0  | 3010  | 0.7610          | 0.7333   |
| 0.1264        | 15.0  | 3225  | 0.7479          | 0.7333   |
| 0.0869        | 16.0  | 3440  | 0.7617          | 0.7333   |
| 0.0902        | 17.0  | 3655  | 0.7623          | 0.7333   |
| 0.0782        | 18.0  | 3870  | 0.7805          | 0.7333   |
| 0.071         | 19.0  | 4085  | 0.7715          | 0.7333   |
| 0.063         | 20.0  | 4300  | 0.7777          | 0.7333   |
| 0.0587        | 21.0  | 4515  | 0.7497          | 0.7333   |
| 0.0675        | 22.0  | 4730  | 0.7998          | 0.7333   |
| 0.0426        | 23.0  | 4945  | 0.8200          | 0.7333   |
| 0.0373        | 24.0  | 5160  | 0.8281          | 0.7111   |
| 0.0441        | 25.0  | 5375  | 0.8317          | 0.7111   |
| 0.0323        | 26.0  | 5590  | 0.8133          | 0.7111   |
| 0.0359        | 27.0  | 5805  | 0.8214          | 0.7111   |
| 0.0291        | 28.0  | 6020  | 0.8265          | 0.7111   |
| 0.0287        | 29.0  | 6235  | 0.8490          | 0.7111   |
| 0.0271        | 30.0  | 6450  | 0.8534          | 0.7111   |
| 0.0256        | 31.0  | 6665  | 0.8626          | 0.7111   |
| 0.0212        | 32.0  | 6880  | 0.8791          | 0.7111   |
| 0.0155        | 33.0  | 7095  | 0.8740          | 0.7333   |
| 0.0144        | 34.0  | 7310  | 0.8433          | 0.7333   |
| 0.0132        | 35.0  | 7525  | 0.8680          | 0.7333   |
| 0.015         | 36.0  | 7740  | 0.8880          | 0.7333   |
| 0.0129        | 37.0  | 7955  | 0.8931          | 0.7333   |
| 0.018         | 38.0  | 8170  | 0.8891          | 0.7333   |
| 0.0092        | 39.0  | 8385  | 0.9122          | 0.7333   |
| 0.0085        | 40.0  | 8600  | 0.9159          | 0.7333   |
| 0.0124        | 41.0  | 8815  | 0.9199          | 0.7333   |
| 0.0125        | 42.0  | 9030  | 0.9056          | 0.7333   |
| 0.0107        | 43.0  | 9245  | 0.9191          | 0.7333   |
| 0.0095        | 44.0  | 9460  | 0.9083          | 0.7333   |
| 0.0115        | 45.0  | 9675  | 0.9189          | 0.7333   |
| 0.0088        | 46.0  | 9890  | 0.9241          | 0.7333   |
| 0.0065        | 47.0  | 10105 | 0.9299          | 0.7333   |
| 0.007         | 48.0  | 10320 | 0.9257          | 0.7333   |
| 0.0129        | 49.0  | 10535 | 0.9260          | 0.7333   |
| 0.0229        | 50.0  | 10750 | 0.9268          | 0.7333   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_001_fold1

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0637
- Accuracy: 0.5333

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3874        | 1.0   | 215   | 1.5845          | 0.2667   |
| 1.2829        | 2.0   | 430   | 1.2221          | 0.4      |
| 0.7962        | 3.0   | 645   | 2.1065          | 0.4      |
| 0.7528        | 4.0   | 860   | 1.0651          | 0.5556   |
| 0.6029        | 5.0   | 1075  | 1.5642          | 0.4889   |
| 0.6246        | 6.0   | 1290  | 1.7962          | 0.4222   |
| 0.589         | 7.0   | 1505  | 1.4819          | 0.4444   |
| 0.6081        | 8.0   | 1720  | 1.4452          | 0.4222   |
| 0.4808        | 9.0   | 1935  | 1.4389          | 0.4444   |
| 0.4155        | 10.0  | 2150  | 1.7698          | 0.4667   |
| 0.4393        | 11.0  | 2365  | 1.4569          | 0.5778   |
| 0.4007        | 12.0  | 2580  | 2.1115          | 0.4      |
| 0.3758        | 13.0  | 2795  | 1.5230          | 0.5556   |
| 0.3244        | 14.0  | 3010  | 2.2901          | 0.4444   |
| 0.3063        | 15.0  | 3225  | 2.0129          | 0.4889   |
| 0.3072        | 16.0  | 3440  | 2.2969          | 0.5333   |
| 0.2444        | 17.0  | 3655  | 2.5054          | 0.4667   |
| 0.2293        | 18.0  | 3870  | 2.3449          | 0.4889   |
| 0.2391        | 19.0  | 4085  | 2.0401          | 0.6444   |
| 0.1843        | 20.0  | 4300  | 2.7271          | 0.5333   |
| 0.2073        | 21.0  | 4515  | 2.2599          | 0.4889   |
| 0.194         | 22.0  | 4730  | 3.1378          | 0.4444   |
| 0.2943        | 23.0  | 4945  | 2.7236          | 0.5333   |
| 0.2089        | 24.0  | 5160  | 2.5054          | 0.5778   |
| 0.2145        | 25.0  | 5375  | 3.8073          | 0.4667   |
| 0.1232        | 26.0  | 5590  | 3.5697          | 0.4889   |
| 0.1349        | 27.0  | 5805  | 3.5985          | 0.5333   |
| 0.1548        | 28.0  | 6020  | 3.0930          | 0.4889   |
| 0.0655        | 29.0  | 6235  | 4.3232          | 0.4889   |
| 0.1304        | 30.0  | 6450  | 3.6994          | 0.5333   |
| 0.0997        | 31.0  | 6665  | 3.7329          | 0.5333   |
| 0.0825        | 32.0  | 6880  | 3.4793          | 0.5333   |
| 0.154         | 33.0  | 7095  | 5.2562          | 0.4667   |
| 0.1206        | 34.0  | 7310  | 4.5299          | 0.4889   |
| 0.1019        | 35.0  | 7525  | 3.6522          | 0.5111   |
| 0.019         | 36.0  | 7740  | 3.9235          | 0.5333   |
| 0.0485        | 37.0  | 7955  | 4.7342          | 0.5556   |
| 0.0155        | 38.0  | 8170  | 4.4779          | 0.5778   |
| 0.0142        | 39.0  | 8385  | 4.2139          | 0.5556   |
| 0.0256        | 40.0  | 8600  | 5.0724          | 0.5333   |
| 0.0211        | 41.0  | 8815  | 4.8895          | 0.4889   |
| 0.019         | 42.0  | 9030  | 4.8291          | 0.5556   |
| 0.0047        | 43.0  | 9245  | 5.9102          | 0.5333   |
| 0.0027        | 44.0  | 9460  | 5.9480          | 0.5556   |
| 0.0009        | 45.0  | 9675  | 6.2260          | 0.5333   |
| 0.0008        | 46.0  | 9890  | 6.6029          | 0.5556   |
| 0.0001        | 47.0  | 10105 | 6.7925          | 0.5556   |
| 0.0001        | 48.0  | 10320 | 6.7039          | 0.5333   |
| 0.0           | 49.0  | 10535 | 7.0556          | 0.5333   |
| 0.0001        | 50.0  | 10750 | 7.0637          | 0.5333   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_0001_fold1

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4716
- Accuracy: 0.8222

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0732        | 1.0   | 215   | 0.6365          | 0.7556   |
| 0.102         | 2.0   | 430   | 1.5996          | 0.6889   |
| 0.0272        | 3.0   | 645   | 0.8191          | 0.8      |
| 0.0001        | 4.0   | 860   | 0.8601          | 0.8      |
| 0.1057        | 5.0   | 1075  | 0.9797          | 0.7778   |
| 0.0863        | 6.0   | 1290  | 1.8140          | 0.6889   |
| 0.0131        | 7.0   | 1505  | 1.1217          | 0.8      |
| 0.0           | 8.0   | 1720  | 1.2704          | 0.8      |
| 0.0           | 9.0   | 1935  | 1.3271          | 0.8222   |
| 0.0           | 10.0  | 2150  | 1.4318          | 0.8222   |
| 0.0           | 11.0  | 2365  | 1.5412          | 0.7778   |
| 0.0           | 12.0  | 2580  | 1.6553          | 0.8      |
| 0.0           | 13.0  | 2795  | 1.7666          | 0.8      |
| 0.0           | 14.0  | 3010  | 1.8788          | 0.8444   |
| 0.0           | 15.0  | 3225  | 1.9590          | 0.8444   |
| 0.0           | 16.0  | 3440  | 2.0334          | 0.8444   |
| 0.0           | 17.0  | 3655  | 2.0934          | 0.8444   |
| 0.0           | 18.0  | 3870  | 2.1509          | 0.8444   |
| 0.0           | 19.0  | 4085  | 2.2044          | 0.8444   |
| 0.0           | 20.0  | 4300  | 2.2497          | 0.8444   |
| 0.0           | 21.0  | 4515  | 2.2856          | 0.8444   |
| 0.0           | 22.0  | 4730  | 2.3133          | 0.8444   |
| 0.0           | 23.0  | 4945  | 2.3351          | 0.8444   |
| 0.0           | 24.0  | 5160  | 2.3530          | 0.8444   |
| 0.0           | 25.0  | 5375  | 2.3678          | 0.8444   |
| 0.0           | 26.0  | 5590  | 2.3804          | 0.8444   |
| 0.0           | 27.0  | 5805  | 2.3914          | 0.8444   |
| 0.0           | 28.0  | 6020  | 2.4010          | 0.8222   |
| 0.0           | 29.0  | 6235  | 2.4094          | 0.8222   |
| 0.0           | 30.0  | 6450  | 2.4170          | 0.8222   |
| 0.0           | 31.0  | 6665  | 2.4237          | 0.8222   |
| 0.0           | 32.0  | 6880  | 2.4297          | 0.8222   |
| 0.0           | 33.0  | 7095  | 2.4351          | 0.8222   |
| 0.0           | 34.0  | 7310  | 2.4400          | 0.8222   |
| 0.0           | 35.0  | 7525  | 2.4444          | 0.8222   |
| 0.0           | 36.0  | 7740  | 2.4484          | 0.8222   |
| 0.0           | 37.0  | 7955  | 2.4519          | 0.8222   |
| 0.0           | 38.0  | 8170  | 2.4551          | 0.8222   |
| 0.0           | 39.0  | 8385  | 2.4580          | 0.8222   |
| 0.0           | 40.0  | 8600  | 2.4605          | 0.8222   |
| 0.0           | 41.0  | 8815  | 2.4628          | 0.8222   |
| 0.0           | 42.0  | 9030  | 2.4647          | 0.8222   |
| 0.0           | 43.0  | 9245  | 2.4664          | 0.8222   |
| 0.0           | 44.0  | 9460  | 2.4679          | 0.8222   |
| 0.0           | 45.0  | 9675  | 2.4691          | 0.8222   |
| 0.0           | 46.0  | 9890  | 2.4700          | 0.8222   |
| 0.0           | 47.0  | 10105 | 2.4708          | 0.8222   |
| 0.0           | 48.0  | 10320 | 2.4713          | 0.8222   |
| 0.0           | 49.0  | 10535 | 2.4716          | 0.8222   |
| 0.0           | 50.0  | 10750 | 2.4716          | 0.8222   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
chanhua/autotrain-wo576-85kuw
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: 1.0984116792678833
f1_macro: 0.16666666666666666
f1_micro: 0.3333333333333333
f1_weighted: 0.16666666666666666
precision_macro: 0.1111111111111111
precision_micro: 0.3333333333333333
precision_weighted: 0.1111111111111111
recall_macro: 0.3333333333333333
recall_micro: 0.3333333333333333
recall_weighted: 0.3333333333333333
accuracy: 0.3333333333333333
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_tiny_sgd_001_fold2

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0440
- Accuracy: 0.6889

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0975        | 1.0   | 215   | 1.3370          | 0.3778   |
| 0.8761        | 2.0   | 430   | 1.2895          | 0.4444   |
| 0.7359        | 3.0   | 645   | 1.2565          | 0.4889   |
| 0.6277        | 4.0   | 860   | 1.2398          | 0.5556   |
| 0.5094        | 5.0   | 1075  | 1.2052          | 0.5556   |
| 0.4187        | 6.0   | 1290  | 1.1950          | 0.5778   |
| 0.3909        | 7.0   | 1505  | 1.1310          | 0.6      |
| 0.3137        | 8.0   | 1720  | 1.1412          | 0.5556   |
| 0.2817        | 9.0   | 1935  | 1.0706          | 0.5778   |
| 0.2108        | 10.0  | 2150  | 1.0537          | 0.6      |
| 0.1785        | 11.0  | 2365  | 1.0606          | 0.5778   |
| 0.1677        | 12.0  | 2580  | 1.0202          | 0.5778   |
| 0.1602        | 13.0  | 2795  | 1.0251          | 0.5778   |
| 0.1355        | 14.0  | 3010  | 1.0164          | 0.6      |
| 0.1234        | 15.0  | 3225  | 1.0019          | 0.5778   |
| 0.0937        | 16.0  | 3440  | 0.9960          | 0.6      |
| 0.0963        | 17.0  | 3655  | 0.9708          | 0.5778   |
| 0.0998        | 18.0  | 3870  | 0.9907          | 0.5778   |
| 0.0604        | 19.0  | 4085  | 0.9932          | 0.6      |
| 0.0724        | 20.0  | 4300  | 0.9792          | 0.5556   |
| 0.0616        | 21.0  | 4515  | 0.9528          | 0.5556   |
| 0.0591        | 22.0  | 4730  | 0.9741          | 0.5556   |
| 0.0433        | 23.0  | 4945  | 0.9824          | 0.5556   |
| 0.0476        | 24.0  | 5160  | 0.9907          | 0.5556   |
| 0.0326        | 25.0  | 5375  | 0.9714          | 0.5778   |
| 0.0325        | 26.0  | 5590  | 0.9834          | 0.6      |
| 0.0352        | 27.0  | 5805  | 0.9903          | 0.5778   |
| 0.0319        | 28.0  | 6020  | 0.9831          | 0.5778   |
| 0.0242        | 29.0  | 6235  | 0.9872          | 0.6      |
| 0.0238        | 30.0  | 6450  | 1.0027          | 0.6222   |
| 0.0166        | 31.0  | 6665  | 0.9985          | 0.5778   |
| 0.0151        | 32.0  | 6880  | 1.0088          | 0.6      |
| 0.0176        | 33.0  | 7095  | 1.0180          | 0.6      |
| 0.0221        | 34.0  | 7310  | 1.0038          | 0.6444   |
| 0.0159        | 35.0  | 7525  | 0.9868          | 0.6667   |
| 0.0115        | 36.0  | 7740  | 1.0104          | 0.6444   |
| 0.017         | 37.0  | 7955  | 1.0128          | 0.6889   |
| 0.0105        | 38.0  | 8170  | 1.0250          | 0.6444   |
| 0.0144        | 39.0  | 8385  | 1.0115          | 0.6889   |
| 0.0092        | 40.0  | 8600  | 1.0202          | 0.6667   |
| 0.0131        | 41.0  | 8815  | 1.0296          | 0.6444   |
| 0.0108        | 42.0  | 9030  | 1.0274          | 0.6889   |
| 0.0089        | 43.0  | 9245  | 1.0423          | 0.6889   |
| 0.0153        | 44.0  | 9460  | 1.0420          | 0.6889   |
| 0.0077        | 45.0  | 9675  | 1.0387          | 0.6667   |
| 0.0096        | 46.0  | 9890  | 1.0413          | 0.6889   |
| 0.0073        | 47.0  | 10105 | 1.0431          | 0.6889   |
| 0.0112        | 48.0  | 10320 | 1.0453          | 0.6889   |
| 0.0085        | 49.0  | 10535 | 1.0438          | 0.6889   |
| 0.01          | 50.0  | 10750 | 1.0440          | 0.6889   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
chanhua/autotrain-82nel-cfd2f
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: nan
f1_macro: 0.06666666666666668
f1_micro: 0.20000000000000004
f1_weighted: 0.06666666666666668
precision_macro: 0.04
precision_micro: 0.2
precision_weighted: 0.04
recall_macro: 0.2
recall_micro: 0.2
recall_weighted: 0.2
accuracy: 0.2
[ "10_just_a_pure_cotton_quilt", "11_cover_the_quilt_with_only_the_head_exposed", "12_just_a_pure_face", "13_cover_the_quilt_to_expose_the_head_and_shoulders", "14_has_nothing_to_do_with_11_and_13_above" ]
Yura32000/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_food_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6394
- Accuracy: 0.896

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7761        | 0.99  | 62   | 2.5927          | 0.824    |
| 1.8745        | 2.0   | 125  | 1.8134          | 0.868    |
| 1.5945        | 2.98  | 186  | 1.6394          | 0.896    |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
hkivancoral/hushem_40x_deit_base_rms_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_001_fold2

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5594
- Accuracy: 0.5778

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1732        | 1.0   | 215   | 1.0108          | 0.4667   |
| 0.7763        | 2.0   | 430   | 1.2138          | 0.5333   |
| 0.7021        | 3.0   | 645   | 1.2446          | 0.4      |
| 0.6002        | 4.0   | 860   | 1.7707          | 0.4444   |
| 0.4988        | 5.0   | 1075  | 2.1116          | 0.4667   |
| 0.4269        | 6.0   | 1290  | 2.3849          | 0.5556   |
| 0.3366        | 7.0   | 1505  | 2.4322          | 0.5556   |
| 0.2961        | 8.0   | 1720  | 3.2646          | 0.5556   |
| 0.2377        | 9.0   | 1935  | 3.1438          | 0.5333   |
| 0.2435        | 10.0  | 2150  | 3.6031          | 0.5778   |
| 0.2593        | 11.0  | 2365  | 3.5951          | 0.4889   |
| 0.1482        | 12.0  | 2580  | 3.8372          | 0.5111   |
| 0.1871        | 13.0  | 2795  | 3.7490          | 0.6222   |
| 0.1246        | 14.0  | 3010  | 3.7977          | 0.5333   |
| 0.166         | 15.0  | 3225  | 3.7321          | 0.5778   |
| 0.1672        | 16.0  | 3440  | 4.6413          | 0.4889   |
| 0.1752        | 17.0  | 3655  | 4.9330          | 0.5556   |
| 0.1214        | 18.0  | 3870  | 4.3615          | 0.5556   |
| 0.0488        | 19.0  | 4085  | 4.4231          | 0.5111   |
| 0.1336        | 20.0  | 4300  | 4.4451          | 0.5778   |
| 0.1002        | 21.0  | 4515  | 3.7455          | 0.5778   |
| 0.0734        | 22.0  | 4730  | 4.4970          | 0.5556   |
| 0.0322        | 23.0  | 4945  | 4.8990          | 0.5333   |
| 0.214         | 24.0  | 5160  | 5.1865          | 0.5778   |
| 0.1242        | 25.0  | 5375  | 5.0088          | 0.5333   |
| 0.0033        | 26.0  | 5590  | 4.9606          | 0.5556   |
| 0.0333        | 27.0  | 5805  | 4.4063          | 0.5778   |
| 0.0592        | 28.0  | 6020  | 4.1719          | 0.5556   |
| 0.0444        | 29.0  | 6235  | 6.2342          | 0.5111   |
| 0.0039        | 30.0  | 6450  | 5.9834          | 0.5333   |
| 0.003         | 31.0  | 6665  | 6.2329          | 0.5333   |
| 0.0008        | 32.0  | 6880  | 6.2499          | 0.6      |
| 0.1078        | 33.0  | 7095  | 5.2542          | 0.6222   |
| 0.0258        | 34.0  | 7310  | 6.7980          | 0.4889   |
| 0.0052        | 35.0  | 7525  | 6.6849          | 0.5333   |
| 0.0003        | 36.0  | 7740  | 6.1342          | 0.5556   |
| 0.0005        | 37.0  | 7955  | 5.4920          | 0.5778   |
| 0.0004        | 38.0  | 8170  | 5.3684          | 0.5778   |
| 0.0148        | 39.0  | 8385  | 5.3551          | 0.5556   |
| 0.0054        | 40.0  | 8600  | 7.4300          | 0.5111   |
| 0.0           | 41.0  | 8815  | 6.8539          | 0.5556   |
| 0.0           | 42.0  | 9030  | 6.8688          | 0.5556   |
| 0.0           | 43.0  | 9245  | 7.1702          | 0.5778   |
| 0.0           | 44.0  | 9460  | 7.4631          | 0.5778   |
| 0.0           | 45.0  | 9675  | 7.7338          | 0.5778   |
| 0.0           | 46.0  | 9890  | 7.9825          | 0.5778   |
| 0.0           | 47.0  | 10105 | 8.2172          | 0.5778   |
| 0.0           | 48.0  | 10320 | 8.4047          | 0.5778   |
| 0.0           | 49.0  | 10535 | 8.5267          | 0.5778   |
| 0.0           | 50.0  | 10750 | 8.5594          | 0.5778   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
chanhua/autotrain-xcbf5-99oqk
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: 1.094420313835144
f1_macro: 0.45714285714285713
f1_micro: 0.5714285714285714
f1_weighted: 0.5061224489795919
precision_macro: 0.4666666666666666
precision_micro: 0.5714285714285714
precision_weighted: 0.5428571428571428
recall_macro: 0.5555555555555555
recall_micro: 0.5714285714285714
recall_weighted: 0.5714285714285714
accuracy: 0.5714285714285714
[ "11_cover_the_quilt_with_only_the_head_exposed", "13_cover_the_quilt_to_expose_the_head_and_shoulders", "14_has_nothing_to_do_with_11_and_13_above" ]
hkivancoral/hushem_40x_deit_base_rms_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_0001_fold2

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0887
- Accuracy: 0.7556

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0825        | 1.0   | 215   | 1.5281          | 0.7111   |
| 0.0311        | 2.0   | 430   | 1.2158          | 0.8      |
| 0.0011        | 3.0   | 645   | 1.8306          | 0.6889   |
| 0.0414        | 4.0   | 860   | 2.0416          | 0.7333   |
| 0.0002        | 5.0   | 1075  | 2.3340          | 0.6444   |
| 0.0027        | 6.0   | 1290  | 1.1579          | 0.7556   |
| 0.0001        | 7.0   | 1505  | 2.3412          | 0.6889   |
| 0.0           | 8.0   | 1720  | 2.3885          | 0.7111   |
| 0.0           | 9.0   | 1935  | 2.4917          | 0.7333   |
| 0.0           | 10.0  | 2150  | 2.6169          | 0.7333   |
| 0.0           | 11.0  | 2365  | 2.7660          | 0.7333   |
| 0.0           | 12.0  | 2580  | 2.9176          | 0.7333   |
| 0.0           | 13.0  | 2795  | 3.0652          | 0.7333   |
| 0.0           | 14.0  | 3010  | 3.1998          | 0.7556   |
| 0.0           | 15.0  | 3225  | 3.3068          | 0.7556   |
| 0.0           | 16.0  | 3440  | 3.4034          | 0.7556   |
| 0.0           | 17.0  | 3655  | 3.4958          | 0.7556   |
| 0.0           | 18.0  | 3870  | 3.5902          | 0.7556   |
| 0.0           | 19.0  | 4085  | 3.6748          | 0.7556   |
| 0.0           | 20.0  | 4300  | 3.7449          | 0.7556   |
| 0.0           | 21.0  | 4515  | 3.7990          | 0.7556   |
| 0.0           | 22.0  | 4730  | 3.8408          | 0.7556   |
| 0.0           | 23.0  | 4945  | 3.8743          | 0.7556   |
| 0.0           | 24.0  | 5160  | 3.9017          | 0.7556   |
| 0.0           | 25.0  | 5375  | 3.9247          | 0.7556   |
| 0.0           | 26.0  | 5590  | 3.9444          | 0.7556   |
| 0.0           | 27.0  | 5805  | 3.9616          | 0.7556   |
| 0.0           | 28.0  | 6020  | 3.9766          | 0.7556   |
| 0.0           | 29.0  | 6235  | 3.9899          | 0.7556   |
| 0.0           | 30.0  | 6450  | 4.0018          | 0.7556   |
| 0.0           | 31.0  | 6665  | 4.0124          | 0.7556   |
| 0.0           | 32.0  | 6880  | 4.0219          | 0.7556   |
| 0.0           | 33.0  | 7095  | 4.0305          | 0.7556   |
| 0.0           | 34.0  | 7310  | 4.0382          | 0.7556   |
| 0.0           | 35.0  | 7525  | 4.0452          | 0.7556   |
| 0.0           | 36.0  | 7740  | 4.0514          | 0.7556   |
| 0.0           | 37.0  | 7955  | 4.0571          | 0.7556   |
| 0.0           | 38.0  | 8170  | 4.0622          | 0.7556   |
| 0.0           | 39.0  | 8385  | 4.0668          | 0.7556   |
| 0.0           | 40.0  | 8600  | 4.0708          | 0.7556   |
| 0.0           | 41.0  | 8815  | 4.0744          | 0.7556   |
| 0.0           | 42.0  | 9030  | 4.0776          | 0.7556   |
| 0.0           | 43.0  | 9245  | 4.0803          | 0.7556   |
| 0.0           | 44.0  | 9460  | 4.0826          | 0.7556   |
| 0.0           | 45.0  | 9675  | 4.0846          | 0.7556   |
| 0.0           | 46.0  | 9890  | 4.0861          | 0.7556   |
| 0.0           | 47.0  | 10105 | 4.0873          | 0.7556   |
| 0.0           | 48.0  | 10320 | 4.0881          | 0.7556   |
| 0.0           | 49.0  | 10535 | 4.0886          | 0.7556   |
| 0.0           | 50.0  | 10750 | 4.0887          | 0.7556   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_tiny_sgd_001_fold3

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4899
- Accuracy: 0.8372

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1798        | 1.0   | 217   | 1.2889          | 0.4186   |
| 1.0098        | 2.0   | 434   | 1.1067          | 0.6047   |
| 0.7827        | 3.0   | 651   | 0.9427          | 0.6977   |
| 0.6326        | 4.0   | 868   | 0.7917          | 0.6977   |
| 0.5443        | 5.0   | 1085  | 0.6647          | 0.7907   |
| 0.4438        | 6.0   | 1302  | 0.5810          | 0.8140   |
| 0.3761        | 7.0   | 1519  | 0.5185          | 0.8372   |
| 0.3386        | 8.0   | 1736  | 0.4778          | 0.8140   |
| 0.2796        | 9.0   | 1953  | 0.4431          | 0.8605   |
| 0.2037        | 10.0  | 2170  | 0.4372          | 0.8605   |
| 0.1624        | 11.0  | 2387  | 0.3943          | 0.8837   |
| 0.1477        | 12.0  | 2604  | 0.4019          | 0.8605   |
| 0.1485        | 13.0  | 2821  | 0.3856          | 0.8605   |
| 0.1192        | 14.0  | 3038  | 0.3686          | 0.8605   |
| 0.1115        | 15.0  | 3255  | 0.3722          | 0.8605   |
| 0.0891        | 16.0  | 3472  | 0.3567          | 0.8837   |
| 0.0776        | 17.0  | 3689  | 0.3631          | 0.8605   |
| 0.1039        | 18.0  | 3906  | 0.3600          | 0.8605   |
| 0.0608        | 19.0  | 4123  | 0.3514          | 0.8605   |
| 0.0639        | 20.0  | 4340  | 0.3706          | 0.8605   |
| 0.0555        | 21.0  | 4557  | 0.3773          | 0.8605   |
| 0.0552        | 22.0  | 4774  | 0.3713          | 0.8372   |
| 0.0457        | 23.0  | 4991  | 0.3749          | 0.8372   |
| 0.0383        | 24.0  | 5208  | 0.3901          | 0.8372   |
| 0.0332        | 25.0  | 5425  | 0.3933          | 0.8372   |
| 0.0322        | 26.0  | 5642  | 0.3995          | 0.8372   |
| 0.0278        | 27.0  | 5859  | 0.4012          | 0.8372   |
| 0.0212        | 28.0  | 6076  | 0.3938          | 0.8372   |
| 0.0224        | 29.0  | 6293  | 0.4080          | 0.8372   |
| 0.0218        | 30.0  | 6510  | 0.4237          | 0.8372   |
| 0.0278        | 31.0  | 6727  | 0.4231          | 0.8372   |
| 0.0212        | 32.0  | 6944  | 0.4330          | 0.8372   |
| 0.021         | 33.0  | 7161  | 0.4507          | 0.8372   |
| 0.0127        | 34.0  | 7378  | 0.4390          | 0.8372   |
| 0.0158        | 35.0  | 7595  | 0.4566          | 0.8372   |
| 0.0178        | 36.0  | 7812  | 0.4594          | 0.8372   |
| 0.0109        | 37.0  | 8029  | 0.4570          | 0.8372   |
| 0.0096        | 38.0  | 8246  | 0.4635          | 0.8372   |
| 0.0113        | 39.0  | 8463  | 0.4700          | 0.8372   |
| 0.0149        | 40.0  | 8680  | 0.4815          | 0.8372   |
| 0.0111        | 41.0  | 8897  | 0.4769          | 0.8372   |
| 0.0075        | 42.0  | 9114  | 0.4756          | 0.8372   |
| 0.0093        | 43.0  | 9331  | 0.4800          | 0.8372   |
| 0.009         | 44.0  | 9548  | 0.4851          | 0.8372   |
| 0.0065        | 45.0  | 9765  | 0.4808          | 0.8372   |
| 0.011         | 46.0  | 9982  | 0.4835          | 0.8372   |
| 0.0064        | 47.0  | 10199 | 0.4871          | 0.8372   |
| 0.0093        | 48.0  | 10416 | 0.4902          | 0.8372   |
| 0.0136        | 49.0  | 10633 | 0.4899          | 0.8372   |
| 0.0058        | 50.0  | 10850 | 0.4899          | 0.8372   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_001_fold3

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0356
- Accuracy: 0.5581

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1943        | 1.0   | 217   | 1.3862          | 0.3488   |
| 1.2108        | 2.0   | 434   | 1.3456          | 0.3721   |
| 0.8764        | 3.0   | 651   | 1.3683          | 0.4884   |
| 0.7995        | 4.0   | 868   | 0.8441          | 0.5814   |
| 0.8665        | 5.0   | 1085  | 1.2083          | 0.5116   |
| 0.7433        | 6.0   | 1302  | 0.7858          | 0.7209   |
| 0.7205        | 7.0   | 1519  | 0.8439          | 0.6744   |
| 0.6415        | 8.0   | 1736  | 0.6198          | 0.6512   |
| 0.6773        | 9.0   | 1953  | 0.8169          | 0.6744   |
| 0.5449        | 10.0  | 2170  | 0.8224          | 0.6512   |
| 0.5225        | 11.0  | 2387  | 0.7556          | 0.7209   |
| 0.5268        | 12.0  | 2604  | 0.8703          | 0.6744   |
| 0.41          | 13.0  | 2821  | 0.7919          | 0.6512   |
| 0.4695        | 14.0  | 3038  | 0.9473          | 0.6744   |
| 0.3173        | 15.0  | 3255  | 1.2235          | 0.6512   |
| 0.3283        | 16.0  | 3472  | 1.3091          | 0.6512   |
| 0.3212        | 17.0  | 3689  | 1.0773          | 0.6047   |
| 0.3662        | 18.0  | 3906  | 0.9193          | 0.6279   |
| 0.3712        | 19.0  | 4123  | 0.9811          | 0.6744   |
| 0.3483        | 20.0  | 4340  | 1.5620          | 0.5814   |
| 0.2594        | 21.0  | 4557  | 1.8035          | 0.5814   |
| 0.3019        | 22.0  | 4774  | 1.3880          | 0.6744   |
| 0.2498        | 23.0  | 4991  | 1.6113          | 0.5814   |
| 0.2349        | 24.0  | 5208  | 1.2780          | 0.6047   |
| 0.1589        | 25.0  | 5425  | 1.6674          | 0.6512   |
| 0.2341        | 26.0  | 5642  | 1.6966          | 0.6512   |
| 0.1986        | 27.0  | 5859  | 1.4673          | 0.6047   |
| 0.1141        | 28.0  | 6076  | 1.6993          | 0.6512   |
| 0.1291        | 29.0  | 6293  | 2.0265          | 0.5581   |
| 0.1273        | 30.0  | 6510  | 1.8689          | 0.6279   |
| 0.0887        | 31.0  | 6727  | 1.4863          | 0.6977   |
| 0.101         | 32.0  | 6944  | 2.2258          | 0.6279   |
| 0.09          | 33.0  | 7161  | 1.6918          | 0.5814   |
| 0.063         | 34.0  | 7378  | 2.4040          | 0.5349   |
| 0.0263        | 35.0  | 7595  | 2.2869          | 0.5814   |
| 0.0357        | 36.0  | 7812  | 2.0118          | 0.6047   |
| 0.033         | 37.0  | 8029  | 2.5046          | 0.6279   |
| 0.0417        | 38.0  | 8246  | 2.0462          | 0.6512   |
| 0.0049        | 39.0  | 8463  | 3.1349          | 0.5814   |
| 0.0034        | 40.0  | 8680  | 2.4922          | 0.6279   |
| 0.0115        | 41.0  | 8897  | 2.7021          | 0.5581   |
| 0.0248        | 42.0  | 9114  | 3.1496          | 0.5116   |
| 0.0078        | 43.0  | 9331  | 2.6336          | 0.6279   |
| 0.0022        | 44.0  | 9548  | 3.2458          | 0.5349   |
| 0.0015        | 45.0  | 9765  | 3.3966          | 0.5349   |
| 0.0031        | 46.0  | 9982  | 4.1353          | 0.5116   |
| 0.0           | 47.0  | 10199 | 3.5481          | 0.5814   |
| 0.0002        | 48.0  | 10416 | 3.8712          | 0.5349   |
| 0.0           | 49.0  | 10633 | 4.0305          | 0.5581   |
| 0.0           | 50.0  | 10850 | 4.0356          | 0.5581   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_tiny_sgd_001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2388
- Accuracy: 0.8810

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2231        | 1.0   | 219   | 1.3846          | 0.2857   |
| 0.9485        | 2.0   | 438   | 1.1776          | 0.5238   |
| 0.8421        | 3.0   | 657   | 0.9985          | 0.6429   |
| 0.6802        | 4.0   | 876   | 0.8236          | 0.7381   |
| 0.5815        | 5.0   | 1095  | 0.6866          | 0.7857   |
| 0.5091        | 6.0   | 1314  | 0.5853          | 0.8095   |
| 0.3792        | 7.0   | 1533  | 0.5105          | 0.8333   |
| 0.3552        | 8.0   | 1752  | 0.4443          | 0.8333   |
| 0.3174        | 9.0   | 1971  | 0.4029          | 0.8810   |
| 0.2621        | 10.0  | 2190  | 0.3730          | 0.8571   |
| 0.2168        | 11.0  | 2409  | 0.3473          | 0.8571   |
| 0.2263        | 12.0  | 2628  | 0.3296          | 0.9048   |
| 0.1689        | 13.0  | 2847  | 0.3233          | 0.9048   |
| 0.171         | 14.0  | 3066  | 0.3040          | 0.8810   |
| 0.1176        | 15.0  | 3285  | 0.3059          | 0.8810   |
| 0.1241        | 16.0  | 3504  | 0.2811          | 0.8571   |
| 0.1343        | 17.0  | 3723  | 0.2712          | 0.8571   |
| 0.0953        | 18.0  | 3942  | 0.2802          | 0.8571   |
| 0.0918        | 19.0  | 4161  | 0.2700          | 0.8571   |
| 0.0691        | 20.0  | 4380  | 0.2755          | 0.8571   |
| 0.088         | 21.0  | 4599  | 0.2615          | 0.8571   |
| 0.0857        | 22.0  | 4818  | 0.2483          | 0.8571   |
| 0.0654        | 23.0  | 5037  | 0.2562          | 0.8571   |
| 0.0661        | 24.0  | 5256  | 0.2789          | 0.8571   |
| 0.0463        | 25.0  | 5475  | 0.2435          | 0.8571   |
| 0.0362        | 26.0  | 5694  | 0.2633          | 0.8571   |
| 0.0272        | 27.0  | 5913  | 0.2844          | 0.8571   |
| 0.041         | 28.0  | 6132  | 0.2942          | 0.8571   |
| 0.034         | 29.0  | 6351  | 0.2744          | 0.8571   |
| 0.0352        | 30.0  | 6570  | 0.2644          | 0.8810   |
| 0.0212        | 31.0  | 6789  | 0.2648          | 0.8810   |
| 0.0359        | 32.0  | 7008  | 0.2431          | 0.8810   |
| 0.0203        | 33.0  | 7227  | 0.2434          | 0.8810   |
| 0.0209        | 34.0  | 7446  | 0.2577          | 0.8810   |
| 0.0254        | 35.0  | 7665  | 0.2645          | 0.8810   |
| 0.0178        | 36.0  | 7884  | 0.2497          | 0.8810   |
| 0.0232        | 37.0  | 8103  | 0.2639          | 0.8810   |
| 0.015         | 38.0  | 8322  | 0.2391          | 0.8810   |
| 0.0246        | 39.0  | 8541  | 0.2615          | 0.8810   |
| 0.0228        | 40.0  | 8760  | 0.2445          | 0.8810   |
| 0.0203        | 41.0  | 8979  | 0.2448          | 0.8810   |
| 0.014         | 42.0  | 9198  | 0.2402          | 0.8810   |
| 0.0231        | 43.0  | 9417  | 0.2372          | 0.8810   |
| 0.0179        | 44.0  | 9636  | 0.2499          | 0.8810   |
| 0.0197        | 45.0  | 9855  | 0.2540          | 0.8810   |
| 0.0118        | 46.0  | 10074 | 0.2416          | 0.8810   |
| 0.0134        | 47.0  | 10293 | 0.2401          | 0.8810   |
| 0.0155        | 48.0  | 10512 | 0.2412          | 0.8810   |
| 0.0103        | 49.0  | 10731 | 0.2400          | 0.8810   |
| 0.0156        | 50.0  | 10950 | 0.2388          | 0.8810   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_0001_fold3

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3610
- Accuracy: 0.8372

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0864        | 1.0   | 217   | 0.7004          | 0.8140   |
| 0.0159        | 2.0   | 434   | 0.8536          | 0.8372   |
| 0.116         | 3.0   | 651   | 0.4860          | 0.9070   |
| 0.0015        | 4.0   | 868   | 0.3942          | 0.9302   |
| 0.0089        | 5.0   | 1085  | 0.6012          | 0.8372   |
| 0.0001        | 6.0   | 1302  | 0.5930          | 0.8605   |
| 0.0006        | 7.0   | 1519  | 0.5592          | 0.8837   |
| 0.15          | 8.0   | 1736  | 0.5307          | 0.8837   |
| 0.0001        | 9.0   | 1953  | 0.5223          | 0.8372   |
| 0.0766        | 10.0  | 2170  | 0.7047          | 0.8372   |
| 0.0001        | 11.0  | 2387  | 1.3810          | 0.8140   |
| 0.0061        | 12.0  | 2604  | 1.1687          | 0.8140   |
| 0.0           | 13.0  | 2821  | 1.4554          | 0.8140   |
| 0.0           | 14.0  | 3038  | 1.4775          | 0.8372   |
| 0.0           | 15.0  | 3255  | 1.5402          | 0.8140   |
| 0.0           | 16.0  | 3472  | 1.6119          | 0.8140   |
| 0.0           | 17.0  | 3689  | 1.6931          | 0.8140   |
| 0.0           | 18.0  | 3906  | 1.7745          | 0.8140   |
| 0.0           | 19.0  | 4123  | 1.8507          | 0.8372   |
| 0.0           | 20.0  | 4340  | 1.9114          | 0.8372   |
| 0.0           | 21.0  | 4557  | 1.9677          | 0.8372   |
| 0.0           | 22.0  | 4774  | 2.0255          | 0.8372   |
| 0.0           | 23.0  | 4991  | 2.0805          | 0.8372   |
| 0.0           | 24.0  | 5208  | 2.1308          | 0.8372   |
| 0.0           | 25.0  | 5425  | 2.1719          | 0.8372   |
| 0.0           | 26.0  | 5642  | 2.2040          | 0.8372   |
| 0.0           | 27.0  | 5859  | 2.2288          | 0.8372   |
| 0.0           | 28.0  | 6076  | 2.2485          | 0.8372   |
| 0.0           | 29.0  | 6293  | 2.2646          | 0.8372   |
| 0.0           | 30.0  | 6510  | 2.2781          | 0.8372   |
| 0.0           | 31.0  | 6727  | 2.2896          | 0.8372   |
| 0.0           | 32.0  | 6944  | 2.2995          | 0.8372   |
| 0.0           | 33.0  | 7161  | 2.3082          | 0.8372   |
| 0.0           | 34.0  | 7378  | 2.3158          | 0.8372   |
| 0.0           | 35.0  | 7595  | 2.3224          | 0.8372   |
| 0.0           | 36.0  | 7812  | 2.3283          | 0.8372   |
| 0.0           | 37.0  | 8029  | 2.3335          | 0.8372   |
| 0.0           | 38.0  | 8246  | 2.3381          | 0.8372   |
| 0.0           | 39.0  | 8463  | 2.3422          | 0.8372   |
| 0.0           | 40.0  | 8680  | 2.3458          | 0.8372   |
| 0.0           | 41.0  | 8897  | 2.3489          | 0.8372   |
| 0.0           | 42.0  | 9114  | 2.3516          | 0.8372   |
| 0.0           | 43.0  | 9331  | 2.3540          | 0.8372   |
| 0.0           | 44.0  | 9548  | 2.3560          | 0.8372   |
| 0.0           | 45.0  | 9765  | 2.3576          | 0.8372   |
| 0.0           | 46.0  | 9982  | 2.3589          | 0.8372   |
| 0.0           | 47.0  | 10199 | 2.3599          | 0.8372   |
| 0.0           | 48.0  | 10416 | 2.3606          | 0.8372   |
| 0.0           | 49.0  | 10633 | 2.3610          | 0.8372   |
| 0.0           | 50.0  | 10850 | 2.3610          | 0.8372   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_sgd_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_tiny_sgd_001_fold5

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5908
- Accuracy: 0.8293

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1602        | 1.0   | 220   | 1.4008          | 0.2927   |
| 0.8954        | 2.0   | 440   | 1.2274          | 0.3902   |
| 0.7859        | 3.0   | 660   | 1.0703          | 0.5366   |
| 0.6718        | 4.0   | 880   | 0.9455          | 0.6098   |
| 0.5505        | 5.0   | 1100  | 0.8399          | 0.6341   |
| 0.4372        | 6.0   | 1320  | 0.7728          | 0.7073   |
| 0.3616        | 7.0   | 1540  | 0.7172          | 0.7317   |
| 0.291         | 8.0   | 1760  | 0.7018          | 0.7317   |
| 0.2597        | 9.0   | 1980  | 0.6678          | 0.7317   |
| 0.2339        | 10.0  | 2200  | 0.6575          | 0.7073   |
| 0.2227        | 11.0  | 2420  | 0.6389          | 0.7073   |
| 0.179         | 12.0  | 2640  | 0.6500          | 0.7073   |
| 0.1598        | 13.0  | 2860  | 0.6290          | 0.7073   |
| 0.1448        | 14.0  | 3080  | 0.6491          | 0.6585   |
| 0.1209        | 15.0  | 3300  | 0.6174          | 0.7073   |
| 0.1192        | 16.0  | 3520  | 0.6084          | 0.7073   |
| 0.1037        | 17.0  | 3740  | 0.6013          | 0.7317   |
| 0.0848        | 18.0  | 3960  | 0.5985          | 0.7073   |
| 0.1048        | 19.0  | 4180  | 0.5896          | 0.7317   |
| 0.0665        | 20.0  | 4400  | 0.6043          | 0.7073   |
| 0.0723        | 21.0  | 4620  | 0.5932          | 0.7561   |
| 0.0444        | 22.0  | 4840  | 0.5749          | 0.8049   |
| 0.0448        | 23.0  | 5060  | 0.5862          | 0.7805   |
| 0.0396        | 24.0  | 5280  | 0.5758          | 0.8049   |
| 0.0378        | 25.0  | 5500  | 0.5566          | 0.8293   |
| 0.0428        | 26.0  | 5720  | 0.5740          | 0.8293   |
| 0.0345        | 27.0  | 5940  | 0.5631          | 0.8049   |
| 0.0515        | 28.0  | 6160  | 0.5844          | 0.8049   |
| 0.0324        | 29.0  | 6380  | 0.5872          | 0.8293   |
| 0.0292        | 30.0  | 6600  | 0.5789          | 0.8293   |
| 0.0208        | 31.0  | 6820  | 0.5688          | 0.8293   |
| 0.0421        | 32.0  | 7040  | 0.5703          | 0.8293   |
| 0.0246        | 33.0  | 7260  | 0.5663          | 0.8293   |
| 0.0318        | 34.0  | 7480  | 0.5726          | 0.8293   |
| 0.0151        | 35.0  | 7700  | 0.5751          | 0.8293   |
| 0.0169        | 36.0  | 7920  | 0.5772          | 0.8293   |
| 0.017         | 37.0  | 8140  | 0.5665          | 0.8293   |
| 0.0393        | 38.0  | 8360  | 0.5815          | 0.8293   |
| 0.0218        | 39.0  | 8580  | 0.5765          | 0.8293   |
| 0.0156        | 40.0  | 8800  | 0.5742          | 0.8293   |
| 0.0183        | 41.0  | 9020  | 0.5956          | 0.8293   |
| 0.0155        | 42.0  | 9240  | 0.5886          | 0.8293   |
| 0.0134        | 43.0  | 9460  | 0.5775          | 0.8293   |
| 0.0186        | 44.0  | 9680  | 0.5921          | 0.8293   |
| 0.0177        | 45.0  | 9900  | 0.5863          | 0.8293   |
| 0.0115        | 46.0  | 10120 | 0.5918          | 0.8293   |
| 0.0196        | 47.0  | 10340 | 0.5892          | 0.8293   |
| 0.0172        | 48.0  | 10560 | 0.5892          | 0.8293   |
| 0.0129        | 49.0  | 10780 | 0.5910          | 0.8293   |
| 0.0197        | 50.0  | 11000 | 0.5908          | 0.8293   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_001_fold4

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7808
- Accuracy: 0.8095

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4095        | 1.0   | 219   | 1.4091          | 0.2381   |
| 1.3846        | 2.0   | 438   | 1.3865          | 0.2381   |
| 1.2802        | 3.0   | 657   | 1.3372          | 0.2381   |
| 1.1537        | 4.0   | 876   | 1.4032          | 0.2619   |
| 1.177         | 5.0   | 1095  | 1.3147          | 0.4286   |
| 1.1719        | 6.0   | 1314  | 0.9703          | 0.6667   |
| 1.0403        | 7.0   | 1533  | 1.2271          | 0.4762   |
| 0.9188        | 8.0   | 1752  | 0.9431          | 0.5714   |
| 0.8565        | 9.0   | 1971  | 1.0056          | 0.5952   |
| 0.8519        | 10.0  | 2190  | 0.7845          | 0.6429   |
| 0.7519        | 11.0  | 2409  | 0.7049          | 0.6905   |
| 0.8514        | 12.0  | 2628  | 0.6628          | 0.7857   |
| 0.8808        | 13.0  | 2847  | 0.8006          | 0.7381   |
| 0.796         | 14.0  | 3066  | 0.7332          | 0.6905   |
| 0.7213        | 15.0  | 3285  | 0.7486          | 0.6905   |
| 0.663         | 16.0  | 3504  | 0.4390          | 0.7857   |
| 0.5845        | 17.0  | 3723  | 0.9856          | 0.5952   |
| 0.5228        | 18.0  | 3942  | 0.6588          | 0.7381   |
| 0.5581        | 19.0  | 4161  | 0.6093          | 0.8571   |
| 0.518         | 20.0  | 4380  | 0.5316          | 0.6905   |
| 0.5058        | 21.0  | 4599  | 0.7052          | 0.7381   |
| 0.453         | 22.0  | 4818  | 0.6155          | 0.7143   |
| 0.4128        | 23.0  | 5037  | 0.7141          | 0.7381   |
| 0.44          | 24.0  | 5256  | 0.6896          | 0.7619   |
| 0.3933        | 25.0  | 5475  | 0.6353          | 0.7619   |
| 0.3648        | 26.0  | 5694  | 0.7225          | 0.8095   |
| 0.2677        | 27.0  | 5913  | 0.6987          | 0.8810   |
| 0.3023        | 28.0  | 6132  | 0.8143          | 0.8333   |
| 0.332         | 29.0  | 6351  | 0.8300          | 0.8333   |
| 0.2772        | 30.0  | 6570  | 0.6339          | 0.7619   |
| 0.1878        | 31.0  | 6789  | 0.6694          | 0.8333   |
| 0.2152        | 32.0  | 7008  | 0.7930          | 0.7619   |
| 0.2378        | 33.0  | 7227  | 0.7856          | 0.7619   |
| 0.1874        | 34.0  | 7446  | 0.6614          | 0.8571   |
| 0.2043        | 35.0  | 7665  | 0.7218          | 0.8095   |
| 0.122         | 36.0  | 7884  | 1.0415          | 0.8333   |
| 0.1837        | 37.0  | 8103  | 1.2016          | 0.7381   |
| 0.1148        | 38.0  | 8322  | 0.8289          | 0.7857   |
| 0.0825        | 39.0  | 8541  | 1.4711          | 0.7381   |
| 0.0828        | 40.0  | 8760  | 0.9405          | 0.8810   |
| 0.0736        | 41.0  | 8979  | 1.4104          | 0.8810   |
| 0.0864        | 42.0  | 9198  | 1.1297          | 0.8333   |
| 0.0176        | 43.0  | 9417  | 1.2293          | 0.7857   |
| 0.0392        | 44.0  | 9636  | 1.3878          | 0.8095   |
| 0.0272        | 45.0  | 9855  | 1.2021          | 0.8571   |
| 0.0125        | 46.0  | 10074 | 2.3102          | 0.7619   |
| 0.0149        | 47.0  | 10293 | 1.8621          | 0.7857   |
| 0.0032        | 48.0  | 10512 | 1.7899          | 0.8333   |
| 0.0016        | 49.0  | 10731 | 1.9528          | 0.8095   |
| 0.0001        | 50.0  | 10950 | 1.7808          | 0.8095   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_0001_fold4

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3583
- Accuracy: 0.9524

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.104         | 1.0   | 219   | 0.2949          | 0.9286   |
| 0.01          | 2.0   | 438   | 0.1700          | 0.9524   |
| 0.0196        | 3.0   | 657   | 0.6452          | 0.8571   |
| 0.0007        | 4.0   | 876   | 0.1624          | 0.9762   |
| 0.0273        | 5.0   | 1095  | 0.4258          | 0.9048   |
| 0.0129        | 6.0   | 1314  | 0.1517          | 0.9524   |
| 0.0582        | 7.0   | 1533  | 0.6754          | 0.9048   |
| 0.0004        | 8.0   | 1752  | 0.2532          | 0.9286   |
| 0.0           | 9.0   | 1971  | 0.2443          | 0.9286   |
| 0.0           | 10.0  | 2190  | 0.2524          | 0.9286   |
| 0.0           | 11.0  | 2409  | 0.2616          | 0.9286   |
| 0.0           | 12.0  | 2628  | 0.2792          | 0.9286   |
| 0.0           | 13.0  | 2847  | 0.2936          | 0.9524   |
| 0.0           | 14.0  | 3066  | 0.3043          | 0.9524   |
| 0.0           | 15.0  | 3285  | 0.3082          | 0.9524   |
| 0.0           | 16.0  | 3504  | 0.3110          | 0.9524   |
| 0.0           | 17.0  | 3723  | 0.3145          | 0.9524   |
| 0.0           | 18.0  | 3942  | 0.3171          | 0.9524   |
| 0.0           | 19.0  | 4161  | 0.3231          | 0.9524   |
| 0.0           | 20.0  | 4380  | 0.3294          | 0.9524   |
| 0.0           | 21.0  | 4599  | 0.3348          | 0.9524   |
| 0.0           | 22.0  | 4818  | 0.3389          | 0.9524   |
| 0.0           | 23.0  | 5037  | 0.3420          | 0.9524   |
| 0.0           | 24.0  | 5256  | 0.3443          | 0.9524   |
| 0.0           | 25.0  | 5475  | 0.3462          | 0.9524   |
| 0.0           | 26.0  | 5694  | 0.3477          | 0.9524   |
| 0.0           | 27.0  | 5913  | 0.3490          | 0.9524   |
| 0.0           | 28.0  | 6132  | 0.3502          | 0.9524   |
| 0.0           | 29.0  | 6351  | 0.3512          | 0.9524   |
| 0.0           | 30.0  | 6570  | 0.3521          | 0.9524   |
| 0.0           | 31.0  | 6789  | 0.3528          | 0.9524   |
| 0.0           | 32.0  | 7008  | 0.3535          | 0.9524   |
| 0.0           | 33.0  | 7227  | 0.3541          | 0.9524   |
| 0.0           | 34.0  | 7446  | 0.3547          | 0.9524   |
| 0.0           | 35.0  | 7665  | 0.3552          | 0.9524   |
| 0.0           | 36.0  | 7884  | 0.3556          | 0.9524   |
| 0.0           | 37.0  | 8103  | 0.3560          | 0.9524   |
| 0.0           | 38.0  | 8322  | 0.3564          | 0.9524   |
| 0.0           | 39.0  | 8541  | 0.3567          | 0.9524   |
| 0.0           | 40.0  | 8760  | 0.3570          | 0.9524   |
| 0.0           | 41.0  | 8979  | 0.3573          | 0.9524   |
| 0.0           | 42.0  | 9198  | 0.3575          | 0.9524   |
| 0.0           | 43.0  | 9417  | 0.3577          | 0.9524   |
| 0.0           | 44.0  | 9636  | 0.3578          | 0.9524   |
| 0.0           | 45.0  | 9855  | 0.3580          | 0.9524   |
| 0.0           | 46.0  | 10074 | 0.3581          | 0.9524   |
| 0.0           | 47.0  | 10293 | 0.3582          | 0.9524   |
| 0.0           | 48.0  | 10512 | 0.3582          | 0.9524   |
| 0.0           | 49.0  | 10731 | 0.3583          | 0.9524   |
| 0.0           | 50.0  | 10950 | 0.3583          | 0.9524   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_tiny_sgd_0001_fold1

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1471
- Accuracy: 0.5333

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4892        | 1.0   | 215   | 1.3892          | 0.2222   |
| 1.3775        | 2.0   | 430   | 1.3751          | 0.2889   |
| 1.3266        | 3.0   | 645   | 1.3631          | 0.3111   |
| 1.2619        | 4.0   | 860   | 1.3523          | 0.3111   |
| 1.235         | 5.0   | 1075  | 1.3429          | 0.3111   |
| 1.1826        | 6.0   | 1290  | 1.3347          | 0.3778   |
| 1.2015        | 7.0   | 1505  | 1.3284          | 0.3778   |
| 1.2072        | 8.0   | 1720  | 1.3225          | 0.4      |
| 1.1254        | 9.0   | 1935  | 1.3170          | 0.4      |
| 1.1293        | 10.0  | 2150  | 1.3118          | 0.3556   |
| 1.0925        | 11.0  | 2365  | 1.3069          | 0.3778   |
| 1.0731        | 12.0  | 2580  | 1.3024          | 0.3778   |
| 1.0421        | 13.0  | 2795  | 1.2976          | 0.3778   |
| 1.0531        | 14.0  | 3010  | 1.2928          | 0.3778   |
| 1.0284        | 15.0  | 3225  | 1.2877          | 0.3778   |
| 1.0283        | 16.0  | 3440  | 1.2824          | 0.4      |
| 1.0283        | 17.0  | 3655  | 1.2767          | 0.4222   |
| 1.0038        | 18.0  | 3870  | 1.2712          | 0.4444   |
| 0.9952        | 19.0  | 4085  | 1.2654          | 0.4444   |
| 0.9413        | 20.0  | 4300  | 1.2587          | 0.4444   |
| 0.9562        | 21.0  | 4515  | 1.2529          | 0.4667   |
| 1.0163        | 22.0  | 4730  | 1.2467          | 0.4667   |
| 0.9391        | 23.0  | 4945  | 1.2401          | 0.4667   |
| 0.955         | 24.0  | 5160  | 1.2340          | 0.4889   |
| 0.9454        | 25.0  | 5375  | 1.2281          | 0.4889   |
| 0.9013        | 26.0  | 5590  | 1.2229          | 0.4889   |
| 0.8818        | 27.0  | 5805  | 1.2169          | 0.5111   |
| 0.8594        | 28.0  | 6020  | 1.2115          | 0.5111   |
| 0.8984        | 29.0  | 6235  | 1.2064          | 0.5111   |
| 0.8277        | 30.0  | 6450  | 1.2009          | 0.5111   |
| 0.8636        | 31.0  | 6665  | 1.1955          | 0.5111   |
| 0.8466        | 32.0  | 6880  | 1.1910          | 0.5111   |
| 0.8955        | 33.0  | 7095  | 1.1866          | 0.5111   |
| 0.817         | 34.0  | 7310  | 1.1825          | 0.5111   |
| 0.8132        | 35.0  | 7525  | 1.1781          | 0.5111   |
| 0.7914        | 36.0  | 7740  | 1.1742          | 0.5111   |
| 0.835         | 37.0  | 7955  | 1.1705          | 0.5111   |
| 0.8383        | 38.0  | 8170  | 1.1668          | 0.5111   |
| 0.828         | 39.0  | 8385  | 1.1638          | 0.5111   |
| 0.7822        | 40.0  | 8600  | 1.1606          | 0.5111   |
| 0.8243        | 41.0  | 8815  | 1.1580          | 0.5333   |
| 0.9371        | 42.0  | 9030  | 1.1556          | 0.5333   |
| 0.8482        | 43.0  | 9245  | 1.1533          | 0.5333   |
| 0.8054        | 44.0  | 9460  | 1.1516          | 0.5333   |
| 0.8152        | 45.0  | 9675  | 1.1501          | 0.5333   |
| 0.8013        | 46.0  | 9890  | 1.1489          | 0.5333   |
| 0.7786        | 47.0  | 10105 | 1.1481          | 0.5333   |
| 0.7918        | 48.0  | 10320 | 1.1474          | 0.5333   |
| 0.8671        | 49.0  | 10535 | 1.1471          | 0.5333   |
| 0.8286        | 50.0  | 10750 | 1.1471          | 0.5333   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
yuanhuaisen/autotrain-9oj9k-0pndc
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: 0.5109696984291077
f1_macro: 0.7355182828867041
f1_micro: 0.7840909090909092
f1_weighted: 0.7828294512505038
precision_macro: 0.7308866944925176
precision_micro: 0.7840909090909091
precision_weighted: 0.782664525741997
recall_macro: 0.7416666666666667
recall_micro: 0.7840909090909091
recall_weighted: 0.7840909090909091
accuracy: 0.7840909090909091
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
hkivancoral/hushem_40x_deit_base_rms_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_001_fold5

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5781
- Accuracy: 0.7561

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2271        | 1.0   | 220   | 1.5032          | 0.3902   |
| 0.9545        | 2.0   | 440   | 1.5087          | 0.3902   |
| 0.8667        | 3.0   | 660   | 1.0714          | 0.4878   |
| 0.8154        | 4.0   | 880   | 0.7851          | 0.6098   |
| 0.6309        | 5.0   | 1100  | 1.0215          | 0.4878   |
| 0.5655        | 6.0   | 1320  | 0.8556          | 0.6098   |
| 0.4033        | 7.0   | 1540  | 0.7849          | 0.7073   |
| 0.3567        | 8.0   | 1760  | 1.1431          | 0.6585   |
| 0.3869        | 9.0   | 1980  | 0.7273          | 0.7561   |
| 0.2867        | 10.0  | 2200  | 0.9025          | 0.6341   |
| 0.2933        | 11.0  | 2420  | 1.0767          | 0.6829   |
| 0.2822        | 12.0  | 2640  | 0.9054          | 0.7561   |
| 0.2576        | 13.0  | 2860  | 1.1701          | 0.7073   |
| 0.1424        | 14.0  | 3080  | 1.2265          | 0.7317   |
| 0.1597        | 15.0  | 3300  | 1.2021          | 0.7317   |
| 0.0822        | 16.0  | 3520  | 1.5652          | 0.7073   |
| 0.0859        | 17.0  | 3740  | 1.0512          | 0.7561   |
| 0.1048        | 18.0  | 3960  | 1.9377          | 0.6341   |
| 0.0506        | 19.0  | 4180  | 1.4302          | 0.7561   |
| 0.0595        | 20.0  | 4400  | 1.2065          | 0.7073   |
| 0.1492        | 21.0  | 4620  | 1.7891          | 0.7073   |
| 0.0835        | 22.0  | 4840  | 1.5550          | 0.7561   |
| 0.0475        | 23.0  | 5060  | 1.2142          | 0.7317   |
| 0.0941        | 24.0  | 5280  | 1.4080          | 0.7073   |
| 0.0186        | 25.0  | 5500  | 1.5889          | 0.7561   |
| 0.0776        | 26.0  | 5720  | 1.8453          | 0.6829   |
| 0.0752        | 27.0  | 5940  | 1.5817          | 0.7805   |
| 0.0113        | 28.0  | 6160  | 1.6776          | 0.7805   |
| 0.0011        | 29.0  | 6380  | 2.1296          | 0.7317   |
| 0.0107        | 30.0  | 6600  | 1.9807          | 0.7073   |
| 0.0181        | 31.0  | 6820  | 1.9248          | 0.7073   |
| 0.0106        | 32.0  | 7040  | 2.5784          | 0.7317   |
| 0.0002        | 33.0  | 7260  | 1.8180          | 0.8049   |
| 0.0013        | 34.0  | 7480  | 1.5976          | 0.8049   |
| 0.0031        | 35.0  | 7700  | 1.9747          | 0.7317   |
| 0.0094        | 36.0  | 7920  | 2.4830          | 0.7317   |
| 0.0006        | 37.0  | 8140  | 2.9074          | 0.7561   |
| 0.0049        | 38.0  | 8360  | 2.6503          | 0.6829   |
| 0.0002        | 39.0  | 8580  | 2.4189          | 0.7561   |
| 0.0           | 40.0  | 8800  | 2.4124          | 0.7561   |
| 0.0           | 41.0  | 9020  | 2.5470          | 0.7561   |
| 0.0           | 42.0  | 9240  | 2.6196          | 0.7805   |
| 0.0           | 43.0  | 9460  | 2.7251          | 0.7805   |
| 0.0           | 44.0  | 9680  | 2.9457          | 0.7805   |
| 0.0           | 45.0  | 9900  | 3.1311          | 0.7805   |
| 0.0           | 46.0  | 10120 | 3.2547          | 0.7805   |
| 0.0           | 47.0  | 10340 | 3.3567          | 0.7317   |
| 0.0           | 48.0  | 10560 | 3.5689          | 0.7561   |
| 0.0           | 49.0  | 10780 | 3.5825          | 0.7561   |
| 0.0           | 50.0  | 11000 | 3.5781          | 0.7561   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_rms_0001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7342 - Accuracy: 0.8537 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0617 | 1.0 | 220 | 0.4998 | 0.8780 | | 0.0015 | 2.0 | 440 | 0.4091 | 0.9024 | | 0.067 | 3.0 | 660 | 0.7752 | 0.9024 | | 0.0007 | 4.0 | 880 | 0.7710 | 0.8293 | | 0.0006 | 5.0 | 1100 | 0.9905 | 0.8293 | | 0.0109 | 6.0 | 1320 | 1.1163 | 0.8049 | | 0.0 | 7.0 | 1540 | 1.0399 | 0.8049 | | 0.0 | 8.0 | 1760 | 1.0747 | 0.8293 | | 0.0 | 9.0 | 1980 | 1.1399 | 0.8537 | | 0.0 | 10.0 | 2200 | 1.2260 | 0.8537 | | 0.0 | 11.0 | 2420 | 1.3150 | 0.8537 | | 0.0 | 12.0 | 2640 | 1.3880 | 0.8537 | | 0.0 | 13.0 | 2860 | 1.4421 | 0.8537 | | 0.0 | 14.0 | 3080 | 1.4689 | 0.8537 | | 0.0 | 15.0 | 3300 | 1.4886 | 0.8537 | | 0.0 | 16.0 | 3520 | 1.5214 | 0.8537 | | 0.0 | 17.0 | 3740 | 1.5517 | 0.8537 | | 0.0 | 18.0 | 3960 | 1.5796 | 0.8537 | | 0.0 | 19.0 | 4180 | 1.6055 | 0.8537 | | 0.0 | 20.0 | 4400 | 1.6255 | 0.8537 | | 0.0 | 21.0 | 4620 | 1.6409 | 0.8537 | | 0.0 | 22.0 | 4840 | 1.6535 | 0.8537 | | 0.0 | 23.0 | 5060 | 1.6637 | 0.8537 | | 0.0 | 24.0 | 5280 | 1.6723 | 0.8537 | | 0.0 | 25.0 | 5500 | 1.6796 | 0.8537 | | 0.0 | 26.0 | 5720 | 1.6858 | 0.8537 | | 0.0 | 27.0 | 5940 | 1.6914 | 0.8537 | | 0.0 | 28.0 | 6160 | 1.6963 | 0.8537 | | 0.0 | 29.0 | 6380 | 1.7007 | 0.8537 | | 0.0 | 30.0 | 6600 | 1.7046 | 0.8537 | | 0.0 | 31.0 | 6820 | 1.7081 | 0.8537 | | 0.0 | 32.0 | 7040 | 1.7112 | 0.8537 | | 0.0 | 33.0 | 7260 | 1.7141 | 0.8537 | | 0.0 | 34.0 | 7480 | 1.7167 | 0.8537 | | 0.0 | 35.0 | 7700 | 1.7191 | 0.8537 | | 0.0 | 36.0 | 7920 | 1.7212 | 0.8537 | | 0.0 | 37.0 | 8140 | 1.7232 | 0.8537 | | 0.0 | 38.0 | 8360 | 1.7249 | 0.8537 | | 0.0 | 39.0 | 8580 | 1.7265 | 0.8537 | | 0.0 | 40.0 | 8800 | 1.7279 | 0.8537 | | 0.0 | 41.0 | 9020 | 1.7292 | 0.8537 | | 0.0 | 42.0 | 9240 | 1.7303 | 0.8537 | | 0.0 | 43.0 | 9460 | 1.7312 | 0.8537 | | 0.0 | 44.0 | 9680 | 1.7321 | 0.8537 | | 0.0 | 45.0 | 9900 | 1.7328 | 0.8537 | | 0.0 | 46.0 | 10120 | 1.7333 | 0.8537 | | 0.0 | 47.0 | 10340 | 1.7337 | 0.8537 | | 0.0 | 48.0 | 10560 | 1.7340 | 0.8537 | | 0.0 | 49.0 | 10780 | 1.7342 | 0.8537 | | 0.0 | 50.0 | 11000 | 1.7342 | 0.8537 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
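A minimal inference sketch for this checkpoint, assuming a local image file (`sample.png` is a placeholder path); scores are returned over the four classes listed below:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_40x_deit_base_rms_0001_fold5",
)
# Returns a list of {"label": ..., "score": ...} dicts, highest score first.
print(classifier("sample.png"))
```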
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_sgd_0001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2757 - Accuracy: 0.4667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3997 | 1.0 | 215 | 1.4647 | 0.2444 | | 1.3365 | 2.0 | 430 | 1.4330 | 0.2222 | | 1.2917 | 3.0 | 645 | 1.4094 | 0.2444 | | 1.2427 | 4.0 | 860 | 1.3927 | 0.2667 | | 1.2441 | 5.0 | 1075 | 1.3794 | 0.2667 | | 1.1872 | 6.0 | 1290 | 1.3684 | 0.2667 | | 1.1795 | 7.0 | 1505 | 1.3589 | 0.3111 | | 1.1209 | 8.0 | 1720 | 1.3504 | 0.3333 | | 1.1403 | 9.0 | 1935 | 1.3427 | 0.3778 | | 1.0825 | 10.0 | 2150 | 1.3360 | 0.3778 | | 1.0205 | 11.0 | 2365 | 1.3306 | 0.3778 | | 1.0287 | 12.0 | 2580 | 1.3256 | 0.4222 | | 1.0526 | 13.0 | 2795 | 1.3201 | 0.4444 | | 0.979 | 14.0 | 3010 | 1.3158 | 0.4444 | | 1.009 | 15.0 | 3225 | 1.3119 | 0.4444 | | 1.0242 | 16.0 | 3440 | 1.3068 | 0.4444 | | 0.9586 | 17.0 | 3655 | 1.3041 | 0.4222 | | 0.9705 | 18.0 | 3870 | 1.3009 | 0.4222 | | 0.9559 | 19.0 | 4085 | 1.2993 | 0.4222 | | 0.95 | 20.0 | 4300 | 1.2983 | 0.4444 | | 0.9501 | 21.0 | 4515 | 1.2955 | 0.4444 | | 0.9287 | 22.0 | 4730 | 1.2949 | 0.4444 | | 0.8978 | 23.0 | 4945 | 1.2936 | 0.4444 | | 0.8221 | 24.0 | 5160 | 1.2913 | 0.4444 | | 0.8642 | 25.0 | 5375 | 1.2902 | 0.4444 | | 0.8893 | 26.0 | 5590 | 1.2888 | 0.4444 | | 0.8888 | 27.0 | 5805 | 1.2875 | 0.4444 | | 0.8399 | 28.0 | 6020 | 1.2872 | 0.4444 | | 0.8384 | 29.0 | 6235 | 1.2862 | 0.4444 | | 0.8557 | 30.0 | 6450 | 1.2852 | 0.4444 | | 0.8264 | 31.0 | 6665 | 1.2846 | 0.4444 | | 0.7947 | 32.0 | 6880 | 1.2839 | 0.4222 | | 0.7889 | 33.0 | 7095 | 1.2827 | 0.4222 | | 0.829 | 34.0 | 7310 | 1.2822 | 0.4444 | | 0.754 | 35.0 | 7525 | 1.2813 | 0.4444 | | 0.7758 | 36.0 | 7740 | 1.2807 | 0.4444 | | 0.8928 | 37.0 | 7955 | 1.2794 | 0.4444 | | 0.734 | 38.0 | 8170 | 1.2794 | 0.4444 | | 0.7594 | 39.0 | 8385 | 1.2785 | 0.4444 | | 0.775 | 40.0 | 8600 | 1.2779 | 0.4444 | | 0.7835 | 41.0 | 8815 | 1.2773 | 0.4667 | | 0.7569 | 42.0 | 9030 | 1.2769 | 0.4667 | | 0.7974 | 43.0 | 9245 | 1.2769 | 0.4667 | | 0.7959 | 44.0 | 9460 | 1.2766 | 0.4667 | | 0.8113 | 45.0 | 9675 | 1.2762 | 0.4667 | | 0.7344 | 46.0 | 9890 | 1.2759 | 0.4667 | | 0.7955 | 47.0 | 10105 | 1.2758 | 0.4667 | | 0.7831 | 48.0 | 10320 | 1.2757 | 0.4667 | | 0.7467 | 49.0 | 10535 | 1.2757 | 0.4667 | | 0.8192 | 50.0 | 10750 | 1.2757 | 0.4667 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_sgd_0001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9959 - Accuracy: 0.6744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4187 | 1.0 | 217 | 1.4585 | 0.3023 | | 1.3875 | 2.0 | 434 | 1.4331 | 0.2791 | | 1.3134 | 3.0 | 651 | 1.4120 | 0.3256 | | 1.3061 | 4.0 | 868 | 1.3930 | 0.3488 | | 1.3015 | 5.0 | 1085 | 1.3762 | 0.3721 | | 1.2507 | 6.0 | 1302 | 1.3597 | 0.3721 | | 1.2542 | 7.0 | 1519 | 1.3427 | 0.3721 | | 1.2153 | 8.0 | 1736 | 1.3276 | 0.3721 | | 1.2187 | 9.0 | 1953 | 1.3127 | 0.4186 | | 1.1894 | 10.0 | 2170 | 1.2981 | 0.4186 | | 1.1545 | 11.0 | 2387 | 1.2843 | 0.4186 | | 1.1296 | 12.0 | 2604 | 1.2697 | 0.4419 | | 1.1425 | 13.0 | 2821 | 1.2546 | 0.4651 | | 1.1006 | 14.0 | 3038 | 1.2420 | 0.4884 | | 1.101 | 15.0 | 3255 | 1.2295 | 0.4884 | | 1.0751 | 16.0 | 3472 | 1.2159 | 0.4884 | | 1.0907 | 17.0 | 3689 | 1.2031 | 0.4884 | | 1.047 | 18.0 | 3906 | 1.1903 | 0.5116 | | 1.0396 | 19.0 | 4123 | 1.1781 | 0.5581 | | 1.0151 | 20.0 | 4340 | 1.1663 | 0.5581 | | 1.0071 | 21.0 | 4557 | 1.1547 | 0.5581 | | 0.9605 | 22.0 | 4774 | 1.1441 | 0.5814 | | 0.9825 | 23.0 | 4991 | 1.1328 | 0.6047 | | 0.9877 | 24.0 | 5208 | 1.1238 | 0.6047 | | 0.944 | 25.0 | 5425 | 1.1139 | 0.6047 | | 1.0028 | 26.0 | 5642 | 1.1046 | 0.6047 | | 0.9583 | 27.0 | 5859 | 1.0948 | 0.6279 | | 0.9319 | 28.0 | 6076 | 1.0861 | 0.6279 | | 0.8861 | 29.0 | 6293 | 1.0779 | 0.6279 | | 0.9631 | 30.0 | 6510 | 1.0704 | 0.6512 | | 0.8801 | 31.0 | 6727 | 1.0625 | 0.6512 | | 0.9404 | 32.0 | 6944 | 1.0548 | 0.6512 | | 0.9252 | 33.0 | 7161 | 1.0485 | 0.6512 | | 0.8258 | 34.0 | 7378 | 1.0422 | 0.6512 | | 0.8739 | 35.0 | 7595 | 1.0361 | 0.6744 | | 0.8975 | 36.0 | 7812 | 1.0306 | 0.6744 | | 0.8371 | 37.0 | 8029 | 1.0260 | 0.6744 | | 0.8695 | 38.0 | 8246 | 1.0212 | 0.6744 | | 0.8346 | 39.0 | 8463 | 1.0171 | 0.6744 | | 0.8685 | 40.0 | 8680 | 1.0135 | 0.6744 | | 0.8448 | 41.0 | 8897 | 1.0098 | 0.6744 | | 0.8514 | 42.0 | 9114 | 1.0067 | 0.6744 | | 0.8326 | 43.0 | 9331 | 1.0041 | 0.6744 | | 0.8323 | 44.0 | 9548 | 1.0018 | 0.6744 | | 0.8178 | 45.0 | 9765 | 0.9998 | 0.6744 | | 0.8479 | 46.0 | 9982 | 0.9982 | 0.6744 | | 0.8512 | 47.0 | 10199 | 0.9971 | 0.6744 | | 0.851 | 48.0 | 10416 | 0.9963 | 0.6744 | | 0.839 | 49.0 | 10633 | 0.9959 | 0.6744 | | 0.7968 | 50.0 | 10850 | 0.9959 | 0.6744 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
yuanhuaisen/autotrain-55x6s-5uwoq
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

loss: nan
f1_macro: 0.12345679012345678
f1_micro: 0.22727272727272727
f1_weighted: 0.08417508417508417
precision_macro: 0.07575757575757576
precision_micro: 0.22727272727272727
precision_weighted: 0.051652892561983466
recall_macro: 0.3333333333333333
recall_micro: 0.22727272727272727
recall_weighted: 0.22727272727272727
accuracy: 0.22727272727272727
[ "11covered_with_a_quilt_and_only_the_head_exposed", "12covered_with_a_quilt_and_exposed_other_parts_of_the_body", "13has_nothing_to_do_with_11_and_12_above" ]
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_sgd_0001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0643 - Accuracy: 0.5952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4577 | 1.0 | 219 | 1.6228 | 0.1667 | | 1.3428 | 2.0 | 438 | 1.5859 | 0.1429 | | 1.3515 | 3.0 | 657 | 1.5534 | 0.1667 | | 1.3164 | 4.0 | 876 | 1.5241 | 0.1905 | | 1.262 | 5.0 | 1095 | 1.5001 | 0.1905 | | 1.2747 | 6.0 | 1314 | 1.4781 | 0.1905 | | 1.2005 | 7.0 | 1533 | 1.4587 | 0.1905 | | 1.2174 | 8.0 | 1752 | 1.4397 | 0.2143 | | 1.1711 | 9.0 | 1971 | 1.4197 | 0.2143 | | 1.1443 | 10.0 | 2190 | 1.4007 | 0.2619 | | 1.1123 | 11.0 | 2409 | 1.3821 | 0.2857 | | 1.164 | 12.0 | 2628 | 1.3642 | 0.4048 | | 1.0774 | 13.0 | 2847 | 1.3471 | 0.3810 | | 1.1066 | 14.0 | 3066 | 1.3290 | 0.3810 | | 1.055 | 15.0 | 3285 | 1.3135 | 0.4286 | | 1.0496 | 16.0 | 3504 | 1.2984 | 0.4048 | | 1.112 | 17.0 | 3723 | 1.2838 | 0.4286 | | 1.0058 | 18.0 | 3942 | 1.2696 | 0.4286 | | 1.0363 | 19.0 | 4161 | 1.2563 | 0.4286 | | 1.0446 | 20.0 | 4380 | 1.2431 | 0.4286 | | 1.0301 | 21.0 | 4599 | 1.2308 | 0.4286 | | 1.0066 | 22.0 | 4818 | 1.2182 | 0.4524 | | 0.9188 | 23.0 | 5037 | 1.2068 | 0.4524 | | 0.9729 | 24.0 | 5256 | 1.1969 | 0.5238 | | 0.9215 | 25.0 | 5475 | 1.1858 | 0.5238 | | 0.9604 | 26.0 | 5694 | 1.1753 | 0.5476 | | 0.9173 | 27.0 | 5913 | 1.1663 | 0.5714 | | 0.9314 | 28.0 | 6132 | 1.1573 | 0.5714 | | 0.8654 | 29.0 | 6351 | 1.1486 | 0.5714 | | 0.9372 | 30.0 | 6570 | 1.1410 | 0.5714 | | 0.9028 | 31.0 | 6789 | 1.1331 | 0.5714 | | 0.9732 | 32.0 | 7008 | 1.1254 | 0.5714 | | 0.9146 | 33.0 | 7227 | 1.1186 | 0.5714 | | 0.8712 | 34.0 | 7446 | 1.1126 | 0.5714 | | 0.8981 | 35.0 | 7665 | 1.1068 | 0.5714 | | 0.8626 | 36.0 | 7884 | 1.1011 | 0.5714 | | 0.884 | 37.0 | 8103 | 1.0956 | 0.5714 | | 0.9119 | 38.0 | 8322 | 1.0906 | 0.5714 | | 0.8378 | 39.0 | 8541 | 1.0862 | 0.5714 | | 0.8095 | 40.0 | 8760 | 1.0823 | 0.5952 | | 0.9067 | 41.0 | 8979 | 1.0785 | 0.5952 | | 0.874 | 42.0 | 9198 | 1.0755 | 0.5714 | | 0.8784 | 43.0 | 9417 | 1.0728 | 0.5714 | | 0.8408 | 44.0 | 9636 | 1.0704 | 0.5714 | | 0.8315 | 45.0 | 9855 | 1.0684 | 0.5714 | | 0.8598 | 46.0 | 10074 | 1.0667 | 0.5714 | | 0.8452 | 47.0 | 10293 | 1.0654 | 0.5952 | | 0.863 | 48.0 | 10512 | 1.0647 | 0.5952 | | 0.8292 | 49.0 | 10731 | 1.0643 | 0.5952 | | 0.7869 | 50.0 | 10950 | 1.0643 | 0.5952 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
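The `linear` scheduler with `warmup_ratio: 0.1` ramps the learning rate up over the first 10% of this run's 10,950 optimizer steps (219 steps/epoch over 50 epochs, matching the table above), then decays it linearly to zero. A small sketch of that schedule, mirroring the semantics of `get_linear_schedule_with_warmup`:

```python
def linear_schedule_lr(step, total_steps=10950, base_lr=1e-4, warmup_ratio=0.1):
    """Linear warmup followed by linear decay, as configured above."""
    warmup_steps = int(total_steps * warmup_ratio)  # 1095 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_lr(500))    # mid-warmup, lr still rising
print(linear_schedule_lr(5000))   # past warmup, lr decaying toward zero
```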
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_base_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_base_rms_00001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1476 - Accuracy: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0061 | 1.0 | 215 | 0.5807 | 0.8667 | | 0.0003 | 2.0 | 430 | 0.6211 | 0.8444 | | 0.0001 | 3.0 | 645 | 0.8059 | 0.8222 | | 0.0 | 4.0 | 860 | 0.8142 | 0.8444 | | 0.0 | 5.0 | 1075 | 0.8755 | 0.8222 | | 0.0 | 6.0 | 1290 | 0.9063 | 0.8444 | | 0.0 | 7.0 | 1505 | 0.9620 | 0.8667 | | 0.0 | 8.0 | 1720 | 0.9896 | 0.8667 | | 0.0 | 9.0 | 1935 | 1.0818 | 0.8667 | | 0.0 | 10.0 | 2150 | 1.1238 | 0.8667 | | 0.0 | 11.0 | 2365 | 1.1782 | 0.8667 | | 0.0 | 12.0 | 2580 | 1.2105 | 0.8667 | | 0.0 | 13.0 | 2795 | 1.2229 | 0.8667 | | 0.0 | 14.0 | 3010 | 1.2497 | 0.8667 | | 0.0 | 15.0 | 3225 | 1.2395 | 0.8667 | | 0.0 | 16.0 | 3440 | 1.2297 | 0.8889 | | 0.0 | 17.0 | 3655 | 1.2382 | 0.8889 | | 0.0 | 18.0 | 3870 | 1.2316 | 0.8667 | | 0.0 | 19.0 | 4085 | 1.2222 | 0.8889 | | 0.0 | 20.0 | 4300 | 1.2098 | 0.8889 | | 0.0 | 21.0 | 4515 | 1.2108 | 0.8889 | | 0.0 | 22.0 | 4730 | 1.2160 | 0.8667 | | 0.0 | 23.0 | 4945 | 1.1914 | 0.8889 | | 0.0 | 24.0 | 5160 | 1.2067 | 0.8667 | | 0.0 | 25.0 | 5375 | 1.1881 | 0.8667 | | 0.0 | 26.0 | 5590 | 1.1754 | 0.8667 | | 0.0 | 27.0 | 5805 | 1.1838 | 0.8667 | | 0.0 | 28.0 | 6020 | 1.1945 | 0.8444 | | 0.0 | 29.0 | 6235 | 1.1919 | 0.8444 | | 0.0 | 30.0 | 6450 | 1.1709 | 0.8444 | | 0.0 | 31.0 | 6665 | 1.1710 | 0.8444 | | 0.0 | 32.0 | 6880 | 1.1725 | 0.8444 | | 0.0 | 33.0 | 7095 | 1.1648 | 0.8444 | | 0.0 | 34.0 | 7310 | 1.1652 | 0.8444 | | 0.0 | 35.0 | 7525 | 1.1685 | 0.8444 | | 0.0 | 36.0 | 7740 | 1.1632 | 0.8444 | | 0.0 | 37.0 | 7955 | 1.1596 | 0.8667 | | 0.0 | 38.0 | 8170 | 1.1545 | 0.8667 | | 0.0 | 39.0 | 8385 | 1.1576 | 0.8444 | | 0.0 | 40.0 | 8600 | 1.1585 | 0.8667 | | 0.0 | 41.0 | 8815 | 1.1448 | 0.8667 | | 0.0 | 42.0 | 9030 | 1.1428 | 0.8667 | | 0.0 | 43.0 | 9245 | 1.1526 | 0.8667 | | 0.0 | 44.0 | 9460 | 1.1466 | 0.8667 | | 0.0 | 45.0 | 9675 | 1.1454 | 0.8667 | | 0.0 | 46.0 | 9890 | 1.1467 | 0.8667 | | 0.0 | 47.0 | 10105 | 1.1498 | 0.8667 | | 0.0 | 48.0 | 10320 | 1.1458 | 0.8667 | | 0.0 | 49.0 | 10535 | 1.1472 | 0.8667 | | 0.0 | 50.0 | 10750 | 1.1476 | 0.8667 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
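The log above shows the training loss hitting zero by epoch 4 while the validation loss keeps drifting upward, so most of the 50 epochs add nothing. A hedged sketch of the pieces needed to stop such a run early (the patience of 5 evaluation rounds is an arbitrary illustrative choice, and the output directory is assumed):

```python
from transformers import EarlyStoppingCallback, TrainingArguments

args = TrainingArguments(
    output_dir="hushem_40x_deit_base_rms_00001_fold1",  # assumed name
    evaluation_strategy="epoch",
    save_strategy="epoch",            # must match evaluation_strategy
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="accuracy",
    num_train_epochs=50,
)
stopper = EarlyStoppingCallback(early_stopping_patience=5)
# Pass `callbacks=[stopper]` when constructing the Trainer.
```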
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_small_adamax_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_small_adamax_001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.1541 - Accuracy: 0.7556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1447 | 1.0 | 215 | 1.0279 | 0.7556 | | 0.0939 | 2.0 | 430 | 1.6244 | 0.7333 | | 0.025 | 3.0 | 645 | 1.1738 | 0.8 | | 0.0505 | 4.0 | 860 | 1.8318 | 0.6667 | | 0.02 | 5.0 | 1075 | 1.9882 | 0.7333 | | 0.1072 | 6.0 | 1290 | 1.2076 | 0.7333 | | 0.0633 | 7.0 | 1505 | 1.6747 | 0.7333 | | 0.0604 | 8.0 | 1720 | 1.1018 | 0.8 | | 0.0484 | 9.0 | 1935 | 2.2857 | 0.6444 | | 0.0045 | 10.0 | 2150 | 2.0338 | 0.7556 | | 0.0317 | 11.0 | 2365 | 2.1474 | 0.7556 | | 0.0239 | 12.0 | 2580 | 1.5303 | 0.7778 | | 0.0001 | 13.0 | 2795 | 2.3569 | 0.6444 | | 0.0001 | 14.0 | 3010 | 2.2079 | 0.7556 | | 0.0 | 15.0 | 3225 | 1.6648 | 0.7778 | | 0.0065 | 16.0 | 3440 | 1.7779 | 0.7778 | | 0.0 | 17.0 | 3655 | 1.9802 | 0.7556 | | 0.0 | 18.0 | 3870 | 2.1669 | 0.7778 | | 0.0001 | 19.0 | 4085 | 1.9508 | 0.8 | | 0.0 | 20.0 | 4300 | 3.0396 | 0.6889 | | 0.0 | 21.0 | 4515 | 1.8449 | 0.7333 | | 0.0 | 22.0 | 4730 | 1.8614 | 0.7333 | | 0.0 | 23.0 | 4945 | 1.8711 | 0.7333 | | 0.0 | 24.0 | 5160 | 1.8758 | 0.7333 | | 0.0 | 25.0 | 5375 | 1.8839 | 0.7333 | | 0.0 | 26.0 | 5590 | 1.8890 | 0.7111 | | 0.0 | 27.0 | 5805 | 1.8959 | 0.7111 | | 0.0 | 28.0 | 6020 | 1.9021 | 0.7111 | | 0.0 | 29.0 | 6235 | 1.9100 | 0.7111 | | 0.0 | 30.0 | 6450 | 1.9180 | 0.7111 | | 0.0 | 31.0 | 6665 | 1.9279 | 0.7111 | | 0.0 | 32.0 | 6880 | 1.9382 | 0.7333 | | 0.0 | 33.0 | 7095 | 1.9497 | 0.7333 | | 0.0 | 34.0 | 7310 | 1.9619 | 0.7333 | | 0.0 | 35.0 | 7525 | 1.9743 | 0.7333 | | 0.0 | 36.0 | 7740 | 1.9878 | 0.7333 | | 0.0 | 37.0 | 7955 | 2.0026 | 0.7333 | | 0.0 | 38.0 | 8170 | 2.0159 | 0.7333 | | 0.0 | 39.0 | 8385 | 2.0312 | 0.7333 | | 0.0 | 40.0 | 8600 | 2.0457 | 0.7333 | | 0.0 | 41.0 | 8815 | 2.0615 | 0.7333 | | 0.0 | 42.0 | 9030 | 2.0758 | 0.7333 | | 0.0 | 43.0 | 9245 | 2.0899 | 0.7333 | | 0.0 | 44.0 | 9460 | 2.1029 | 0.7333 | | 0.0 | 45.0 | 9675 | 2.1161 | 0.7333 | | 0.0 | 46.0 | 9890 | 2.1279 | 0.7556 | | 0.0 | 47.0 | 10105 | 2.1385 | 0.7556 | | 0.0 | 48.0 | 10320 | 2.1469 | 0.7556 | | 0.0 | 49.0 | 10535 | 2.1525 | 0.7556 | | 0.0 | 50.0 | 10750 | 2.1541 | 0.7556 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
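All of the checkpoints in this series start from a 1000-class ImageNet head, which has to be replaced for the four morphology classes. A minimal setup sketch, with label names taken from the list below:

```python
from transformers import AutoModelForImageClassification

labels = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]
model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-small-patch16-224",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # drop the 1000-class ImageNet head
)
```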
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_tiny_sgd_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_sgd_0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1237 - Accuracy: 0.5122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4318 | 1.0 | 220 | 1.6149 | 0.1707 | | 1.3729 | 2.0 | 440 | 1.5770 | 0.1951 | | 1.3561 | 3.0 | 660 | 1.5432 | 0.1707 | | 1.3096 | 4.0 | 880 | 1.5177 | 0.1463 | | 1.2756 | 5.0 | 1100 | 1.4946 | 0.1951 | | 1.2485 | 6.0 | 1320 | 1.4738 | 0.2195 | | 1.1719 | 7.0 | 1540 | 1.4569 | 0.2195 | | 1.1324 | 8.0 | 1760 | 1.4401 | 0.2439 | | 1.1522 | 9.0 | 1980 | 1.4251 | 0.2683 | | 1.1548 | 10.0 | 2200 | 1.4097 | 0.2439 | | 1.1099 | 11.0 | 2420 | 1.3960 | 0.2683 | | 1.0799 | 12.0 | 2640 | 1.3821 | 0.2683 | | 1.072 | 13.0 | 2860 | 1.3689 | 0.2927 | | 1.0381 | 14.0 | 3080 | 1.3552 | 0.3171 | | 1.0533 | 15.0 | 3300 | 1.3423 | 0.2927 | | 1.0294 | 16.0 | 3520 | 1.3293 | 0.3171 | | 1.004 | 17.0 | 3740 | 1.3169 | 0.3171 | | 1.0138 | 18.0 | 3960 | 1.3048 | 0.3171 | | 0.9902 | 19.0 | 4180 | 1.2935 | 0.3171 | | 0.9047 | 20.0 | 4400 | 1.2817 | 0.3171 | | 0.9213 | 21.0 | 4620 | 1.2707 | 0.3415 | | 0.9555 | 22.0 | 4840 | 1.2595 | 0.3415 | | 0.9607 | 23.0 | 5060 | 1.2491 | 0.3415 | | 0.9344 | 24.0 | 5280 | 1.2391 | 0.3415 | | 0.8688 | 25.0 | 5500 | 1.2295 | 0.3902 | | 0.9175 | 26.0 | 5720 | 1.2208 | 0.4146 | | 0.887 | 27.0 | 5940 | 1.2120 | 0.4390 | | 0.905 | 28.0 | 6160 | 1.2036 | 0.4634 | | 0.8477 | 29.0 | 6380 | 1.1957 | 0.4878 | | 0.8486 | 30.0 | 6600 | 1.1887 | 0.4878 | | 0.9203 | 31.0 | 6820 | 1.1822 | 0.4878 | | 0.8893 | 32.0 | 7040 | 1.1760 | 0.4878 | | 0.8469 | 33.0 | 7260 | 1.1702 | 0.4878 | | 0.7935 | 34.0 | 7480 | 1.1645 | 0.4878 | | 0.7904 | 35.0 | 7700 | 1.1593 | 0.4878 | | 0.7994 | 36.0 | 7920 | 1.1544 | 0.5122 | | 0.8205 | 37.0 | 8140 | 1.1499 | 0.5122 | | 0.8696 | 38.0 | 8360 | 1.1458 | 0.5122 | | 0.8262 | 39.0 | 8580 | 1.1421 | 0.5122 | | 0.7584 | 40.0 | 8800 | 1.1388 | 0.5122 | | 0.8457 | 41.0 | 9020 | 1.1358 | 0.5122 | | 0.8307 | 42.0 | 9240 | 1.1331 | 0.5122 | | 0.8183 | 43.0 | 9460 | 1.1307 | 0.5122 | | 0.7718 | 44.0 | 9680 | 1.1287 | 0.5122 | | 0.7855 | 45.0 | 9900 | 1.1271 | 0.5122 | | 0.7875 | 46.0 | 10120 | 1.1258 | 0.5122 | | 0.8109 | 47.0 | 10340 | 1.1248 | 0.5122 | | 0.7297 | 48.0 | 10560 | 1.1241 | 0.5122 | | 0.7352 | 49.0 | 10780 | 1.1238 | 0.5122 | | 0.7935 | 50.0 | 11000 | 1.1237 | 0.5122 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
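Since each card in the `hushem_40x_deit_tiny_sgd_0001` series reports one final accuracy per fold, a cross-validation summary is just the mean and spread of those numbers. A quick sketch using the folds quoted in this excerpt (fold 1 is not shown here, so only folds 2-5 are included):

```python
from statistics import mean, stdev

# Final evaluation accuracies from the hushem_40x_deit_tiny_sgd_0001 cards above.
fold_accuracy = {2: 0.4667, 3: 0.6744, 4: 0.5952, 5: 0.5122}

print(f"mean accuracy: {mean(fold_accuracy.values()):.4f}")  # ~0.5621
print(f"std  accuracy: {stdev(fold_accuracy.values()):.4f}")
```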
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_small_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_small_adamax_001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.2920 - Accuracy: 0.7333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1775 | 1.0 | 215 | 1.6855 | 0.7111 | | 0.1537 | 2.0 | 430 | 1.3524 | 0.7111 | | 0.0687 | 3.0 | 645 | 2.1272 | 0.7333 | | 0.0127 | 4.0 | 860 | 1.6443 | 0.7778 | | 0.1338 | 5.0 | 1075 | 1.6931 | 0.7111 | | 0.0106 | 6.0 | 1290 | 2.4757 | 0.6667 | | 0.049 | 7.0 | 1505 | 2.6204 | 0.6889 | | 0.0012 | 8.0 | 1720 | 1.8192 | 0.7333 | | 0.0005 | 9.0 | 1935 | 1.7811 | 0.7556 | | 0.0005 | 10.0 | 2150 | 2.2694 | 0.6889 | | 0.0153 | 11.0 | 2365 | 1.6459 | 0.7333 | | 0.0005 | 12.0 | 2580 | 1.8151 | 0.7778 | | 0.0072 | 13.0 | 2795 | 1.9954 | 0.7556 | | 0.0 | 14.0 | 3010 | 2.3490 | 0.7778 | | 0.0073 | 15.0 | 3225 | 2.3310 | 0.7556 | | 0.0002 | 16.0 | 3440 | 2.4489 | 0.6667 | | 0.0001 | 17.0 | 3655 | 2.8003 | 0.6222 | | 0.0 | 18.0 | 3870 | 2.6717 | 0.7333 | | 0.0 | 19.0 | 4085 | 2.6848 | 0.7333 | | 0.0 | 20.0 | 4300 | 2.6999 | 0.7333 | | 0.0 | 21.0 | 4515 | 2.7166 | 0.7333 | | 0.0 | 22.0 | 4730 | 2.7339 | 0.7333 | | 0.0 | 23.0 | 4945 | 2.7519 | 0.7333 | | 0.0 | 24.0 | 5160 | 2.7709 | 0.7333 | | 0.0 | 25.0 | 5375 | 2.7907 | 0.7333 | | 0.0 | 26.0 | 5590 | 2.8115 | 0.7333 | | 0.0 | 27.0 | 5805 | 2.8327 | 0.7333 | | 0.0 | 28.0 | 6020 | 2.8548 | 0.7333 | | 0.0 | 29.0 | 6235 | 2.8773 | 0.7333 | | 0.0 | 30.0 | 6450 | 2.9001 | 0.7333 | | 0.0 | 31.0 | 6665 | 2.9234 | 0.7333 | | 0.0 | 32.0 | 6880 | 2.9473 | 0.7333 | | 0.0 | 33.0 | 7095 | 2.9712 | 0.7333 | | 0.0 | 34.0 | 7310 | 2.9955 | 0.7333 | | 0.0 | 35.0 | 7525 | 3.0198 | 0.7333 | | 0.0 | 36.0 | 7740 | 3.0443 | 0.7333 | | 0.0 | 37.0 | 7955 | 3.0682 | 0.7333 | | 0.0 | 38.0 | 8170 | 3.0917 | 0.7333 | | 0.0 | 39.0 | 8385 | 3.1162 | 0.7333 | | 0.0 | 40.0 | 8600 | 3.1397 | 0.7333 | | 0.0 | 41.0 | 8815 | 3.1619 | 0.7333 | | 0.0 | 42.0 | 9030 | 3.1849 | 0.7333 | | 0.0 | 43.0 | 9245 | 3.2057 | 0.7333 | | 0.0 | 44.0 | 9460 | 3.2253 | 0.7333 | | 0.0 | 45.0 | 9675 | 3.2434 | 0.7333 | | 0.0 | 46.0 | 9890 | 3.2592 | 0.7333 | | 0.0 | 47.0 | 10105 | 3.2727 | 0.7333 | | 0.0 | 48.0 | 10320 | 3.2833 | 0.7333 | | 0.0 | 49.0 | 10535 | 3.2902 | 0.7333 | | 0.0 | 50.0 | 10750 | 3.2920 | 0.7333 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
SaladSlayer00/new_model
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # SaladSlayer00/new_model This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2935 - Validation Loss: 1.6986 - Validation Accuracy: 0.5619 - Epoch: 11 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Validation Accuracy | Epoch | |:----------:|:---------------:|:-------------------:|:-----:| | 7.0613 | 4.8451 | 0.0134 | 0 | | 4.6529 | 4.5201 | 0.0658 | 1 | | 4.3215 | 4.1158 | 0.0992 | 2 | | 3.8808 | 3.6981 | 0.1806 | 3 | | 3.4497 | 3.2741 | 0.2553 | 4 | | 3.0361 | 2.9681 | 0.3177 | 5 | | 2.6734 | 2.6529 | 0.3690 | 6 | | 2.3306 | 2.3803 | 0.4091 | 7 | | 2.0284 | 2.1731 | 0.4738 | 8 | | 1.7542 | 1.9839 | 0.4883 | 9 | | 1.5084 | 1.8335 | 0.5284 | 10 | | 1.2935 | 1.6986 | 0.5619 | 11 | ### Framework versions - Transformers 4.36.2 - TensorFlow 2.15.0 - Datasets 2.16.0 - Tokenizers 0.15.0
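This card was trained with Keras rather than the PyTorch `Trainer`. A sketch recreating the optimizer block listed above with the `AdamWeightDecay` class from `transformers` (which applies weight decay decoupled from the Adam update; requires TensorFlow installed):

```python
from transformers import AdamWeightDecay

# Mirrors the optimizer configuration reported in the card above.
optimizer = AdamWeightDecay(
    learning_rate=5e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
# Typically passed to model.compile(optimizer=optimizer, ...) on a
# TFAutoModelForImageClassification checkpoint such as microsoft/resnet-50.
```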
[ "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "ben_affleck", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "tom_ellis", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", 
"danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "danielle_panabaker", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "madelaine_petsch", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", 
"katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "katharine_mcphee", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "camila_mendes", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "melissa_fumero", "anthony_mackie", 
"anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "anthony_mackie", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "natalie_portman", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", 
"cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "cristiano_ronaldo", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "tom_hiddleston", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", 
"logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "logan_lerman", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "lili_reinhart", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "elon_musk", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", 
"bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "bobby_morley", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "brie_larson", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", 
"josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "josh_radnor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "eliza_taylor", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "alexandra_daddario", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", 
"krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "krysten_ritter", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "zendaya", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "jeff_bezos", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", 
"gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "gal_gadot", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "zoe_saldana", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", 
"shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "shakira_isabel_mebarak", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "mark_zuckerberg", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", 
"marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "marie_avgeropoulos", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "neil_patrick_harris", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", 
"chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "chris_hemsworth", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "elizabeth_lail", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", 
"richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "richard_harmon", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "chris_evans", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "kiernen_shipka", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", 
"natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "natalie_dormer", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "alvaro_morte", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", 
"stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "stephen_amell", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "alex_lawther", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "irina_shayk", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", 
"amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "amanda_crew", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "wentworth_miller", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", 
"katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "katherine_langford", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "penn_badgley", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", "barack_obama", 
"barack_obama", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "christian_bale", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "nadia_hilker", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", 
"morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "morena_baccarin", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "chris_pratt", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", 
"anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "anne_hathaway", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "emma_stone", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "ellen_page", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", 
"robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "robert_de_niro", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "tom_holland", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", 
"sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "sarah_wayne_callies", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "inbar_lavi", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "scarlett_johansson", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", 
"tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "tom_hardy", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "megan_fox", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "pedro_alonso", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", 
"brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "brenton_thwaites", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "keanu_reeves", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", 
"andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "andy_samberg", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "rebecca_ferguson", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", 
"alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "alycia_dabnem_carey", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "dwayne_johnson", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "rihanna", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", 
"miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "miley_cyrus", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "zac_efron", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "amber_heard", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", 
"robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "robert_downey_jr", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "leonardo_dicaprio", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", 
"selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "selena_gomez", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "barbara_palvin", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", 
"emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "emilia_clarke", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "morgan_freeman", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", 
"gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "gwyneth_paltrow", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "maria_pedraza", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "jeremy_renner", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", 
"tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "tom_cruise", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "jimmy_fallon", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "hugh_jackman", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", 
"sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "sophie_turner", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "tuppence_middleton", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", 
"jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jessica_barden", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "jennifer_lawrence", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", 
"millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "millie_bobby_brown", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "ursula_corbero", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", 
"bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "bill_gates", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "mark_ruffalo", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "avril_lavigne", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", 
"maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "maisie_williams", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "margot_robbie", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", 
"elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "elizabeth_olsen", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "brian_j._smith", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", 
"grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "grant_gustin", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "rami_malek", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "taylor_swift", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", 
"emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "emma_watson", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "jake_mcdorman", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "adriana_lima", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", 
"henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "henry_cavil", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "lindsey_morgan", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", 
"dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "dominic_purcell", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "jason_momoa", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "johnny_depp", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", 
"lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "lionel_messi", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata", "beatrice_insalata" ]
hkivancoral/hushem_40x_deit_base_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_base_rms_00001_fold2

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2145
- Accuracy: 0.7778

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0124 | 1.0 | 215 | 0.9769 | 0.7556 |
| 0.0003 | 2.0 | 430 | 1.1164 | 0.7556 |
| 0.0001 | 3.0 | 645 | 1.2999 | 0.7556 |
| 0.0 | 4.0 | 860 | 1.4171 | 0.7556 |
| 0.0 | 5.0 | 1075 | 1.5668 | 0.7778 |
| 0.0 | 6.0 | 1290 | 1.6850 | 0.7778 |
| 0.0 | 7.0 | 1505 | 1.8146 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.9589 | 0.7778 |
| 0.0 | 9.0 | 1935 | 2.1064 | 0.8 |
| 0.0 | 10.0 | 2150 | 2.2093 | 0.8 |
| 0.0 | 11.0 | 2365 | 2.2933 | 0.8 |
| 0.0 | 12.0 | 2580 | 2.3766 | 0.8 |
| 0.0 | 13.0 | 2795 | 2.4083 | 0.7778 |
| 0.0 | 14.0 | 3010 | 2.4352 | 0.7778 |
| 0.0 | 15.0 | 3225 | 2.4429 | 0.7778 |
| 0.0 | 16.0 | 3440 | 2.4405 | 0.7778 |
| 0.0 | 17.0 | 3655 | 2.4464 | 0.7778 |
| 0.0 | 18.0 | 3870 | 2.4337 | 0.7778 |
| 0.0 | 19.0 | 4085 | 2.4439 | 0.7778 |
| 0.0 | 20.0 | 4300 | 2.4205 | 0.7778 |
| 0.0 | 21.0 | 4515 | 2.4211 | 0.7778 |
| 0.0 | 22.0 | 4730 | 2.4042 | 0.7778 |
| 0.0 | 23.0 | 4945 | 2.3825 | 0.7778 |
| 0.0 | 24.0 | 5160 | 2.3776 | 0.7778 |
| 0.0 | 25.0 | 5375 | 2.3705 | 0.7778 |
| 0.0 | 26.0 | 5590 | 2.3563 | 0.7778 |
| 0.0 | 27.0 | 5805 | 2.3321 | 0.7778 |
| 0.0 | 28.0 | 6020 | 2.3284 | 0.7778 |
| 0.0 | 29.0 | 6235 | 2.3256 | 0.7778 |
| 0.0 | 30.0 | 6450 | 2.3054 | 0.7778 |
| 0.0 | 31.0 | 6665 | 2.2910 | 0.7778 |
| 0.0 | 32.0 | 6880 | 2.2963 | 0.7778 |
| 0.0 | 33.0 | 7095 | 2.2902 | 0.7778 |
| 0.0 | 34.0 | 7310 | 2.2745 | 0.7778 |
| 0.0 | 35.0 | 7525 | 2.2617 | 0.7778 |
| 0.0 | 36.0 | 7740 | 2.2546 | 0.7778 |
| 0.0 | 37.0 | 7955 | 2.2630 | 0.7778 |
| 0.0 | 38.0 | 8170 | 2.2430 | 0.7778 |
| 0.0 | 39.0 | 8385 | 2.2389 | 0.7778 |
| 0.0 | 40.0 | 8600 | 2.2433 | 0.7778 |
| 0.0 | 41.0 | 8815 | 2.2306 | 0.7778 |
| 0.0 | 42.0 | 9030 | 2.2253 | 0.7778 |
| 0.0 | 43.0 | 9245 | 2.2215 | 0.7778 |
| 0.0 | 44.0 | 9460 | 2.2183 | 0.7778 |
| 0.0 | 45.0 | 9675 | 2.2187 | 0.7778 |
| 0.0 | 46.0 | 9890 | 2.2190 | 0.7778 |
| 0.0 | 47.0 | 10105 | 2.2156 | 0.7778 |
| 0.0 | 48.0 | 10320 | 2.2160 | 0.7778 |
| 0.0 | 49.0 | 10535 | 2.2147 | 0.7778 |
| 0.0 | 50.0 | 10750 | 2.2145 | 0.7778 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_40x_deit_small_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_40x_deit_small_adamax_001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0018
- Accuracy: 0.8140

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2113 | 1.0 | 217 | 1.0639 | 0.7209 |
| 0.3075 | 2.0 | 434 | 0.6999 | 0.7442 |
| 0.0797 | 3.0 | 651 | 1.4112 | 0.7209 |
| 0.0613 | 4.0 | 868 | 0.8895 | 0.8605 |
| 0.0448 | 5.0 | 1085 | 0.8165 | 0.8140 |
| 0.0133 | 6.0 | 1302 | 1.2281 | 0.7907 |
| 0.0099 | 7.0 | 1519 | 1.6935 | 0.7907 |
| 0.0195 | 8.0 | 1736 | 0.9261 | 0.8837 |
| 0.0441 | 9.0 | 1953 | 0.6136 | 0.8605 |
| 0.0408 | 10.0 | 2170 | 1.0937 | 0.8605 |
| 0.0001 | 11.0 | 2387 | 1.3536 | 0.8372 |
| 0.0014 | 12.0 | 2604 | 1.5056 | 0.8372 |
| 0.0152 | 13.0 | 2821 | 1.3542 | 0.8140 |
| 0.0011 | 14.0 | 3038 | 1.1435 | 0.8140 |
| 0.0006 | 15.0 | 3255 | 1.7874 | 0.7907 |
| 0.0244 | 16.0 | 3472 | 1.5609 | 0.8140 |
| 0.0 | 17.0 | 3689 | 0.9143 | 0.9070 |
| 0.0 | 18.0 | 3906 | 1.3119 | 0.8140 |
| 0.0 | 19.0 | 4123 | 1.5264 | 0.8372 |
| 0.0024 | 20.0 | 4340 | 1.6055 | 0.8140 |
| 0.0 | 21.0 | 4557 | 1.7071 | 0.8140 |
| 0.0 | 22.0 | 4774 | 1.6943 | 0.8140 |
| 0.0 | 23.0 | 4991 | 1.6871 | 0.8140 |
| 0.0 | 24.0 | 5208 | 1.6854 | 0.8140 |
| 0.0 | 25.0 | 5425 | 1.6881 | 0.8140 |
| 0.0 | 26.0 | 5642 | 1.6930 | 0.8140 |
| 0.0 | 27.0 | 5859 | 1.6999 | 0.8140 |
| 0.0 | 28.0 | 6076 | 1.7095 | 0.8140 |
| 0.0 | 29.0 | 6293 | 1.7201 | 0.8140 |
| 0.0 | 30.0 | 6510 | 1.7321 | 0.8140 |
| 0.0 | 31.0 | 6727 | 1.7453 | 0.8140 |
| 0.0 | 32.0 | 6944 | 1.7591 | 0.8140 |
| 0.0 | 33.0 | 7161 | 1.7739 | 0.8140 |
| 0.0 | 34.0 | 7378 | 1.7893 | 0.8140 |
| 0.0 | 35.0 | 7595 | 1.8052 | 0.8140 |
| 0.0 | 36.0 | 7812 | 1.8215 | 0.8140 |
| 0.0 | 37.0 | 8029 | 1.8380 | 0.8140 |
| 0.0 | 38.0 | 8246 | 1.8542 | 0.8140 |
| 0.0 | 39.0 | 8463 | 1.8709 | 0.8140 |
| 0.0 | 40.0 | 8680 | 1.8874 | 0.8140 |
| 0.0 | 41.0 | 8897 | 1.9038 | 0.8140 |
| 0.0 | 42.0 | 9114 | 1.9194 | 0.8140 |
| 0.0 | 43.0 | 9331 | 1.9350 | 0.8140 |
| 0.0 | 44.0 | 9548 | 1.9494 | 0.8140 |
| 0.0 | 45.0 | 9765 | 1.9631 | 0.8140 |
| 0.0 | 46.0 | 9982 | 1.9753 | 0.8140 |
| 0.0 | 47.0 | 10199 | 1.9864 | 0.8140 |
| 0.0 | 48.0 | 10416 | 1.9949 | 0.8140 |
| 0.0 | 49.0 | 10633 | 2.0003 | 0.8140 |
| 0.0 | 50.0 | 10850 | 2.0018 | 0.8140 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]