model_id: stringlengths 7–105
model_card: stringlengths 1–130k
model_labels: listlengths 2–80k
Naterea/vxbcsdf123-0
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics - loss: 0.5112138390541077 - f1: 0.7499999999999999 - precision: 0.6 - recall: 1.0 - auc: 1.0 - accuracy: 0.6
[ "noyt", "yt" ]
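The metrics in the card above are internally consistent, which can be checked with the standard harmonic-mean definition of F1 (a minimal sketch; the formula is standard and not part of the card itself):

```python
# Cross-check the reported F1 against the card's precision and recall.
# f1 = 2*P*R / (P + R); with P = 0.6 and R = 1.0 this is 0.75,
# matching the card's 0.7499999999999999 up to floating-point rounding.
precision, recall = 0.6, 1.0
f1 = 2 * precision * recall / (precision + recall)
print(f1)
```

The same check applies to the other AutoTrain cards in this dump, e.g. precision 0.5 and recall 1.0 giving the reported f1 of 0.6666666666666666.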
Naterea/yt_noyt_V2_MoreData_3-0
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics - loss: 0.509366512298584 - f1: 0.0 - precision: 0.0 - recall: 0.0 - auc: 1.0 - accuracy: 0.6052631578947368
[ "noyt", "yt" ]
moreover18/vit-base-patch16-224-in21k-YB
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-YB This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3922 - Accuracy: 0.8220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5973 | 0.49 | 100 | 0.4747 | 0.7797 | | 0.4672 | 0.99 | 200 | 0.4363 | 0.7979 | | 0.3914 | 1.48 | 300 | 0.4090 | 0.8115 | | 0.3749 | 1.97 | 400 | 0.4001 | 0.8189 | | 0.3281 | 2.47 | 500 | 0.4023 | 0.8183 | | 0.3187 | 2.96 | 600 | 0.3922 | 0.8220 | ### Framework versions - Transformers 4.36.2 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.15.0
[ "not_politics", "politics" ]
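The hyperparameters in the card above imply the stated effective batch size, and the scheduler settings can be sketched as a plain function (an illustrative re-implementation of the linear-warmup shape, not the card's code; `total_steps=608` is an estimate read off the results table, where step 600 falls at epoch 2.96):

```python
# Effective batch size from the card's hyperparameters:
train_batch_size = 16
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching the card's total_train_batch_size

# Shape of a linear schedule with warmup_ratio 0.1: LR ramps from 0 to the
# base LR over the first 10% of steps, then decays linearly back to 0.
def linear_warmup_lr(step, base_lr=5e-5, total_steps=608, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)  # 60 warmup steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_warmup_lr(0))    # 0.0 at the first step
print(linear_warmup_lr(60))   # peak LR of 5e-05 at the end of warmup
```

In practice the Trainer builds this schedule via `get_linear_schedule_with_warmup`; the function above only mirrors its shape for illustration.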
hkivancoral/smids_5x_deit_base_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_00001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7394 - Accuracy: 0.9098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1851 | 1.0 | 376 | 0.2437 | 0.8998 | | 0.1172 | 2.0 | 752 | 0.2452 | 0.9065 | | 0.078 | 3.0 | 1128 | 0.3051 | 0.9032 | | 0.0038 | 4.0 | 1504 | 0.3755 | 0.9149 | | 0.0013 | 5.0 | 1880 | 0.4803 | 0.9048 | | 0.0105 | 6.0 | 2256 | 0.5332 | 0.8948 | | 0.0008 | 7.0 | 2632 | 0.5088 | 0.9015 | | 0.0133 | 8.0 | 3008 | 0.5291 | 0.9149 | | 0.0349 | 9.0 | 3384 | 0.6409 | 0.9048 | | 0.0001 | 10.0 | 3760 | 0.6103 | 0.8998 | | 0.0039 | 11.0 | 4136 | 0.6150 | 0.9065 | | 0.0005 | 12.0 | 4512 | 0.7088 | 0.8948 | | 0.0 | 13.0 | 4888 | 0.6260 | 0.8965 | | 0.0213 | 14.0 | 5264 | 0.6512 | 0.9065 | | 0.0001 | 15.0 | 5640 | 0.6705 | 0.8965 | | 0.0 | 16.0 | 6016 | 0.6402 | 0.9098 | | 0.0 | 17.0 | 6392 | 0.7356 | 0.9015 | | 0.0 | 18.0 | 6768 | 0.6866 | 0.8932 | | 0.0035 | 19.0 | 7144 | 0.7211 | 0.8982 | | 0.0098 | 20.0 | 7520 | 0.7353 | 0.8982 | | 0.0 | 21.0 | 7896 | 0.7497 | 0.9032 | | 0.0001 | 22.0 | 8272 | 0.7881 | 0.9015 | | 0.0 | 23.0 | 8648 | 0.7075 | 0.9065 | | 0.0 | 24.0 | 9024 | 0.8340 | 0.8948 | | 0.0 | 25.0 | 9400 | 0.8050 | 0.9032 | | 0.0028 | 26.0 | 9776 | 0.7114 | 0.8982 | | 0.0 | 27.0 | 10152 | 0.6978 | 0.9048 | | 0.0 | 28.0 | 10528 | 0.7140 | 0.9032 | | 0.0032 | 29.0 | 10904 | 0.6871 | 0.9098 | | 0.0032 | 30.0 | 11280 | 0.7619 | 0.9032 | | 0.0 | 31.0 | 11656 | 0.7031 | 0.9082 | | 0.0 | 32.0 | 12032 | 0.7126 | 0.9082 | | 0.0 | 33.0 | 12408 | 0.7501 | 0.9082 | | 0.0 | 34.0 | 12784 | 0.7212 | 0.9149 | | 0.0 | 35.0 | 13160 | 0.7433 | 0.9098 | | 0.0 | 36.0 | 13536 | 0.7330 | 0.9132 | | 0.0 | 37.0 | 13912 | 0.7531 | 0.9065 | | 0.0 | 38.0 | 14288 | 0.7193 | 0.9098 | | 0.0 | 39.0 | 14664 | 0.7113 | 0.9132 | | 0.0 | 40.0 | 15040 | 0.7484 | 0.9149 | | 0.0 | 41.0 | 15416 | 0.7482 | 0.9132 | | 0.0 | 42.0 | 15792 | 0.7262 | 0.9132 | | 0.0 | 43.0 | 16168 | 0.7432 | 0.9149 | | 0.0 | 44.0 | 16544 | 0.7418 | 0.9149 | | 0.0 | 45.0 | 16920 | 0.7350 | 0.9115 | | 0.0025 | 46.0 | 17296 | 0.7363 | 0.9115 | | 0.0 | 47.0 | 17672 | 0.7386 | 0.9098 | | 0.0 | 48.0 | 18048 | 0.7382 | 0.9098 | | 0.0 | 49.0 | 18424 | 0.7382 | 0.9098 | | 0.0023 | 50.0 | 18800 | 0.7394 | 0.9098 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
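The step counts in the card above can be cross-checked against its batch size and epoch count (a minimal sketch using only numbers from the card; the sample count is an upper bound, since the last batch of an epoch may be partial):

```python
# Sanity-check the step counts in the fold1 training table.
steps_per_epoch = 376        # step column of the epoch-1 row
train_batch_size = 32        # from the hyperparameters
num_epochs = 50

total_steps = steps_per_epoch * num_epochs
print(total_steps)           # 18800, matching the table's final step

# Approximate training-set size implied by the step count
# (upper bound: a trailing partial batch would still count as one step).
approx_train_samples = steps_per_epoch * train_batch_size
print(approx_train_samples)  # 12032
```

The same arithmetic holds for the other folds in this dump, which log 375 steps per epoch and finish at step 18750.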
hkivancoral/smids_5x_deit_base_rms_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_0001_fold1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8795 - Accuracy: 0.9082 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.232 | 1.0 | 376 | 0.3558 | 0.8531 | | 0.1372 | 2.0 | 752 | 0.3215 | 0.8898 | | 0.1203 | 3.0 | 1128 | 0.3886 | 0.8881 | | 0.0697 | 4.0 | 1504 | 0.4340 | 0.8765 | | 0.0324 | 5.0 | 1880 | 0.5132 | 0.8865 | | 0.0728 | 6.0 | 2256 | 0.5310 | 0.8881 | | 0.0495 | 7.0 | 2632 | 0.6186 | 0.8781 | | 0.0797 | 8.0 | 3008 | 0.5758 | 0.8881 | | 0.0121 | 9.0 | 3384 | 0.5420 | 0.8998 | | 0.0399 | 10.0 | 3760 | 0.6888 | 0.8815 | | 0.0293 | 11.0 | 4136 | 0.5816 | 0.9065 | | 0.0382 | 12.0 | 4512 | 0.6430 | 0.8781 | | 0.0148 | 13.0 | 4888 | 0.8464 | 0.8781 | | 0.0249 | 14.0 | 5264 | 0.5972 | 0.8848 | | 0.0008 | 15.0 | 5640 | 0.6413 | 0.8965 | | 0.0409 | 16.0 | 6016 | 0.7158 | 0.8798 | | 0.0202 | 17.0 | 6392 | 0.7167 | 0.8815 | | 0.0253 | 18.0 | 6768 | 0.6005 | 0.8998 | | 0.0002 | 19.0 | 7144 | 0.6775 | 0.8948 | | 0.0275 | 20.0 | 7520 | 0.7685 | 0.8948 | | 0.0272 | 21.0 | 7896 | 0.6847 | 0.8965 | | 0.0003 | 22.0 | 8272 | 0.7613 | 0.8915 | | 0.0008 | 23.0 | 8648 | 0.7342 | 0.8865 | | 0.0001 | 24.0 | 9024 | 0.7590 | 0.8698 | | 0.0 | 25.0 | 9400 | 0.7885 | 0.8898 | | 0.0032 | 26.0 | 9776 | 0.6867 | 0.8982 | | 0.0 | 27.0 | 10152 | 0.7507 | 0.8948 | | 0.0003 | 28.0 | 10528 | 0.7142 | 0.8848 | | 0.0043 | 29.0 | 10904 | 0.6904 | 0.8881 | | 0.0066 | 30.0 | 11280 | 0.7736 | 0.8898 | | 0.0007 | 31.0 | 11656 | 0.7358 | 0.8998 | | 0.0006 | 32.0 | 12032 | 1.0875 | 0.8598 | | 0.0 | 33.0 | 12408 | 0.7340 | 0.9015 | | 0.0 | 34.0 | 12784 | 0.7139 | 0.8982 | | 0.0 | 35.0 | 13160 | 0.7525 | 0.9115 | | 0.0002 | 36.0 | 13536 | 0.7504 | 0.8982 | | 0.0 | 37.0 | 13912 | 0.8006 | 0.8982 | | 0.0 | 38.0 | 14288 | 0.7615 | 0.9015 | | 0.0 | 39.0 | 14664 | 0.7609 | 0.9115 | | 0.0 | 40.0 | 15040 | 0.8059 | 0.9015 | | 0.0 | 41.0 | 15416 | 0.8037 | 0.9032 | | 0.0 | 42.0 | 15792 | 0.8697 | 0.9048 | | 0.0 | 43.0 | 16168 | 0.8414 | 0.9115 | | 0.0 | 44.0 | 16544 | 0.8687 | 0.9098 | | 0.0 | 45.0 | 16920 | 0.8833 | 0.9065 | | 0.0031 | 46.0 | 17296 | 0.8963 | 0.9065 | | 0.0 | 47.0 | 17672 | 0.8765 | 0.9082 | | 0.0 | 48.0 | 18048 | 0.8724 | 0.9082 | | 0.0 | 49.0 | 18424 | 0.8783 | 0.9082 | | 0.0025 | 50.0 | 18800 | 0.8795 | 0.9082 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
Naterea/YT_REDD_V1_BASE-0
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics - loss: 0.12359786778688431 - f1: 1.0 - precision: 1.0 - recall: 1.0 - auc: 0.9999999999999999 - accuracy: 1.0
[ "redd", "short" ]
Naterea/YT_REDD_V2_BASE-0
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics - loss: 14316332.0 - f1: 0.6666666666666666 - precision: 0.5 - recall: 1.0 - auc: 0.9694656488549619 - accuracy: 0.5
[ "redd", "short" ]
hkivancoral/smids_5x_deit_base_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_00001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0612 - Accuracy: 0.8852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2146 | 1.0 | 375 | 0.2894 | 0.8952 | | 0.1437 | 2.0 | 750 | 0.3102 | 0.8935 | | 0.0617 | 3.0 | 1125 | 0.4639 | 0.8902 | | 0.0286 | 4.0 | 1500 | 0.5331 | 0.8935 | | 0.0028 | 5.0 | 1875 | 0.6331 | 0.8802 | | 0.0027 | 6.0 | 2250 | 0.6763 | 0.8902 | | 0.0012 | 7.0 | 2625 | 0.7498 | 0.8835 | | 0.0002 | 8.0 | 3000 | 0.6659 | 0.8952 | | 0.0 | 9.0 | 3375 | 0.7256 | 0.8935 | | 0.0209 | 10.0 | 3750 | 0.7936 | 0.8769 | | 0.0042 | 11.0 | 4125 | 0.8361 | 0.8802 | | 0.001 | 12.0 | 4500 | 0.8162 | 0.8852 | | 0.0011 | 13.0 | 4875 | 0.8014 | 0.8968 | | 0.0 | 14.0 | 5250 | 0.8392 | 0.8885 | | 0.0018 | 15.0 | 5625 | 0.9229 | 0.8752 | | 0.0 | 16.0 | 6000 | 0.8989 | 0.8869 | | 0.0029 | 17.0 | 6375 | 0.8923 | 0.8902 | | 0.0 | 18.0 | 6750 | 0.8680 | 0.8852 | | 0.0112 | 19.0 | 7125 | 0.9026 | 0.8852 | | 0.0 | 20.0 | 7500 | 0.9170 | 0.8902 | | 0.0 | 21.0 | 7875 | 1.0300 | 0.8735 | | 0.0 | 22.0 | 8250 | 0.8953 | 0.8885 | | 0.0 | 23.0 | 8625 | 0.9292 | 0.8918 | | 0.0001 | 24.0 | 9000 | 0.9442 | 0.8935 | | 0.0 | 25.0 | 9375 | 0.9984 | 0.8952 | | 0.0 | 26.0 | 9750 | 1.0751 | 0.8885 | | 0.0 | 27.0 | 10125 | 1.0903 | 0.8819 | | 0.0 | 28.0 | 10500 | 1.0301 | 0.8852 | | 0.0 | 29.0 | 10875 | 1.0019 | 0.8885 | | 0.0 | 30.0 | 11250 | 0.9825 | 0.8902 | | 0.0045 | 31.0 | 11625 | 1.0018 | 0.8835 | | 0.0032 | 32.0 | 12000 | 1.0070 | 0.8885 | | 0.0037 | 33.0 | 12375 | 0.9955 | 0.8902 | | 0.0 | 34.0 | 12750 | 1.0401 | 0.8802 | | 0.0 | 35.0 | 13125 | 1.0361 | 0.8835 | | 0.0 | 36.0 | 13500 | 1.0263 | 0.8869 | | 0.0 | 37.0 | 13875 | 1.0646 | 0.8802 | | 0.0 | 38.0 | 14250 | 1.0823 | 0.8835 | | 0.0 | 39.0 | 14625 | 1.0786 | 0.8852 | | 0.0029 | 40.0 | 15000 | 1.0585 | 0.8869 | | 0.0 | 41.0 | 15375 | 1.0567 | 0.8852 | | 0.0023 | 42.0 | 15750 | 1.0631 | 0.8852 | | 0.0027 | 43.0 | 16125 | 1.0573 | 0.8885 | | 0.0026 | 44.0 | 16500 | 1.0579 | 0.8902 | | 0.0023 | 45.0 | 16875 | 1.0642 | 0.8852 | | 0.0 | 46.0 | 17250 | 1.0620 | 0.8852 | | 0.0049 | 47.0 | 17625 | 1.0628 | 0.8852 | | 0.0 | 48.0 | 18000 | 1.0622 | 0.8852 | | 0.0022 | 49.0 | 18375 | 1.0616 | 0.8852 | | 0.0024 | 50.0 | 18750 | 1.0612 | 0.8852 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_base_rms_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_0001_fold2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2231 - Accuracy: 0.8819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.244 | 1.0 | 375 | 0.3336 | 0.8735 | | 0.1999 | 2.0 | 750 | 0.3073 | 0.8952 | | 0.0877 | 3.0 | 1125 | 0.5180 | 0.8486 | | 0.0613 | 4.0 | 1500 | 0.6173 | 0.8735 | | 0.0492 | 5.0 | 1875 | 0.4636 | 0.8769 | | 0.0237 | 6.0 | 2250 | 0.5520 | 0.8869 | | 0.0332 | 7.0 | 2625 | 0.6932 | 0.8769 | | 0.0234 | 8.0 | 3000 | 0.5512 | 0.8902 | | 0.0167 | 9.0 | 3375 | 0.6767 | 0.8819 | | 0.0254 | 10.0 | 3750 | 0.4652 | 0.8985 | | 0.0244 | 11.0 | 4125 | 0.6296 | 0.8819 | | 0.0022 | 12.0 | 4500 | 0.7077 | 0.8852 | | 0.0032 | 13.0 | 4875 | 0.5101 | 0.8968 | | 0.0197 | 14.0 | 5250 | 0.7253 | 0.8735 | | 0.0071 | 15.0 | 5625 | 0.6712 | 0.8968 | | 0.0257 | 16.0 | 6000 | 0.7898 | 0.8686 | | 0.0086 | 17.0 | 6375 | 0.7760 | 0.8869 | | 0.028 | 18.0 | 6750 | 0.6224 | 0.8719 | | 0.0186 | 19.0 | 7125 | 0.8173 | 0.8785 | | 0.0482 | 20.0 | 7500 | 0.7586 | 0.8719 | | 0.1001 | 21.0 | 7875 | 0.8040 | 0.8835 | | 0.0024 | 22.0 | 8250 | 0.8709 | 0.8652 | | 0.0013 | 23.0 | 8625 | 0.7956 | 0.8752 | | 0.0002 | 24.0 | 9000 | 0.8317 | 0.8802 | | 0.0002 | 25.0 | 9375 | 0.7874 | 0.8819 | | 0.0164 | 26.0 | 9750 | 0.8324 | 0.8869 | | 0.0002 | 27.0 | 10125 | 0.7963 | 0.8902 | | 0.0001 | 28.0 | 10500 | 0.8631 | 0.8952 | | 0.0312 | 29.0 | 10875 | 0.8641 | 0.8902 | | 0.0005 | 30.0 | 11250 | 0.9305 | 0.8852 | | 0.0052 | 31.0 | 11625 | 1.0338 | 0.8869 | | 0.0033 | 32.0 | 12000 | 0.8216 | 0.8752 | | 0.0052 | 33.0 | 12375 | 0.9970 | 0.8819 | | 0.0065 | 34.0 | 12750 | 0.8099 | 0.8918 | | 0.0 | 35.0 | 13125 | 0.9129 | 0.8852 | | 0.0 | 36.0 | 13500 | 0.8964 | 0.8885 | | 0.0 | 37.0 | 13875 | 0.9774 | 0.8785 | | 0.0 | 38.0 | 14250 | 1.0097 | 0.8852 | | 0.0 | 39.0 | 14625 | 1.0835 | 0.8802 | | 0.0031 | 40.0 | 15000 | 1.0742 | 0.8769 | | 0.0 | 41.0 | 15375 | 1.1287 | 0.8802 | | 0.0028 | 42.0 | 15750 | 1.0739 | 0.8819 | | 0.0028 | 43.0 | 16125 | 1.1899 | 0.8769 | | 0.0028 | 44.0 | 16500 | 1.1924 | 0.8769 | | 0.003 | 45.0 | 16875 | 1.1778 | 0.8802 | | 0.0 | 46.0 | 17250 | 1.2129 | 0.8819 | | 0.0058 | 47.0 | 17625 | 1.2164 | 0.8819 | | 0.0 | 48.0 | 18000 | 1.2195 | 0.8819 | | 0.0025 | 49.0 | 18375 | 1.2217 | 0.8819 | | 0.0023 | 50.0 | 18750 | 1.2231 | 0.8819 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
Naterea/YT_REDD_V2.1_BASE-0
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics - loss: 0.11768843978643417 - f1: 1.0 - precision: 1.0 - recall: 1.0 - auc: 1.0 - accuracy: 1.0
[ "redd", "short" ]
hkivancoral/smids_5x_deit_base_rms_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_00001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8646 - Accuracy: 0.9133 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1923 | 1.0 | 375 | 0.2761 | 0.89 | | 0.0794 | 2.0 | 750 | 0.2797 | 0.92 | | 0.0089 | 3.0 | 1125 | 0.3862 | 0.9083 | | 0.0263 | 4.0 | 1500 | 0.4507 | 0.915 | | 0.0241 | 5.0 | 1875 | 0.5940 | 0.9033 | | 0.0093 | 6.0 | 2250 | 0.5438 | 0.915 | | 0.0074 | 7.0 | 2625 | 0.6908 | 0.9 | | 0.003 | 8.0 | 3000 | 0.6531 | 0.905 | | 0.038 | 9.0 | 3375 | 0.6121 | 0.9133 | | 0.0021 | 10.0 | 3750 | 0.6007 | 0.9167 | | 0.0 | 11.0 | 4125 | 0.7290 | 0.9017 | | 0.0 | 12.0 | 4500 | 0.6566 | 0.905 | | 0.0057 | 13.0 | 4875 | 0.7475 | 0.905 | | 0.0 | 14.0 | 5250 | 0.7562 | 0.915 | | 0.0001 | 15.0 | 5625 | 0.7103 | 0.9117 | | 0.0 | 16.0 | 6000 | 0.7619 | 0.9133 | | 0.023 | 17.0 | 6375 | 0.8515 | 0.9017 | | 0.0 | 18.0 | 6750 | 0.7788 | 0.9167 | | 0.0668 | 19.0 | 7125 | 0.9370 | 0.8917 | | 0.0002 | 20.0 | 7500 | 0.9448 | 0.8933 | | 0.0 | 21.0 | 7875 | 0.8080 | 0.9117 | | 0.0 | 22.0 | 8250 | 0.8600 | 0.9017 | | 0.0029 | 23.0 | 8625 | 0.9399 | 0.9017 | | 0.0 | 24.0 | 9000 | 0.8645 | 0.9067 | | 0.0 | 25.0 | 9375 | 0.8436 | 0.9117 | | 0.0 | 26.0 | 9750 | 0.8337 | 0.905 | | 0.0 | 27.0 | 10125 | 0.8133 | 0.9117 | | 0.0 | 28.0 | 10500 | 0.8299 | 0.9217 | | 0.0 | 29.0 | 10875 | 0.8115 | 0.9117 | | 0.0038 | 30.0 | 11250 | 0.8569 | 0.9183 | | 0.0 | 31.0 | 11625 | 0.7968 | 0.9133 | | 0.0 | 32.0 | 12000 | 0.8353 | 0.9133 | | 0.0 | 33.0 | 12375 | 0.8387 | 0.9117 | | 0.0 | 34.0 | 12750 | 0.8604 | 0.9117 | | 0.0 | 35.0 | 13125 | 0.8144 | 0.9133 | | 0.0 | 36.0 | 13500 | 0.8218 | 0.9033 | | 0.0 | 37.0 | 13875 | 0.8203 | 0.9133 | | 0.0 | 38.0 | 14250 | 0.8248 | 0.9133 | | 0.0 | 39.0 | 14625 | 0.8352 | 0.9133 | | 0.0031 | 40.0 | 15000 | 0.8391 | 0.9233 | | 0.0 | 41.0 | 15375 | 0.8462 | 0.9167 | | 0.0 | 42.0 | 15750 | 0.8442 | 0.9217 | | 0.0 | 43.0 | 16125 | 0.8498 | 0.9133 | | 0.0 | 44.0 | 16500 | 0.8546 | 0.9117 | | 0.0 | 45.0 | 16875 | 0.8578 | 0.9117 | | 0.0 | 46.0 | 17250 | 0.8603 | 0.9117 | | 0.0 | 47.0 | 17625 | 0.8622 | 0.9117 | | 0.0 | 48.0 | 18000 | 0.8633 | 0.9117 | | 0.0 | 49.0 | 18375 | 0.8638 | 0.9133 | | 0.0 | 50.0 | 18750 | 0.8646 | 0.9133 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_base_rms_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_0001_fold3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0537 - Accuracy: 0.9033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2482 | 1.0 | 375 | 0.3175 | 0.8933 | | 0.1437 | 2.0 | 750 | 0.2994 | 0.91 | | 0.0829 | 3.0 | 1125 | 0.4831 | 0.8717 | | 0.0657 | 4.0 | 1500 | 0.4645 | 0.87 | | 0.0415 | 5.0 | 1875 | 0.5452 | 0.895 | | 0.0518 | 6.0 | 2250 | 0.4649 | 0.89 | | 0.0246 | 7.0 | 2625 | 0.4579 | 0.8933 | | 0.0124 | 8.0 | 3000 | 0.5092 | 0.8933 | | 0.0196 | 9.0 | 3375 | 0.6123 | 0.885 | | 0.0528 | 10.0 | 3750 | 0.5846 | 0.89 | | 0.0162 | 11.0 | 4125 | 0.6461 | 0.89 | | 0.0269 | 12.0 | 4500 | 0.6644 | 0.89 | | 0.0189 | 13.0 | 4875 | 0.6691 | 0.8867 | | 0.013 | 14.0 | 5250 | 0.5509 | 0.895 | | 0.0125 | 15.0 | 5625 | 0.6239 | 0.8867 | | 0.042 | 16.0 | 6000 | 0.5644 | 0.8967 | | 0.0002 | 17.0 | 6375 | 0.7253 | 0.8967 | | 0.0067 | 18.0 | 6750 | 0.7652 | 0.8983 | | 0.0166 | 19.0 | 7125 | 0.7033 | 0.8983 | | 0.0136 | 20.0 | 7500 | 0.7542 | 0.8867 | | 0.0035 | 21.0 | 7875 | 0.8364 | 0.8883 | | 0.0092 | 22.0 | 8250 | 0.7788 | 0.89 | | 0.0366 | 23.0 | 8625 | 0.7487 | 0.8933 | | 0.0057 | 24.0 | 9000 | 0.8195 | 0.8933 | | 0.0002 | 25.0 | 9375 | 0.6186 | 0.9 | | 0.0001 | 26.0 | 9750 | 0.7244 | 0.9017 | | 0.0002 | 27.0 | 10125 | 0.8368 | 0.8883 | | 0.0004 | 28.0 | 10500 | 0.8205 | 0.895 | | 0.0421 | 29.0 | 10875 | 0.8404 | 0.89 | | 0.0033 | 30.0 | 11250 | 0.8091 | 0.8967 | | 0.0 | 31.0 | 11625 | 0.7929 | 0.8967 | | 0.0109 | 32.0 | 12000 | 0.8783 | 0.8883 | | 0.0 | 33.0 | 12375 | 0.8591 | 0.8917 | | 0.0 | 34.0 | 12750 | 0.9822 | 0.89 | | 0.0 | 35.0 | 13125 | 0.9216 | 0.8933 | | 0.0 | 36.0 | 13500 | 0.9855 | 0.895 | | 0.0 | 37.0 | 13875 | 0.8868 | 0.9017 | | 0.0 | 38.0 | 14250 | 0.9047 | 0.9017 | | 0.0 | 39.0 | 14625 | 0.9416 | 0.9017 | | 0.0033 | 40.0 | 15000 | 0.8937 | 0.91 | | 0.0117 | 41.0 | 15375 | 1.0141 | 0.8967 | | 0.0 | 42.0 | 15750 | 1.0456 | 0.9 | | 0.0 | 43.0 | 16125 | 1.0341 | 0.9017 | | 0.0 | 44.0 | 16500 | 1.0394 | 0.9033 | | 0.0 | 45.0 | 16875 | 1.0421 | 0.905 | | 0.0 | 46.0 | 17250 | 1.0425 | 0.9017 | | 0.0 | 47.0 | 17625 | 1.0481 | 0.905 | | 0.0 | 48.0 | 18000 | 1.0506 | 0.9033 | | 0.0 | 49.0 | 18375 | 1.0518 | 0.905 | | 0.0 | 50.0 | 18750 | 1.0537 | 0.9033 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_base_rms_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_00001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3574 - Accuracy: 0.8817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1958 | 1.0 | 375 | 0.3184 | 0.8783 | | 0.0812 | 2.0 | 750 | 0.4073 | 0.8717 | | 0.0484 | 3.0 | 1125 | 0.5219 | 0.8733 | | 0.0271 | 4.0 | 1500 | 0.6124 | 0.8833 | | 0.0074 | 5.0 | 1875 | 0.8129 | 0.88 | | 0.003 | 6.0 | 2250 | 0.9106 | 0.87 | | 0.0001 | 7.0 | 2625 | 1.0845 | 0.855 | | 0.0023 | 8.0 | 3000 | 0.9219 | 0.8833 | | 0.0004 | 9.0 | 3375 | 0.9445 | 0.8733 | | 0.0002 | 10.0 | 3750 | 0.9361 | 0.88 | | 0.0011 | 11.0 | 4125 | 1.0898 | 0.8667 | | 0.0124 | 12.0 | 4500 | 0.9336 | 0.875 | | 0.0 | 13.0 | 4875 | 1.1661 | 0.865 | | 0.0124 | 14.0 | 5250 | 1.0276 | 0.8717 | | 0.0001 | 15.0 | 5625 | 0.9732 | 0.89 | | 0.0815 | 16.0 | 6000 | 1.0823 | 0.8817 | | 0.0006 | 17.0 | 6375 | 0.9866 | 0.8833 | | 0.0045 | 18.0 | 6750 | 1.0728 | 0.8717 | | 0.0021 | 19.0 | 7125 | 1.0523 | 0.8867 | | 0.0 | 20.0 | 7500 | 1.1197 | 0.88 | | 0.0 | 21.0 | 7875 | 1.1101 | 0.8717 | | 0.0 | 22.0 | 8250 | 1.1295 | 0.8833 | | 0.0002 | 23.0 | 8625 | 1.1603 | 0.8733 | | 0.0 | 24.0 | 9000 | 1.0839 | 0.8883 | | 0.0001 | 25.0 | 9375 | 1.3517 | 0.855 | | 0.0 | 26.0 | 9750 | 1.2046 | 0.8767 | | 0.0 | 27.0 | 10125 | 1.2127 | 0.87 | | 0.0 | 28.0 | 10500 | 1.1975 | 0.8733 | | 0.0 | 29.0 | 10875 | 1.2719 | 0.8733 | | 0.0 | 30.0 | 11250 | 1.2005 | 0.88 | | 0.0 | 31.0 | 11625 | 1.2225 | 0.88 | | 0.0 | 32.0 | 12000 | 1.2788 | 0.8817 | | 0.0 | 33.0 | 12375 | 1.2241 | 0.885 | | 0.0 | 34.0 | 12750 | 1.2713 | 0.8817 | | 0.0 | 35.0 | 13125 | 1.3047 | 0.88 | | 0.0 | 36.0 | 13500 | 1.2773 | 0.8817 | | 0.0 | 37.0 | 13875 | 1.3055 | 0.8817 | | 0.0045 | 38.0 | 14250 | 1.3287 | 0.8817 | | 0.0 | 39.0 | 14625 | 1.3354 | 0.88 | | 0.0 | 40.0 | 15000 | 1.3345 | 0.8817 | | 0.0 | 41.0 | 15375 | 1.3411 | 0.88 | | 0.0 | 42.0 | 15750 | 1.3379 | 0.8817 | | 0.0 | 43.0 | 16125 | 1.3500 | 0.8817 | | 0.0 | 44.0 | 16500 | 1.3584 | 0.8817 | | 0.0 | 45.0 | 16875 | 1.3608 | 0.8817 | | 0.0 | 46.0 | 17250 | 1.3593 | 0.8817 | | 0.0 | 47.0 | 17625 | 1.3583 | 0.8817 | | 0.0 | 48.0 | 18000 | 1.3583 | 0.8817 | | 0.0 | 49.0 | 18375 | 1.3582 | 0.8817 | | 0.0 | 50.0 | 18750 | 1.3574 | 0.8817 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_base_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_0001_fold4 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4272 - Accuracy: 0.885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2738 | 1.0 | 375 | 0.3688 | 0.8617 | | 0.164 | 2.0 | 750 | 0.4071 | 0.86 | | 0.1224 | 3.0 | 1125 | 0.5148 | 0.87 | | 0.1153 | 4.0 | 1500 | 0.5755 | 0.8833 | | 0.0768 | 5.0 | 1875 | 0.5201 | 0.8733 | | 0.0128 | 6.0 | 2250 | 0.6962 | 0.88 | | 0.0434 | 7.0 | 2625 | 0.7802 | 0.8733 | | 0.011 | 8.0 | 3000 | 0.8928 | 0.8567 | | 0.0054 | 9.0 | 3375 | 0.8190 | 0.8583 | | 0.0033 | 10.0 | 3750 | 0.8366 | 0.8683 | | 0.0196 | 11.0 | 4125 | 0.7608 | 0.8817 | | 0.0229 | 12.0 | 4500 | 0.8098 | 0.8683 | | 0.032 | 13.0 | 4875 | 0.8776 | 0.865 | | 0.0132 | 14.0 | 5250 | 0.9792 | 0.8517 | | 0.0019 | 15.0 | 5625 | 0.8844 | 0.8783 | | 0.0007 | 16.0 | 6000 | 0.9761 | 0.8683 | | 0.0282 | 17.0 | 6375 | 0.8085 | 0.855 | | 0.0269 | 18.0 | 6750 | 0.8221 | 0.875 | | 0.0 | 19.0 | 7125 | 0.8222 | 0.8833 | | 0.0006 | 20.0 | 7500 | 0.9491 | 0.8633 | | 0.0061 | 21.0 | 7875 | 0.9907 | 0.86 | | 0.0014 | 22.0 | 8250 | 1.0614 | 0.86 | | 0.0136 | 23.0 | 8625 | 0.8637 | 0.8717 | | 0.0038 | 24.0 | 9000 | 0.9073 | 0.8717 | | 0.003 | 25.0 | 9375 | 1.0178 | 0.875 | | 0.0311 | 26.0 | 9750 | 0.9666 | 0.8817 | | 0.0228 | 27.0 | 10125 | 0.9904 | 0.8783 | | 0.0 | 28.0 | 10500 | 1.1423 | 0.8617 | | 0.0001 | 29.0 | 10875 | 1.1415 | 0.865 | | 0.0142 | 30.0 | 11250 | 1.0575 | 0.88 | | 0.0005 | 31.0 | 11625 | 1.2704 | 0.8683 | | 0.0 | 32.0 | 12000 | 1.1747 | 0.875 | | 0.0055 | 33.0 | 12375 | 1.1305 | 0.8717 | | 0.0001 | 34.0 | 12750 | 1.1488 | 0.8817 | | 0.0 | 35.0 | 13125 | 1.1227 | 0.8767 | | 0.0 | 36.0 | 13500 | 1.1854 | 0.8733 | | 0.0 | 37.0 | 13875 | 1.1866 | 0.8817 | | 0.0066 | 38.0 | 14250 | 1.1780 | 0.885 | | 0.0 | 39.0 | 14625 | 1.2908 | 0.88 | | 0.0 | 40.0 | 15000 | 1.2885 | 0.8817 | | 0.0 | 41.0 | 15375 | 1.3508 | 0.885 | | 0.0 | 42.0 | 15750 | 1.3636 | 0.8817 | | 0.0 | 43.0 | 16125 | 1.3897 | 0.8817 | | 0.0 | 44.0 | 16500 | 1.3912 | 0.8783 | | 0.0 | 45.0 | 16875 | 1.4003 | 0.885 | | 0.0 | 46.0 | 17250 | 1.4104 | 0.885 | | 0.0 | 47.0 | 17625 | 1.4228 | 0.885 | | 0.0 | 48.0 | 18000 | 1.4261 | 0.885 | | 0.0 | 49.0 | 18375 | 1.4274 | 0.885 | | 0.0 | 50.0 | 18750 | 1.4272 | 0.885 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_base_rms_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_00001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0520 - Accuracy: 0.8967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1791 | 1.0 | 375 | 0.3808 | 0.845 | | 0.072 | 2.0 | 750 | 0.2732 | 0.8867 | | 0.0675 | 3.0 | 1125 | 0.3762 | 0.89 | | 0.0079 | 4.0 | 1500 | 0.5808 | 0.8883 | | 0.0309 | 5.0 | 1875 | 0.5620 | 0.8917 | | 0.0044 | 6.0 | 2250 | 0.5566 | 0.91 | | 0.0214 | 7.0 | 2625 | 0.6647 | 0.8983 | | 0.0001 | 8.0 | 3000 | 0.6367 | 0.9 | | 0.0006 | 9.0 | 3375 | 0.6995 | 0.8917 | | 0.0001 | 10.0 | 3750 | 0.6804 | 0.9067 | | 0.0009 | 11.0 | 4125 | 0.7694 | 0.8933 | | 0.0325 | 12.0 | 4500 | 0.7424 | 0.895 | | 0.0 | 13.0 | 4875 | 0.7092 | 0.9017 | | 0.0354 | 14.0 | 5250 | 0.6572 | 0.9033 | | 0.0 | 15.0 | 5625 | 0.7900 | 0.8833 | | 0.0 | 16.0 | 6000 | 0.7388 | 0.89 | | 0.0001 | 17.0 | 6375 | 0.7723 | 0.9017 | | 0.0086 | 18.0 | 6750 | 0.9281 | 0.8917 | | 0.0001 | 19.0 | 7125 | 0.9326 | 0.8883 | | 0.0 | 20.0 | 7500 | 0.8593 | 0.8933 | | 0.0 | 21.0 | 7875 | 0.8793 | 0.885 | | 0.0061 | 22.0 | 8250 | 0.9289 | 0.8967 | | 0.0031 | 23.0 | 8625 | 0.8307 | 0.9033 | | 0.0001 | 24.0 | 9000 | 0.9761 | 0.895 | | 0.0001 | 25.0 | 9375 | 0.9704 | 0.89 | | 0.0 | 26.0 | 9750 | 0.9536 | 0.8933 | | 0.0063 | 27.0 | 10125 | 0.9320 | 0.895 | | 0.0 | 28.0 | 10500 | 0.9103 | 0.9 | | 0.0 | 29.0 | 10875 | 0.9411 | 0.8967 | | 0.005 | 30.0 | 11250 | 0.7990 | 0.9017 | | 0.0 | 31.0 | 11625 | 0.8965 | 0.9 | | 0.0 | 32.0 | 12000 | 0.8853 | 0.9017 | | 0.0 | 33.0 | 12375 | 0.9302 | 0.9017 | | 0.0 | 34.0 | 12750 | 0.9471 | 0.9 | | 0.0 | 35.0 | 13125 | 0.9834 | 0.9 | | 0.0 | 36.0 | 13500 | 1.0135 | 0.9 | | 0.0 | 37.0 | 13875 | 0.9973 | 0.8933 | | 0.0029 | 38.0 | 14250 | 1.0014 | 0.895 | | 0.0 | 39.0 | 14625 | 1.0104 | 0.8967 | | 0.0 | 40.0 | 15000 | 1.0257 | 0.895 | | 0.0 | 41.0 | 15375 | 1.0273 | 0.8983 | | 0.0 | 42.0 | 15750 | 1.0325 | 0.8967 | | 0.0 | 43.0 | 16125 | 1.0339 | 0.895 | | 0.0 | 44.0 | 16500 | 1.0373 | 0.895 | | 0.0 | 45.0 | 16875 | 1.0387 | 0.8967 | | 0.0 | 46.0 | 17250 | 1.0456 | 0.8967 | | 0.0025 | 47.0 | 17625 | 1.0491 | 0.8967 | | 0.0 | 48.0 | 18000 | 1.0514 | 0.8967 | | 0.0 | 49.0 | 18375 | 1.0531 | 0.8967 | | 0.0022 | 50.0 | 18750 | 1.0520 | 0.8967 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
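The schedule in the card above (375 optimizer steps per epoch, 50 epochs, `lr_scheduler_warmup_ratio: 0.1`) implies 18750 total steps, with the first 1875 spent warming up. A minimal plain-Python sketch of that linear warmup-then-decay schedule — an approximation of the Hugging Face `linear` scheduler, not its exact implementation:

```python
def linear_warmup_schedule(total_steps, warmup_ratio, peak_lr):
    """Return step -> learning rate: linear ramp to peak_lr, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)

    def lr(step):
        if step < warmup_steps:
            return peak_lr * step / max(1, warmup_steps)
        # after warmup: decay linearly from peak_lr to zero at total_steps
        return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

    return lr

steps_per_epoch, epochs = 375, 50       # from the training log above
total_steps = steps_per_epoch * epochs  # 18750, matching the final logged step
lr_at = linear_warmup_schedule(total_steps, warmup_ratio=0.1, peak_lr=1e-05)
```

At step 1875 the rate peaks at 1e-05 and then decays linearly to zero by step 18750.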
hkivancoral/smids_5x_deit_base_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_base_rms_0001_fold5 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9290 - Accuracy: 0.915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2223 | 1.0 | 375 | 0.3109 | 0.8933 | | 0.1256 | 2.0 | 750 | 0.3635 | 0.895 | | 0.0772 | 3.0 | 1125 | 0.3333 | 0.905 | | 0.0469 | 4.0 | 1500 | 0.4780 | 0.8867 | | 0.0893 | 5.0 | 1875 | 0.6491 | 0.8683 | | 0.0141 | 6.0 | 2250 | 0.5373 | 0.8983 | | 0.0431 | 7.0 | 2625 | 0.4446 | 0.8933 | | 0.0189 | 8.0 | 3000 | 0.5065 | 0.9067 | | 0.0211 | 9.0 | 3375 | 0.4802 | 0.9133 | | 0.0346 | 10.0 | 3750 | 0.5071 | 0.9033 | | 0.0253 | 11.0 | 4125 | 0.6744 | 0.8983 | | 0.0144 | 12.0 | 4500 | 0.6389 | 0.89 | | 0.0096 | 13.0 | 4875 | 0.6130 | 0.905 | | 0.0187 | 14.0 | 5250 | 0.6069 | 0.9083 | | 0.103 | 15.0 | 5625 | 0.7299 | 0.8867 | | 0.0003 | 16.0 | 6000 | 0.6047 | 0.9033 | | 0.0078 | 17.0 | 6375 | 0.6867 | 0.8917 | | 0.0106 | 18.0 | 6750 | 0.6759 | 0.8917 | | 0.0326 | 19.0 | 7125 | 0.9578 | 0.8633 | | 0.0231 | 20.0 | 7500 | 0.7260 | 0.8967 | | 0.0327 | 21.0 | 7875 | 0.6167 | 0.9 | | 0.0188 | 22.0 | 8250 
| 0.6583 | 0.905 | | 0.0002 | 23.0 | 8625 | 0.6038 | 0.9083 | | 0.0276 | 24.0 | 9000 | 0.8452 | 0.8983 | | 0.0002 | 25.0 | 9375 | 0.6542 | 0.9067 | | 0.0045 | 26.0 | 9750 | 0.7924 | 0.905 | | 0.0011 | 27.0 | 10125 | 0.9081 | 0.9017 | | 0.0001 | 28.0 | 10500 | 0.6753 | 0.91 | | 0.0001 | 29.0 | 10875 | 0.8443 | 0.8933 | | 0.0026 | 30.0 | 11250 | 0.8554 | 0.9 | | 0.0029 | 31.0 | 11625 | 1.0060 | 0.8983 | | 0.0001 | 32.0 | 12000 | 0.7158 | 0.905 | | 0.0113 | 33.0 | 12375 | 0.8641 | 0.9017 | | 0.0 | 34.0 | 12750 | 0.7747 | 0.9067 | | 0.0 | 35.0 | 13125 | 0.7089 | 0.925 | | 0.0 | 36.0 | 13500 | 0.7594 | 0.9133 | | 0.0184 | 37.0 | 13875 | 0.7578 | 0.915 | | 0.0038 | 38.0 | 14250 | 0.7826 | 0.915 | | 0.0 | 39.0 | 14625 | 0.7931 | 0.9183 | | 0.0 | 40.0 | 15000 | 0.8433 | 0.91 | | 0.0 | 41.0 | 15375 | 0.8317 | 0.9133 | | 0.0 | 42.0 | 15750 | 0.8659 | 0.9133 | | 0.0 | 43.0 | 16125 | 0.8765 | 0.9117 | | 0.0 | 44.0 | 16500 | 0.9009 | 0.91 | | 0.0 | 45.0 | 16875 | 0.9256 | 0.9133 | | 0.0 | 46.0 | 17250 | 0.9323 | 0.915 | | 0.003 | 47.0 | 17625 | 0.9213 | 0.915 | | 0.0 | 48.0 | 18000 | 0.9276 | 0.915 | | 0.0 | 49.0 | 18375 | 0.9299 | 0.915 | | 0.0021 | 50.0 | 18750 | 0.9290 | 0.915 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
aaa12963337/msi-vit-small-1218-2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # msi-vit-small-1218-2 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3372 - Accuracy: 0.6164 - F1: 0.3276 - Precision: 0.6841 - Recall: 0.2154 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.4367 | 1.0 | 1008 | 0.6603 | 0.6572 | 0.5313 | 0.6530 | 0.4478 | | 0.2161 | 2.0 | 2016 | 0.8021 | 0.6329 | 0.4989 | 0.6118 | 0.4211 | | 0.169 | 3.0 | 3024 | 1.4062 | 0.6010 | 0.2653 | 0.6592 | 0.1661 | | 0.1543 | 4.0 | 4032 | 1.1498 | 0.6259 | 0.3670 | 0.6903 | 0.2499 | | 0.1534 | 5.0 | 5040 | 1.5067 | 0.6208 | 0.3519 | 0.6808 | 0.2373 | | 0.1596 | 6.0 | 6048 | 0.8837 | 0.6504 | 0.6505 | 0.5744 | 0.7498 | | 0.1504 | 7.0 | 7056 | 1.0030 | 0.6302 | 0.4192 | 0.6580 | 0.3075 | | 0.1795 | 8.0 | 8064 | 1.3908 | 0.5953 | 0.2950 | 0.6041 | 0.1952 | | 0.1636 | 9.0 | 9072 | 1.1040 | 0.6290 | 0.4619 | 0.6230 | 0.3671 | | 0.1629 | 10.0 | 10080 | 1.3372 | 0.6164 | 0.3276 | 0.6841 | 0.2154 | ### Framework versions - Transformers 4.36.0 - Pytorch 
2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "0", "1" ]
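The final-epoch metrics above are internally consistent: F1 is the harmonic mean of precision and recall, and with `gradient_accumulation_steps: 4` the effective batch size is 128. A quick check (values copied from the table; this is a sanity check, not part of the training code):

```python
precision, recall = 0.6841, 0.2154          # final-epoch values from the card
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))                         # -> 0.3276, matching the reported F1

effective_batch = 32 * 4                    # train_batch_size x gradient_accumulation_steps
print(effective_batch)                      # -> 128, the reported total_train_batch_size
```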
almak75/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3937 - Accuracy: 0.8583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6866 | 0.99 | 37 | 0.6202 | 0.6742 | | 0.5892 | 2.0 | 75 | 0.5467 | 0.72 | | 0.5195 | 2.99 | 112 | 0.4933 | 0.7483 | | 0.4322 | 4.0 | 150 | 0.4787 | 0.765 | | 0.3712 | 4.99 | 187 | 0.3829 | 0.8208 | | 0.3162 | 6.0 | 225 | 0.3960 | 0.8133 | | 0.3082 | 6.99 | 262 | 0.3591 | 0.8392 | | 0.3038 | 8.0 | 300 | 0.3274 | 0.8467 | | 0.2794 | 8.99 | 337 | 0.3533 | 0.8433 | | 0.2596 | 10.0 | 375 | 0.3766 | 0.8258 | | 0.2369 | 10.99 | 412 | 0.3392 | 0.8575 | | 0.2503 | 12.0 | 450 | 0.3198 | 0.8625 | | 0.2009 | 12.99 | 487 | 0.3438 | 0.8625 | | 0.2195 | 14.0 | 525 | 0.3234 | 0.8617 | | 0.2025 | 14.99 | 562 | 0.3758 | 0.855 | | 0.1879 | 16.0 | 600 | 0.3909 | 0.8408 | | 0.18 | 16.99 | 637 | 0.3642 | 0.8617 | | 0.1545 | 18.0 | 675 | 0.3948 | 0.8567 | | 0.171 | 18.99 | 712 | 0.3889 | 0.8525 | | 0.1667 | 20.0 | 750 | 
0.3883 | 0.8625 | | 0.163 | 20.99 | 787 | 0.3743 | 0.8575 | | 0.1682 | 22.0 | 825 | 0.3739 | 0.8592 | | 0.1611 | 22.99 | 862 | 0.3623 | 0.8742 | | 0.1348 | 24.0 | 900 | 0.3806 | 0.8592 | | 0.1366 | 24.99 | 937 | 0.3849 | 0.865 | | 0.1418 | 26.0 | 975 | 0.4049 | 0.8558 | | 0.1096 | 26.99 | 1012 | 0.3849 | 0.8608 | | 0.1347 | 28.0 | 1050 | 0.3926 | 0.8592 | | 0.137 | 28.99 | 1087 | 0.3938 | 0.8592 | | 0.1312 | 29.6 | 1110 | 0.3937 | 0.8583 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.13.3
[ "control", "hard", "sexy" ]
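Note that the final checkpoint above (accuracy 0.8583) is not the best one in the log: epoch 22.99 reached 0.8742. When `load_best_model_at_end` is not enabled, a small helper like the following (shown here on three rows copied from the table above) can pick the best epoch out of a Trainer-style results table:

```python
# Three representative rows from the results table above:
# train loss | epoch | step | validation loss | accuracy
rows = """\
| 0.2503 | 12.0 | 450 | 0.3198 | 0.8625 |
| 0.1611 | 22.99 | 862 | 0.3623 | 0.8742 |
| 0.1312 | 29.6 | 1110 | 0.3937 | 0.8583 |"""

def best_checkpoint(table):
    """Return (epoch, step, accuracy) of the row with the highest accuracy."""
    best = None
    for line in table.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        epoch, step, acc = float(cells[1]), int(cells[2]), float(cells[4])
        if best is None or acc > best[2]:
            best = (epoch, step, acc)
    return best

print(best_checkpoint(rows))  # -> (22.99, 862, 0.8742)
```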
hkivancoral/smids_5x_deit_small_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_00001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7417 - Accuracy: 0.9232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2432 | 1.0 | 376 | 0.2964 | 0.8731 | | 0.1614 | 2.0 | 752 | 0.2849 | 0.8932 | | 0.1382 | 3.0 | 1128 | 0.2988 | 0.9048 | | 0.0674 | 4.0 | 1504 | 0.3782 | 0.9082 | | 0.0238 | 5.0 | 1880 | 0.4767 | 0.9132 | | 0.0038 | 6.0 | 2256 | 0.4683 | 0.9215 | | 0.0081 | 7.0 | 2632 | 0.5149 | 0.9015 | | 0.0311 | 8.0 | 3008 | 0.6215 | 0.9082 | | 0.0258 | 9.0 | 3384 | 0.6420 | 0.9015 | | 0.0003 | 10.0 | 3760 | 0.7389 | 0.8982 | | 0.0016 | 11.0 | 4136 | 0.7097 | 0.9032 | | 0.0237 | 12.0 | 4512 | 0.7322 | 0.8965 | | 0.0 | 13.0 | 4888 | 0.6330 | 0.9065 | | 0.0121 | 14.0 | 5264 | 0.6713 | 0.8998 | | 0.0128 | 15.0 | 5640 | 0.6959 | 0.9032 | | 0.0 | 16.0 | 6016 | 0.5921 | 0.9165 | | 0.0147 | 17.0 | 6392 | 0.7286 | 0.9032 | | 0.0096 | 18.0 | 6768 | 0.6654 | 0.9115 | | 0.0001 | 19.0 | 7144 | 0.7241 | 0.9065 | | 0.026 | 20.0 | 7520 | 0.7595 | 0.9115 | | 0.0004 | 21.0 | 7896 | 0.7089 | 0.9132 | | 0.0001 | 22.0 
| 8272 | 0.7020 | 0.9132 | | 0.0189 | 23.0 | 8648 | 0.7064 | 0.9032 | | 0.0 | 24.0 | 9024 | 0.6953 | 0.9182 | | 0.0008 | 25.0 | 9400 | 0.6754 | 0.9048 | | 0.0029 | 26.0 | 9776 | 0.6682 | 0.9149 | | 0.0039 | 27.0 | 10152 | 0.7036 | 0.9115 | | 0.0 | 28.0 | 10528 | 0.7901 | 0.9098 | | 0.0047 | 29.0 | 10904 | 0.7958 | 0.9165 | | 0.0042 | 30.0 | 11280 | 0.7246 | 0.9115 | | 0.0 | 31.0 | 11656 | 0.7694 | 0.9132 | | 0.0 | 32.0 | 12032 | 0.7581 | 0.9082 | | 0.0 | 33.0 | 12408 | 0.7146 | 0.9149 | | 0.0 | 34.0 | 12784 | 0.7034 | 0.9165 | | 0.0 | 35.0 | 13160 | 0.7688 | 0.9115 | | 0.0 | 36.0 | 13536 | 0.7638 | 0.9132 | | 0.0 | 37.0 | 13912 | 0.8028 | 0.9115 | | 0.0 | 38.0 | 14288 | 0.7323 | 0.9215 | | 0.0 | 39.0 | 14664 | 0.7555 | 0.9199 | | 0.0 | 40.0 | 15040 | 0.7506 | 0.9215 | | 0.0 | 41.0 | 15416 | 0.7416 | 0.9215 | | 0.0 | 42.0 | 15792 | 0.7376 | 0.9199 | | 0.0 | 43.0 | 16168 | 0.7280 | 0.9215 | | 0.0 | 44.0 | 16544 | 0.7390 | 0.9215 | | 0.0 | 45.0 | 16920 | 0.7365 | 0.9232 | | 0.0028 | 46.0 | 17296 | 0.7367 | 0.9232 | | 0.0 | 47.0 | 17672 | 0.7395 | 0.9232 | | 0.0 | 48.0 | 18048 | 0.7408 | 0.9232 | | 0.0 | 49.0 | 18424 | 0.7418 | 0.9232 | | 0.0024 | 50.0 | 18800 | 0.7417 | 0.9232 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_0001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9277 - Accuracy: 0.9048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2576 | 1.0 | 376 | 0.2886 | 0.8915 | | 0.132 | 2.0 | 752 | 0.3675 | 0.8881 | | 0.153 | 3.0 | 1128 | 0.3722 | 0.8815 | | 0.1012 | 4.0 | 1504 | 0.5411 | 0.8715 | | 0.0774 | 5.0 | 1880 | 0.4743 | 0.8881 | | 0.0966 | 6.0 | 2256 | 0.5187 | 0.8831 | | 0.02 | 7.0 | 2632 | 0.7637 | 0.8548 | | 0.0255 | 8.0 | 3008 | 0.5858 | 0.8982 | | 0.0535 | 9.0 | 3384 | 0.7179 | 0.8798 | | 0.0051 | 10.0 | 3760 | 0.4830 | 0.9132 | | 0.0623 | 11.0 | 4136 | 0.6803 | 0.8898 | | 0.032 | 12.0 | 4512 | 0.6393 | 0.8831 | | 0.0049 | 13.0 | 4888 | 0.6430 | 0.8965 | | 0.0375 | 14.0 | 5264 | 0.6697 | 0.8948 | | 0.0305 | 15.0 | 5640 | 0.4958 | 0.9165 | | 0.0041 | 16.0 | 6016 | 0.6462 | 0.8965 | | 0.0225 | 17.0 | 6392 | 0.6064 | 0.9065 | | 0.0015 | 18.0 | 6768 | 0.7328 | 0.8865 | | 0.0129 | 19.0 | 7144 | 0.6712 | 0.8848 | | 0.0072 | 20.0 | 7520 | 0.7644 | 0.8881 | | 0.002 | 21.0 | 7896 | 0.6536 | 0.9065 | | 0.0135 | 22.0 
| 8272 | 0.7707 | 0.8881 | | 0.0245 | 23.0 | 8648 | 0.6111 | 0.8948 | | 0.0006 | 24.0 | 9024 | 0.7622 | 0.8881 | | 0.0001 | 25.0 | 9400 | 0.7257 | 0.9015 | | 0.0065 | 26.0 | 9776 | 0.7266 | 0.8948 | | 0.0001 | 27.0 | 10152 | 0.7834 | 0.9082 | | 0.0001 | 28.0 | 10528 | 0.7481 | 0.9032 | | 0.0047 | 29.0 | 10904 | 0.8083 | 0.8915 | | 0.0032 | 30.0 | 11280 | 0.7670 | 0.8948 | | 0.0008 | 31.0 | 11656 | 0.8608 | 0.8881 | | 0.0001 | 32.0 | 12032 | 0.7792 | 0.8948 | | 0.0001 | 33.0 | 12408 | 0.8789 | 0.8932 | | 0.0 | 34.0 | 12784 | 0.7571 | 0.9015 | | 0.0 | 35.0 | 13160 | 0.7309 | 0.9115 | | 0.0002 | 36.0 | 13536 | 0.7237 | 0.9082 | | 0.0 | 37.0 | 13912 | 0.8459 | 0.9015 | | 0.0 | 38.0 | 14288 | 0.8205 | 0.9082 | | 0.0 | 39.0 | 14664 | 0.8617 | 0.9048 | | 0.0 | 40.0 | 15040 | 0.8709 | 0.8932 | | 0.0 | 41.0 | 15416 | 0.8732 | 0.8915 | | 0.0 | 42.0 | 15792 | 0.8524 | 0.8982 | | 0.0 | 43.0 | 16168 | 0.8924 | 0.9048 | | 0.0 | 44.0 | 16544 | 0.8692 | 0.8898 | | 0.0 | 45.0 | 16920 | 0.8944 | 0.9015 | | 0.0031 | 46.0 | 17296 | 0.8984 | 0.9032 | | 0.0 | 47.0 | 17672 | 0.9119 | 0.9032 | | 0.0 | 48.0 | 18048 | 0.9192 | 0.9048 | | 0.0 | 49.0 | 18424 | 0.9260 | 0.9032 | | 0.0023 | 50.0 | 18800 | 0.9277 | 0.9048 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3142 - Accuracy: 0.8636 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.255 | 1.0 | 375 | 0.2860 | 0.8785 | | 0.192 | 2.0 | 750 | 0.3380 | 0.8719 | | 0.111 | 3.0 | 1125 | 0.5269 | 0.8536 | | 0.0357 | 4.0 | 1500 | 0.6082 | 0.8819 | | 0.0344 | 5.0 | 1875 | 0.7662 | 0.8586 | | 0.0306 | 6.0 | 2250 | 0.7220 | 0.8735 | | 0.0004 | 7.0 | 2625 | 1.0133 | 0.8636 | | 0.0601 | 8.0 | 3000 | 0.9769 | 0.8602 | | 0.0002 | 9.0 | 3375 | 1.0509 | 0.8719 | | 0.0001 | 10.0 | 3750 | 1.0508 | 0.8686 | | 0.0242 | 11.0 | 4125 | 1.1405 | 0.8619 | | 0.0086 | 12.0 | 4500 | 0.9578 | 0.8735 | | 0.0 | 13.0 | 4875 | 0.9452 | 0.8702 | | 0.0167 | 14.0 | 5250 | 1.1793 | 0.8652 | | 0.0068 | 15.0 | 5625 | 1.1314 | 0.8569 | | 0.0242 | 16.0 | 6000 | 1.0830 | 0.8686 | | 0.0116 | 17.0 | 6375 | 1.0898 | 0.8686 | | 0.0019 | 18.0 | 6750 | 1.1516 | 0.8702 | | 0.0 | 19.0 | 7125 | 1.1246 | 0.8686 | | 0.0357 | 20.0 | 7500 | 1.2754 | 0.8419 | | 0.0167 | 21.0 | 7875 | 1.1083 | 0.8586 | | 0.0 | 22.0 | 
8250 | 1.1597 | 0.8636 | | 0.0 | 23.0 | 8625 | 1.1775 | 0.8686 | | 0.0001 | 24.0 | 9000 | 1.1781 | 0.8735 | | 0.0 | 25.0 | 9375 | 1.1367 | 0.8752 | | 0.0 | 26.0 | 9750 | 1.2570 | 0.8602 | | 0.0 | 27.0 | 10125 | 1.2344 | 0.8669 | | 0.0 | 28.0 | 10500 | 1.2212 | 0.8686 | | 0.0 | 29.0 | 10875 | 1.1884 | 0.8686 | | 0.0 | 30.0 | 11250 | 1.1717 | 0.8819 | | 0.0041 | 31.0 | 11625 | 1.2327 | 0.8735 | | 0.0049 | 32.0 | 12000 | 1.2073 | 0.8719 | | 0.0069 | 33.0 | 12375 | 1.2981 | 0.8669 | | 0.0 | 34.0 | 12750 | 1.3346 | 0.8602 | | 0.0 | 35.0 | 13125 | 1.2237 | 0.8719 | | 0.0 | 36.0 | 13500 | 1.2742 | 0.8702 | | 0.0 | 37.0 | 13875 | 1.3127 | 0.8702 | | 0.0 | 38.0 | 14250 | 1.3037 | 0.8702 | | 0.0 | 39.0 | 14625 | 1.3578 | 0.8636 | | 0.0033 | 40.0 | 15000 | 1.3159 | 0.8636 | | 0.0 | 41.0 | 15375 | 1.3110 | 0.8686 | | 0.0024 | 42.0 | 15750 | 1.3216 | 0.8652 | | 0.0026 | 43.0 | 16125 | 1.3041 | 0.8686 | | 0.0026 | 44.0 | 16500 | 1.3057 | 0.8669 | | 0.0024 | 45.0 | 16875 | 1.3146 | 0.8652 | | 0.0 | 46.0 | 17250 | 1.3144 | 0.8686 | | 0.0053 | 47.0 | 17625 | 1.3156 | 0.8652 | | 0.0 | 48.0 | 18000 | 1.3154 | 0.8652 | | 0.0023 | 49.0 | 18375 | 1.3139 | 0.8652 | | 0.0024 | 50.0 | 18750 | 1.3142 | 0.8636 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_0001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0513 - Accuracy: 0.8902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2658 | 1.0 | 375 | 0.3126 | 0.8819 | | 0.1908 | 2.0 | 750 | 0.3283 | 0.8918 | | 0.0825 | 3.0 | 1125 | 0.4361 | 0.8702 | | 0.0963 | 4.0 | 1500 | 0.5286 | 0.8619 | | 0.0865 | 5.0 | 1875 | 0.5193 | 0.8885 | | 0.0848 | 6.0 | 2250 | 0.5531 | 0.8835 | | 0.072 | 7.0 | 2625 | 0.6299 | 0.8669 | | 0.0017 | 8.0 | 3000 | 0.6200 | 0.8819 | | 0.0259 | 9.0 | 3375 | 0.6078 | 0.9002 | | 0.0609 | 10.0 | 3750 | 0.5800 | 0.8918 | | 0.045 | 11.0 | 4125 | 0.6268 | 0.8802 | | 0.068 | 12.0 | 4500 | 0.5720 | 0.8869 | | 0.0342 | 13.0 | 4875 | 0.7774 | 0.8652 | | 0.007 | 14.0 | 5250 | 0.7661 | 0.8769 | | 0.0136 | 15.0 | 5625 | 0.7670 | 0.8885 | | 0.025 | 16.0 | 6000 | 0.9672 | 0.8752 | | 0.0117 | 17.0 | 6375 | 0.6723 | 0.9018 | | 0.01 | 18.0 | 6750 | 0.7201 | 0.8835 | | 0.1057 | 19.0 | 7125 | 0.7988 | 0.8686 | | 0.0079 | 20.0 | 7500 | 0.8529 | 0.8735 | | 0.0114 | 21.0 | 7875 | 0.9574 | 0.8835 | | 0.0141 | 22.0 
| 8250 | 0.8344 | 0.8819 | | 0.0006 | 23.0 | 8625 | 0.9308 | 0.8769 | | 0.0008 | 24.0 | 9000 | 0.8418 | 0.8752 | | 0.0001 | 25.0 | 9375 | 0.7076 | 0.8869 | | 0.0123 | 26.0 | 9750 | 0.9006 | 0.8686 | | 0.0003 | 27.0 | 10125 | 0.9386 | 0.8702 | | 0.0081 | 28.0 | 10500 | 1.0332 | 0.8735 | | 0.0 | 29.0 | 10875 | 0.9316 | 0.8752 | | 0.0272 | 30.0 | 11250 | 0.9157 | 0.8835 | | 0.0042 | 31.0 | 11625 | 0.9011 | 0.8752 | | 0.0038 | 32.0 | 12000 | 0.9259 | 0.8769 | | 0.0044 | 33.0 | 12375 | 0.9290 | 0.8869 | | 0.0 | 34.0 | 12750 | 0.9423 | 0.8852 | | 0.0 | 35.0 | 13125 | 0.8933 | 0.8902 | | 0.0 | 36.0 | 13500 | 0.8976 | 0.8885 | | 0.0 | 37.0 | 13875 | 0.8889 | 0.8835 | | 0.0004 | 38.0 | 14250 | 1.0859 | 0.8802 | | 0.0 | 39.0 | 14625 | 1.0992 | 0.8869 | | 0.0031 | 40.0 | 15000 | 1.0003 | 0.8952 | | 0.0 | 41.0 | 15375 | 1.0009 | 0.8985 | | 0.0027 | 42.0 | 15750 | 1.0542 | 0.8885 | | 0.0026 | 43.0 | 16125 | 1.0230 | 0.8852 | | 0.0026 | 44.0 | 16500 | 1.0414 | 0.8885 | | 0.0027 | 45.0 | 16875 | 1.0121 | 0.8885 | | 0.0 | 46.0 | 17250 | 1.0455 | 0.8885 | | 0.006 | 47.0 | 17625 | 1.0427 | 0.8918 | | 0.0 | 48.0 | 18000 | 1.0481 | 0.8918 | | 0.0024 | 49.0 | 18375 | 1.0499 | 0.8918 | | 0.0022 | 50.0 | 18750 | 1.0513 | 0.8902 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_00001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0303 - Accuracy: 0.8967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2375 | 1.0 | 375 | 0.2542 | 0.9083 | | 0.1409 | 2.0 | 750 | 0.2770 | 0.91 | | 0.0609 | 3.0 | 1125 | 0.3094 | 0.8983 | | 0.0237 | 4.0 | 1500 | 0.3996 | 0.8983 | | 0.0338 | 5.0 | 1875 | 0.5715 | 0.8883 | | 0.0534 | 6.0 | 2250 | 0.6112 | 0.8933 | | 0.0052 | 7.0 | 2625 | 0.6878 | 0.8933 | | 0.0006 | 8.0 | 3000 | 0.6908 | 0.9 | | 0.0283 | 9.0 | 3375 | 0.6671 | 0.8983 | | 0.0065 | 10.0 | 3750 | 0.7326 | 0.9067 | | 0.0065 | 11.0 | 4125 | 0.6436 | 0.915 | | 0.0 | 12.0 | 4500 | 0.7498 | 0.9 | | 0.0016 | 13.0 | 4875 | 0.8235 | 0.8933 | | 0.0003 | 14.0 | 5250 | 0.8655 | 0.895 | | 0.0082 | 15.0 | 5625 | 0.7486 | 0.9067 | | 0.0001 | 16.0 | 6000 | 0.8142 | 0.9017 | | 0.0001 | 17.0 | 6375 | 0.7387 | 0.8917 | | 0.0001 | 18.0 | 6750 | 0.8424 | 0.9067 | | 0.0 | 19.0 | 7125 | 0.7973 | 0.9017 | | 0.0378 | 20.0 | 7500 | 0.7948 | 0.8967 | | 0.0 | 21.0 | 7875 | 0.8629 | 0.8917 | | 0.0 | 22.0 | 8250 | 0.7939 
| 0.9017 | | 0.0063 | 23.0 | 8625 | 0.8369 | 0.89 | | 0.0 | 24.0 | 9000 | 0.8848 | 0.9 | | 0.0 | 25.0 | 9375 | 0.8284 | 0.91 | | 0.0 | 26.0 | 9750 | 0.9412 | 0.9 | | 0.0 | 27.0 | 10125 | 0.8363 | 0.905 | | 0.0 | 28.0 | 10500 | 0.9351 | 0.8917 | | 0.0 | 29.0 | 10875 | 0.8734 | 0.9033 | | 0.0037 | 30.0 | 11250 | 0.9770 | 0.9067 | | 0.0 | 31.0 | 11625 | 0.8887 | 0.905 | | 0.0 | 32.0 | 12000 | 0.9455 | 0.9 | | 0.0 | 33.0 | 12375 | 0.9432 | 0.9033 | | 0.0 | 34.0 | 12750 | 0.9703 | 0.8983 | | 0.0 | 35.0 | 13125 | 0.9495 | 0.9067 | | 0.0 | 36.0 | 13500 | 0.9886 | 0.8983 | | 0.0 | 37.0 | 13875 | 0.9999 | 0.9 | | 0.0 | 38.0 | 14250 | 1.0388 | 0.8983 | | 0.0 | 39.0 | 14625 | 1.0645 | 0.8917 | | 0.0038 | 40.0 | 15000 | 0.9923 | 0.9017 | | 0.0 | 41.0 | 15375 | 1.0132 | 0.8983 | | 0.0 | 42.0 | 15750 | 1.0058 | 0.9017 | | 0.0 | 43.0 | 16125 | 1.0185 | 0.8983 | | 0.0 | 44.0 | 16500 | 1.0211 | 0.8967 | | 0.0 | 45.0 | 16875 | 1.0174 | 0.8967 | | 0.0 | 46.0 | 17250 | 1.0256 | 0.8967 | | 0.0 | 47.0 | 17625 | 1.0251 | 0.8967 | | 0.0 | 48.0 | 18000 | 1.0297 | 0.8967 | | 0.0 | 49.0 | 18375 | 1.0318 | 0.8967 | | 0.0 | 50.0 | 18750 | 1.0303 | 0.8967 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_0001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0792 - Accuracy: 0.8933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.279 | 1.0 | 375 | 0.3229 | 0.88 | | 0.164 | 2.0 | 750 | 0.3138 | 0.91 | | 0.1109 | 3.0 | 1125 | 0.3763 | 0.8967 | | 0.0429 | 4.0 | 1500 | 0.4357 | 0.8933 | | 0.0647 | 5.0 | 1875 | 0.5383 | 0.9 | | 0.0292 | 6.0 | 2250 | 0.4950 | 0.8983 | | 0.0793 | 7.0 | 2625 | 0.5600 | 0.8867 | | 0.0326 | 8.0 | 3000 | 0.6289 | 0.885 | | 0.0175 | 9.0 | 3375 | 0.6125 | 0.89 | | 0.0162 | 10.0 | 3750 | 0.7037 | 0.8983 | | 0.0357 | 11.0 | 4125 | 0.6928 | 0.8833 | | 0.0295 | 12.0 | 4500 | 0.7344 | 0.8817 | | 0.0007 | 13.0 | 4875 | 0.6848 | 0.9033 | | 0.0001 | 14.0 | 5250 | 0.6912 | 0.89 | | 0.0101 | 15.0 | 5625 | 0.6507 | 0.9 | | 0.024 | 16.0 | 6000 | 0.6949 | 0.8933 | | 0.0004 | 17.0 | 6375 | 0.5735 | 0.9133 | | 0.0116 | 18.0 | 6750 | 0.6520 | 0.905 | | 0.0214 | 19.0 | 7125 | 0.8822 | 0.895 | | 0.0002 | 20.0 | 7500 | 0.7795 | 0.8883 | | 0.023 | 21.0 | 7875 | 0.7295 | 0.9017 | | 0.0001 | 22.0 | 8250 | 
0.7805 | 0.9 | | 0.0077 | 23.0 | 8625 | 0.7822 | 0.89 | | 0.0248 | 24.0 | 9000 | 0.6997 | 0.9017 | | 0.0192 | 25.0 | 9375 | 0.7612 | 0.8983 | | 0.0053 | 26.0 | 9750 | 0.6937 | 0.9 | | 0.0001 | 27.0 | 10125 | 0.8197 | 0.9 | | 0.0066 | 28.0 | 10500 | 0.7264 | 0.9033 | | 0.0 | 29.0 | 10875 | 0.9769 | 0.8883 | | 0.0055 | 30.0 | 11250 | 0.9308 | 0.8817 | | 0.0083 | 31.0 | 11625 | 0.9275 | 0.88 | | 0.0 | 32.0 | 12000 | 0.8761 | 0.8917 | | 0.0001 | 33.0 | 12375 | 0.8754 | 0.8967 | | 0.0001 | 34.0 | 12750 | 0.9281 | 0.8883 | | 0.0158 | 35.0 | 13125 | 1.0369 | 0.8767 | | 0.0043 | 36.0 | 13500 | 1.0161 | 0.88 | | 0.0023 | 37.0 | 13875 | 0.9274 | 0.8933 | | 0.0 | 38.0 | 14250 | 0.9705 | 0.8933 | | 0.0 | 39.0 | 14625 | 1.0691 | 0.8867 | | 0.0029 | 40.0 | 15000 | 1.0780 | 0.89 | | 0.0 | 41.0 | 15375 | 1.0592 | 0.885 | | 0.0 | 42.0 | 15750 | 1.0784 | 0.885 | | 0.0 | 43.0 | 16125 | 1.0389 | 0.8917 | | 0.0 | 44.0 | 16500 | 1.0434 | 0.8933 | | 0.0 | 45.0 | 16875 | 1.0581 | 0.8917 | | 0.0 | 46.0 | 17250 | 1.0632 | 0.8933 | | 0.0 | 47.0 | 17625 | 1.0692 | 0.8933 | | 0.0 | 48.0 | 18000 | 1.0755 | 0.8917 | | 0.0 | 49.0 | 18375 | 1.0789 | 0.8917 | | 0.0 | 50.0 | 18750 | 1.0792 | 0.8933 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_00001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4091 - Accuracy: 0.8733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2357 | 1.0 | 375 | 0.3537 | 0.8617 | | 0.1744 | 2.0 | 750 | 0.3961 | 0.8667 | | 0.086 | 3.0 | 1125 | 0.5528 | 0.8517 | | 0.0758 | 4.0 | 1500 | 0.6078 | 0.87 | | 0.0324 | 5.0 | 1875 | 0.7790 | 0.8683 | | 0.0181 | 6.0 | 2250 | 0.9636 | 0.855 | | 0.0003 | 7.0 | 2625 | 0.8894 | 0.8633 | | 0.0164 | 8.0 | 3000 | 1.0314 | 0.865 | | 0.0386 | 9.0 | 3375 | 1.0455 | 0.87 | | 0.0021 | 10.0 | 3750 | 1.0868 | 0.86 | | 0.0004 | 11.0 | 4125 | 1.1036 | 0.8667 | | 0.0001 | 12.0 | 4500 | 1.0921 | 0.8683 | | 0.0451 | 13.0 | 4875 | 1.1416 | 0.865 | | 0.0313 | 14.0 | 5250 | 1.1480 | 0.875 | | 0.0 | 15.0 | 5625 | 1.2563 | 0.8617 | | 0.0294 | 16.0 | 6000 | 1.1827 | 0.8683 | | 0.0014 | 17.0 | 6375 | 1.1784 | 0.8683 | | 0.0017 | 18.0 | 6750 | 1.2479 | 0.8633 | | 0.0 | 19.0 | 7125 | 1.1339 | 0.8783 | | 0.0 | 20.0 | 7500 | 1.1679 | 0.8783 | | 0.0 | 21.0 | 7875 | 1.2344 | 0.8683 | | 0.0027 | 22.0 | 8250 | 1.2529 
| 0.8667 | | 0.0001 | 23.0 | 8625 | 1.2449 | 0.865 | | 0.0 | 24.0 | 9000 | 1.2744 | 0.865 | | 0.0177 | 25.0 | 9375 | 1.2614 | 0.8717 | | 0.0 | 26.0 | 9750 | 1.2469 | 0.8717 | | 0.0 | 27.0 | 10125 | 1.3072 | 0.8667 | | 0.0 | 28.0 | 10500 | 1.2917 | 0.8683 | | 0.0 | 29.0 | 10875 | 1.2871 | 0.8767 | | 0.0 | 30.0 | 11250 | 1.3156 | 0.8617 | | 0.0 | 31.0 | 11625 | 1.2509 | 0.8767 | | 0.0 | 32.0 | 12000 | 1.2613 | 0.8783 | | 0.0 | 33.0 | 12375 | 1.3454 | 0.8683 | | 0.0001 | 34.0 | 12750 | 1.2978 | 0.8667 | | 0.0 | 35.0 | 13125 | 1.2980 | 0.8617 | | 0.0 | 36.0 | 13500 | 1.2859 | 0.8683 | | 0.0 | 37.0 | 13875 | 1.3447 | 0.8683 | | 0.0044 | 38.0 | 14250 | 1.3496 | 0.87 | | 0.0 | 39.0 | 14625 | 1.3716 | 0.8717 | | 0.0 | 40.0 | 15000 | 1.3540 | 0.8733 | | 0.0 | 41.0 | 15375 | 1.3616 | 0.8733 | | 0.0 | 42.0 | 15750 | 1.3855 | 0.8717 | | 0.0 | 43.0 | 16125 | 1.3813 | 0.8733 | | 0.0 | 44.0 | 16500 | 1.4013 | 0.8717 | | 0.0 | 45.0 | 16875 | 1.3939 | 0.875 | | 0.0 | 46.0 | 17250 | 1.3982 | 0.875 | | 0.0 | 47.0 | 17625 | 1.4014 | 0.875 | | 0.0 | 48.0 | 18000 | 1.4060 | 0.8733 | | 0.0 | 49.0 | 18375 | 1.4084 | 0.8733 | | 0.0 | 50.0 | 18750 | 1.4091 | 0.8733 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_0001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4198
- Accuracy: 0.8783

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2762 | 1.0 | 375 | 0.4287 | 0.8433 |
| 0.1833 | 2.0 | 750 | 0.3850 | 0.88 |
| 0.0712 | 3.0 | 1125 | 0.5130 | 0.8533 |
| 0.1056 | 4.0 | 1500 | 0.5359 | 0.8617 |
| 0.0785 | 5.0 | 1875 | 0.5955 | 0.8767 |
| 0.0284 | 6.0 | 2250 | 0.6660 | 0.875 |
| 0.0398 | 7.0 | 2625 | 0.7191 | 0.8517 |
| 0.0358 | 8.0 | 3000 | 0.7890 | 0.855 |
| 0.0148 | 9.0 | 3375 | 0.8497 | 0.88 |
| 0.0842 | 10.0 | 3750 | 0.8652 | 0.8583 |
| 0.0545 | 11.0 | 4125 | 0.9852 | 0.8583 |
| 0.0347 | 12.0 | 4500 | 0.8898 | 0.8733 |
| 0.0481 | 13.0 | 4875 | 0.8728 | 0.8417 |
| 0.0109 | 14.0 | 5250 | 0.7860 | 0.8717 |
| 0.004 | 15.0 | 5625 | 0.9119 | 0.87 |
| 0.0001 | 16.0 | 6000 | 1.0067 | 0.8667 |
| 0.0208 | 17.0 | 6375 | 0.9891 | 0.8633 |
| 0.0022 | 18.0 | 6750 | 0.9452 | 0.8733 |
| 0.0258 | 19.0 | 7125 | 0.9437 | 0.865 |
| 0.0001 | 20.0 | 7500 | 0.9752 | 0.8867 |
| 0.014 | 21.0 | 7875 | 0.9198 | 0.8717 |
| 0.0001 | 22.0 | 8250 | 1.0263 | 0.8817 |
| 0.0007 | 23.0 | 8625 | 1.0253 | 0.875 |
| 0.0298 | 24.0 | 9000 | 0.9487 | 0.8717 |
| 0.0364 | 25.0 | 9375 | 1.0414 | 0.8567 |
| 0.0168 | 26.0 | 9750 | 0.9402 | 0.8667 |
| 0.0267 | 27.0 | 10125 | 1.0294 | 0.87 |
| 0.0004 | 28.0 | 10500 | 1.1186 | 0.8617 |
| 0.0 | 29.0 | 10875 | 1.0351 | 0.8733 |
| 0.0136 | 30.0 | 11250 | 1.0077 | 0.8667 |
| 0.0403 | 31.0 | 11625 | 0.9771 | 0.885 |
| 0.0141 | 32.0 | 12000 | 0.9784 | 0.875 |
| 0.0079 | 33.0 | 12375 | 0.9784 | 0.875 |
| 0.0198 | 34.0 | 12750 | 1.1841 | 0.87 |
| 0.0247 | 35.0 | 13125 | 1.1305 | 0.8767 |
| 0.0348 | 36.0 | 13500 | 1.1191 | 0.8733 |
| 0.0086 | 37.0 | 13875 | 1.0822 | 0.8817 |
| 0.004 | 38.0 | 14250 | 1.1525 | 0.8733 |
| 0.0 | 39.0 | 14625 | 1.3113 | 0.87 |
| 0.0 | 40.0 | 15000 | 1.2648 | 0.8767 |
| 0.0 | 41.0 | 15375 | 1.3219 | 0.88 |
| 0.0 | 42.0 | 15750 | 1.3106 | 0.88 |
| 0.0 | 43.0 | 16125 | 1.3404 | 0.8783 |
| 0.0 | 44.0 | 16500 | 1.3645 | 0.8783 |
| 0.0 | 45.0 | 16875 | 1.3862 | 0.875 |
| 0.0 | 46.0 | 17250 | 1.4020 | 0.8767 |
| 0.0 | 47.0 | 17625 | 1.4061 | 0.8783 |
| 0.0 | 48.0 | 18000 | 1.4149 | 0.8783 |
| 0.0 | 49.0 | 18375 | 1.4183 | 0.8783 |
| 0.0 | 50.0 | 18750 | 1.4198 | 0.8783 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
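Every run above trains with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`. As a rough plain-Python sketch (not the Trainer's actual implementation, which lives in `transformers`' `get_linear_schedule_with_warmup`), that schedule ramps the learning rate up over the first 10% of steps and then decays it linearly to zero:

```python
def linear_warmup_decay_lr(step, total_steps, base_lr, warmup_ratio=0.1):
    """Sketch of a linear schedule with warmup: ramp up for the first
    warmup_ratio of training, then decay linearly to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# The fold-4 run above logs 18750 total steps (375 steps/epoch x 50 epochs)
# at base learning rate 1e-4.
total_steps, base_lr = 18750, 1e-4
print(linear_warmup_decay_lr(0, total_steps, base_lr))      # start of warmup
print(linear_warmup_decay_lr(1875, total_steps, base_lr))   # end of warmup
print(linear_warmup_decay_lr(18750, total_steps, base_lr))  # end of training
```

The peak learning rate is reached exactly at the end of the warmup window (step 1875 here), after which the decay phase begins.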
hkivancoral/smids_5x_deit_small_rms_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_00001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9811
- Accuracy: 0.905

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2316 | 1.0 | 375 | 0.3828 | 0.845 |
| 0.1221 | 2.0 | 750 | 0.3043 | 0.8767 |
| 0.1068 | 3.0 | 1125 | 0.3175 | 0.8983 |
| 0.0662 | 4.0 | 1500 | 0.4134 | 0.91 |
| 0.0386 | 5.0 | 1875 | 0.4725 | 0.9017 |
| 0.0006 | 6.0 | 2250 | 0.5991 | 0.8967 |
| 0.0164 | 7.0 | 2625 | 0.5133 | 0.9083 |
| 0.0227 | 8.0 | 3000 | 0.6114 | 0.905 |
| 0.0539 | 9.0 | 3375 | 0.7854 | 0.8933 |
| 0.0093 | 10.0 | 3750 | 0.6820 | 0.8983 |
| 0.0296 | 11.0 | 4125 | 0.8028 | 0.88 |
| 0.0005 | 12.0 | 4500 | 0.7438 | 0.8983 |
| 0.0103 | 13.0 | 4875 | 0.8815 | 0.885 |
| 0.0 | 14.0 | 5250 | 0.8305 | 0.89 |
| 0.0056 | 15.0 | 5625 | 0.8083 | 0.895 |
| 0.0 | 16.0 | 6000 | 0.7453 | 0.8917 |
| 0.0 | 17.0 | 6375 | 0.7352 | 0.8967 |
| 0.0086 | 18.0 | 6750 | 0.8865 | 0.8883 |
| 0.0 | 19.0 | 7125 | 0.9006 | 0.8867 |
| 0.0007 | 20.0 | 7500 | 0.8620 | 0.8883 |
| 0.0001 | 21.0 | 7875 | 0.8055 | 0.8967 |
| 0.0167 | 22.0 | 8250 | 0.9436 | 0.8917 |
| 0.0 | 23.0 | 8625 | 0.9512 | 0.8883 |
| 0.0 | 24.0 | 9000 | 0.8722 | 0.9017 |
| 0.014 | 25.0 | 9375 | 0.8674 | 0.9 |
| 0.0 | 26.0 | 9750 | 0.8631 | 0.9017 |
| 0.0 | 27.0 | 10125 | 0.8607 | 0.9017 |
| 0.0054 | 28.0 | 10500 | 0.8949 | 0.9017 |
| 0.0 | 29.0 | 10875 | 0.9221 | 0.895 |
| 0.0052 | 30.0 | 11250 | 0.8532 | 0.905 |
| 0.0 | 31.0 | 11625 | 0.8797 | 0.9017 |
| 0.0 | 32.0 | 12000 | 0.8663 | 0.8983 |
| 0.0 | 33.0 | 12375 | 0.9338 | 0.8983 |
| 0.0 | 34.0 | 12750 | 0.9397 | 0.9033 |
| 0.0 | 35.0 | 13125 | 0.9338 | 0.905 |
| 0.0 | 36.0 | 13500 | 0.9706 | 0.9 |
| 0.0 | 37.0 | 13875 | 0.9486 | 0.9 |
| 0.0028 | 38.0 | 14250 | 0.9187 | 0.9033 |
| 0.0 | 39.0 | 14625 | 0.9340 | 0.9067 |
| 0.0 | 40.0 | 15000 | 0.9523 | 0.905 |
| 0.0 | 41.0 | 15375 | 0.9602 | 0.9067 |
| 0.0 | 42.0 | 15750 | 0.9532 | 0.905 |
| 0.0 | 43.0 | 16125 | 0.9551 | 0.9067 |
| 0.0 | 44.0 | 16500 | 0.9611 | 0.905 |
| 0.0 | 45.0 | 16875 | 0.9674 | 0.905 |
| 0.0 | 46.0 | 17250 | 0.9798 | 0.905 |
| 0.0027 | 47.0 | 17625 | 0.9798 | 0.905 |
| 0.0 | 48.0 | 18000 | 0.9814 | 0.905 |
| 0.0 | 49.0 | 18375 | 0.9829 | 0.905 |
| 0.0022 | 50.0 | 18750 | 0.9811 | 0.905 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_0001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8725
- Accuracy: 0.915

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2465 | 1.0 | 375 | 0.3246 | 0.8817 |
| 0.1757 | 2.0 | 750 | 0.3108 | 0.8983 |
| 0.1416 | 3.0 | 1125 | 0.4018 | 0.8917 |
| 0.0805 | 4.0 | 1500 | 0.5614 | 0.8733 |
| 0.0831 | 5.0 | 1875 | 0.4489 | 0.8783 |
| 0.0313 | 6.0 | 2250 | 0.6222 | 0.8833 |
| 0.0541 | 7.0 | 2625 | 0.5461 | 0.9017 |
| 0.0151 | 8.0 | 3000 | 0.5573 | 0.9017 |
| 0.0125 | 9.0 | 3375 | 0.6886 | 0.8817 |
| 0.0415 | 10.0 | 3750 | 0.5097 | 0.8967 |
| 0.0264 | 11.0 | 4125 | 0.6554 | 0.89 |
| 0.0122 | 12.0 | 4500 | 0.5509 | 0.8983 |
| 0.004 | 13.0 | 4875 | 0.6519 | 0.895 |
| 0.0092 | 14.0 | 5250 | 0.8038 | 0.88 |
| 0.0491 | 15.0 | 5625 | 0.5627 | 0.9117 |
| 0.0096 | 16.0 | 6000 | 0.7380 | 0.885 |
| 0.003 | 17.0 | 6375 | 0.6721 | 0.8967 |
| 0.0153 | 18.0 | 6750 | 0.6251 | 0.9017 |
| 0.0001 | 19.0 | 7125 | 0.7496 | 0.8933 |
| 0.0067 | 20.0 | 7500 | 0.7015 | 0.895 |
| 0.0015 | 21.0 | 7875 | 0.7054 | 0.9017 |
| 0.0117 | 22.0 | 8250 | 0.7731 | 0.8933 |
| 0.0015 | 23.0 | 8625 | 0.6632 | 0.9033 |
| 0.0003 | 24.0 | 9000 | 0.7347 | 0.9 |
| 0.0 | 25.0 | 9375 | 0.8344 | 0.8917 |
| 0.0002 | 26.0 | 9750 | 0.7080 | 0.905 |
| 0.0026 | 27.0 | 10125 | 0.8121 | 0.8933 |
| 0.0001 | 28.0 | 10500 | 0.7303 | 0.915 |
| 0.0285 | 29.0 | 10875 | 0.7237 | 0.905 |
| 0.0021 | 30.0 | 11250 | 0.7546 | 0.9 |
| 0.0096 | 31.0 | 11625 | 0.8217 | 0.8933 |
| 0.0 | 32.0 | 12000 | 0.8254 | 0.9017 |
| 0.0 | 33.0 | 12375 | 0.7430 | 0.905 |
| 0.0 | 34.0 | 12750 | 0.7668 | 0.9083 |
| 0.0002 | 35.0 | 13125 | 0.7980 | 0.9117 |
| 0.0018 | 36.0 | 13500 | 0.8507 | 0.9017 |
| 0.0 | 37.0 | 13875 | 0.7364 | 0.9133 |
| 0.0038 | 38.0 | 14250 | 0.7573 | 0.9133 |
| 0.0 | 39.0 | 14625 | 0.7786 | 0.9083 |
| 0.0 | 40.0 | 15000 | 0.7610 | 0.915 |
| 0.0 | 41.0 | 15375 | 0.7902 | 0.9117 |
| 0.0 | 42.0 | 15750 | 0.7539 | 0.9133 |
| 0.0 | 43.0 | 16125 | 0.8516 | 0.9133 |
| 0.0 | 44.0 | 16500 | 0.8335 | 0.9183 |
| 0.0 | 45.0 | 16875 | 0.8847 | 0.905 |
| 0.0 | 46.0 | 17250 | 0.9773 | 0.8967 |
| 0.0028 | 47.0 | 17625 | 0.8669 | 0.915 |
| 0.0 | 48.0 | 18000 | 0.8681 | 0.9133 |
| 0.0 | 49.0 | 18375 | 0.8719 | 0.9133 |
| 0.0019 | 50.0 | 18750 | 0.8725 | 0.915 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_sgd_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_sgd_0001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5286
- Accuracy: 0.7913

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0382 | 1.0 | 376 | 1.0714 | 0.4090 |
| 1.0394 | 2.0 | 752 | 1.0356 | 0.4491 |
| 0.9541 | 3.0 | 1128 | 1.0000 | 0.5042 |
| 0.928 | 4.0 | 1504 | 0.9637 | 0.5559 |
| 0.8813 | 5.0 | 1880 | 0.9285 | 0.5943 |
| 0.8344 | 6.0 | 2256 | 0.8967 | 0.6227 |
| 0.7873 | 7.0 | 2632 | 0.8681 | 0.6361 |
| 0.8127 | 8.0 | 3008 | 0.8414 | 0.6544 |
| 0.8077 | 9.0 | 3384 | 0.8166 | 0.6628 |
| 0.7559 | 10.0 | 3760 | 0.7932 | 0.6811 |
| 0.7293 | 11.0 | 4136 | 0.7713 | 0.6962 |
| 0.7184 | 12.0 | 4512 | 0.7510 | 0.7028 |
| 0.6866 | 13.0 | 4888 | 0.7320 | 0.7078 |
| 0.6588 | 14.0 | 5264 | 0.7145 | 0.7195 |
| 0.6425 | 15.0 | 5640 | 0.6984 | 0.7245 |
| 0.6378 | 16.0 | 6016 | 0.6836 | 0.7312 |
| 0.5876 | 17.0 | 6392 | 0.6699 | 0.7396 |
| 0.6379 | 18.0 | 6768 | 0.6573 | 0.7429 |
| 0.6063 | 19.0 | 7144 | 0.6456 | 0.7479 |
| 0.5557 | 20.0 | 7520 | 0.6350 | 0.7496 |
| 0.5709 | 21.0 | 7896 | 0.6253 | 0.7513 |
| 0.5404 | 22.0 | 8272 | 0.6166 | 0.7563 |
| 0.5599 | 23.0 | 8648 | 0.6082 | 0.7529 |
| 0.5567 | 24.0 | 9024 | 0.6008 | 0.7613 |
| 0.5445 | 25.0 | 9400 | 0.5938 | 0.7646 |
| 0.5273 | 26.0 | 9776 | 0.5874 | 0.7629 |
| 0.5187 | 27.0 | 10152 | 0.5814 | 0.7613 |
| 0.4686 | 28.0 | 10528 | 0.5760 | 0.7629 |
| 0.502 | 29.0 | 10904 | 0.5710 | 0.7629 |
| 0.5086 | 30.0 | 11280 | 0.5663 | 0.7663 |
| 0.5383 | 31.0 | 11656 | 0.5621 | 0.7679 |
| 0.5306 | 32.0 | 12032 | 0.5581 | 0.7696 |
| 0.4719 | 33.0 | 12408 | 0.5545 | 0.7713 |
| 0.4733 | 34.0 | 12784 | 0.5512 | 0.7763 |
| 0.4916 | 35.0 | 13160 | 0.5482 | 0.7796 |
| 0.4659 | 36.0 | 13536 | 0.5454 | 0.7796 |
| 0.4447 | 37.0 | 13912 | 0.5429 | 0.7830 |
| 0.5196 | 38.0 | 14288 | 0.5406 | 0.7830 |
| 0.4685 | 39.0 | 14664 | 0.5386 | 0.7830 |
| 0.4526 | 40.0 | 15040 | 0.5367 | 0.7830 |
| 0.4896 | 41.0 | 15416 | 0.5350 | 0.7863 |
| 0.4446 | 42.0 | 15792 | 0.5336 | 0.7863 |
| 0.4328 | 43.0 | 16168 | 0.5323 | 0.7863 |
| 0.5156 | 44.0 | 16544 | 0.5312 | 0.7880 |
| 0.4252 | 45.0 | 16920 | 0.5303 | 0.7896 |
| 0.4576 | 46.0 | 17296 | 0.5296 | 0.7896 |
| 0.4261 | 47.0 | 17672 | 0.5291 | 0.7913 |
| 0.4841 | 48.0 | 18048 | 0.5288 | 0.7913 |
| 0.4563 | 49.0 | 18424 | 0.5286 | 0.7913 |
| 0.4361 | 50.0 | 18800 | 0.5286 | 0.7913 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
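All of these checkpoints share the same three-way classification head over the labels `abnormal_sperm`, `non-sperm`, and `normal_sperm`. As a minimal, dependency-free sketch of how raw logits from such a head map to a predicted label (in practice the models would be loaded via `transformers`, which is omitted here; the example logits are made up):

```python
import math

LABELS = ["abnormal_sperm", "non-sperm", "normal_sperm"]

def predict_label(logits):
    """Softmax over the three class logits, then return the argmax label
    and its probability."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

label, prob = predict_label([0.2, -1.3, 2.4])  # hypothetical logits
print(label, round(prob, 3))
```

The reported accuracies in each card are simply the fraction of evaluation images whose argmax prediction matches the ground-truth label.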
hkivancoral/smids_5x_deit_small_rms_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5438
- Accuracy: 0.7663

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9114 | 1.0 | 376 | 0.8706 | 0.5559 |
| 0.8509 | 2.0 | 752 | 1.2414 | 0.3456 |
| 0.8099 | 3.0 | 1128 | 1.0576 | 0.4007 |
| 0.8085 | 4.0 | 1504 | 0.8246 | 0.5442 |
| 0.8886 | 5.0 | 1880 | 0.8245 | 0.5376 |
| 0.7819 | 6.0 | 2256 | 0.7875 | 0.5977 |
| 0.7498 | 7.0 | 2632 | 0.8002 | 0.6344 |
| 0.7083 | 8.0 | 3008 | 0.8113 | 0.6027 |
| 0.7609 | 9.0 | 3384 | 0.7440 | 0.6594 |
| 0.7953 | 10.0 | 3760 | 0.7639 | 0.5993 |
| 0.694 | 11.0 | 4136 | 0.7065 | 0.6594 |
| 0.7315 | 12.0 | 4512 | 0.7188 | 0.6277 |
| 0.7192 | 13.0 | 4888 | 0.6863 | 0.7229 |
| 0.6504 | 14.0 | 5264 | 0.6661 | 0.6828 |
| 0.6524 | 15.0 | 5640 | 0.6777 | 0.6661 |
| 0.5701 | 16.0 | 6016 | 0.7272 | 0.6561 |
| 0.5543 | 17.0 | 6392 | 0.7125 | 0.6878 |
| 0.6439 | 18.0 | 6768 | 0.6430 | 0.7028 |
| 0.648 | 19.0 | 7144 | 0.6863 | 0.6928 |
| 0.5899 | 20.0 | 7520 | 0.6226 | 0.7162 |
| 0.6393 | 21.0 | 7896 | 0.6018 | 0.7312 |
| 0.5884 | 22.0 | 8272 | 0.5610 | 0.7412 |
| 0.5288 | 23.0 | 8648 | 0.5975 | 0.7379 |
| 0.5965 | 24.0 | 9024 | 0.6473 | 0.7028 |
| 0.58 | 25.0 | 9400 | 0.5765 | 0.7396 |
| 0.5899 | 26.0 | 9776 | 0.6331 | 0.7245 |
| 0.5507 | 27.0 | 10152 | 0.5858 | 0.7396 |
| 0.5002 | 28.0 | 10528 | 0.5674 | 0.7396 |
| 0.5229 | 29.0 | 10904 | 0.5711 | 0.7629 |
| 0.5096 | 30.0 | 11280 | 0.5570 | 0.7312 |
| 0.5311 | 31.0 | 11656 | 0.5601 | 0.7396 |
| 0.5742 | 32.0 | 12032 | 0.6065 | 0.7346 |
| 0.4585 | 33.0 | 12408 | 0.5565 | 0.7462 |
| 0.5294 | 34.0 | 12784 | 0.5555 | 0.7446 |
| 0.5171 | 35.0 | 13160 | 0.5723 | 0.7462 |
| 0.4899 | 36.0 | 13536 | 0.5748 | 0.7279 |
| 0.4582 | 37.0 | 13912 | 0.5789 | 0.7396 |
| 0.5149 | 38.0 | 14288 | 0.5146 | 0.7679 |
| 0.4968 | 39.0 | 14664 | 0.6020 | 0.7613 |
| 0.5645 | 40.0 | 15040 | 0.5459 | 0.7546 |
| 0.4741 | 41.0 | 15416 | 0.5562 | 0.7479 |
| 0.4423 | 42.0 | 15792 | 0.5487 | 0.7412 |
| 0.4186 | 43.0 | 16168 | 0.5329 | 0.7479 |
| 0.4763 | 44.0 | 16544 | 0.5469 | 0.7462 |
| 0.4775 | 45.0 | 16920 | 0.5538 | 0.7496 |
| 0.4053 | 46.0 | 17296 | 0.5298 | 0.7613 |
| 0.429 | 47.0 | 17672 | 0.5338 | 0.7663 |
| 0.4194 | 48.0 | 18048 | 0.5631 | 0.7496 |
| 0.3965 | 49.0 | 18424 | 0.5407 | 0.7629 |
| 0.356 | 50.0 | 18800 | 0.5438 | 0.7663 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_sgd_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_sgd_00001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9977
- Accuracy: 0.5159

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0592 | 1.0 | 376 | 1.0737 | 0.4391 |
| 1.0782 | 2.0 | 752 | 1.0706 | 0.4374 |
| 1.0408 | 3.0 | 1128 | 1.0676 | 0.4391 |
| 1.0826 | 4.0 | 1504 | 1.0646 | 0.4441 |
| 1.0586 | 5.0 | 1880 | 1.0616 | 0.4407 |
| 1.0333 | 6.0 | 2256 | 1.0588 | 0.4424 |
| 1.0668 | 7.0 | 2632 | 1.0559 | 0.4424 |
| 1.0617 | 8.0 | 3008 | 1.0531 | 0.4441 |
| 1.0464 | 9.0 | 3384 | 1.0504 | 0.4457 |
| 1.0296 | 10.0 | 3760 | 1.0477 | 0.4474 |
| 1.0219 | 11.0 | 4136 | 1.0452 | 0.4524 |
| 1.036 | 12.0 | 4512 | 1.0426 | 0.4558 |
| 1.0086 | 13.0 | 4888 | 1.0402 | 0.4591 |
| 1.0374 | 14.0 | 5264 | 1.0378 | 0.4591 |
| 1.0308 | 15.0 | 5640 | 1.0354 | 0.4624 |
| 1.0138 | 16.0 | 6016 | 1.0332 | 0.4641 |
| 1.039 | 17.0 | 6392 | 1.0310 | 0.4674 |
| 1.0251 | 18.0 | 6768 | 1.0289 | 0.4691 |
| 1.0132 | 19.0 | 7144 | 1.0268 | 0.4674 |
| 1.0078 | 20.0 | 7520 | 1.0248 | 0.4674 |
| 1.0073 | 21.0 | 7896 | 1.0229 | 0.4741 |
| 0.9973 | 22.0 | 8272 | 1.0210 | 0.4775 |
| 0.9979 | 23.0 | 8648 | 1.0192 | 0.4791 |
| 0.9943 | 24.0 | 9024 | 1.0175 | 0.4791 |
| 0.9653 | 25.0 | 9400 | 1.0159 | 0.4841 |
| 0.9982 | 26.0 | 9776 | 1.0143 | 0.4841 |
| 1.0041 | 27.0 | 10152 | 1.0128 | 0.4875 |
| 1.0054 | 28.0 | 10528 | 1.0114 | 0.4908 |
| 0.9643 | 29.0 | 10904 | 1.0101 | 0.4925 |
| 0.9735 | 30.0 | 11280 | 1.0088 | 0.4958 |
| 1.0 | 31.0 | 11656 | 1.0076 | 0.4958 |
| 0.998 | 32.0 | 12032 | 1.0064 | 0.4975 |
| 0.9763 | 33.0 | 12408 | 1.0054 | 0.4975 |
| 0.9704 | 34.0 | 12784 | 1.0044 | 0.4992 |
| 0.9948 | 35.0 | 13160 | 1.0035 | 0.5008 |
| 0.9708 | 36.0 | 13536 | 1.0026 | 0.5008 |
| 0.9711 | 37.0 | 13912 | 1.0019 | 0.5025 |
| 0.999 | 38.0 | 14288 | 1.0012 | 0.5042 |
| 0.9534 | 39.0 | 14664 | 1.0005 | 0.5042 |
| 0.9776 | 40.0 | 15040 | 1.0000 | 0.5058 |
| 1.0022 | 41.0 | 15416 | 0.9995 | 0.5058 |
| 0.9618 | 42.0 | 15792 | 0.9991 | 0.5058 |
| 0.9978 | 43.0 | 16168 | 0.9987 | 0.5109 |
| 0.9845 | 44.0 | 16544 | 0.9984 | 0.5142 |
| 0.9783 | 45.0 | 16920 | 0.9982 | 0.5159 |
| 0.99 | 46.0 | 17296 | 0.9980 | 0.5159 |
| 0.9708 | 47.0 | 17672 | 0.9979 | 0.5159 |
| 1.0004 | 48.0 | 18048 | 0.9978 | 0.5159 |
| 0.9871 | 49.0 | 18424 | 0.9977 | 0.5159 |
| 0.9947 | 50.0 | 18800 | 0.9977 | 0.5159 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5587
- Accuracy: 0.7571

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1114 | 1.0 | 375 | 1.0890 | 0.3910 |
| 0.8594 | 2.0 | 750 | 0.8338 | 0.5258 |
| 0.7942 | 3.0 | 1125 | 0.8187 | 0.5857 |
| 0.8295 | 4.0 | 1500 | 0.8518 | 0.5441 |
| 0.7679 | 5.0 | 1875 | 0.8037 | 0.5574 |
| 0.78 | 6.0 | 2250 | 0.8086 | 0.5641 |
| 0.8051 | 7.0 | 2625 | 0.8475 | 0.5707 |
| 0.7931 | 8.0 | 3000 | 0.7986 | 0.5740 |
| 0.787 | 9.0 | 3375 | 0.8111 | 0.6273 |
| 0.7865 | 10.0 | 3750 | 0.7854 | 0.5874 |
| 0.7545 | 11.0 | 4125 | 0.7499 | 0.6156 |
| 0.8266 | 12.0 | 4500 | 0.7256 | 0.6373 |
| 0.699 | 13.0 | 4875 | 0.7372 | 0.6456 |
| 0.7725 | 14.0 | 5250 | 0.7195 | 0.6439 |
| 0.7834 | 15.0 | 5625 | 0.7235 | 0.6489 |
| 0.6954 | 16.0 | 6000 | 0.7061 | 0.6589 |
| 0.6911 | 17.0 | 6375 | 0.8229 | 0.5973 |
| 0.667 | 18.0 | 6750 | 0.6746 | 0.6772 |
| 0.6889 | 19.0 | 7125 | 0.6831 | 0.6955 |
| 0.6628 | 20.0 | 7500 | 0.6921 | 0.6922 |
| 0.7228 | 21.0 | 7875 | 0.6764 | 0.6656 |
| 0.7022 | 22.0 | 8250 | 0.6797 | 0.6905 |
| 0.6549 | 23.0 | 8625 | 0.6709 | 0.6772 |
| 0.7183 | 24.0 | 9000 | 0.6429 | 0.6955 |
| 0.6612 | 25.0 | 9375 | 0.6503 | 0.6988 |
| 0.6901 | 26.0 | 9750 | 0.7018 | 0.6739 |
| 0.7038 | 27.0 | 10125 | 0.6168 | 0.7271 |
| 0.6364 | 28.0 | 10500 | 0.6219 | 0.7121 |
| 0.6477 | 29.0 | 10875 | 0.6546 | 0.7188 |
| 0.5753 | 30.0 | 11250 | 0.6252 | 0.7221 |
| 0.6932 | 31.0 | 11625 | 0.6174 | 0.7271 |
| 0.6245 | 32.0 | 12000 | 0.6281 | 0.7255 |
| 0.6083 | 33.0 | 12375 | 0.6211 | 0.7105 |
| 0.6277 | 34.0 | 12750 | 0.5911 | 0.7471 |
| 0.596 | 35.0 | 13125 | 0.5943 | 0.7304 |
| 0.5539 | 36.0 | 13500 | 0.5798 | 0.7121 |
| 0.6231 | 37.0 | 13875 | 0.5842 | 0.7321 |
| 0.5692 | 38.0 | 14250 | 0.5897 | 0.7288 |
| 0.5587 | 39.0 | 14625 | 0.6220 | 0.7155 |
| 0.5891 | 40.0 | 15000 | 0.6063 | 0.7338 |
| 0.561 | 41.0 | 15375 | 0.5930 | 0.7338 |
| 0.5901 | 42.0 | 15750 | 0.5990 | 0.7288 |
| 0.5194 | 43.0 | 16125 | 0.5632 | 0.7488 |
| 0.5311 | 44.0 | 16500 | 0.5715 | 0.7488 |
| 0.5414 | 45.0 | 16875 | 0.5640 | 0.7537 |
| 0.5291 | 46.0 | 17250 | 0.5674 | 0.7504 |
| 0.5724 | 47.0 | 17625 | 0.5765 | 0.7454 |
| 0.4849 | 48.0 | 18000 | 0.5625 | 0.7438 |
| 0.5463 | 49.0 | 18375 | 0.5558 | 0.7571 |
| 0.4844 | 50.0 | 18750 | 0.5587 | 0.7571 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_sgd_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_sgd_00001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9907
- Accuracy: 0.5108

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0998 | 1.0 | 375 | 1.0701 | 0.4260 |
| 1.0872 | 2.0 | 750 | 1.0668 | 0.4276 |
| 1.0721 | 3.0 | 1125 | 1.0636 | 0.4343 |
| 1.068 | 4.0 | 1500 | 1.0604 | 0.4343 |
| 1.0595 | 5.0 | 1875 | 1.0573 | 0.4376 |
| 1.0368 | 6.0 | 2250 | 1.0542 | 0.4409 |
| 1.0324 | 7.0 | 2625 | 1.0513 | 0.4493 |
| 1.0454 | 8.0 | 3000 | 1.0484 | 0.4526 |
| 1.0424 | 9.0 | 3375 | 1.0456 | 0.4509 |
| 1.0256 | 10.0 | 3750 | 1.0428 | 0.4559 |
| 1.0778 | 11.0 | 4125 | 1.0401 | 0.4559 |
| 1.0323 | 12.0 | 4500 | 1.0375 | 0.4592 |
| 1.0063 | 13.0 | 4875 | 1.0349 | 0.4626 |
| 1.0266 | 14.0 | 5250 | 1.0324 | 0.4642 |
| 1.0153 | 15.0 | 5625 | 1.0299 | 0.4659 |
| 1.0 | 16.0 | 6000 | 1.0276 | 0.4692 |
| 1.0251 | 17.0 | 6375 | 1.0253 | 0.4692 |
| 1.0305 | 18.0 | 6750 | 1.0231 | 0.4725 |
| 1.0097 | 19.0 | 7125 | 1.0209 | 0.4725 |
| 1.02 | 20.0 | 7500 | 1.0189 | 0.4725 |
| 0.9981 | 21.0 | 7875 | 1.0168 | 0.4775 |
| 0.9952 | 22.0 | 8250 | 1.0149 | 0.4775 |
| 1.007 | 23.0 | 8625 | 1.0131 | 0.4859 |
| 1.0141 | 24.0 | 9000 | 1.0113 | 0.4875 |
| 1.0041 | 25.0 | 9375 | 1.0095 | 0.4875 |
| 1.0032 | 26.0 | 9750 | 1.0079 | 0.4875 |
| 1.0112 | 27.0 | 10125 | 1.0064 | 0.4875 |
| 0.9862 | 28.0 | 10500 | 1.0049 | 0.4892 |
| 0.9745 | 29.0 | 10875 | 1.0035 | 0.4892 |
| 0.9881 | 30.0 | 11250 | 1.0022 | 0.4925 |
| 0.9936 | 31.0 | 11625 | 1.0009 | 0.4908 |
| 0.9959 | 32.0 | 12000 | 0.9998 | 0.4892 |
| 0.9804 | 33.0 | 12375 | 0.9987 | 0.4875 |
| 0.9795 | 34.0 | 12750 | 0.9977 | 0.4958 |
| 0.9724 | 35.0 | 13125 | 0.9967 | 0.4975 |
| 0.9957 | 36.0 | 13500 | 0.9958 | 0.4992 |
| 0.9651 | 37.0 | 13875 | 0.9950 | 0.5025 |
| 0.9869 | 38.0 | 14250 | 0.9943 | 0.5042 |
| 0.9664 | 39.0 | 14625 | 0.9937 | 0.5075 |
| 0.9769 | 40.0 | 15000 | 0.9931 | 0.5092 |
| 0.9473 | 41.0 | 15375 | 0.9926 | 0.5108 |
| 0.9911 | 42.0 | 15750 | 0.9921 | 0.5108 |
| 0.9625 | 43.0 | 16125 | 0.9917 | 0.5108 |
| 0.9689 | 44.0 | 16500 | 0.9914 | 0.5108 |
| 0.9736 | 45.0 | 16875 | 0.9912 | 0.5108 |
| 0.9789 | 46.0 | 17250 | 0.9910 | 0.5108 |
| 0.9732 | 47.0 | 17625 | 0.9908 | 0.5108 |
| 0.9789 | 48.0 | 18000 | 0.9907 | 0.5108 |
| 0.987 | 49.0 | 18375 | 0.9907 | 0.5108 |
| 1.0128 | 50.0 | 18750 | 0.9907 | 0.5108 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5638
- Accuracy: 0.7967

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8408 | 1.0 | 375 | 0.8491 | 0.5383 |
| 0.836 | 2.0 | 750 | 0.8820 | 0.4983 |
| 0.8155 | 3.0 | 1125 | 0.8200 | 0.5917 |
| 0.829 | 4.0 | 1500 | 0.7980 | 0.5933 |
| 0.8032 | 5.0 | 1875 | 0.8027 | 0.5967 |
| 0.8095 | 6.0 | 2250 | 0.7557 | 0.635 |
| 0.7468 | 7.0 | 2625 | 0.7635 | 0.65 |
| 0.7133 | 8.0 | 3000 | 0.7025 | 0.6667 |
| 0.6424 | 9.0 | 3375 | 0.8608 | 0.6467 |
| 0.6511 | 10.0 | 3750 | 0.6834 | 0.6817 |
| 0.6928 | 11.0 | 4125 | 0.7883 | 0.6183 |
| 0.6757 | 12.0 | 4500 | 0.7380 | 0.635 |
| 0.6473 | 13.0 | 4875 | 0.6942 | 0.6633 |
| 0.5828 | 14.0 | 5250 | 0.6863 | 0.7117 |
| 0.5787 | 15.0 | 5625 | 0.6877 | 0.6933 |
| 0.5711 | 16.0 | 6000 | 0.7012 | 0.685 |
| 0.6198 | 17.0 | 6375 | 0.6000 | 0.7183 |
| 0.6331 | 18.0 | 6750 | 0.6316 | 0.7217 |
| 0.5457 | 19.0 | 7125 | 0.6381 | 0.7333 |
| 0.585 | 20.0 | 7500 | 0.6083 | 0.7367 |
| 0.4779 | 21.0 | 7875 | 0.6292 | 0.7 |
| 0.4504 | 22.0 | 8250 | 0.5995 | 0.7533 |
| 0.513 | 23.0 | 8625 | 0.6005 | 0.735 |
| 0.5931 | 24.0 | 9000 | 0.5450 | 0.76 |
| 0.4836 | 25.0 | 9375 | 0.5749 | 0.7517 |
| 0.4981 | 26.0 | 9750 | 0.5577 | 0.77 |
| 0.5035 | 27.0 | 10125 | 0.5452 | 0.7583 |
| 0.4996 | 28.0 | 10500 | 0.5583 | 0.765 |
| 0.4767 | 29.0 | 10875 | 0.5589 | 0.765 |
| 0.4202 | 30.0 | 11250 | 0.5291 | 0.78 |
| 0.4307 | 31.0 | 11625 | 0.5250 | 0.7967 |
| 0.5107 | 32.0 | 12000 | 0.5223 | 0.7917 |
| 0.4923 | 33.0 | 12375 | 0.5101 | 0.7917 |
| 0.4996 | 34.0 | 12750 | 0.5329 | 0.79 |
| 0.3762 | 35.0 | 13125 | 0.5542 | 0.79 |
| 0.4379 | 36.0 | 13500 | 0.5598 | 0.7883 |
| 0.4018 | 37.0 | 13875 | 0.5521 | 0.7983 |
| 0.4033 | 38.0 | 14250 | 0.5506 | 0.7767 |
| 0.4228 | 39.0 | 14625 | 0.5150 | 0.7917 |
| 0.366 | 40.0 | 15000 | 0.5580 | 0.8017 |
| 0.3549 | 41.0 | 15375 | 0.5360 | 0.8067 |
| 0.3677 | 42.0 | 15750 | 0.5521 | 0.8 |
| 0.4255 | 43.0 | 16125 | 0.5412 | 0.8033 |
| 0.355 | 44.0 | 16500 | 0.5640 | 0.7717 |
| 0.3586 | 45.0 | 16875 | 0.5441 | 0.7783 |
| 0.3404 | 46.0 | 17250 | 0.5592 | 0.7867 |
| 0.3867 | 47.0 | 17625 | 0.5593 | 0.8 |
| 0.3586 | 48.0 | 18000 | 0.5571 | 0.8067 |
| 0.2696 | 49.0 | 18375 | 0.5541 | 0.8 |
| 0.3761 | 50.0 | 18750 | 0.5638 | 0.7967 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_sgd_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_sgd_00001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0078
- Accuracy: 0.4883

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0625 | 1.0 | 375 | 1.0854 | 0.38 |
| 1.055 | 2.0 | 750 | 1.0820 | 0.3817 |
| 1.0441 | 3.0 | 1125 | 1.0787 | 0.3817 |
| 1.0543 | 4.0 | 1500 | 1.0755 | 0.3833 |
| 1.0717 | 5.0 | 1875 | 1.0724 | 0.3833 |
| 1.0405 | 6.0 | 2250 | 1.0694 | 0.3833 |
| 1.0573 | 7.0 | 2625 | 1.0664 | 0.3867 |
| 1.052 | 8.0 | 3000 | 1.0635 | 0.3933 |
| 1.0402 | 9.0 | 3375 | 1.0606 | 0.395 |
| 1.026 | 10.0 | 3750 | 1.0579 | 0.3967 |
| 1.0363 | 11.0 | 4125 | 1.0552 | 0.4017 |
| 1.044 | 12.0 | 4500 | 1.0526 | 0.4033 |
| 1.0227 | 13.0 | 4875 | 1.0501 | 0.4117 |
| 1.0237 | 14.0 | 5250 | 1.0477 | 0.4133 |
| 1.0137 | 15.0 | 5625 | 1.0453 | 0.4183 |
| 1.005 | 16.0 | 6000 | 1.0431 | 0.4167 |
| 1.0298 | 17.0 | 6375 | 1.0409 | 0.4167 |
| 1.0209 | 18.0 | 6750 | 1.0387 | 0.4183 |
| 1.0296 | 19.0 | 7125 | 1.0366 | 0.425 |
| 1.0081 | 20.0 | 7500 | 1.0346 | 0.4283 |
| 0.9849 | 21.0 | 7875 | 1.0327 | 0.4317 |
| 1.0033 | 22.0 | 8250 | 1.0308 | 0.44 |
| 1.0003 | 23.0 | 8625 | 1.0290 | 0.4417 |
| 1.0236 | 24.0 | 9000 | 1.0274 | 0.445 |
| 0.9768 | 25.0 | 9375 | 1.0257 | 0.4533 |
| 0.9963 | 26.0 | 9750 | 1.0242 | 0.4567 |
| 0.9973 | 27.0 | 10125 | 1.0227 | 0.46 |
| 1.025 | 28.0 | 10500 | 1.0213 | 0.4617 |
| 0.9786 | 29.0 | 10875 | 1.0199 | 0.465 |
| 1.0006 | 30.0 | 11250 | 1.0187 | 0.4667 |
| 1.0183 | 31.0 | 11625 | 1.0175 | 0.47 |
| 0.9871 | 32.0 | 12000 | 1.0164 | 0.4733 |
| 0.9751 | 33.0 | 12375 | 1.0154 | 0.4733 |
| 0.9558 | 34.0 | 12750 | 1.0144 | 0.475 |
| 0.9521 | 35.0 | 13125 | 1.0135 | 0.475 |
| 0.975 | 36.0 | 13500 | 1.0127 | 0.475 |
| 0.9912 | 37.0 | 13875 | 1.0119 | 0.4783 |
| 0.9818 | 38.0 | 14250 | 1.0112 | 0.48 |
| 0.9973 | 39.0 | 14625 | 1.0106 | 0.4817 |
| 0.9737 | 40.0 | 15000 | 1.0101 | 0.4833 |
| 0.9571 | 41.0 | 15375 | 1.0096 | 0.4833 |
| 0.9497 | 42.0 | 15750 | 1.0092 | 0.4833 |
| 0.9898 | 43.0 | 16125 | 1.0088 | 0.485 |
| 0.9733 | 44.0 | 16500 | 1.0085 | 0.485 |
| 0.9695 | 45.0 | 16875 | 1.0083 | 0.4833 |
| 0.9603 | 46.0 | 17250 | 1.0081 | 0.4867 |
| 0.9924 | 47.0 | 17625 | 1.0079 | 0.4867 |
| 0.9781 | 48.0 | 18000 | 1.0079 | 0.4867 |
| 1.0064 | 49.0 | 18375 | 1.0078 | 0.4883 |
| 0.9488 | 50.0 | 18750 | 1.0078 | 0.4883 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5824
- Accuracy: 0.78

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9095        | 1.0   | 375   | 0.8145          | 0.53     |
| 0.8394        | 2.0   | 750   | 0.8071          | 0.565    |
| 0.7959        | 3.0   | 1125  | 0.7903          | 0.6133   |
| 0.7869        | 4.0   | 1500  | 0.7586          | 0.6367   |
| 0.7989        | 5.0   | 1875  | 0.7552          | 0.6      |
| 0.8124        | 6.0   | 2250  | 0.7289          | 0.66     |
| 0.7708        | 7.0   | 2625  | 0.7042          | 0.6733   |
| 0.7586        | 8.0   | 3000  | 0.7504          | 0.6583   |
| 0.6986        | 9.0   | 3375  | 0.6527          | 0.6983   |
| 0.6979        | 10.0  | 3750  | 0.6544          | 0.7017   |
| 0.682         | 11.0  | 4125  | 0.6621          | 0.7117   |
| 0.6815        | 12.0  | 4500  | 0.6293          | 0.7067   |
| 0.6311        | 13.0  | 4875  | 0.6466          | 0.7033   |
| 0.743         | 14.0  | 5250  | 0.5967          | 0.7383   |
| 0.6884        | 15.0  | 5625  | 0.5874          | 0.7533   |
| 0.6214        | 16.0  | 6000  | 0.5678          | 0.7567   |
| 0.6379        | 17.0  | 6375  | 0.6145          | 0.7267   |
| 0.5615        | 18.0  | 6750  | 0.5793          | 0.7417   |
| 0.5825        | 19.0  | 7125  | 0.5647          | 0.76     |
| 0.5806        | 20.0  | 7500  | 0.5298          | 0.7617   |
| 0.5732        | 21.0  | 7875  | 0.6497          | 0.7117   |
| 0.4981        | 22.0  | 8250  | 0.6229          | 0.7283   |
| 0.5878        | 23.0  | 8625  | 0.5476          | 0.77     |
| 0.5732        | 24.0  | 9000  | 0.5431          | 0.7783   |
| 0.5633        | 25.0  | 9375  | 0.5734          | 0.7617   |
| 0.5704        | 26.0  | 9750  | 0.5553          | 0.7683   |
| 0.537         | 27.0  | 10125 | 0.5504          | 0.7733   |
| 0.4571        | 28.0  | 10500 | 0.5331          | 0.7783   |
| 0.5264        | 29.0  | 10875 | 0.5680          | 0.7633   |
| 0.6141        | 30.0  | 11250 | 0.5510          | 0.765    |
| 0.5469        | 31.0  | 11625 | 0.5500          | 0.7933   |
| 0.4915        | 32.0  | 12000 | 0.5001          | 0.785    |
| 0.5227        | 33.0  | 12375 | 0.5958          | 0.7783   |
| 0.4961        | 34.0  | 12750 | 0.5665          | 0.78     |
| 0.4306        | 35.0  | 13125 | 0.5345          | 0.7683   |
| 0.461         | 36.0  | 13500 | 0.5456          | 0.7683   |
| 0.5254        | 37.0  | 13875 | 0.5228          | 0.78     |
| 0.4633        | 38.0  | 14250 | 0.5026          | 0.7917   |
| 0.4546        | 39.0  | 14625 | 0.5577          | 0.7633   |
| 0.4842        | 40.0  | 15000 | 0.5245          | 0.78     |
| 0.4453        | 41.0  | 15375 | 0.5350          | 0.785    |
| 0.3943        | 42.0  | 15750 | 0.5494          | 0.7867   |
| 0.4031        | 43.0  | 16125 | 0.5697          | 0.7833   |
| 0.3729        | 44.0  | 16500 | 0.5326          | 0.7933   |
| 0.3744        | 45.0  | 16875 | 0.5371          | 0.7817   |
| 0.4535        | 46.0  | 17250 | 0.5557          | 0.7817   |
| 0.4267        | 47.0  | 17625 | 0.5568          | 0.7767   |
| 0.372         | 48.0  | 18000 | 0.5642          | 0.77     |
| 0.3734        | 49.0  | 18375 | 0.5737          | 0.785    |
| 0.4125        | 50.0  | 18750 | 0.5824          | 0.78     |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_sgd_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_sgd_00001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9877
- Accuracy: 0.5167

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0889        | 1.0   | 375   | 1.0662          | 0.4217   |
| 1.0663        | 2.0   | 750   | 1.0632          | 0.4233   |
| 1.047         | 3.0   | 1125  | 1.0602          | 0.43     |
| 1.0589        | 4.0   | 1500  | 1.0573          | 0.43     |
| 1.0463        | 5.0   | 1875  | 1.0544          | 0.4317   |
| 1.0388        | 6.0   | 2250  | 1.0516          | 0.4333   |
| 1.0109        | 7.0   | 2625  | 1.0487          | 0.435    |
| 1.0394        | 8.0   | 3000  | 1.0459          | 0.4383   |
| 1.0347        | 9.0   | 3375  | 1.0431          | 0.4417   |
| 1.0355        | 10.0  | 3750  | 1.0404          | 0.4467   |
| 1.059         | 11.0  | 4125  | 1.0377          | 0.4467   |
| 1.0235        | 12.0  | 4500  | 1.0352          | 0.445    |
| 1.0136        | 13.0  | 4875  | 1.0326          | 0.4467   |
| 1.0313        | 14.0  | 5250  | 1.0301          | 0.45     |
| 1.0046        | 15.0  | 5625  | 1.0277          | 0.4567   |
| 1.0138        | 16.0  | 6000  | 1.0253          | 0.4633   |
| 1.0055        | 17.0  | 6375  | 1.0230          | 0.465    |
| 0.998         | 18.0  | 6750  | 1.0207          | 0.4667   |
| 1.0178        | 19.0  | 7125  | 1.0186          | 0.4667   |
| 1.019         | 20.0  | 7500  | 1.0165          | 0.4717   |
| 0.9884        | 21.0  | 7875  | 1.0145          | 0.4783   |
| 1.0226        | 22.0  | 8250  | 1.0125          | 0.48     |
| 1.0239        | 23.0  | 8625  | 1.0106          | 0.4833   |
| 1.0151        | 24.0  | 9000  | 1.0088          | 0.49     |
| 0.997         | 25.0  | 9375  | 1.0071          | 0.49     |
| 0.9698        | 26.0  | 9750  | 1.0054          | 0.4917   |
| 0.958         | 27.0  | 10125 | 1.0038          | 0.495    |
| 1.0132        | 28.0  | 10500 | 1.0023          | 0.4933   |
| 0.9673        | 29.0  | 10875 | 1.0008          | 0.4983   |
| 0.9986        | 30.0  | 11250 | 0.9995          | 0.5      |
| 0.9881        | 31.0  | 11625 | 0.9982          | 0.505    |
| 1.0083        | 32.0  | 12000 | 0.9970          | 0.505    |
| 0.9851        | 33.0  | 12375 | 0.9959          | 0.5067   |
| 0.9949        | 34.0  | 12750 | 0.9949          | 0.5067   |
| 0.988         | 35.0  | 13125 | 0.9939          | 0.5083   |
| 1.0062        | 36.0  | 13500 | 0.9930          | 0.51     |
| 0.9899        | 37.0  | 13875 | 0.9922          | 0.5083   |
| 0.9951        | 38.0  | 14250 | 0.9914          | 0.51     |
| 1.0002        | 39.0  | 14625 | 0.9908          | 0.5133   |
| 0.9573        | 40.0  | 15000 | 0.9902          | 0.5133   |
| 0.9723        | 41.0  | 15375 | 0.9896          | 0.515    |
| 0.977         | 42.0  | 15750 | 0.9892          | 0.515    |
| 0.9762        | 43.0  | 16125 | 0.9888          | 0.515    |
| 0.9976        | 44.0  | 16500 | 0.9885          | 0.5167   |
| 0.965         | 45.0  | 16875 | 0.9882          | 0.5167   |
| 0.9904        | 46.0  | 17250 | 0.9880          | 0.5167   |
| 0.9962        | 47.0  | 17625 | 0.9879          | 0.5167   |
| 0.982         | 48.0  | 18000 | 0.9878          | 0.5167   |
| 0.9851        | 49.0  | 18375 | 0.9877          | 0.5167   |
| 0.9675        | 50.0  | 18750 | 0.9877          | 0.5167   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_sgd_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_sgd_00001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9917
- Accuracy: 0.5233

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1042        | 1.0   | 375   | 1.0744          | 0.425    |
| 1.0586        | 2.0   | 750   | 1.0707          | 0.4317   |
| 1.0589        | 3.0   | 1125  | 1.0672          | 0.4317   |
| 1.0667        | 4.0   | 1500  | 1.0637          | 0.435    |
| 1.0705        | 5.0   | 1875  | 1.0604          | 0.435    |
| 1.0556        | 6.0   | 2250  | 1.0572          | 0.44     |
| 1.037         | 7.0   | 2625  | 1.0541          | 0.4467   |
| 1.0193        | 8.0   | 3000  | 1.0509          | 0.4533   |
| 1.0476        | 9.0   | 3375  | 1.0479          | 0.455    |
| 1.0596        | 10.0  | 3750  | 1.0451          | 0.4617   |
| 1.0294        | 11.0  | 4125  | 1.0422          | 0.465    |
| 1.0295        | 12.0  | 4500  | 1.0395          | 0.47     |
| 1.0343        | 13.0  | 4875  | 1.0368          | 0.47     |
| 1.0353        | 14.0  | 5250  | 1.0343          | 0.4717   |
| 1.029         | 15.0  | 5625  | 1.0318          | 0.4767   |
| 1.027         | 16.0  | 6000  | 1.0293          | 0.48     |
| 1.0226        | 17.0  | 6375  | 1.0270          | 0.4833   |
| 0.9782        | 18.0  | 6750  | 1.0246          | 0.4917   |
| 1.019         | 19.0  | 7125  | 1.0224          | 0.4983   |
| 0.9985        | 20.0  | 7500  | 1.0203          | 0.5017   |
| 0.9654        | 21.0  | 7875  | 1.0183          | 0.5033   |
| 1.0053        | 22.0  | 8250  | 1.0163          | 0.5083   |
| 1.0112        | 23.0  | 8625  | 1.0144          | 0.5117   |
| 1.0077        | 24.0  | 9000  | 1.0126          | 0.515    |
| 0.9958        | 25.0  | 9375  | 1.0109          | 0.515    |
| 0.987         | 26.0  | 9750  | 1.0092          | 0.515    |
| 0.9915        | 27.0  | 10125 | 1.0076          | 0.5133   |
| 0.9929        | 28.0  | 10500 | 1.0061          | 0.5133   |
| 0.9979        | 29.0  | 10875 | 1.0047          | 0.5117   |
| 0.9641        | 30.0  | 11250 | 1.0034          | 0.5117   |
| 0.9905        | 31.0  | 11625 | 1.0021          | 0.5117   |
| 1.0051        | 32.0  | 12000 | 1.0009          | 0.5117   |
| 1.0023        | 33.0  | 12375 | 0.9998          | 0.5133   |
| 0.9697        | 34.0  | 12750 | 0.9988          | 0.5167   |
| 0.9834        | 35.0  | 13125 | 0.9979          | 0.5167   |
| 0.985         | 36.0  | 13500 | 0.9970          | 0.5183   |
| 0.9928        | 37.0  | 13875 | 0.9962          | 0.5183   |
| 0.9649        | 38.0  | 14250 | 0.9954          | 0.52     |
| 1.0054        | 39.0  | 14625 | 0.9948          | 0.5217   |
| 0.9871        | 40.0  | 15000 | 0.9942          | 0.5217   |
| 0.977         | 41.0  | 15375 | 0.9937          | 0.5217   |
| 0.9799        | 42.0  | 15750 | 0.9932          | 0.5217   |
| 0.9791        | 43.0  | 16125 | 0.9928          | 0.525    |
| 0.9745        | 44.0  | 16500 | 0.9925          | 0.525    |
| 0.9751        | 45.0  | 16875 | 0.9922          | 0.525    |
| 0.9977        | 46.0  | 17250 | 0.9920          | 0.5233   |
| 0.9954        | 47.0  | 17625 | 0.9919          | 0.5233   |
| 0.9619        | 48.0  | 18000 | 0.9918          | 0.5233   |
| 0.9797        | 49.0  | 18375 | 0.9917          | 0.5233   |
| 0.9489        | 50.0  | 18750 | 0.9917          | 0.5233   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_rms_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_rms_001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9266
- Accuracy: 0.7933

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8478        | 1.0   | 375   | 0.8288          | 0.5567   |
| 0.8902        | 2.0   | 750   | 0.8202          | 0.5483   |
| 0.8343        | 3.0   | 1125  | 0.7711          | 0.65     |
| 0.8476        | 4.0   | 1500  | 0.8002          | 0.5683   |
| 0.7349        | 5.0   | 1875  | 0.7526          | 0.62     |
| 0.7146        | 6.0   | 2250  | 0.7401          | 0.645    |
| 0.6924        | 7.0   | 2625  | 0.7402          | 0.6467   |
| 0.8092        | 8.0   | 3000  | 0.7366          | 0.6267   |
| 0.7302        | 9.0   | 3375  | 0.7094          | 0.67     |
| 0.6674        | 10.0  | 3750  | 0.6785          | 0.6733   |
| 0.7108        | 11.0  | 4125  | 0.6584          | 0.6967   |
| 0.5797        | 12.0  | 4500  | 0.7184          | 0.68     |
| 0.6078        | 13.0  | 4875  | 0.6814          | 0.6767   |
| 0.6457        | 14.0  | 5250  | 0.6261          | 0.72     |
| 0.6638        | 15.0  | 5625  | 0.5980          | 0.7217   |
| 0.5829        | 16.0  | 6000  | 0.5841          | 0.7517   |
| 0.5785        | 17.0  | 6375  | 0.5759          | 0.7283   |
| 0.5879        | 18.0  | 6750  | 0.5909          | 0.7233   |
| 0.5859        | 19.0  | 7125  | 0.6185          | 0.7183   |
| 0.5569        | 20.0  | 7500  | 0.5506          | 0.745    |
| 0.5537        | 21.0  | 7875  | 0.5606          | 0.7617   |
| 0.5092        | 22.0  | 8250  | 0.5522          | 0.7483   |
| 0.61          | 23.0  | 8625  | 0.6297          | 0.7383   |
| 0.5833        | 24.0  | 9000  | 0.5399          | 0.7667   |
| 0.5315        | 25.0  | 9375  | 0.5551          | 0.7517   |
| 0.5313        | 26.0  | 9750  | 0.5176          | 0.7717   |
| 0.5327        | 27.0  | 10125 | 0.5547          | 0.775    |
| 0.4821        | 28.0  | 10500 | 0.5221          | 0.77     |
| 0.5852        | 29.0  | 10875 | 0.4983          | 0.7717   |
| 0.4777        | 30.0  | 11250 | 0.5766          | 0.75     |
| 0.4511        | 31.0  | 11625 | 0.5104          | 0.7733   |
| 0.5002        | 32.0  | 12000 | 0.5870          | 0.76     |
| 0.465         | 33.0  | 12375 | 0.4942          | 0.7917   |
| 0.4934        | 34.0  | 12750 | 0.5302          | 0.7783   |
| 0.4217        | 35.0  | 13125 | 0.5314          | 0.7883   |
| 0.3994        | 36.0  | 13500 | 0.5461          | 0.7917   |
| 0.3823        | 37.0  | 13875 | 0.5187          | 0.7933   |
| 0.3965        | 38.0  | 14250 | 0.5803          | 0.7917   |
| 0.3576        | 39.0  | 14625 | 0.5564          | 0.79     |
| 0.3853        | 40.0  | 15000 | 0.5425          | 0.8033   |
| 0.3694        | 41.0  | 15375 | 0.5885          | 0.7967   |
| 0.3496        | 42.0  | 15750 | 0.6131          | 0.7967   |
| 0.3293        | 43.0  | 16125 | 0.6330          | 0.8033   |
| 0.2565        | 44.0  | 16500 | 0.6562          | 0.795    |
| 0.3188        | 45.0  | 16875 | 0.7306          | 0.7933   |
| 0.2833        | 46.0  | 17250 | 0.8042          | 0.7917   |
| 0.2208        | 47.0  | 17625 | 0.7887          | 0.79     |
| 0.1436        | 48.0  | 18000 | 0.8206          | 0.7933   |
| 0.1521        | 49.0  | 18375 | 0.8762          | 0.8083   |
| 0.1603        | 50.0  | 18750 | 0.9266          | 0.7933   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_adamax_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_adamax_001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9202
- Accuracy: 0.9098

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3418        | 1.0   | 751   | 0.3645          | 0.8681   |
| 0.228         | 2.0   | 1502  | 0.3383          | 0.8815   |
| 0.2191        | 3.0   | 2253  | 0.3232          | 0.8915   |
| 0.2142        | 4.0   | 3004  | 0.3382          | 0.9015   |
| 0.1533        | 5.0   | 3755  | 0.4440          | 0.8681   |
| 0.1001        | 6.0   | 4506  | 0.4207          | 0.8698   |
| 0.085         | 7.0   | 5257  | 0.5360          | 0.8748   |
| 0.0859        | 8.0   | 6008  | 0.4176          | 0.9032   |
| 0.0802        | 9.0   | 6759  | 0.5292          | 0.8781   |
| 0.0332        | 10.0  | 7510  | 0.5218          | 0.9015   |
| 0.0553        | 11.0  | 8261  | 0.4537          | 0.9015   |
| 0.039         | 12.0  | 9012  | 0.7412          | 0.8848   |
| 0.0239        | 13.0  | 9763  | 0.6194          | 0.8915   |
| 0.0108        | 14.0  | 10514 | 0.7066          | 0.8848   |
| 0.0526        | 15.0  | 11265 | 0.6131          | 0.9032   |
| 0.0063        | 16.0  | 12016 | 0.8576          | 0.8681   |
| 0.0154        | 17.0  | 12767 | 0.6269          | 0.8948   |
| 0.0175        | 18.0  | 13518 | 0.6667          | 0.9048   |
| 0.0017        | 19.0  | 14269 | 0.6041          | 0.9048   |
| 0.0002        | 20.0  | 15020 | 0.7017          | 0.8798   |
| 0.0104        | 21.0  | 15771 | 0.6523          | 0.8965   |
| 0.0004        | 22.0  | 16522 | 0.5978          | 0.9065   |
| 0.0007        | 23.0  | 17273 | 0.7511          | 0.8982   |
| 0.0003        | 24.0  | 18024 | 0.8000          | 0.8948   |
| 0.0003        | 25.0  | 18775 | 0.7612          | 0.8932   |
| 0.0002        | 26.0  | 19526 | 0.7543          | 0.9032   |
| 0.0           | 27.0  | 20277 | 0.7144          | 0.9032   |
| 0.0           | 28.0  | 21028 | 0.8366          | 0.8831   |
| 0.0053        | 29.0  | 21779 | 0.9486          | 0.8815   |
| 0.0           | 30.0  | 22530 | 0.9579          | 0.8932   |
| 0.0           | 31.0  | 23281 | 0.8276          | 0.9015   |
| 0.0           | 32.0  | 24032 | 0.8430          | 0.9065   |
| 0.0           | 33.0  | 24783 | 0.7752          | 0.9098   |
| 0.0           | 34.0  | 25534 | 0.7966          | 0.9098   |
| 0.0           | 35.0  | 26285 | 0.8408          | 0.9048   |
| 0.0           | 36.0  | 27036 | 0.8314          | 0.9065   |
| 0.0           | 37.0  | 27787 | 0.8780          | 0.9015   |
| 0.0           | 38.0  | 28538 | 0.8886          | 0.8998   |
| 0.0           | 39.0  | 29289 | 0.8653          | 0.9048   |
| 0.0           | 40.0  | 30040 | 0.8404          | 0.9082   |
| 0.0           | 41.0  | 30791 | 0.8630          | 0.8998   |
| 0.0           | 42.0  | 31542 | 0.8333          | 0.9098   |
| 0.0           | 43.0  | 32293 | 0.9256          | 0.9048   |
| 0.0           | 44.0  | 33044 | 0.8529          | 0.9082   |
| 0.0           | 45.0  | 33795 | 0.8963          | 0.9082   |
| 0.0           | 46.0  | 34546 | 0.8944          | 0.9098   |
| 0.0           | 47.0  | 35297 | 0.9059          | 0.9098   |
| 0.0           | 48.0  | 36048 | 0.9120          | 0.9098   |
| 0.0           | 49.0  | 36799 | 0.9160          | 0.9098   |
| 0.0           | 50.0  | 37550 | 0.9202          | 0.9098   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_adamax_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_adamax_0001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8826
- Accuracy: 0.9132

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2523        | 1.0   | 751   | 0.2969          | 0.8781   |
| 0.098         | 2.0   | 1502  | 0.3056          | 0.9048   |
| 0.0879        | 3.0   | 2253  | 0.3755          | 0.9082   |
| 0.0112        | 4.0   | 3004  | 0.4946          | 0.9015   |
| 0.0331        | 5.0   | 3755  | 0.5182          | 0.9115   |
| 0.034         | 6.0   | 4506  | 0.6894          | 0.8982   |
| 0.0195        | 7.0   | 5257  | 0.5733          | 0.9082   |
| 0.0006        | 8.0   | 6008  | 0.6615          | 0.9065   |
| 0.0           | 9.0   | 6759  | 0.6188          | 0.9149   |
| 0.0001        | 10.0  | 7510  | 0.6672          | 0.9115   |
| 0.0153        | 11.0  | 8261  | 0.6447          | 0.9165   |
| 0.0           | 12.0  | 9012  | 0.7794          | 0.9098   |
| 0.0           | 13.0  | 9763  | 0.7124          | 0.9098   |
| 0.0           | 14.0  | 10514 | 0.7255          | 0.9082   |
| 0.0           | 15.0  | 11265 | 0.7805          | 0.9098   |
| 0.0           | 16.0  | 12016 | 0.8624          | 0.9048   |
| 0.0069        | 17.0  | 12767 | 0.7828          | 0.9115   |
| 0.0001        | 18.0  | 13518 | 0.7977          | 0.9032   |
| 0.0           | 19.0  | 14269 | 0.7469          | 0.9065   |
| 0.0001        | 20.0  | 15020 | 0.8490          | 0.9065   |
| 0.0           | 21.0  | 15771 | 0.7619          | 0.9098   |
| 0.0           | 22.0  | 16522 | 0.7972          | 0.9149   |
| 0.0           | 23.0  | 17273 | 0.7542          | 0.9199   |
| 0.0           | 24.0  | 18024 | 0.8510          | 0.9048   |
| 0.0           | 25.0  | 18775 | 0.8348          | 0.9082   |
| 0.0           | 26.0  | 19526 | 0.8141          | 0.9182   |
| 0.0           | 27.0  | 20277 | 0.8518          | 0.9115   |
| 0.0           | 28.0  | 21028 | 0.8281          | 0.9098   |
| 0.0044        | 29.0  | 21779 | 0.8328          | 0.9132   |
| 0.0           | 30.0  | 22530 | 0.8675          | 0.9149   |
| 0.0           | 31.0  | 23281 | 0.8219          | 0.9048   |
| 0.0           | 32.0  | 24032 | 0.8656          | 0.9065   |
| 0.0           | 33.0  | 24783 | 0.8259          | 0.9048   |
| 0.0           | 34.0  | 25534 | 0.8526          | 0.9082   |
| 0.0           | 35.0  | 26285 | 0.8439          | 0.9098   |
| 0.0           | 36.0  | 27036 | 0.8589          | 0.9115   |
| 0.0           | 37.0  | 27787 | 0.8573          | 0.9149   |
| 0.0           | 38.0  | 28538 | 0.8548          | 0.9149   |
| 0.0           | 39.0  | 29289 | 0.8558          | 0.9149   |
| 0.0           | 40.0  | 30040 | 0.8593          | 0.9149   |
| 0.0           | 41.0  | 30791 | 0.8680          | 0.9149   |
| 0.0           | 42.0  | 31542 | 0.8686          | 0.9149   |
| 0.0           | 43.0  | 32293 | 0.8703          | 0.9132   |
| 0.0           | 44.0  | 33044 | 0.8724          | 0.9132   |
| 0.0           | 45.0  | 33795 | 0.8746          | 0.9132   |
| 0.0           | 46.0  | 34546 | 0.8749          | 0.9132   |
| 0.0           | 47.0  | 35297 | 0.8795          | 0.9132   |
| 0.0           | 48.0  | 36048 | 0.8807          | 0.9132   |
| 0.0           | 49.0  | 36799 | 0.8817          | 0.9132   |
| 0.0           | 50.0  | 37550 | 0.8826          | 0.9132   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_adamax_001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0087
- Accuracy: 0.8985

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3033        | 1.0   | 750   | 0.3519          | 0.8552   |
| 0.1821        | 2.0   | 1500  | 0.2712          | 0.9151   |
| 0.17          | 3.0   | 2250  | 0.2785          | 0.8985   |
| 0.2112        | 4.0   | 3000  | 0.3938          | 0.8735   |
| 0.1304        | 5.0   | 3750  | 0.4058          | 0.8719   |
| 0.1351        | 6.0   | 4500  | 0.3464          | 0.8835   |
| 0.0944        | 7.0   | 5250  | 0.4185          | 0.9018   |
| 0.08          | 8.0   | 6000  | 0.4321          | 0.9052   |
| 0.0621        | 9.0   | 6750  | 0.4755          | 0.8902   |
| 0.0267        | 10.0  | 7500  | 0.5448          | 0.8669   |
| 0.0347        | 11.0  | 8250  | 0.4694          | 0.8968   |
| 0.1206        | 12.0  | 9000  | 0.6302          | 0.8819   |
| 0.032         | 13.0  | 9750  | 0.6552          | 0.8819   |
| 0.0071        | 14.0  | 10500 | 0.5891          | 0.8869   |
| 0.0007        | 15.0  | 11250 | 0.6706          | 0.8935   |
| 0.0152        | 16.0  | 12000 | 0.6742          | 0.8852   |
| 0.0005        | 17.0  | 12750 | 0.6672          | 0.8952   |
| 0.0068        | 18.0  | 13500 | 0.7499          | 0.8885   |
| 0.0002        | 19.0  | 14250 | 0.7790          | 0.9018   |
| 0.0003        | 20.0  | 15000 | 0.7692          | 0.8835   |
| 0.0056        | 21.0  | 15750 | 0.8482          | 0.8752   |
| 0.0136        | 22.0  | 16500 | 0.8127          | 0.8835   |
| 0.0063        | 23.0  | 17250 | 0.6687          | 0.8952   |
| 0.0072        | 24.0  | 18000 | 0.8624          | 0.8869   |
| 0.0           | 25.0  | 18750 | 0.8382          | 0.8902   |
| 0.0001        | 26.0  | 19500 | 0.8780          | 0.8769   |
| 0.0107        | 27.0  | 20250 | 0.8313          | 0.8935   |
| 0.0           | 28.0  | 21000 | 0.9547          | 0.8869   |
| 0.0           | 29.0  | 21750 | 0.9878          | 0.8952   |
| 0.0           | 30.0  | 22500 | 0.8456          | 0.9035   |
| 0.0           | 31.0  | 23250 | 1.0397          | 0.8918   |
| 0.0           | 32.0  | 24000 | 0.9157          | 0.9018   |
| 0.0           | 33.0  | 24750 | 0.9451          | 0.9018   |
| 0.0           | 34.0  | 25500 | 0.9702          | 0.8985   |
| 0.0           | 35.0  | 26250 | 0.9002          | 0.9085   |
| 0.0           | 36.0  | 27000 | 0.9202          | 0.8985   |
| 0.004         | 37.0  | 27750 | 0.9200          | 0.8968   |
| 0.0           | 38.0  | 28500 | 0.9595          | 0.8968   |
| 0.0           | 39.0  | 29250 | 0.9656          | 0.9018   |
| 0.0           | 40.0  | 30000 | 0.9661          | 0.9018   |
| 0.0           | 41.0  | 30750 | 0.9523          | 0.9018   |
| 0.0           | 42.0  | 31500 | 0.9556          | 0.9002   |
| 0.0           | 43.0  | 32250 | 0.9641          | 0.8985   |
| 0.0           | 44.0  | 33000 | 0.9727          | 0.9002   |
| 0.0026        | 45.0  | 33750 | 0.9812          | 0.8985   |
| 0.0           | 46.0  | 34500 | 0.9958          | 0.8985   |
| 0.0           | 47.0  | 35250 | 0.9951          | 0.8985   |
| 0.0           | 48.0  | 36000 | 1.0005          | 0.8985   |
| 0.0           | 49.0  | 36750 | 1.0064          | 0.8985   |
| 0.0           | 50.0  | 37500 | 1.0087          | 0.8985   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_adamax_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_adamax_0001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1317
- Accuracy: 0.8918

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2268        | 1.0   | 750   | 0.2931          | 0.8885   |
| 0.0856        | 2.0   | 1500  | 0.3354          | 0.9018   |
| 0.0454        | 3.0   | 2250  | 0.4505          | 0.8902   |
| 0.0358        | 4.0   | 3000  | 0.7256          | 0.8918   |
| 0.0161        | 5.0   | 3750  | 0.7228          | 0.8918   |
| 0.0143        | 6.0   | 4500  | 0.7580          | 0.8885   |
| 0.0111        | 7.0   | 5250  | 0.7663          | 0.9052   |
| 0.003         | 8.0   | 6000  | 0.8086          | 0.8852   |
| 0.0012        | 9.0   | 6750  | 0.7843          | 0.8968   |
| 0.0007        | 10.0  | 7500  | 0.8932          | 0.8935   |
| 0.0001        | 11.0  | 8250  | 0.9695          | 0.8902   |
| 0.0           | 12.0  | 9000  | 0.9681          | 0.8968   |
| 0.0           | 13.0  | 9750  | 0.9905          | 0.8869   |
| 0.0           | 14.0  | 10500 | 1.0234          | 0.8819   |
| 0.0001        | 15.0  | 11250 | 0.9740          | 0.9002   |
| 0.0354        | 16.0  | 12000 | 0.9561          | 0.9018   |
| 0.0           | 17.0  | 12750 | 0.9170          | 0.8902   |
| 0.0           | 18.0  | 13500 | 0.8495          | 0.9035   |
| 0.0           | 19.0  | 14250 | 0.9851          | 0.8852   |
| 0.0           | 20.0  | 15000 | 0.9671          | 0.8952   |
| 0.0           | 21.0  | 15750 | 1.0261          | 0.8885   |
| 0.004         | 22.0  | 16500 | 1.0404          | 0.8918   |
| 0.0023        | 23.0  | 17250 | 1.0326          | 0.8952   |
| 0.0           | 24.0  | 18000 | 1.0484          | 0.8852   |
| 0.0           | 25.0  | 18750 | 1.0611          | 0.8902   |
| 0.0           | 26.0  | 19500 | 1.0089          | 0.8985   |
| 0.0058        | 27.0  | 20250 | 1.0656          | 0.8935   |
| 0.0           | 28.0  | 21000 | 1.1306          | 0.8802   |
| 0.0           | 29.0  | 21750 | 1.0444          | 0.8985   |
| 0.0           | 30.0  | 22500 | 1.1060          | 0.8835   |
| 0.0           | 31.0  | 23250 | 1.0495          | 0.8935   |
| 0.0           | 32.0  | 24000 | 1.0881          | 0.8902   |
| 0.0           | 33.0  | 24750 | 1.0585          | 0.8968   |
| 0.0           | 34.0  | 25500 | 1.0870          | 0.8918   |
| 0.0           | 35.0  | 26250 | 1.0814          | 0.8935   |
| 0.0           | 36.0  | 27000 | 1.0837          | 0.8902   |
| 0.0038        | 37.0  | 27750 | 1.0867          | 0.8918   |
| 0.0           | 38.0  | 28500 | 1.1027          | 0.8918   |
| 0.0           | 39.0  | 29250 | 1.1059          | 0.8935   |
| 0.0           | 40.0  | 30000 | 1.1104          | 0.8918   |
| 0.0           | 41.0  | 30750 | 1.1167          | 0.8918   |
| 0.0           | 42.0  | 31500 | 1.1201          | 0.8918   |
| 0.0           | 43.0  | 32250 | 1.1220          | 0.8918   |
| 0.0           | 44.0  | 33000 | 1.1250          | 0.8918   |
| 0.0024        | 45.0  | 33750 | 1.1275          | 0.8918   |
| 0.0           | 46.0  | 34500 | 1.1293          | 0.8918   |
| 0.0           | 47.0  | 35250 | 1.1307          | 0.8918   |
| 0.0           | 48.0  | 36000 | 1.1319          | 0.8918   |
| 0.0           | 49.0  | 36750 | 1.1319          | 0.8918   |
| 0.0           | 50.0  | 37500 | 1.1317          | 0.8918   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
Naterea/1zre-yc3h-pnbk1-0
# Model Trained Using AutoTrain

- Problem type: Image Classification

## Validation Metrics

- loss: 0.48027312755584717
- f1: 0.7100271002710028
- precision: 0.5504201680672269
- recall: 1.0
- auc: 0.9999999999999999
- accuracy: 0.5916030534351145
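As a quick sanity check, the reported f1 is consistent with the reported precision and recall via the usual harmonic-mean relation (a minimal sketch; the function name here is ours, not AutoTrain's):

```python
def f1_from_precision_recall(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the reported precision and a recall of 1.0, this reproduces
# the reported f1 of ~0.7100 to within floating-point rounding.
f1 = f1_from_precision_recall(0.5504201680672269, 1.0)
```

With recall pinned at 1.0, f1 is driven entirely by precision, which matches the pattern of these AutoTrain runs: the classifier recovers every positive but at the cost of many false positives.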
[ "redd", "short" ]
hkivancoral/smids_10x_deit_small_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_adamax_001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1653
- Accuracy: 0.8983

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2967        | 1.0   | 750   | 0.3026          | 0.8833   |
| 0.2176        | 2.0   | 1500  | 0.3634          | 0.865    |
| 0.1482        | 3.0   | 2250  | 0.3577          | 0.8917   |
| 0.1878        | 4.0   | 3000  | 0.3366          | 0.89     |
| 0.2072        | 5.0   | 3750  | 0.4027          | 0.8783   |
| 0.1159        | 6.0   | 4500  | 0.4215          | 0.9017   |
| 0.1092        | 7.0   | 5250  | 0.4522          | 0.875    |
| 0.1055        | 8.0   | 6000  | 0.5168          | 0.8733   |
| 0.0404        | 9.0   | 6750  | 0.4834          | 0.8867   |
| 0.0353        | 10.0  | 7500  | 0.5506          | 0.895    |
| 0.0316        | 11.0  | 8250  | 0.5770          | 0.88     |
| 0.0204        | 12.0  | 9000  | 0.7013          | 0.88     |
| 0.0769        | 13.0  | 9750  | 0.7049          | 0.8833   |
| 0.0408        | 14.0  | 10500 | 0.5508          | 0.8983   |
| 0.014         | 15.0  | 11250 | 0.6644          | 0.8883   |
| 0.0112        | 16.0  | 12000 | 0.7305          | 0.895    |
| 0.0118        | 17.0  | 12750 | 0.6466          | 0.8967   |
| 0.0015        | 18.0  | 13500 | 0.7382          | 0.89     |
| 0.0022        | 19.0  | 14250 | 0.9099          | 0.8967   |
| 0.0028        | 20.0  | 15000 | 0.8123          | 0.8883   |
| 0.0003        | 21.0  | 15750 | 0.7936          | 0.895    |
| 0.0021        | 22.0  | 16500 | 0.8670          | 0.89     |
| 0.0001        | 23.0  | 17250 | 0.8387          | 0.89     |
| 0.0001        | 24.0  | 18000 | 0.9036          | 0.8867   |
| 0.0071        | 25.0  | 18750 | 0.9933          | 0.8967   |
| 0.0003        | 26.0  | 19500 | 0.9103          | 0.8933   |
| 0.0           | 27.0  | 20250 | 0.9486          | 0.8983   |
| 0.0           | 28.0  | 21000 | 0.9480          | 0.89     |
| 0.0           | 29.0  | 21750 | 1.0149          | 0.8983   |
| 0.0           | 30.0  | 22500 | 0.9710          | 0.89     |
| 0.0           | 31.0  | 23250 | 0.8903          | 0.9017   |
| 0.0           | 32.0  | 24000 | 0.9900          | 0.8983   |
| 0.0           | 33.0  | 24750 | 0.9812          | 0.8967   |
| 0.0           | 34.0  | 25500 | 1.0802          | 0.8917   |
| 0.0           | 35.0  | 26250 | 1.0127          | 0.9      |
| 0.0           | 36.0  | 27000 | 1.0499          | 0.8917   |
| 0.0           | 37.0  | 27750 | 1.0711          | 0.895    |
| 0.0018        | 38.0  | 28500 | 1.1040          | 0.8983   |
| 0.0           | 39.0  | 29250 | 1.0513          | 0.9017   |
| 0.0           | 40.0  | 30000 | 1.1398          | 0.9      |
| 0.0           | 41.0  | 30750 | 1.1537          | 0.9      |
| 0.0           | 42.0  | 31500 | 1.1196          | 0.9      |
| 0.0           | 43.0  | 32250 | 1.1395          | 0.8967   |
| 0.0           | 44.0  | 33000 | 1.1136          | 0.9017   |
| 0.0           | 45.0  | 33750 | 1.1523          | 0.895    |
| 0.0           | 46.0  | 34500 | 1.1446          | 0.9      |
| 0.0           | 47.0  | 35250 | 1.1542          | 0.8967   |
| 0.0           | 48.0  | 36000 | 1.1560          | 0.8983   |
| 0.0           | 49.0  | 36750 | 1.1589          | 0.8983   |
| 0.0           | 50.0  | 37500 | 1.1653          | 0.8983   |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_adamax_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_adamax_0001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0059
- Accuracy: 0.8983

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1525 | 1.0 | 750 | 0.2697 | 0.895 |
| 0.1282 | 2.0 | 1500 | 0.3452 | 0.8917 |
| 0.0382 | 3.0 | 2250 | 0.4781 | 0.89 |
| 0.0664 | 4.0 | 3000 | 0.6005 | 0.8917 |
| 0.0139 | 5.0 | 3750 | 0.6269 | 0.8983 |
| 0.0003 | 6.0 | 4500 | 0.7710 | 0.9017 |
| 0.0029 | 7.0 | 5250 | 0.7996 | 0.89 |
| 0.0177 | 8.0 | 6000 | 0.7898 | 0.9033 |
| 0.0001 | 9.0 | 6750 | 0.7240 | 0.8983 |
| 0.0001 | 10.0 | 7500 | 0.8037 | 0.9133 |
| 0.0 | 11.0 | 8250 | 0.7846 | 0.91 |
| 0.0005 | 12.0 | 9000 | 0.9174 | 0.885 |
| 0.04 | 13.0 | 9750 | 0.8629 | 0.8983 |
| 0.0005 | 14.0 | 10500 | 0.8319 | 0.895 |
| 0.0 | 15.0 | 11250 | 0.8174 | 0.91 |
| 0.0 | 16.0 | 12000 | 0.8650 | 0.9017 |
| 0.0 | 17.0 | 12750 | 0.7601 | 0.9133 |
| 0.0 | 18.0 | 13500 | 0.9296 | 0.8867 |
| 0.0 | 19.0 | 14250 | 0.8663 | 0.9033 |
| 0.0 | 20.0 | 15000 | 0.9126 | 0.895 |
| 0.0 | 21.0 | 15750 | 0.8974 | 0.8983 |
| 0.0 | 22.0 | 16500 | 0.9203 | 0.8967 |
| 0.0 | 23.0 | 17250 | 0.8786 | 0.9033 |
| 0.0 | 24.0 | 18000 | 0.8565 | 0.9017 |
| 0.0 | 25.0 | 18750 | 0.9193 | 0.89 |
| 0.0 | 26.0 | 19500 | 0.9069 | 0.895 |
| 0.0 | 27.0 | 20250 | 0.8841 | 0.9 |
| 0.0 | 28.0 | 21000 | 0.9282 | 0.895 |
| 0.0 | 29.0 | 21750 | 0.9329 | 0.8983 |
| 0.0 | 30.0 | 22500 | 0.9485 | 0.9017 |
| 0.0 | 31.0 | 23250 | 0.9410 | 0.8967 |
| 0.0 | 32.0 | 24000 | 0.9299 | 0.9 |
| 0.0 | 33.0 | 24750 | 0.9416 | 0.8983 |
| 0.0 | 34.0 | 25500 | 0.9468 | 0.8967 |
| 0.0 | 35.0 | 26250 | 0.9697 | 0.895 |
| 0.0 | 36.0 | 27000 | 0.9684 | 0.8983 |
| 0.0 | 37.0 | 27750 | 0.9718 | 0.8983 |
| 0.0 | 38.0 | 28500 | 0.9758 | 0.8983 |
| 0.0 | 39.0 | 29250 | 0.9793 | 0.8983 |
| 0.0 | 40.0 | 30000 | 0.9881 | 0.8983 |
| 0.0 | 41.0 | 30750 | 0.9875 | 0.8983 |
| 0.0 | 42.0 | 31500 | 0.9984 | 0.8983 |
| 0.0 | 43.0 | 32250 | 0.9995 | 0.8983 |
| 0.0 | 44.0 | 33000 | 1.0002 | 0.8983 |
| 0.0 | 45.0 | 33750 | 1.0011 | 0.8983 |
| 0.0 | 46.0 | 34500 | 1.0026 | 0.8983 |
| 0.0 | 47.0 | 35250 | 1.0030 | 0.8983 |
| 0.0 | 48.0 | 36000 | 1.0034 | 0.8983 |
| 0.0 | 49.0 | 36750 | 1.0045 | 0.8983 |
| 0.0 | 50.0 | 37500 | 1.0059 | 0.8983 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
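The `Step` column in these cards follows directly from dataset size, batch size, and epoch count. A minimal sketch of that arithmetic (the 24,000-image figure is inferred from 750 steps/epoch × batch size 32, not stated in the card):

```python
import math

def total_training_steps(num_examples: int, batch_size: int, num_epochs: int) -> int:
    """Optimizer steps a standard training loop runs (no gradient accumulation)."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    return steps_per_epoch * num_epochs

# 750 steps per epoch at batch size 32 implies roughly 24,000 training images,
# and 50 epochs then gives the 37500 final step seen in the table.
print(total_training_steps(24_000, 32, 50))  # -> 37500
```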
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
nicolasdupuisroy/vit-gabor-detection-v2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-gabor-detection-v2

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0186
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 200
- eval_batch_size: 200
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 120.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.5751 | 1.0 |
| No log | 2.0 | 2 | 0.5081 | 1.0 |
| No log | 3.0 | 3 | 0.4654 | 1.0 |
| No log | 4.0 | 4 | 0.4014 | 1.0 |
| No log | 5.0 | 5 | 0.3692 | 1.0 |
| No log | 6.0 | 6 | 0.3327 | 1.0 |
| No log | 7.0 | 7 | 0.2937 | 1.0 |
| No log | 8.0 | 8 | 0.2775 | 1.0 |
| No log | 9.0 | 9 | 0.2335 | 1.0 |
| 0.4432 | 10.0 | 10 | 0.2092 | 1.0 |
| 0.4432 | 11.0 | 11 | 0.2007 | 1.0 |
| 0.4432 | 12.0 | 12 | 0.1674 | 1.0 |
| 0.4432 | 13.0 | 13 | 0.1546 | 1.0 |
| 0.4432 | 14.0 | 14 | 0.1393 | 1.0 |
| 0.4432 | 15.0 | 15 | 0.1297 | 1.0 |
| 0.4432 | 16.0 | 16 | 0.1219 | 1.0 |
| 0.4432 | 17.0 | 17 | 0.1090 | 1.0 |
| 0.4432 | 18.0 | 18 | 0.1012 | 1.0 |
| 0.4432 | 19.0 | 19 | 0.0981 | 1.0 |
| 0.1696 | 20.0 | 20 | 0.0874 | 1.0 |
| 0.1696 | 21.0 | 21 | 0.0812 | 1.0 |
| 0.1696 | 22.0 | 22 | 0.0750 | 1.0 |
| 0.1696 | 23.0 | 23 | 0.0754 | 1.0 |
| 0.1696 | 24.0 | 24 | 0.0693 | 1.0 |
| 0.1696 | 25.0 | 25 | 0.0642 | 1.0 |
| 0.1696 | 26.0 | 26 | 0.0610 | 1.0 |
| 0.1696 | 27.0 | 27 | 0.0586 | 1.0 |
| 0.1696 | 28.0 | 28 | 0.0569 | 1.0 |
| 0.1696 | 29.0 | 29 | 0.0532 | 1.0 |
| 0.0792 | 30.0 | 30 | 0.0506 | 1.0 |
| 0.0792 | 31.0 | 31 | 0.0495 | 1.0 |
| 0.0792 | 32.0 | 32 | 0.0476 | 1.0 |
| 0.0792 | 33.0 | 33 | 0.0457 | 1.0 |
| 0.0792 | 34.0 | 34 | 0.0442 | 1.0 |
| 0.0792 | 35.0 | 35 | 0.0419 | 1.0 |
| 0.0792 | 36.0 | 36 | 0.0404 | 1.0 |
| 0.0792 | 37.0 | 37 | 0.0396 | 1.0 |
| 0.0792 | 38.0 | 38 | 0.0384 | 1.0 |
| 0.0792 | 39.0 | 39 | 0.0377 | 1.0 |
| 0.049 | 40.0 | 40 | 0.0366 | 1.0 |
| 0.049 | 41.0 | 41 | 0.0370 | 1.0 |
| 0.049 | 42.0 | 42 | 0.0339 | 1.0 |
| 0.049 | 43.0 | 43 | 0.0330 | 1.0 |
| 0.049 | 44.0 | 44 | 0.0344 | 1.0 |
| 0.049 | 45.0 | 45 | 0.0324 | 1.0 |
| 0.049 | 46.0 | 46 | 0.0323 | 1.0 |
| 0.049 | 47.0 | 47 | 0.0311 | 1.0 |
| 0.049 | 48.0 | 48 | 0.0308 | 1.0 |
| 0.049 | 49.0 | 49 | 0.0294 | 1.0 |
| 0.0359 | 50.0 | 50 | 0.0297 | 1.0 |
| 0.0359 | 51.0 | 51 | 0.0289 | 1.0 |
| 0.0359 | 52.0 | 52 | 0.0285 | 1.0 |
| 0.0359 | 53.0 | 53 | 0.0280 | 1.0 |
| 0.0359 | 54.0 | 54 | 0.0270 | 1.0 |
| 0.0359 | 55.0 | 55 | 0.0265 | 1.0 |
| 0.0359 | 56.0 | 56 | 0.0266 | 1.0 |
| 0.0359 | 57.0 | 57 | 0.0261 | 1.0 |
| 0.0359 | 58.0 | 58 | 0.0268 | 1.0 |
| 0.0359 | 59.0 | 59 | 0.0255 | 1.0 |
| 0.0293 | 60.0 | 60 | 0.0255 | 1.0 |
| 0.0293 | 61.0 | 61 | 0.0246 | 1.0 |
| 0.0293 | 62.0 | 62 | 0.0256 | 1.0 |
| 0.0293 | 63.0 | 63 | 0.0247 | 1.0 |
| 0.0293 | 64.0 | 64 | 0.0241 | 1.0 |
| 0.0293 | 65.0 | 65 | 0.0241 | 1.0 |
| 0.0293 | 66.0 | 66 | 0.0234 | 1.0 |
| 0.0293 | 67.0 | 67 | 0.0236 | 1.0 |
| 0.0293 | 68.0 | 68 | 0.0228 | 1.0 |
| 0.0293 | 69.0 | 69 | 0.0233 | 1.0 |
| 0.0256 | 70.0 | 70 | 0.0227 | 1.0 |
| 0.0256 | 71.0 | 71 | 0.0227 | 1.0 |
| 0.0256 | 72.0 | 72 | 0.0230 | 1.0 |
| 0.0256 | 73.0 | 73 | 0.0222 | 1.0 |
| 0.0256 | 74.0 | 74 | 0.0220 | 1.0 |
| 0.0256 | 75.0 | 75 | 0.0221 | 1.0 |
| 0.0256 | 76.0 | 76 | 0.0219 | 1.0 |
| 0.0256 | 77.0 | 77 | 0.0215 | 1.0 |
| 0.0256 | 78.0 | 78 | 0.0210 | 1.0 |
| 0.0256 | 79.0 | 79 | 0.0209 | 1.0 |
| 0.0234 | 80.0 | 80 | 0.0212 | 1.0 |
| 0.0234 | 81.0 | 81 | 0.0212 | 1.0 |
| 0.0234 | 82.0 | 82 | 0.0206 | 1.0 |
| 0.0234 | 83.0 | 83 | 0.0210 | 1.0 |
| 0.0234 | 84.0 | 84 | 0.0204 | 1.0 |
| 0.0234 | 85.0 | 85 | 0.0205 | 1.0 |
| 0.0234 | 86.0 | 86 | 0.0204 | 1.0 |
| 0.0234 | 87.0 | 87 | 0.0203 | 1.0 |
| 0.0234 | 88.0 | 88 | 0.0200 | 1.0 |
| 0.0234 | 89.0 | 89 | 0.0203 | 1.0 |
| 0.0218 | 90.0 | 90 | 0.0196 | 1.0 |
| 0.0218 | 91.0 | 91 | 0.0199 | 1.0 |
| 0.0218 | 92.0 | 92 | 0.0198 | 1.0 |
| 0.0218 | 93.0 | 93 | 0.0196 | 1.0 |
| 0.0218 | 94.0 | 94 | 0.0195 | 1.0 |
| 0.0218 | 95.0 | 95 | 0.0198 | 1.0 |
| 0.0218 | 96.0 | 96 | 0.0197 | 1.0 |
| 0.0218 | 97.0 | 97 | 0.0193 | 1.0 |
| 0.0218 | 98.0 | 98 | 0.0195 | 1.0 |
| 0.0218 | 99.0 | 99 | 0.0194 | 1.0 |
| 0.0208 | 100.0 | 100 | 0.0192 | 1.0 |
| 0.0208 | 101.0 | 101 | 0.0190 | 1.0 |
| 0.0208 | 102.0 | 102 | 0.0188 | 1.0 |
| 0.0208 | 103.0 | 103 | 0.0191 | 1.0 |
| 0.0208 | 104.0 | 104 | 0.0193 | 1.0 |
| 0.0208 | 105.0 | 105 | 0.0193 | 1.0 |
| 0.0208 | 106.0 | 106 | 0.0190 | 1.0 |
| 0.0208 | 107.0 | 107 | 0.0191 | 1.0 |
| 0.0208 | 108.0 | 108 | 0.0186 | 1.0 |
| 0.0208 | 109.0 | 109 | 0.0188 | 1.0 |
| 0.0202 | 110.0 | 110 | 0.0187 | 1.0 |
| 0.0202 | 111.0 | 111 | 0.0191 | 1.0 |
| 0.0202 | 112.0 | 112 | 0.0188 | 1.0 |
| 0.0202 | 113.0 | 113 | 0.0185 | 1.0 |
| 0.0202 | 114.0 | 114 | 0.0188 | 1.0 |
| 0.0202 | 115.0 | 115 | 0.0183 | 1.0 |
| 0.0202 | 116.0 | 116 | 0.0187 | 1.0 |
| 0.0202 | 117.0 | 117 | 0.0185 | 1.0 |
| 0.0202 | 118.0 | 118 | 0.0184 | 1.0 |
| 0.0202 | 119.0 | 119 | 0.0188 | 1.0 |
| 0.0197 | 120.0 | 120 | 0.0185 | 1.0 |

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "absent", "present" ]
hkivancoral/smids_5x_deit_small_adamax_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_adamax_0001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2568
- Accuracy: 0.895

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1785 | 1.0 | 750 | 0.3310 | 0.8867 |
| 0.1427 | 2.0 | 1500 | 0.4997 | 0.86 |
| 0.0558 | 3.0 | 2250 | 0.6477 | 0.8833 |
| 0.0755 | 4.0 | 3000 | 0.8076 | 0.8783 |
| 0.0696 | 5.0 | 3750 | 0.8523 | 0.885 |
| 0.0129 | 6.0 | 4500 | 0.8649 | 0.8917 |
| 0.0009 | 7.0 | 5250 | 0.8612 | 0.895 |
| 0.0 | 8.0 | 6000 | 0.9953 | 0.8883 |
| 0.0001 | 9.0 | 6750 | 0.9803 | 0.89 |
| 0.0001 | 10.0 | 7500 | 0.9507 | 0.895 |
| 0.0 | 11.0 | 8250 | 1.0047 | 0.8983 |
| 0.0 | 12.0 | 9000 | 1.0208 | 0.885 |
| 0.0 | 13.0 | 9750 | 1.0442 | 0.8867 |
| 0.0 | 14.0 | 10500 | 0.9977 | 0.89 |
| 0.0 | 15.0 | 11250 | 1.0546 | 0.8917 |
| 0.0087 | 16.0 | 12000 | 1.1978 | 0.885 |
| 0.0001 | 17.0 | 12750 | 1.0539 | 0.9017 |
| 0.0 | 18.0 | 13500 | 1.1390 | 0.8917 |
| 0.0 | 19.0 | 14250 | 1.0555 | 0.9 |
| 0.0 | 20.0 | 15000 | 1.0783 | 0.8983 |
| 0.0 | 21.0 | 15750 | 1.1342 | 0.89 |
| 0.0 | 22.0 | 16500 | 1.1482 | 0.895 |
| 0.0 | 23.0 | 17250 | 1.1356 | 0.8933 |
| 0.0 | 24.0 | 18000 | 1.0819 | 0.9 |
| 0.0 | 25.0 | 18750 | 1.0556 | 0.8967 |
| 0.0116 | 26.0 | 19500 | 1.1710 | 0.8917 |
| 0.0 | 27.0 | 20250 | 1.1214 | 0.8967 |
| 0.0 | 28.0 | 21000 | 1.1327 | 0.8967 |
| 0.0 | 29.0 | 21750 | 1.1390 | 0.895 |
| 0.0 | 30.0 | 22500 | 1.1576 | 0.8967 |
| 0.0 | 31.0 | 23250 | 1.1495 | 0.8933 |
| 0.0 | 32.0 | 24000 | 1.1623 | 0.9 |
| 0.0 | 33.0 | 24750 | 1.1633 | 0.895 |
| 0.0 | 34.0 | 25500 | 1.1868 | 0.895 |
| 0.0 | 35.0 | 26250 | 1.1906 | 0.8983 |
| 0.0 | 36.0 | 27000 | 1.2000 | 0.8967 |
| 0.0 | 37.0 | 27750 | 1.2102 | 0.8983 |
| 0.0 | 38.0 | 28500 | 1.2162 | 0.8967 |
| 0.0 | 39.0 | 29250 | 1.2243 | 0.895 |
| 0.0 | 40.0 | 30000 | 1.2297 | 0.895 |
| 0.0 | 41.0 | 30750 | 1.2339 | 0.8933 |
| 0.0 | 42.0 | 31500 | 1.2401 | 0.8933 |
| 0.0 | 43.0 | 32250 | 1.2422 | 0.8933 |
| 0.0 | 44.0 | 33000 | 1.2459 | 0.8933 |
| 0.0 | 45.0 | 33750 | 1.2496 | 0.895 |
| 0.0 | 46.0 | 34500 | 1.2523 | 0.895 |
| 0.0 | 47.0 | 35250 | 1.2541 | 0.895 |
| 0.0 | 48.0 | 36000 | 1.2558 | 0.895 |
| 0.0 | 49.0 | 36750 | 1.2566 | 0.895 |
| 0.0 | 50.0 | 37500 | 1.2568 | 0.895 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_adamax_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_adamax_001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6552
- Accuracy: 0.8733

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2966 | 1.0 | 750 | 0.3694 | 0.8683 |
| 0.2339 | 2.0 | 1500 | 0.4363 | 0.8417 |
| 0.2355 | 3.0 | 2250 | 0.4010 | 0.8633 |
| 0.1515 | 4.0 | 3000 | 0.4419 | 0.8767 |
| 0.2106 | 5.0 | 3750 | 0.4830 | 0.8683 |
| 0.1439 | 6.0 | 4500 | 0.4916 | 0.8683 |
| 0.0791 | 7.0 | 5250 | 0.5811 | 0.875 |
| 0.077 | 8.0 | 6000 | 0.7477 | 0.8617 |
| 0.0484 | 9.0 | 6750 | 0.9268 | 0.8383 |
| 0.0478 | 10.0 | 7500 | 0.8560 | 0.855 |
| 0.0454 | 11.0 | 8250 | 0.7616 | 0.8733 |
| 0.0414 | 12.0 | 9000 | 0.8591 | 0.8617 |
| 0.0224 | 13.0 | 9750 | 0.8231 | 0.8833 |
| 0.0062 | 14.0 | 10500 | 0.9264 | 0.8717 |
| 0.004 | 15.0 | 11250 | 0.8932 | 0.8783 |
| 0.0299 | 16.0 | 12000 | 0.8030 | 0.8733 |
| 0.0268 | 17.0 | 12750 | 0.8616 | 0.88 |
| 0.0071 | 18.0 | 13500 | 0.9511 | 0.8767 |
| 0.0023 | 19.0 | 14250 | 0.9282 | 0.8783 |
| 0.008 | 20.0 | 15000 | 1.1898 | 0.855 |
| 0.0085 | 21.0 | 15750 | 1.0698 | 0.8683 |
| 0.0003 | 22.0 | 16500 | 1.1571 | 0.8633 |
| 0.0004 | 23.0 | 17250 | 1.1256 | 0.8783 |
| 0.0035 | 24.0 | 18000 | 1.2671 | 0.8633 |
| 0.0 | 25.0 | 18750 | 1.1579 | 0.8683 |
| 0.002 | 26.0 | 19500 | 1.2159 | 0.87 |
| 0.0001 | 27.0 | 20250 | 1.2282 | 0.8717 |
| 0.0 | 28.0 | 21000 | 1.2713 | 0.8683 |
| 0.0 | 29.0 | 21750 | 1.3150 | 0.8683 |
| 0.0 | 30.0 | 22500 | 1.2639 | 0.8733 |
| 0.0 | 31.0 | 23250 | 1.4238 | 0.865 |
| 0.0 | 32.0 | 24000 | 1.3138 | 0.8717 |
| 0.0 | 33.0 | 24750 | 1.4236 | 0.8733 |
| 0.0 | 34.0 | 25500 | 1.4930 | 0.865 |
| 0.0 | 35.0 | 26250 | 1.4369 | 0.87 |
| 0.0 | 36.0 | 27000 | 1.4573 | 0.8667 |
| 0.0 | 37.0 | 27750 | 1.4567 | 0.8717 |
| 0.0 | 38.0 | 28500 | 1.4973 | 0.8767 |
| 0.0 | 39.0 | 29250 | 1.5427 | 0.8667 |
| 0.0 | 40.0 | 30000 | 1.5656 | 0.8717 |
| 0.0 | 41.0 | 30750 | 1.5787 | 0.8717 |
| 0.0 | 42.0 | 31500 | 1.5845 | 0.87 |
| 0.0 | 43.0 | 32250 | 1.5904 | 0.8717 |
| 0.0 | 44.0 | 33000 | 1.5995 | 0.8717 |
| 0.0 | 45.0 | 33750 | 1.6192 | 0.8717 |
| 0.0 | 46.0 | 34500 | 1.6307 | 0.8717 |
| 0.0 | 47.0 | 35250 | 1.6406 | 0.8733 |
| 0.0 | 48.0 | 36000 | 1.6477 | 0.8733 |
| 0.0 | 49.0 | 36750 | 1.6529 | 0.8733 |
| 0.0 | 50.0 | 37500 | 1.6552 | 0.8733 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_adamax_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_adamax_0001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8104
- Accuracy: 0.925

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1649 | 1.0 | 750 | 0.2981 | 0.8767 |
| 0.1219 | 2.0 | 1500 | 0.3351 | 0.9117 |
| 0.088 | 3.0 | 2250 | 0.4113 | 0.9117 |
| 0.0191 | 4.0 | 3000 | 0.5041 | 0.91 |
| 0.0316 | 5.0 | 3750 | 0.6431 | 0.905 |
| 0.0519 | 6.0 | 4500 | 0.6485 | 0.9017 |
| 0.0145 | 7.0 | 5250 | 0.6712 | 0.91 |
| 0.001 | 8.0 | 6000 | 0.7180 | 0.91 |
| 0.0005 | 9.0 | 6750 | 0.6201 | 0.9067 |
| 0.0001 | 10.0 | 7500 | 0.7375 | 0.92 |
| 0.009 | 11.0 | 8250 | 0.7397 | 0.9133 |
| 0.0 | 12.0 | 9000 | 0.7531 | 0.9233 |
| 0.0005 | 13.0 | 9750 | 0.7094 | 0.9167 |
| 0.0 | 14.0 | 10500 | 0.6906 | 0.9217 |
| 0.0 | 15.0 | 11250 | 0.7622 | 0.92 |
| 0.0 | 16.0 | 12000 | 0.7690 | 0.9167 |
| 0.0099 | 17.0 | 12750 | 0.7093 | 0.925 |
| 0.0049 | 18.0 | 13500 | 0.7817 | 0.9167 |
| 0.0 | 19.0 | 14250 | 0.7714 | 0.9183 |
| 0.0 | 20.0 | 15000 | 0.7423 | 0.92 |
| 0.0 | 21.0 | 15750 | 0.7472 | 0.9283 |
| 0.0 | 22.0 | 16500 | 0.8201 | 0.9217 |
| 0.0003 | 23.0 | 17250 | 0.7230 | 0.925 |
| 0.0 | 24.0 | 18000 | 0.7873 | 0.9233 |
| 0.0 | 25.0 | 18750 | 0.7903 | 0.9233 |
| 0.0 | 26.0 | 19500 | 0.7611 | 0.9233 |
| 0.0 | 27.0 | 20250 | 0.7662 | 0.9267 |
| 0.0 | 28.0 | 21000 | 0.7601 | 0.9267 |
| 0.0 | 29.0 | 21750 | 0.7659 | 0.925 |
| 0.0054 | 30.0 | 22500 | 0.7697 | 0.9217 |
| 0.0 | 31.0 | 23250 | 0.7755 | 0.9217 |
| 0.0 | 32.0 | 24000 | 0.7712 | 0.9217 |
| 0.0 | 33.0 | 24750 | 0.7599 | 0.9267 |
| 0.0 | 34.0 | 25500 | 0.7735 | 0.9267 |
| 0.0 | 35.0 | 26250 | 0.7806 | 0.925 |
| 0.0 | 36.0 | 27000 | 0.7835 | 0.9217 |
| 0.0039 | 37.0 | 27750 | 0.7879 | 0.925 |
| 0.0 | 38.0 | 28500 | 0.7885 | 0.9267 |
| 0.0 | 39.0 | 29250 | 0.7918 | 0.925 |
| 0.0 | 40.0 | 30000 | 0.7945 | 0.9267 |
| 0.0 | 41.0 | 30750 | 0.7955 | 0.9267 |
| 0.0 | 42.0 | 31500 | 0.7991 | 0.9233 |
| 0.0 | 43.0 | 32250 | 0.8003 | 0.925 |
| 0.0 | 44.0 | 33000 | 0.8023 | 0.925 |
| 0.0 | 45.0 | 33750 | 0.8041 | 0.925 |
| 0.0 | 46.0 | 34500 | 0.8060 | 0.925 |
| 0.0 | 47.0 | 35250 | 0.8084 | 0.925 |
| 0.0 | 48.0 | 36000 | 0.8088 | 0.925 |
| 0.0 | 49.0 | 36750 | 0.8102 | 0.9267 |
| 0.0 | 50.0 | 37500 | 0.8104 | 0.925 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_adamax_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_adamax_001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9313
- Accuracy: 0.905

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3135 | 1.0 | 750 | 0.2998 | 0.8733 |
| 0.2768 | 2.0 | 1500 | 0.4449 | 0.8417 |
| 0.2302 | 3.0 | 2250 | 0.3152 | 0.885 |
| 0.1705 | 4.0 | 3000 | 0.4052 | 0.8717 |
| 0.1542 | 5.0 | 3750 | 0.3463 | 0.9017 |
| 0.1492 | 6.0 | 4500 | 0.3620 | 0.8967 |
| 0.0963 | 7.0 | 5250 | 0.3599 | 0.895 |
| 0.0922 | 8.0 | 6000 | 0.4809 | 0.8883 |
| 0.1185 | 9.0 | 6750 | 0.5269 | 0.9017 |
| 0.0382 | 10.0 | 7500 | 0.5699 | 0.8983 |
| 0.0415 | 11.0 | 8250 | 0.4670 | 0.9033 |
| 0.0358 | 12.0 | 9000 | 0.4714 | 0.9083 |
| 0.0301 | 13.0 | 9750 | 0.4780 | 0.915 |
| 0.0178 | 14.0 | 10500 | 0.5327 | 0.9033 |
| 0.0172 | 15.0 | 11250 | 0.6375 | 0.8983 |
| 0.0148 | 16.0 | 12000 | 0.5566 | 0.8967 |
| 0.0235 | 17.0 | 12750 | 0.5739 | 0.9067 |
| 0.0298 | 18.0 | 13500 | 0.7210 | 0.8983 |
| 0.0012 | 19.0 | 14250 | 0.7611 | 0.8883 |
| 0.0391 | 20.0 | 15000 | 0.8089 | 0.8917 |
| 0.0002 | 21.0 | 15750 | 0.6460 | 0.8983 |
| 0.0095 | 22.0 | 16500 | 0.6954 | 0.9067 |
| 0.0251 | 23.0 | 17250 | 0.6718 | 0.9017 |
| 0.0021 | 24.0 | 18000 | 0.6374 | 0.9067 |
| 0.0001 | 25.0 | 18750 | 0.6533 | 0.905 |
| 0.0001 | 26.0 | 19500 | 0.7022 | 0.91 |
| 0.003 | 27.0 | 20250 | 0.8113 | 0.9 |
| 0.0277 | 28.0 | 21000 | 0.7402 | 0.8983 |
| 0.0056 | 29.0 | 21750 | 0.7949 | 0.8967 |
| 0.007 | 30.0 | 22500 | 0.8055 | 0.8967 |
| 0.0001 | 31.0 | 23250 | 0.8426 | 0.9083 |
| 0.0 | 32.0 | 24000 | 0.8618 | 0.905 |
| 0.0 | 33.0 | 24750 | 0.8392 | 0.9083 |
| 0.0 | 34.0 | 25500 | 0.8019 | 0.9067 |
| 0.0 | 35.0 | 26250 | 0.8163 | 0.9067 |
| 0.0 | 36.0 | 27000 | 0.8994 | 0.895 |
| 0.0037 | 37.0 | 27750 | 0.8599 | 0.9067 |
| 0.0 | 38.0 | 28500 | 0.8721 | 0.905 |
| 0.0 | 39.0 | 29250 | 0.8612 | 0.9067 |
| 0.0 | 40.0 | 30000 | 0.9150 | 0.9033 |
| 0.0 | 41.0 | 30750 | 0.8804 | 0.91 |
| 0.0 | 42.0 | 31500 | 0.8814 | 0.9067 |
| 0.0 | 43.0 | 32250 | 0.8966 | 0.9083 |
| 0.0 | 44.0 | 33000 | 0.9028 | 0.9083 |
| 0.0 | 45.0 | 33750 | 0.9087 | 0.9083 |
| 0.0 | 46.0 | 34500 | 0.9131 | 0.9083 |
| 0.0 | 47.0 | 35250 | 0.9195 | 0.9067 |
| 0.0 | 48.0 | 36000 | 0.9269 | 0.9067 |
| 0.0 | 49.0 | 36750 | 0.9312 | 0.9067 |
| 0.0 | 50.0 | 37500 | 0.9313 | 0.905 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_5x_deit_small_sgd_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_5x_deit_small_sgd_0001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5006
- Accuracy: 0.8037

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0629 | 1.0 | 375 | 1.0383 | 0.4592 |
| 1.0151 | 2.0 | 750 | 1.0009 | 0.4925 |
| 0.9588 | 3.0 | 1125 | 0.9619 | 0.5574 |
| 0.924 | 4.0 | 1500 | 0.9255 | 0.5890 |
| 0.8743 | 5.0 | 1875 | 0.8899 | 0.6290 |
| 0.8177 | 6.0 | 2250 | 0.8563 | 0.6522 |
| 0.7888 | 7.0 | 2625 | 0.8262 | 0.6755 |
| 0.7921 | 8.0 | 3000 | 0.7964 | 0.7005 |
| 0.7372 | 9.0 | 3375 | 0.7699 | 0.7138 |
| 0.7291 | 10.0 | 3750 | 0.7453 | 0.7221 |
| 0.7295 | 11.0 | 4125 | 0.7221 | 0.7255 |
| 0.6995 | 12.0 | 4500 | 0.7007 | 0.7288 |
| 0.621 | 13.0 | 4875 | 0.6811 | 0.7388 |
| 0.6398 | 14.0 | 5250 | 0.6638 | 0.7504 |
| 0.6383 | 15.0 | 5625 | 0.6483 | 0.7587 |
| 0.5747 | 16.0 | 6000 | 0.6341 | 0.7587 |
| 0.6097 | 17.0 | 6375 | 0.6214 | 0.7604 |
| 0.594 | 18.0 | 6750 | 0.6099 | 0.7604 |
| 0.5533 | 19.0 | 7125 | 0.5997 | 0.7654 |
| 0.5984 | 20.0 | 7500 | 0.5904 | 0.7687 |
| 0.5406 | 21.0 | 7875 | 0.5822 | 0.7720 |
| 0.525 | 22.0 | 8250 | 0.5743 | 0.7704 |
| 0.5434 | 23.0 | 8625 | 0.5673 | 0.7720 |
| 0.5253 | 24.0 | 9000 | 0.5609 | 0.7737 |
| 0.5143 | 25.0 | 9375 | 0.5549 | 0.7754 |
| 0.5351 | 26.0 | 9750 | 0.5494 | 0.7787 |
| 0.5716 | 27.0 | 10125 | 0.5444 | 0.7787 |
| 0.4849 | 28.0 | 10500 | 0.5399 | 0.7820 |
| 0.4878 | 29.0 | 10875 | 0.5357 | 0.7887 |
| 0.4887 | 30.0 | 11250 | 0.5319 | 0.7920 |
| 0.4866 | 31.0 | 11625 | 0.5283 | 0.7920 |
| 0.5025 | 32.0 | 12000 | 0.5250 | 0.7937 |
| 0.4672 | 33.0 | 12375 | 0.5219 | 0.7903 |
| 0.4395 | 34.0 | 12750 | 0.5192 | 0.7887 |
| 0.473 | 35.0 | 13125 | 0.5166 | 0.7920 |
| 0.4458 | 36.0 | 13500 | 0.5143 | 0.7920 |
| 0.4639 | 37.0 | 13875 | 0.5122 | 0.7937 |
| 0.4488 | 38.0 | 14250 | 0.5103 | 0.7953 |
| 0.4766 | 39.0 | 14625 | 0.5086 | 0.7970 |
| 0.4603 | 40.0 | 15000 | 0.5071 | 0.7987 |
| 0.4461 | 41.0 | 15375 | 0.5058 | 0.8003 |
| 0.4671 | 42.0 | 15750 | 0.5046 | 0.8003 |
| 0.4415 | 43.0 | 16125 | 0.5036 | 0.8020 |
| 0.4496 | 44.0 | 16500 | 0.5027 | 0.8020 |
| 0.4327 | 45.0 | 16875 | 0.5020 | 0.8020 |
| 0.5062 | 46.0 | 17250 | 0.5015 | 0.8020 |
| 0.4692 | 47.0 | 17625 | 0.5010 | 0.8037 |
| 0.426 | 48.0 | 18000 | 0.5008 | 0.8037 |
| 0.518 | 49.0 | 18375 | 0.5006 | 0.8037 |
| 0.4765 | 50.0 | 18750 | 0.5006 | 0.8037 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
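These cards repeatedly list `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1`. A rough pure-Python sketch of that schedule (linear warmup to the base LR over the first 10% of steps, then linear decay to zero); the exact rounding in `transformers`' `get_linear_schedule_with_warmup` may differ slightly:

```python
def linear_schedule_lr(step, total_steps, base_lr, warmup_ratio=0.1):
    """Learning rate at a given optimizer step for a linear warmup + linear decay schedule."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # linear ramp from 0 up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # linear decay from base_lr down to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# e.g. the fold-2 run above: 18750 total steps, base LR 1e-4
print(linear_schedule_lr(1875, 18750, 1e-4))  # peak LR, right at the end of warmup
```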
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_adamax_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_adamax_00001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8282
- Accuracy: 0.9065

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2758 | 1.0 | 751 | 0.3168 | 0.8831 |
| 0.1955 | 2.0 | 1502 | 0.2645 | 0.9032 |
| 0.1342 | 3.0 | 2253 | 0.2464 | 0.9115 |
| 0.0581 | 4.0 | 3004 | 0.2670 | 0.9032 |
| 0.0977 | 5.0 | 3755 | 0.3303 | 0.9115 |
| 0.0404 | 6.0 | 4506 | 0.3924 | 0.9048 |
| 0.0407 | 7.0 | 5257 | 0.4392 | 0.9098 |
| 0.0229 | 8.0 | 6008 | 0.5277 | 0.9132 |
| 0.023 | 9.0 | 6759 | 0.5759 | 0.9115 |
| 0.016 | 10.0 | 7510 | 0.6280 | 0.9032 |
| 0.0002 | 11.0 | 8261 | 0.6513 | 0.9098 |
| 0.0008 | 12.0 | 9012 | 0.6409 | 0.9182 |
| 0.006 | 13.0 | 9763 | 0.6473 | 0.9199 |
| 0.0 | 14.0 | 10514 | 0.7396 | 0.9065 |
| 0.0 | 15.0 | 11265 | 0.7703 | 0.9065 |
| 0.0 | 16.0 | 12016 | 0.7534 | 0.9065 |
| 0.0001 | 17.0 | 12767 | 0.8086 | 0.9032 |
| 0.0 | 18.0 | 13518 | 0.7937 | 0.9032 |
| 0.0 | 19.0 | 14269 | 0.7606 | 0.9165 |
| 0.0 | 20.0 | 15020 | 0.8234 | 0.9065 |
| 0.0001 | 21.0 | 15771 | 0.7617 | 0.9149 |
| 0.0 | 22.0 | 16522 | 0.8024 | 0.9015 |
| 0.0 | 23.0 | 17273 | 0.8089 | 0.9065 |
| 0.0 | 24.0 | 18024 | 0.8495 | 0.9015 |
| 0.0 | 25.0 | 18775 | 0.7997 | 0.9115 |
| 0.0 | 26.0 | 19526 | 0.8566 | 0.9015 |
| 0.0 | 27.0 | 20277 | 0.8140 | 0.9065 |
| 0.0 | 28.0 | 21028 | 0.8138 | 0.9065 |
| 0.0073 | 29.0 | 21779 | 0.7958 | 0.9082 |
| 0.0 | 30.0 | 22530 | 0.8037 | 0.9115 |
| 0.0 | 31.0 | 23281 | 0.8741 | 0.9032 |
| 0.0 | 32.0 | 24032 | 0.8298 | 0.9082 |
| 0.0 | 33.0 | 24783 | 0.8730 | 0.9015 |
| 0.0 | 34.0 | 25534 | 0.8840 | 0.8982 |
| 0.0 | 35.0 | 26285 | 0.8051 | 0.9132 |
| 0.0 | 36.0 | 27036 | 0.8192 | 0.9115 |
| 0.0 | 37.0 | 27787 | 0.8059 | 0.9132 |
| 0.0 | 38.0 | 28538 | 0.8065 | 0.9149 |
| 0.0 | 39.0 | 29289 | 0.8139 | 0.9132 |
| 0.0 | 40.0 | 30040 | 0.8141 | 0.9132 |
| 0.0 | 41.0 | 30791 | 0.8317 | 0.9098 |
| 0.0 | 42.0 | 31542 | 0.8371 | 0.9048 |
| 0.0 | 43.0 | 32293 | 0.8394 | 0.9032 |
| 0.0 | 44.0 | 33044 | 0.8362 | 0.9048 |
| 0.0 | 45.0 | 33795 | 0.8367 | 0.9048 |
| 0.0 | 46.0 | 34546 | 0.8416 | 0.9032 |
| 0.0 | 47.0 | 35297 | 0.8349 | 0.9048 |
| 0.0 | 48.0 | 36048 | 0.8314 | 0.9065 |
| 0.0 | 49.0 | 36799 | 0.8317 | 0.9065 |
| 0.0 | 50.0 | 37550 | 0.8282 | 0.9065 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
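The hyperparameter sections above all list "Adam with betas=(0.9,0.999) and epsilon=1e-08". As an illustration only (a single-scalar, bias-corrected Adam step, not the actual training code of these runs):

```python
def adam_step(param, grad, m, v, t, lr=1e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter, with bias correction.

    t is the 1-based step count; m and v are the running first/second moments.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# On the very first step the bias-corrected update has magnitude ~lr,
# regardless of the gradient's scale.
p, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
print(p)
```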
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_sgd_001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2862
- Accuracy: 0.9015

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5539 | 1.0 | 751 | 0.5690 | 0.7763 |
| 0.3867 | 2.0 | 1502 | 0.4456 | 0.8314 |
| 0.3236 | 3.0 | 2253 | 0.3927 | 0.8497 |
| 0.259 | 4.0 | 3004 | 0.3726 | 0.8514 |
| 0.3099 | 5.0 | 3755 | 0.3487 | 0.8598 |
| 0.2986 | 6.0 | 4506 | 0.3416 | 0.8715 |
| 0.2728 | 7.0 | 5257 | 0.3260 | 0.8731 |
| 0.2249 | 8.0 | 6008 | 0.3188 | 0.8781 |
| 0.2673 | 9.0 | 6759 | 0.3155 | 0.8848 |
| 0.2491 | 10.0 | 7510 | 0.3089 | 0.8848 |
| 0.2349 | 11.0 | 8261 | 0.3099 | 0.8881 |
| 0.2513 | 12.0 | 9012 | 0.3016 | 0.8898 |
| 0.2098 | 13.0 | 9763 | 0.3061 | 0.8898 |
| 0.1606 | 14.0 | 10514 | 0.3022 | 0.8881 |
| 0.1914 | 15.0 | 11265 | 0.2955 | 0.8881 |
| 0.2039 | 16.0 | 12016 | 0.2953 | 0.8898 |
| 0.2821 | 17.0 | 12767 | 0.2940 | 0.8965 |
| 0.1703 | 18.0 | 13518 | 0.2962 | 0.8915 |
| 0.2178 | 19.0 | 14269 | 0.2905 | 0.8965 |
| 0.1883 | 20.0 | 15020 | 0.2902 | 0.8998 |
| 0.13 | 21.0 | 15771 | 0.2893 | 0.8948 |
| 0.1613 | 22.0 | 16522 | 0.2875 | 0.8982 |
| 0.1627 | 23.0 | 17273 | 0.2879 | 0.8948 |
| 0.2201 | 24.0 | 18024 | 0.2853 | 0.8998 |
| 0.2067 | 25.0 | 18775 | 0.2893 | 0.8965 |
| 0.1982 | 26.0 | 19526 | 0.2860 | 0.8982 |
| 0.1922 | 27.0 | 20277 | 0.2854 | 0.8998 |
| 0.2065 | 28.0 | 21028 | 0.2873 | 0.8948 |
| 0.1663 | 29.0 | 21779 | 0.2836 | 0.9032 |
| 0.1637 | 30.0 | 22530 | 0.2824 | 0.9032 |
| 0.1216 | 31.0 | 23281 | 0.2840 | 0.8998 |
| 0.2073 | 32.0 | 24032 | 0.2863 | 0.9065 |
| 0.1694 | 33.0 | 24783 | 0.2888 | 0.8965 |
| 0.1525 | 34.0 | 25534 | 0.2882 | 0.8982 |
| 0.1562 | 35.0 | 26285 | 0.2864 | 0.9032 |
| 0.1612 | 36.0 | 27036 | 0.2821 | 0.9032 |
| 0.2418 | 37.0 | 27787 | 0.2832 | 0.9015 |
| 0.138 | 38.0 | 28538 | 0.2859 | 0.9032 |
| 0.0832 | 39.0 | 29289 | 0.2853 | 0.8998 |
| 0.1792 | 40.0 | 30040 | 0.2866 | 0.9015 |
| 0.1296 | 41.0 | 30791 | 0.2848 | 0.9032 |
| 0.1436 | 42.0 | 31542 | 0.2863 | 0.9032 |
| 0.1676 | 43.0 | 32293 | 0.2864 | 0.9015 |
| 0.129 | 44.0 | 33044 | 0.2863 | 0.9015 |
| 0.1268 | 45.0 | 33795 | 0.2864 | 0.9015 |
| 0.182 | 46.0 | 34546 | 0.2870 | 0.8998 |
| 0.0802 | 47.0 | 35297 | 0.2872 | 0.9015 |
| 0.1369 | 48.0 | 36048 | 0.2866 | 0.9015 |
| 0.1294 | 49.0 | 36799 | 0.2861 | 0.9015 |
| 0.1488 | 50.0 | 37550 | 0.2862 | 0.9015 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
ongkn/emikes-classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# emikes-classifier

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0253
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 69
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3954 | 1.25 | 10 | 0.3092 | 0.8571 |
| 0.1249 | 2.5 | 20 | 0.1407 | 1.0 |
| 0.046 | 3.75 | 30 | 0.0666 | 1.0 |
| 0.034 | 5.0 | 40 | 0.1060 | 0.9286 |
| 0.0255 | 6.25 | 50 | 0.0295 | 1.0 |
| 0.0198 | 7.5 | 60 | 0.0274 | 1.0 |
| 0.0209 | 8.75 | 70 | 0.1060 | 0.9286 |
| 0.02 | 10.0 | 80 | 0.0253 | 1.0 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "emi", "kes" ]
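The card above trains with `lr_scheduler_type: cosine` and `lr_scheduler_warmup_ratio: 0.05` over 80 steps. As a minimal sketch of what that schedule does — linear warmup, then cosine decay to zero — the following approximates it in plain Python (the rounding of warmup steps is an assumption; `transformers` computes this internally):

```python
import math

def cosine_lr(step, base_lr=5e-5, total_steps=80, warmup_ratio=0.05):
    """Approximate learning rate at a given optimizer step for a
    warmup-then-cosine schedule like the one in the card above."""
    warmup_steps = int(total_steps * warmup_ratio)  # 4 steps here (assumed rounding)
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr down to 0
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0))   # 0.0 at the very first step
print(cosine_lr(4))   # peak: 5e-05, the configured learning_rate
print(cosine_lr(80))  # ~0.0 at the final step
```

The short warmup explains why the first logged epoch has the highest training loss: the model spends only a handful of steps at reduced learning rates before reaching the configured peak.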
hkivancoral/smids_10x_deit_small_sgd_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3138 - Accuracy: 0.8686 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5167 | 1.0 | 750 | 0.5666 | 0.7737 | | 0.3469 | 2.0 | 1500 | 0.4420 | 0.8136 | | 0.3157 | 3.0 | 2250 | 0.3924 | 0.8336 | | 0.3366 | 4.0 | 3000 | 0.3644 | 0.8469 | | 0.2937 | 5.0 | 3750 | 0.3504 | 0.8569 | | 0.2683 | 6.0 | 4500 | 0.3342 | 0.8602 | | 0.2786 | 7.0 | 5250 | 0.3236 | 0.8636 | | 0.2458 | 8.0 | 6000 | 0.3168 | 0.8619 | | 0.2409 | 9.0 | 6750 | 0.3122 | 0.8586 | | 0.2266 | 10.0 | 7500 | 0.3079 | 0.8652 | | 0.2724 | 11.0 | 8250 | 0.3033 | 0.8586 | | 0.2793 | 12.0 | 9000 | 0.3021 | 0.8586 | | 0.2082 | 13.0 | 9750 | 0.3016 | 0.8619 | | 0.152 | 14.0 | 10500 | 0.3001 | 0.8669 | | 0.1732 | 15.0 | 11250 | 0.2977 | 0.8636 | | 0.1629 | 16.0 | 12000 | 0.2993 | 0.8636 | | 0.1493 | 17.0 | 12750 | 0.2962 | 0.8669 | | 0.1762 | 18.0 | 13500 | 0.2975 | 0.8669 | | 0.1954 | 19.0 | 14250 | 0.2989 | 0.8735 | | 0.1979 | 20.0 | 15000 | 0.2956 | 0.8636 | | 0.1452 | 21.0 | 15750 | 0.2997 | 0.8636 | | 
0.1414 | 22.0 | 16500 | 0.2986 | 0.8636 | | 0.131 | 23.0 | 17250 | 0.2989 | 0.8652 | | 0.1633 | 24.0 | 18000 | 0.2990 | 0.8652 | | 0.1429 | 25.0 | 18750 | 0.3003 | 0.8636 | | 0.2373 | 26.0 | 19500 | 0.3030 | 0.8735 | | 0.1884 | 27.0 | 20250 | 0.3051 | 0.8702 | | 0.1254 | 28.0 | 21000 | 0.3031 | 0.8602 | | 0.1804 | 29.0 | 21750 | 0.3034 | 0.8719 | | 0.1437 | 30.0 | 22500 | 0.3048 | 0.8686 | | 0.1608 | 31.0 | 23250 | 0.3012 | 0.8669 | | 0.1618 | 32.0 | 24000 | 0.3040 | 0.8652 | | 0.1429 | 33.0 | 24750 | 0.3043 | 0.8602 | | 0.1612 | 34.0 | 25500 | 0.3075 | 0.8652 | | 0.1719 | 35.0 | 26250 | 0.3075 | 0.8619 | | 0.1633 | 36.0 | 27000 | 0.3103 | 0.8669 | | 0.1619 | 37.0 | 27750 | 0.3071 | 0.8636 | | 0.1665 | 38.0 | 28500 | 0.3086 | 0.8669 | | 0.1293 | 39.0 | 29250 | 0.3088 | 0.8669 | | 0.1641 | 40.0 | 30000 | 0.3125 | 0.8719 | | 0.1466 | 41.0 | 30750 | 0.3125 | 0.8702 | | 0.1482 | 42.0 | 31500 | 0.3110 | 0.8652 | | 0.1022 | 43.0 | 32250 | 0.3124 | 0.8652 | | 0.1075 | 44.0 | 33000 | 0.3116 | 0.8669 | | 0.1257 | 45.0 | 33750 | 0.3131 | 0.8669 | | 0.1217 | 46.0 | 34500 | 0.3119 | 0.8669 | | 0.1431 | 47.0 | 35250 | 0.3120 | 0.8686 | | 0.1086 | 48.0 | 36000 | 0.3131 | 0.8686 | | 0.1041 | 49.0 | 36750 | 0.3136 | 0.8686 | | 0.1201 | 50.0 | 37500 | 0.3138 | 0.8686 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
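The label list above gives the class names in index order. A minimal sketch of turning a classifier's raw logits into one of these labels (the logit values here are made up purely for illustration):

```python
import math

# Index-to-name mapping taken from the label list above
ID2LABEL = {0: "abnormal_sperm", 1: "non-sperm", 2: "normal_sperm"}

def predict_label(logits):
    """Softmax over raw logits, then return the argmax class name and probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return ID2LABEL[probs.index(max(probs))], probs

label, probs = predict_label([0.2, -1.3, 2.9])  # hypothetical logits
print(label)  # normal_sperm
```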
hkivancoral/smids_10x_deit_small_adamax_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_adamax_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1874 - Accuracy: 0.8719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2546 | 1.0 | 750 | 0.2964 | 0.8885 | | 0.1392 | 2.0 | 1500 | 0.2964 | 0.8935 | | 0.1051 | 3.0 | 2250 | 0.3173 | 0.8802 | | 0.0797 | 4.0 | 3000 | 0.3716 | 0.8802 | | 0.0803 | 5.0 | 3750 | 0.4496 | 0.8769 | | 0.0599 | 6.0 | 4500 | 0.5455 | 0.8769 | | 0.0367 | 7.0 | 5250 | 0.6753 | 0.8686 | | 0.0203 | 8.0 | 6000 | 0.7402 | 0.8752 | | 0.0136 | 9.0 | 6750 | 0.8455 | 0.8686 | | 0.0001 | 10.0 | 7500 | 0.8969 | 0.8686 | | 0.0056 | 11.0 | 8250 | 0.9305 | 0.8769 | | 0.0002 | 12.0 | 9000 | 0.9474 | 0.8752 | | 0.0 | 13.0 | 9750 | 0.9957 | 0.8785 | | 0.0 | 14.0 | 10500 | 1.0123 | 0.8769 | | 0.0001 | 15.0 | 11250 | 0.9720 | 0.8835 | | 0.0001 | 16.0 | 12000 | 1.0684 | 0.8785 | | 0.0003 | 17.0 | 12750 | 1.1079 | 0.8752 | | 0.0 | 18.0 | 13500 | 1.0971 | 0.8752 | | 0.0 | 19.0 | 14250 | 1.0987 | 0.8735 | | 0.0 | 20.0 | 15000 | 1.1190 | 0.8769 | | 0.0 | 21.0 | 15750 | 1.1376 | 0.8686 | | 0.0049 | 
22.0 | 16500 | 1.1379 | 0.8686 | | 0.0014 | 23.0 | 17250 | 1.1542 | 0.8752 | | 0.0 | 24.0 | 18000 | 1.1536 | 0.8735 | | 0.0 | 25.0 | 18750 | 1.1721 | 0.8719 | | 0.0 | 26.0 | 19500 | 1.1498 | 0.8719 | | 0.01 | 27.0 | 20250 | 1.1595 | 0.8719 | | 0.0 | 28.0 | 21000 | 1.1250 | 0.8785 | | 0.0 | 29.0 | 21750 | 1.1514 | 0.8686 | | 0.0 | 30.0 | 22500 | 1.1182 | 0.8735 | | 0.0 | 31.0 | 23250 | 1.1637 | 0.8752 | | 0.0 | 32.0 | 24000 | 1.1726 | 0.8735 | | 0.0 | 33.0 | 24750 | 1.1697 | 0.8719 | | 0.0 | 34.0 | 25500 | 1.1588 | 0.8752 | | 0.0 | 35.0 | 26250 | 1.1653 | 0.8702 | | 0.0 | 36.0 | 27000 | 1.1669 | 0.8719 | | 0.0141 | 37.0 | 27750 | 1.1767 | 0.8719 | | 0.0 | 38.0 | 28500 | 1.1781 | 0.8719 | | 0.0 | 39.0 | 29250 | 1.1951 | 0.8702 | | 0.0 | 40.0 | 30000 | 1.1887 | 0.8702 | | 0.0 | 41.0 | 30750 | 1.1872 | 0.8702 | | 0.0 | 42.0 | 31500 | 1.1896 | 0.8702 | | 0.0 | 43.0 | 32250 | 1.1930 | 0.8702 | | 0.0 | 44.0 | 33000 | 1.1942 | 0.8702 | | 0.0056 | 45.0 | 33750 | 1.1902 | 0.8702 | | 0.0 | 46.0 | 34500 | 1.1880 | 0.8702 | | 0.0 | 47.0 | 35250 | 1.1877 | 0.8702 | | 0.0 | 48.0 | 36000 | 1.1882 | 0.8702 | | 0.0 | 49.0 | 36750 | 1.1884 | 0.8702 | | 0.0 | 50.0 | 37500 | 1.1874 | 0.8719 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
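The model name above suggests the Adamax optimizer, even though the auto-generated hyperparameter list prints the default Adam description. As a hedged sketch (not the Trainer's actual implementation), a single Adamax parameter update — the infinity-norm variant of Adam — looks like this:

```python
def adamax_step(theta, grad, state, lr=1e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One scalar Adamax update: exponential moving average of the gradient,
    divided by an exponentially weighted infinity norm of past gradients."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad      # first-moment estimate
    state["u"] = max(b2 * state["u"], abs(grad))        # infinity-norm term
    step = lr * state["m"] / ((1 - b1 ** state["t"]) * (state["u"] + eps))
    return theta - step, state

state = {"t": 0, "m": 0.0, "u": 0.0}
theta = 1.0
theta, state = adamax_step(theta, 0.5, state)  # hypothetical gradient of 0.5
```

With `lr=1e-5` as in the card, the first update moves the parameter by almost exactly the learning rate, since the bias-corrected moment and the infinity norm cancel at `t=1`.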
hkivancoral/smids_5x_deit_small_sgd_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_sgd_0001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4989 - Accuracy: 0.835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.059 | 1.0 | 375 | 1.0700 | 0.43 | | 1.011 | 2.0 | 750 | 1.0336 | 0.4883 | | 0.9442 | 3.0 | 1125 | 0.9972 | 0.5233 | | 0.9044 | 4.0 | 1500 | 0.9613 | 0.5667 | | 0.9034 | 5.0 | 1875 | 0.9273 | 0.6067 | | 0.8397 | 6.0 | 2250 | 0.8960 | 0.6217 | | 0.8344 | 7.0 | 2625 | 0.8666 | 0.6533 | | 0.8102 | 8.0 | 3000 | 0.8388 | 0.66 | | 0.7533 | 9.0 | 3375 | 0.8130 | 0.675 | | 0.7704 | 10.0 | 3750 | 0.7889 | 0.6783 | | 0.6922 | 11.0 | 4125 | 0.7657 | 0.695 | | 0.7058 | 12.0 | 4500 | 0.7444 | 0.7067 | | 0.7015 | 13.0 | 4875 | 0.7244 | 0.7183 | | 0.7084 | 14.0 | 5250 | 0.7056 | 0.725 | | 0.6276 | 15.0 | 5625 | 0.6882 | 0.74 | | 0.6138 | 16.0 | 6000 | 0.6721 | 0.745 | | 0.6401 | 17.0 | 6375 | 0.6573 | 0.7533 | | 0.6373 | 18.0 | 6750 | 0.6430 | 0.7633 | | 0.569 | 19.0 | 7125 | 0.6303 | 0.7633 | | 0.5819 | 20.0 | 7500 | 0.6185 | 0.77 | | 0.5294 | 21.0 | 7875 | 0.6077 | 0.7817 | | 0.5473 | 22.0 | 8250 | 
0.5978 | 0.7883 | | 0.5629 | 23.0 | 8625 | 0.5888 | 0.7967 | | 0.5783 | 24.0 | 9000 | 0.5802 | 0.8017 | | 0.509 | 25.0 | 9375 | 0.5724 | 0.8067 | | 0.5255 | 26.0 | 9750 | 0.5652 | 0.805 | | 0.5612 | 27.0 | 10125 | 0.5585 | 0.81 | | 0.5914 | 28.0 | 10500 | 0.5523 | 0.815 | | 0.4839 | 29.0 | 10875 | 0.5467 | 0.815 | | 0.4781 | 30.0 | 11250 | 0.5414 | 0.8167 | | 0.5423 | 31.0 | 11625 | 0.5367 | 0.8183 | | 0.5434 | 32.0 | 12000 | 0.5323 | 0.8183 | | 0.5812 | 33.0 | 12375 | 0.5281 | 0.82 | | 0.4776 | 34.0 | 12750 | 0.5244 | 0.8183 | | 0.4385 | 35.0 | 13125 | 0.5209 | 0.8217 | | 0.4956 | 36.0 | 13500 | 0.5178 | 0.8217 | | 0.4746 | 37.0 | 13875 | 0.5150 | 0.8233 | | 0.4824 | 38.0 | 14250 | 0.5124 | 0.8233 | | 0.49 | 39.0 | 14625 | 0.5101 | 0.8217 | | 0.4379 | 40.0 | 15000 | 0.5080 | 0.8233 | | 0.4149 | 41.0 | 15375 | 0.5062 | 0.825 | | 0.4917 | 42.0 | 15750 | 0.5046 | 0.8267 | | 0.5208 | 43.0 | 16125 | 0.5031 | 0.8283 | | 0.4676 | 44.0 | 16500 | 0.5020 | 0.8283 | | 0.4552 | 45.0 | 16875 | 0.5009 | 0.8317 | | 0.4563 | 46.0 | 17250 | 0.5002 | 0.8333 | | 0.5467 | 47.0 | 17625 | 0.4996 | 0.835 | | 0.5056 | 48.0 | 18000 | 0.4992 | 0.835 | | 0.4817 | 49.0 | 18375 | 0.4990 | 0.835 | | 0.4808 | 50.0 | 18750 | 0.4989 | 0.835 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2811 - Accuracy: 0.9083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.545 | 1.0 | 750 | 0.5587 | 0.785 | | 0.4133 | 2.0 | 1500 | 0.4211 | 0.8467 | | 0.358 | 3.0 | 2250 | 0.3782 | 0.8633 | | 0.3237 | 4.0 | 3000 | 0.3490 | 0.87 | | 0.3443 | 5.0 | 3750 | 0.3305 | 0.8767 | | 0.2928 | 6.0 | 4500 | 0.3200 | 0.8817 | | 0.2686 | 7.0 | 5250 | 0.3122 | 0.8867 | | 0.2534 | 8.0 | 6000 | 0.3123 | 0.885 | | 0.2251 | 9.0 | 6750 | 0.2946 | 0.8933 | | 0.1954 | 10.0 | 7500 | 0.2908 | 0.9 | | 0.2504 | 11.0 | 8250 | 0.2911 | 0.8967 | | 0.2172 | 12.0 | 9000 | 0.2849 | 0.905 | | 0.2089 | 13.0 | 9750 | 0.2810 | 0.905 | | 0.2631 | 14.0 | 10500 | 0.2804 | 0.905 | | 0.2076 | 15.0 | 11250 | 0.2751 | 0.915 | | 0.1833 | 16.0 | 12000 | 0.2763 | 0.9067 | | 0.2051 | 17.0 | 12750 | 0.2775 | 0.905 | | 0.1927 | 18.0 | 13500 | 0.2752 | 0.9083 | | 0.1896 | 19.0 | 14250 | 0.2722 | 0.9117 | | 0.193 | 20.0 | 15000 | 0.2720 | 0.905 | | 0.1978 | 21.0 | 15750 | 0.2723 | 0.905 | | 0.193 | 22.0 | 
16500 | 0.2691 | 0.91 | | 0.1867 | 23.0 | 17250 | 0.2706 | 0.9133 | | 0.1588 | 24.0 | 18000 | 0.2753 | 0.9083 | | 0.1896 | 25.0 | 18750 | 0.2771 | 0.8983 | | 0.1697 | 26.0 | 19500 | 0.2708 | 0.9133 | | 0.1259 | 27.0 | 20250 | 0.2702 | 0.9117 | | 0.152 | 28.0 | 21000 | 0.2731 | 0.9083 | | 0.1891 | 29.0 | 21750 | 0.2747 | 0.9117 | | 0.1716 | 30.0 | 22500 | 0.2723 | 0.9083 | | 0.1252 | 31.0 | 23250 | 0.2778 | 0.905 | | 0.1227 | 32.0 | 24000 | 0.2742 | 0.9083 | | 0.166 | 33.0 | 24750 | 0.2738 | 0.9017 | | 0.1299 | 34.0 | 25500 | 0.2772 | 0.9083 | | 0.1287 | 35.0 | 26250 | 0.2752 | 0.91 | | 0.1172 | 36.0 | 27000 | 0.2784 | 0.9033 | | 0.1292 | 37.0 | 27750 | 0.2763 | 0.9033 | | 0.1686 | 38.0 | 28500 | 0.2772 | 0.9067 | | 0.1469 | 39.0 | 29250 | 0.2777 | 0.9067 | | 0.1673 | 40.0 | 30000 | 0.2785 | 0.9083 | | 0.1244 | 41.0 | 30750 | 0.2779 | 0.9067 | | 0.149 | 42.0 | 31500 | 0.2782 | 0.9067 | | 0.1031 | 43.0 | 32250 | 0.2799 | 0.905 | | 0.1374 | 44.0 | 33000 | 0.2832 | 0.9067 | | 0.1179 | 45.0 | 33750 | 0.2818 | 0.905 | | 0.1282 | 46.0 | 34500 | 0.2810 | 0.905 | | 0.1603 | 47.0 | 35250 | 0.2819 | 0.9067 | | 0.1237 | 48.0 | 36000 | 0.2811 | 0.9083 | | 0.1333 | 49.0 | 36750 | 0.2808 | 0.9067 | | 0.1344 | 50.0 | 37500 | 0.2811 | 0.9083 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
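The accuracies reported in the card above (0.9083, 0.905, 0.9133, …) are consistent with a validation split of about 600 images — an assumption inferred from the rounding, not stated anywhere in the card. A quick arithmetic check:

```python
# Hypothetical: 545 correct predictions out of an assumed 600-image validation split
correct, total = 545, 600
acc = correct / total
print(round(acc, 4))  # 0.9083 — matches the reported final accuracy
```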
hkivancoral/smids_10x_deit_small_adamax_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_adamax_00001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8187 - Accuracy: 0.9183 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2585 | 1.0 | 750 | 0.2681 | 0.8983 | | 0.2105 | 2.0 | 1500 | 0.2490 | 0.9167 | | 0.0786 | 3.0 | 2250 | 0.2625 | 0.9167 | | 0.0736 | 4.0 | 3000 | 0.2826 | 0.9133 | | 0.0688 | 5.0 | 3750 | 0.3568 | 0.91 | | 0.0468 | 6.0 | 4500 | 0.4349 | 0.9083 | | 0.0289 | 7.0 | 5250 | 0.4645 | 0.9183 | | 0.0394 | 8.0 | 6000 | 0.5300 | 0.9183 | | 0.0012 | 9.0 | 6750 | 0.5842 | 0.92 | | 0.0139 | 10.0 | 7500 | 0.6285 | 0.915 | | 0.0002 | 11.0 | 8250 | 0.6464 | 0.9217 | | 0.0001 | 12.0 | 9000 | 0.6757 | 0.9133 | | 0.0 | 13.0 | 9750 | 0.7480 | 0.9167 | | 0.0001 | 14.0 | 10500 | 0.7033 | 0.92 | | 0.0 | 15.0 | 11250 | 0.7525 | 0.9133 | | 0.0 | 16.0 | 12000 | 0.7472 | 0.915 | | 0.0 | 17.0 | 12750 | 0.7380 | 0.92 | | 0.0 | 18.0 | 13500 | 0.7432 | 0.9183 | | 0.0 | 19.0 | 14250 | 0.7438 | 0.9217 | | 0.0 | 20.0 | 15000 | 0.7615 | 0.92 | | 0.0 | 21.0 | 15750 | 0.7581 | 0.9233 | | 0.0 | 22.0 | 16500 | 0.7753 | 
0.92 | | 0.0 | 23.0 | 17250 | 0.7758 | 0.92 | | 0.0 | 24.0 | 18000 | 0.7745 | 0.9217 | | 0.0 | 25.0 | 18750 | 0.7780 | 0.9233 | | 0.0 | 26.0 | 19500 | 0.7763 | 0.9217 | | 0.0 | 27.0 | 20250 | 0.7839 | 0.9183 | | 0.0 | 28.0 | 21000 | 0.7914 | 0.9183 | | 0.0 | 29.0 | 21750 | 0.7935 | 0.92 | | 0.0 | 30.0 | 22500 | 0.8320 | 0.9117 | | 0.0 | 31.0 | 23250 | 0.8021 | 0.9183 | | 0.0 | 32.0 | 24000 | 0.8041 | 0.9217 | | 0.0 | 33.0 | 24750 | 0.8030 | 0.9167 | | 0.0 | 34.0 | 25500 | 0.8170 | 0.9133 | | 0.0 | 35.0 | 26250 | 0.8237 | 0.915 | | 0.0 | 36.0 | 27000 | 0.8072 | 0.9167 | | 0.0 | 37.0 | 27750 | 0.8249 | 0.915 | | 0.0 | 38.0 | 28500 | 0.8116 | 0.9167 | | 0.0 | 39.0 | 29250 | 0.8160 | 0.9217 | | 0.0 | 40.0 | 30000 | 0.8158 | 0.92 | | 0.0 | 41.0 | 30750 | 0.8164 | 0.92 | | 0.0 | 42.0 | 31500 | 0.8163 | 0.92 | | 0.0 | 43.0 | 32250 | 0.8169 | 0.92 | | 0.0 | 44.0 | 33000 | 0.8174 | 0.92 | | 0.0 | 45.0 | 33750 | 0.8182 | 0.92 | | 0.0 | 46.0 | 34500 | 0.8186 | 0.9183 | | 0.0 | 47.0 | 35250 | 0.8185 | 0.92 | | 0.0 | 48.0 | 36000 | 0.8187 | 0.92 | | 0.0 | 49.0 | 36750 | 0.8181 | 0.9183 | | 0.0 | 50.0 | 37500 | 0.8187 | 0.9183 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3291 - Accuracy: 0.8767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5469 | 1.0 | 750 | 0.5533 | 0.7983 | | 0.4148 | 2.0 | 1500 | 0.4326 | 0.8367 | | 0.3982 | 3.0 | 2250 | 0.3912 | 0.8467 | | 0.355 | 4.0 | 3000 | 0.3693 | 0.8533 | | 0.3032 | 5.0 | 3750 | 0.3569 | 0.8583 | | 0.2345 | 6.0 | 4500 | 0.3466 | 0.8617 | | 0.2053 | 7.0 | 5250 | 0.3412 | 0.865 | | 0.2443 | 8.0 | 6000 | 0.3381 | 0.8633 | | 0.259 | 9.0 | 6750 | 0.3314 | 0.875 | | 0.2146 | 10.0 | 7500 | 0.3275 | 0.8717 | | 0.2301 | 11.0 | 8250 | 0.3262 | 0.8733 | | 0.298 | 12.0 | 9000 | 0.3264 | 0.8733 | | 0.2031 | 13.0 | 9750 | 0.3234 | 0.8783 | | 0.1941 | 14.0 | 10500 | 0.3276 | 0.8783 | | 0.1822 | 15.0 | 11250 | 0.3209 | 0.88 | | 0.2209 | 16.0 | 12000 | 0.3226 | 0.8767 | | 0.1294 | 17.0 | 12750 | 0.3179 | 0.8817 | | 0.1726 | 18.0 | 13500 | 0.3224 | 0.88 | | 0.2222 | 19.0 | 14250 | 0.3196 | 0.8833 | | 0.1604 | 20.0 | 15000 | 0.3199 | 0.8817 | | 0.1742 | 21.0 | 15750 | 0.3204 | 0.8783 | | 0.1599 | 
22.0 | 16500 | 0.3188 | 0.88 | | 0.1753 | 23.0 | 17250 | 0.3189 | 0.8817 | | 0.1975 | 24.0 | 18000 | 0.3189 | 0.8817 | | 0.1797 | 25.0 | 18750 | 0.3190 | 0.8817 | | 0.1646 | 26.0 | 19500 | 0.3244 | 0.8817 | | 0.1585 | 27.0 | 20250 | 0.3244 | 0.885 | | 0.1303 | 28.0 | 21000 | 0.3225 | 0.8817 | | 0.1144 | 29.0 | 21750 | 0.3207 | 0.8817 | | 0.1409 | 30.0 | 22500 | 0.3230 | 0.8817 | | 0.1303 | 31.0 | 23250 | 0.3219 | 0.8833 | | 0.1405 | 32.0 | 24000 | 0.3260 | 0.8817 | | 0.1503 | 33.0 | 24750 | 0.3248 | 0.88 | | 0.1402 | 34.0 | 25500 | 0.3257 | 0.8817 | | 0.1266 | 35.0 | 26250 | 0.3227 | 0.88 | | 0.1495 | 36.0 | 27000 | 0.3271 | 0.8817 | | 0.1021 | 37.0 | 27750 | 0.3248 | 0.8833 | | 0.1616 | 38.0 | 28500 | 0.3242 | 0.885 | | 0.158 | 39.0 | 29250 | 0.3254 | 0.88 | | 0.1668 | 40.0 | 30000 | 0.3256 | 0.8833 | | 0.1276 | 41.0 | 30750 | 0.3297 | 0.88 | | 0.1072 | 42.0 | 31500 | 0.3307 | 0.88 | | 0.1457 | 43.0 | 32250 | 0.3289 | 0.8783 | | 0.1691 | 44.0 | 33000 | 0.3278 | 0.8817 | | 0.1442 | 45.0 | 33750 | 0.3288 | 0.88 | | 0.1231 | 46.0 | 34500 | 0.3279 | 0.88 | | 0.1011 | 47.0 | 35250 | 0.3276 | 0.8767 | | 0.1059 | 48.0 | 36000 | 0.3287 | 0.8767 | | 0.1263 | 49.0 | 36750 | 0.3292 | 0.8767 | | 0.1053 | 50.0 | 37500 | 0.3291 | 0.8767 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_adamax_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_adamax_00001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3483 - Accuracy: 0.8783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2714 | 1.0 | 750 | 0.3483 | 0.875 | | 0.1928 | 2.0 | 1500 | 0.3347 | 0.8833 | | 0.1383 | 3.0 | 2250 | 0.3802 | 0.87 | | 0.0835 | 4.0 | 3000 | 0.4083 | 0.8833 | | 0.07 | 5.0 | 3750 | 0.4749 | 0.8833 | | 0.0338 | 6.0 | 4500 | 0.5541 | 0.8767 | | 0.0133 | 7.0 | 5250 | 0.6527 | 0.8783 | | 0.0087 | 8.0 | 6000 | 0.7143 | 0.88 | | 0.0145 | 9.0 | 6750 | 0.7738 | 0.88 | | 0.0002 | 10.0 | 7500 | 0.8388 | 0.8767 | | 0.0004 | 11.0 | 8250 | 0.9053 | 0.8817 | | 0.0065 | 12.0 | 9000 | 0.9720 | 0.8783 | | 0.0 | 13.0 | 9750 | 1.0304 | 0.8767 | | 0.0 | 14.0 | 10500 | 1.0771 | 0.8717 | | 0.0 | 15.0 | 11250 | 1.0764 | 0.8783 | | 0.0326 | 16.0 | 12000 | 1.0955 | 0.8833 | | 0.0001 | 17.0 | 12750 | 1.0921 | 0.8817 | | 0.0 | 18.0 | 13500 | 1.1024 | 0.8817 | | 0.0 | 19.0 | 14250 | 1.1225 | 0.8817 | | 0.0 | 20.0 | 15000 | 1.1467 | 0.88 | | 0.0 | 21.0 | 15750 | 1.1711 | 0.88 | | 0.0 | 22.0 | 16500 | 1.1842 
| 0.8783 | | 0.0 | 23.0 | 17250 | 1.1878 | 0.8783 | | 0.0 | 24.0 | 18000 | 1.2170 | 0.8817 | | 0.0 | 25.0 | 18750 | 1.2183 | 0.88 | | 0.0 | 26.0 | 19500 | 1.2367 | 0.88 | | 0.0 | 27.0 | 20250 | 1.2535 | 0.8783 | | 0.0 | 28.0 | 21000 | 1.2655 | 0.8833 | | 0.0 | 29.0 | 21750 | 1.2701 | 0.8783 | | 0.0 | 30.0 | 22500 | 1.2647 | 0.8783 | | 0.0 | 31.0 | 23250 | 1.2884 | 0.8783 | | 0.0 | 32.0 | 24000 | 1.2899 | 0.8733 | | 0.0 | 33.0 | 24750 | 1.3073 | 0.8817 | | 0.0 | 34.0 | 25500 | 1.3112 | 0.8833 | | 0.0 | 35.0 | 26250 | 1.3094 | 0.8817 | | 0.0 | 36.0 | 27000 | 1.3116 | 0.88 | | 0.0 | 37.0 | 27750 | 1.3157 | 0.88 | | 0.0 | 38.0 | 28500 | 1.3213 | 0.88 | | 0.0 | 39.0 | 29250 | 1.3285 | 0.8767 | | 0.0 | 40.0 | 30000 | 1.3297 | 0.8767 | | 0.0 | 41.0 | 30750 | 1.3323 | 0.8783 | | 0.0 | 42.0 | 31500 | 1.3346 | 0.8767 | | 0.0 | 43.0 | 32250 | 1.3389 | 0.8783 | | 0.0 | 44.0 | 33000 | 1.3404 | 0.8783 | | 0.0 | 45.0 | 33750 | 1.3431 | 0.8783 | | 0.0 | 46.0 | 34500 | 1.3453 | 0.8783 | | 0.0 | 47.0 | 35250 | 1.3463 | 0.8783 | | 0.0 | 48.0 | 36000 | 1.3478 | 0.8783 | | 0.0 | 49.0 | 36750 | 1.3483 | 0.8783 | | 0.0 | 50.0 | 37500 | 1.3483 | 0.8783 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2764 - Accuracy: 0.895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5289 | 1.0 | 750 | 0.5577 | 0.7883 | | 0.411 | 2.0 | 1500 | 0.4355 | 0.835 | | 0.3696 | 3.0 | 2250 | 0.3887 | 0.85 | | 0.3417 | 4.0 | 3000 | 0.3643 | 0.8517 | | 0.3357 | 5.0 | 3750 | 0.3441 | 0.8617 | | 0.2644 | 6.0 | 4500 | 0.3299 | 0.865 | | 0.2577 | 7.0 | 5250 | 0.3164 | 0.8667 | | 0.2725 | 8.0 | 6000 | 0.3096 | 0.875 | | 0.2894 | 9.0 | 6750 | 0.3046 | 0.8717 | | 0.2245 | 10.0 | 7500 | 0.2980 | 0.87 | | 0.2663 | 11.0 | 8250 | 0.2930 | 0.8817 | | 0.2488 | 12.0 | 9000 | 0.2925 | 0.8717 | | 0.2365 | 13.0 | 9750 | 0.2865 | 0.88 | | 0.2172 | 14.0 | 10500 | 0.2813 | 0.8833 | | 0.2487 | 15.0 | 11250 | 0.2761 | 0.885 | | 0.1796 | 16.0 | 12000 | 0.2827 | 0.8817 | | 0.1959 | 17.0 | 12750 | 0.2794 | 0.8833 | | 0.1795 | 18.0 | 13500 | 0.2745 | 0.8833 | | 0.2262 | 19.0 | 14250 | 0.2788 | 0.885 | | 0.1595 | 20.0 | 15000 | 0.2793 | 0.885 | | 0.2022 | 21.0 | 15750 | 0.2745 | 0.8833 | | 0.2023 | 22.0 
| 16500 | 0.2758 | 0.8917 | | 0.1864 | 23.0 | 17250 | 0.2773 | 0.8883 | | 0.1869 | 24.0 | 18000 | 0.2763 | 0.8967 | | 0.1883 | 25.0 | 18750 | 0.2788 | 0.89 | | 0.1768 | 26.0 | 19500 | 0.2728 | 0.8967 | | 0.1135 | 27.0 | 20250 | 0.2823 | 0.8867 | | 0.1819 | 28.0 | 21000 | 0.2713 | 0.8933 | | 0.1691 | 29.0 | 21750 | 0.2729 | 0.8967 | | 0.1867 | 30.0 | 22500 | 0.2819 | 0.89 | | 0.1549 | 31.0 | 23250 | 0.2710 | 0.8933 | | 0.125 | 32.0 | 24000 | 0.2766 | 0.8917 | | 0.1602 | 33.0 | 24750 | 0.2747 | 0.895 | | 0.1131 | 34.0 | 25500 | 0.2730 | 0.9 | | 0.1454 | 35.0 | 26250 | 0.2723 | 0.895 | | 0.1829 | 36.0 | 27000 | 0.2731 | 0.8967 | | 0.1 | 37.0 | 27750 | 0.2730 | 0.8967 | | 0.1344 | 38.0 | 28500 | 0.2751 | 0.8983 | | 0.1584 | 39.0 | 29250 | 0.2745 | 0.8983 | | 0.1265 | 40.0 | 30000 | 0.2754 | 0.8967 | | 0.1671 | 41.0 | 30750 | 0.2769 | 0.8967 | | 0.147 | 42.0 | 31500 | 0.2744 | 0.8933 | | 0.1588 | 43.0 | 32250 | 0.2753 | 0.8967 | | 0.1433 | 44.0 | 33000 | 0.2767 | 0.9 | | 0.1715 | 45.0 | 33750 | 0.2775 | 0.8967 | | 0.1027 | 46.0 | 34500 | 0.2766 | 0.9 | | 0.1628 | 47.0 | 35250 | 0.2771 | 0.8967 | | 0.1468 | 48.0 | 36000 | 0.2769 | 0.895 | | 0.1346 | 49.0 | 36750 | 0.2765 | 0.895 | | 0.0897 | 50.0 | 37500 | 0.2764 | 0.895 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
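The step counts in the training tables follow directly from the hyperparameters: 750 optimizer steps are logged per epoch for 50 epochs. A quick sanity check (the implied training-set size is an inference from batch size × steps, not stated in the card):

```python
steps_per_epoch = 750    # steps logged per epoch in the table above
epochs = 50
train_batch_size = 32

total_steps = steps_per_epoch * epochs
samples_per_epoch = steps_per_epoch * train_batch_size  # upper bound; last batch may be partial

print(total_steps)        # 37500 — matches the final "Step" value in the table
print(samples_per_epoch)  # 24000 — implied training-set size (approximate)
```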
hkivancoral/smids_10x_deit_small_adamax_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_adamax_00001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8990 - Accuracy: 0.8967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2314 | 1.0 | 750 | 0.3103 | 0.8783 | | 0.2018 | 2.0 | 1500 | 0.2701 | 0.8983 | | 0.1509 | 3.0 | 2250 | 0.2783 | 0.9 | | 0.0994 | 4.0 | 3000 | 0.3003 | 0.8983 | | 0.1044 | 5.0 | 3750 | 0.3361 | 0.905 | | 0.0438 | 6.0 | 4500 | 0.4330 | 0.89 | | 0.017 | 7.0 | 5250 | 0.4929 | 0.9 | | 0.0271 | 8.0 | 6000 | 0.5583 | 0.8883 | | 0.0025 | 9.0 | 6750 | 0.6641 | 0.895 | | 0.0019 | 10.0 | 7500 | 0.6642 | 0.895 | | 0.0069 | 11.0 | 8250 | 0.7364 | 0.895 | | 0.0001 | 12.0 | 9000 | 0.7531 | 0.8917 | | 0.0001 | 13.0 | 9750 | 0.7730 | 0.8983 | | 0.0 | 14.0 | 10500 | 0.7588 | 0.895 | | 0.0 | 15.0 | 11250 | 0.8009 | 0.8933 | | 0.0 | 16.0 | 12000 | 0.8380 | 0.8883 | | 0.002 | 17.0 | 12750 | 0.8384 | 0.8917 | | 0.0 | 18.0 | 13500 | 0.8173 | 0.9017 | | 0.0 | 19.0 | 14250 | 0.8127 | 0.8983 | | 0.0 | 20.0 | 15000 | 0.8957 | 0.8917 | | 0.0 | 21.0 | 15750 | 0.8587 | 0.8917 | | 0.0 | 22.0 | 16500 | 0.8745 | 
0.89 | | 0.0 | 23.0 | 17250 | 0.8882 | 0.8917 | | 0.0 | 24.0 | 18000 | 0.8669 | 0.895 | | 0.0 | 25.0 | 18750 | 0.8973 | 0.8917 | | 0.0 | 26.0 | 19500 | 0.8589 | 0.895 | | 0.0 | 27.0 | 20250 | 0.8360 | 0.9 | | 0.0 | 28.0 | 21000 | 0.8787 | 0.8967 | | 0.0 | 29.0 | 21750 | 0.8640 | 0.895 | | 0.0039 | 30.0 | 22500 | 0.8923 | 0.8867 | | 0.0 | 31.0 | 23250 | 0.8650 | 0.8967 | | 0.0 | 32.0 | 24000 | 0.8868 | 0.8983 | | 0.0 | 33.0 | 24750 | 0.8702 | 0.8967 | | 0.0 | 34.0 | 25500 | 0.8705 | 0.8933 | | 0.0 | 35.0 | 26250 | 0.8693 | 0.8933 | | 0.0 | 36.0 | 27000 | 0.8764 | 0.895 | | 0.0035 | 37.0 | 27750 | 0.8971 | 0.8933 | | 0.0 | 38.0 | 28500 | 0.8853 | 0.895 | | 0.0 | 39.0 | 29250 | 0.8831 | 0.895 | | 0.0 | 40.0 | 30000 | 0.8852 | 0.895 | | 0.0 | 41.0 | 30750 | 0.8936 | 0.8967 | | 0.0 | 42.0 | 31500 | 0.8989 | 0.895 | | 0.0 | 43.0 | 32250 | 0.8952 | 0.895 | | 0.0 | 44.0 | 33000 | 0.8982 | 0.895 | | 0.0 | 45.0 | 33750 | 0.8981 | 0.895 | | 0.0 | 46.0 | 34500 | 0.8978 | 0.895 | | 0.0 | 47.0 | 35250 | 0.8987 | 0.895 | | 0.0 | 48.0 | 36000 | 0.8989 | 0.895 | | 0.0 | 49.0 | 36750 | 0.8995 | 0.895 | | 0.0 | 50.0 | 37500 | 0.8990 | 0.8967 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_0001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4121 - Accuracy: 0.8464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9962 | 1.0 | 751 | 1.0240 | 0.4741 | | 0.8829 | 2.0 | 1502 | 0.9630 | 0.5259 | | 0.8203 | 3.0 | 2253 | 0.8958 | 0.6027 | | 0.7637 | 4.0 | 3004 | 0.8327 | 0.6578 | | 0.7321 | 5.0 | 3755 | 0.7782 | 0.6912 | | 0.7017 | 6.0 | 4506 | 0.7239 | 0.7078 | | 0.5812 | 7.0 | 5257 | 0.6809 | 0.7212 | | 0.581 | 8.0 | 6008 | 0.6410 | 0.7346 | | 0.5592 | 9.0 | 6759 | 0.6086 | 0.7513 | | 0.5145 | 10.0 | 7510 | 0.5829 | 0.7679 | | 0.5332 | 11.0 | 8261 | 0.5629 | 0.7746 | | 0.4756 | 12.0 | 9012 | 0.5433 | 0.7796 | | 0.4797 | 13.0 | 9763 | 0.5294 | 0.7846 | | 0.4315 | 14.0 | 10514 | 0.5168 | 0.7930 | | 0.4112 | 15.0 | 11265 | 0.5056 | 0.8013 | | 0.4474 | 16.0 | 12016 | 0.4952 | 0.8030 | | 0.4529 | 17.0 | 12767 | 0.4868 | 0.8097 | | 0.421 | 18.0 | 13518 | 0.4802 | 0.8130 | | 0.4112 | 19.0 | 14269 | 0.4730 | 0.8130 | | 0.4039 | 20.0 | 15020 | 0.4670 | 0.8180 | | 0.3219 | 21.0 | 15771 | 0.4615 | 0.8164 | | 
0.411 | 22.0 | 16522 | 0.4563 | 0.8180 | | 0.3769 | 23.0 | 17273 | 0.4528 | 0.8214 | | 0.4423 | 24.0 | 18024 | 0.4481 | 0.8214 | | 0.4214 | 25.0 | 18775 | 0.4442 | 0.8230 | | 0.4588 | 26.0 | 19526 | 0.4419 | 0.8280 | | 0.3977 | 27.0 | 20277 | 0.4383 | 0.8314 | | 0.4288 | 28.0 | 21028 | 0.4359 | 0.8297 | | 0.3842 | 29.0 | 21779 | 0.4331 | 0.8331 | | 0.38 | 30.0 | 22530 | 0.4307 | 0.8331 | | 0.3344 | 31.0 | 23281 | 0.4288 | 0.8347 | | 0.4273 | 32.0 | 24032 | 0.4264 | 0.8347 | | 0.3923 | 33.0 | 24783 | 0.4244 | 0.8364 | | 0.3452 | 34.0 | 25534 | 0.4233 | 0.8364 | | 0.3666 | 35.0 | 26285 | 0.4214 | 0.8381 | | 0.3806 | 36.0 | 27036 | 0.4199 | 0.8397 | | 0.4471 | 37.0 | 27787 | 0.4189 | 0.8397 | | 0.3236 | 38.0 | 28538 | 0.4183 | 0.8414 | | 0.2974 | 39.0 | 29289 | 0.4171 | 0.8397 | | 0.4164 | 40.0 | 30040 | 0.4161 | 0.8397 | | 0.3819 | 41.0 | 30791 | 0.4153 | 0.8431 | | 0.3798 | 42.0 | 31542 | 0.4146 | 0.8447 | | 0.3898 | 43.0 | 32293 | 0.4139 | 0.8447 | | 0.3508 | 44.0 | 33044 | 0.4133 | 0.8447 | | 0.3647 | 45.0 | 33795 | 0.4128 | 0.8447 | | 0.4056 | 46.0 | 34546 | 0.4125 | 0.8447 | | 0.3591 | 47.0 | 35297 | 0.4123 | 0.8464 | | 0.4233 | 48.0 | 36048 | 0.4121 | 0.8464 | | 0.3734 | 49.0 | 36799 | 0.4121 | 0.8464 | | 0.3779 | 50.0 | 37550 | 0.4121 | 0.8464 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
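The DeiT cards above all train with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` — for the fold-1 run that is 751 steps/epoch × 50 epochs with a base LR of 0.0001. A minimal, dependency-free sketch of that schedule (the function name is ours; the formula assumes the behavior of transformers' `get_linear_schedule_with_warmup`):

```python
def linear_warmup_lr(step, base_lr, total_steps, warmup_ratio=0.1):
    """Linear warmup then linear decay, as in transformers' linear scheduler."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp from 0 up to base_lr over the warmup window.
        scale = step / max(1, warmup_steps)
    else:
        # Decay linearly from base_lr back down to 0 at total_steps.
        scale = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return base_lr * scale

# Fold-1 run: 751 steps/epoch x 50 epochs, base LR 1e-4, 10% warmup.
TOTAL = 751 * 50  # 37550 optimizer steps, so warmup ends at step 3755
print(linear_warmup_lr(0, 1e-4, TOTAL))      # 0.0 (start of warmup)
print(linear_warmup_lr(3755, 1e-4, TOTAL))   # 1e-4 (warmup complete)
print(linear_warmup_lr(TOTAL, 1e-4, TOTAL))  # 0.0 (end of training)
```

With a 10% warmup ratio the peak LR is only reached a tenth of the way through training, which is consistent with the slow early accuracy gains in the tables above.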
hkivancoral/smids_10x_deit_small_sgd_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_00001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9431 - Accuracy: 0.5543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0703 | 1.0 | 751 | 1.0730 | 0.4341 | | 1.028 | 2.0 | 1502 | 1.0687 | 0.4341 | | 1.0258 | 3.0 | 2253 | 1.0640 | 0.4374 | | 1.0492 | 4.0 | 3004 | 1.0592 | 0.4391 | | 1.036 | 5.0 | 3755 | 1.0544 | 0.4407 | | 1.0185 | 6.0 | 4506 | 1.0494 | 0.4457 | | 1.0044 | 7.0 | 5257 | 1.0445 | 0.4524 | | 1.0106 | 8.0 | 6008 | 1.0397 | 0.4524 | | 1.0008 | 9.0 | 6759 | 1.0351 | 0.4558 | | 0.983 | 10.0 | 7510 | 1.0305 | 0.4624 | | 0.9888 | 11.0 | 8261 | 1.0261 | 0.4691 | | 0.98 | 12.0 | 9012 | 1.0217 | 0.4758 | | 0.9777 | 13.0 | 9763 | 1.0175 | 0.4775 | | 0.9805 | 14.0 | 10514 | 1.0134 | 0.4825 | | 0.9554 | 15.0 | 11265 | 1.0095 | 0.4875 | | 0.9727 | 16.0 | 12016 | 1.0055 | 0.4942 | | 0.9405 | 17.0 | 12767 | 1.0016 | 0.4992 | | 0.9669 | 18.0 | 13518 | 0.9980 | 0.5042 | | 0.9407 | 19.0 | 14269 | 0.9944 | 0.5042 | | 0.9487 | 20.0 | 15020 | 0.9909 | 0.5075 | | 0.9336 | 21.0 | 15771 | 0.9876 | 0.5092 | | 
0.9505 | 22.0 | 16522 | 0.9843 | 0.5109 | | 0.9425 | 23.0 | 17273 | 0.9812 | 0.5125 | | 0.9422 | 24.0 | 18024 | 0.9782 | 0.5175 | | 0.9397 | 25.0 | 18775 | 0.9753 | 0.5209 | | 0.9277 | 26.0 | 19526 | 0.9725 | 0.5225 | | 0.9248 | 27.0 | 20277 | 0.9699 | 0.5326 | | 0.915 | 28.0 | 21028 | 0.9674 | 0.5342 | | 0.9341 | 29.0 | 21779 | 0.9650 | 0.5376 | | 0.9201 | 30.0 | 22530 | 0.9628 | 0.5392 | | 0.8994 | 31.0 | 23281 | 0.9606 | 0.5376 | | 0.9167 | 32.0 | 24032 | 0.9586 | 0.5392 | | 0.8872 | 33.0 | 24783 | 0.9568 | 0.5426 | | 0.8983 | 34.0 | 25534 | 0.9550 | 0.5426 | | 0.8839 | 35.0 | 26285 | 0.9534 | 0.5442 | | 0.9018 | 36.0 | 27036 | 0.9519 | 0.5476 | | 0.8955 | 37.0 | 27787 | 0.9506 | 0.5492 | | 0.8964 | 38.0 | 28538 | 0.9493 | 0.5492 | | 0.9005 | 39.0 | 29289 | 0.9482 | 0.5492 | | 0.8988 | 40.0 | 30040 | 0.9472 | 0.5526 | | 0.8967 | 41.0 | 30791 | 0.9463 | 0.5543 | | 0.8873 | 42.0 | 31542 | 0.9455 | 0.5543 | | 0.9048 | 43.0 | 32293 | 0.9449 | 0.5543 | | 0.8665 | 44.0 | 33044 | 0.9443 | 0.5543 | | 0.8925 | 45.0 | 33795 | 0.9439 | 0.5543 | | 0.8934 | 46.0 | 34546 | 0.9435 | 0.5543 | | 0.8656 | 47.0 | 35297 | 0.9433 | 0.5543 | | 0.9144 | 48.0 | 36048 | 0.9431 | 0.5543 | | 0.9081 | 49.0 | 36799 | 0.9431 | 0.5543 | | 0.8986 | 50.0 | 37550 | 0.9431 | 0.5543 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
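The three-entry label list above corresponds to the classifier's output indices. A minimal sketch (the helper name is ours) of mapping a raw 3-way logit vector to one of these class names via argmax, assuming the usual index-ordered `id2label` convention:

```python
labels = ["abnormal_sperm", "non-sperm", "normal_sperm"]
id2label = {i: name for i, name in enumerate(labels)}  # index-ordered mapping

def predict_label(logits):
    """Return the class name for the highest-scoring logit."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

print(predict_label([0.2, -1.3, 2.7]))  # normal_sperm
```

The same mapping applies to every `smids_*` card in this file, since they share the label set.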
hkivancoral/smids_5x_deit_small_sgd_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_sgd_0001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4830 - Accuracy: 0.8233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0422 | 1.0 | 375 | 1.0355 | 0.445 | | 0.9877 | 2.0 | 750 | 0.9987 | 0.5017 | | 0.9301 | 3.0 | 1125 | 0.9591 | 0.5367 | | 0.9069 | 4.0 | 1500 | 0.9204 | 0.5917 | | 0.8815 | 5.0 | 1875 | 0.8838 | 0.6217 | | 0.8208 | 6.0 | 2250 | 0.8478 | 0.6383 | | 0.7819 | 7.0 | 2625 | 0.8141 | 0.6817 | | 0.7955 | 8.0 | 3000 | 0.7823 | 0.7033 | | 0.7492 | 9.0 | 3375 | 0.7528 | 0.7233 | | 0.7403 | 10.0 | 3750 | 0.7259 | 0.7317 | | 0.7047 | 11.0 | 4125 | 0.7009 | 0.745 | | 0.6669 | 12.0 | 4500 | 0.6790 | 0.76 | | 0.6557 | 13.0 | 4875 | 0.6594 | 0.7667 | | 0.6563 | 14.0 | 5250 | 0.6418 | 0.77 | | 0.5999 | 15.0 | 5625 | 0.6263 | 0.7667 | | 0.589 | 16.0 | 6000 | 0.6125 | 0.77 | | 0.5618 | 17.0 | 6375 | 0.5999 | 0.7767 | | 0.5666 | 18.0 | 6750 | 0.5885 | 0.7817 | | 0.6067 | 19.0 | 7125 | 0.5784 | 0.7867 | | 0.5796 | 20.0 | 7500 | 0.5694 | 0.79 | | 0.547 | 21.0 | 7875 | 0.5612 | 0.7883 | | 0.5698 | 22.0 | 
8250 | 0.5540 | 0.7867 | | 0.5377 | 23.0 | 8625 | 0.5473 | 0.7917 | | 0.5508 | 24.0 | 9000 | 0.5411 | 0.7967 | | 0.5752 | 25.0 | 9375 | 0.5355 | 0.7983 | | 0.5019 | 26.0 | 9750 | 0.5303 | 0.8 | | 0.5146 | 27.0 | 10125 | 0.5255 | 0.8017 | | 0.5114 | 28.0 | 10500 | 0.5210 | 0.8033 | | 0.4588 | 29.0 | 10875 | 0.5170 | 0.8033 | | 0.5045 | 30.0 | 11250 | 0.5133 | 0.805 | | 0.5118 | 31.0 | 11625 | 0.5098 | 0.805 | | 0.4619 | 32.0 | 12000 | 0.5067 | 0.8083 | | 0.4796 | 33.0 | 12375 | 0.5037 | 0.81 | | 0.5217 | 34.0 | 12750 | 0.5011 | 0.81 | | 0.4423 | 35.0 | 13125 | 0.4986 | 0.8133 | | 0.4692 | 36.0 | 13500 | 0.4964 | 0.815 | | 0.4889 | 37.0 | 13875 | 0.4944 | 0.815 | | 0.487 | 38.0 | 14250 | 0.4925 | 0.82 | | 0.5206 | 39.0 | 14625 | 0.4909 | 0.82 | | 0.4988 | 40.0 | 15000 | 0.4894 | 0.82 | | 0.4485 | 41.0 | 15375 | 0.4881 | 0.8217 | | 0.4284 | 42.0 | 15750 | 0.4870 | 0.8217 | | 0.4979 | 43.0 | 16125 | 0.4860 | 0.8217 | | 0.454 | 44.0 | 16500 | 0.4851 | 0.8217 | | 0.4865 | 45.0 | 16875 | 0.4845 | 0.8217 | | 0.4847 | 46.0 | 17250 | 0.4839 | 0.8217 | | 0.5681 | 47.0 | 17625 | 0.4835 | 0.8217 | | 0.4795 | 48.0 | 18000 | 0.4832 | 0.8217 | | 0.4757 | 49.0 | 18375 | 0.4831 | 0.8233 | | 0.4471 | 50.0 | 18750 | 0.4830 | 0.8233 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_0001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4097 - Accuracy: 0.8270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9968 | 1.0 | 750 | 1.0169 | 0.4659 | | 0.9174 | 2.0 | 1500 | 0.9543 | 0.5308 | | 0.8121 | 3.0 | 2250 | 0.8838 | 0.6273 | | 0.7871 | 4.0 | 3000 | 0.8228 | 0.6522 | | 0.691 | 5.0 | 3750 | 0.7665 | 0.6922 | | 0.6733 | 6.0 | 4500 | 0.7184 | 0.7271 | | 0.611 | 7.0 | 5250 | 0.6739 | 0.7488 | | 0.5495 | 8.0 | 6000 | 0.6348 | 0.7537 | | 0.5871 | 9.0 | 6750 | 0.6046 | 0.7587 | | 0.5362 | 10.0 | 7500 | 0.5781 | 0.7754 | | 0.5478 | 11.0 | 8250 | 0.5567 | 0.7754 | | 0.5521 | 12.0 | 9000 | 0.5409 | 0.7804 | | 0.475 | 13.0 | 9750 | 0.5265 | 0.7787 | | 0.4124 | 14.0 | 10500 | 0.5147 | 0.7887 | | 0.4689 | 15.0 | 11250 | 0.5048 | 0.7870 | | 0.4042 | 16.0 | 12000 | 0.4956 | 0.7903 | | 0.3787 | 17.0 | 12750 | 0.4873 | 0.7937 | | 0.4203 | 18.0 | 13500 | 0.4799 | 0.7937 | | 0.4173 | 19.0 | 14250 | 0.4729 | 0.7987 | | 0.4444 | 20.0 | 15000 | 0.4676 | 0.8020 | | 0.4225 | 21.0 | 15750 | 0.4619 | 0.8020 | | 
0.3886 | 22.0 | 16500 | 0.4572 | 0.8070 | | 0.3882 | 23.0 | 17250 | 0.4523 | 0.8120 | | 0.3793 | 24.0 | 18000 | 0.4484 | 0.8103 | | 0.4027 | 25.0 | 18750 | 0.4443 | 0.8136 | | 0.4864 | 26.0 | 19500 | 0.4411 | 0.8136 | | 0.4229 | 27.0 | 20250 | 0.4378 | 0.8153 | | 0.4258 | 28.0 | 21000 | 0.4349 | 0.8153 | | 0.3905 | 29.0 | 21750 | 0.4322 | 0.8170 | | 0.4099 | 30.0 | 22500 | 0.4297 | 0.8170 | | 0.3721 | 31.0 | 23250 | 0.4276 | 0.8186 | | 0.4104 | 32.0 | 24000 | 0.4255 | 0.8203 | | 0.3815 | 33.0 | 24750 | 0.4237 | 0.8220 | | 0.3966 | 34.0 | 25500 | 0.4218 | 0.8220 | | 0.4057 | 35.0 | 26250 | 0.4202 | 0.8220 | | 0.4004 | 36.0 | 27000 | 0.4187 | 0.8220 | | 0.3921 | 37.0 | 27750 | 0.4174 | 0.8220 | | 0.4046 | 38.0 | 28500 | 0.4161 | 0.8220 | | 0.3819 | 39.0 | 29250 | 0.4149 | 0.8220 | | 0.4626 | 40.0 | 30000 | 0.4139 | 0.8236 | | 0.4062 | 41.0 | 30750 | 0.4130 | 0.8236 | | 0.3793 | 42.0 | 31500 | 0.4123 | 0.8253 | | 0.3246 | 43.0 | 32250 | 0.4116 | 0.8253 | | 0.3382 | 44.0 | 33000 | 0.4110 | 0.8270 | | 0.3636 | 45.0 | 33750 | 0.4106 | 0.8270 | | 0.4008 | 46.0 | 34500 | 0.4102 | 0.8270 | | 0.3708 | 47.0 | 35250 | 0.4099 | 0.8270 | | 0.3436 | 48.0 | 36000 | 0.4098 | 0.8270 | | 0.3738 | 49.0 | 36750 | 0.4097 | 0.8270 | | 0.373 | 50.0 | 37500 | 0.4097 | 0.8270 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9349 - Accuracy: 0.5591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0693 | 1.0 | 750 | 1.0692 | 0.4293 | | 1.0584 | 2.0 | 1500 | 1.0648 | 0.4343 | | 1.0342 | 3.0 | 2250 | 1.0600 | 0.4376 | | 1.0374 | 4.0 | 3000 | 1.0551 | 0.4443 | | 1.028 | 5.0 | 3750 | 1.0500 | 0.4459 | | 1.0131 | 6.0 | 4500 | 1.0451 | 0.4476 | | 1.022 | 7.0 | 5250 | 1.0402 | 0.4459 | | 1.0192 | 8.0 | 6000 | 1.0354 | 0.4526 | | 1.0168 | 9.0 | 6750 | 1.0306 | 0.4576 | | 0.9985 | 10.0 | 7500 | 1.0259 | 0.4592 | | 0.9898 | 11.0 | 8250 | 1.0213 | 0.4609 | | 1.0116 | 12.0 | 9000 | 1.0168 | 0.4642 | | 0.9986 | 13.0 | 9750 | 1.0125 | 0.4659 | | 0.9818 | 14.0 | 10500 | 1.0083 | 0.4759 | | 0.9837 | 15.0 | 11250 | 1.0041 | 0.4809 | | 0.9601 | 16.0 | 12000 | 1.0001 | 0.4809 | | 0.9572 | 17.0 | 12750 | 0.9961 | 0.4809 | | 0.9406 | 18.0 | 13500 | 0.9923 | 0.4859 | | 0.9621 | 19.0 | 14250 | 0.9887 | 0.4892 | | 0.9467 | 20.0 | 15000 | 0.9850 | 0.4925 | | 0.9691 | 21.0 | 15750 | 0.9816 | 0.4992 | | 
0.9406 | 22.0 | 16500 | 0.9782 | 0.5008 | | 0.9223 | 23.0 | 17250 | 0.9750 | 0.5058 | | 0.9127 | 24.0 | 18000 | 0.9718 | 0.5075 | | 0.9371 | 25.0 | 18750 | 0.9688 | 0.5141 | | 0.9589 | 26.0 | 19500 | 0.9659 | 0.5175 | | 0.9189 | 27.0 | 20250 | 0.9631 | 0.5208 | | 0.9249 | 28.0 | 21000 | 0.9605 | 0.5258 | | 0.927 | 29.0 | 21750 | 0.9580 | 0.5275 | | 0.9378 | 30.0 | 22500 | 0.9556 | 0.5308 | | 0.8829 | 31.0 | 23250 | 0.9533 | 0.5308 | | 0.931 | 32.0 | 24000 | 0.9512 | 0.5341 | | 0.9197 | 33.0 | 24750 | 0.9492 | 0.5374 | | 0.9032 | 34.0 | 25500 | 0.9474 | 0.5374 | | 0.9 | 35.0 | 26250 | 0.9457 | 0.5391 | | 0.8939 | 36.0 | 27000 | 0.9442 | 0.5441 | | 0.9276 | 37.0 | 27750 | 0.9427 | 0.5458 | | 0.8712 | 38.0 | 28500 | 0.9414 | 0.5458 | | 0.9222 | 39.0 | 29250 | 0.9402 | 0.5458 | | 0.8913 | 40.0 | 30000 | 0.9392 | 0.5474 | | 0.8879 | 41.0 | 30750 | 0.9383 | 0.5474 | | 0.8851 | 42.0 | 31500 | 0.9375 | 0.5541 | | 0.8777 | 43.0 | 32250 | 0.9368 | 0.5541 | | 0.8945 | 44.0 | 33000 | 0.9362 | 0.5541 | | 0.8708 | 45.0 | 33750 | 0.9358 | 0.5574 | | 0.9082 | 46.0 | 34500 | 0.9354 | 0.5591 | | 0.9028 | 47.0 | 35250 | 0.9352 | 0.5591 | | 0.8903 | 48.0 | 36000 | 0.9350 | 0.5591 | | 0.8994 | 49.0 | 36750 | 0.9349 | 0.5591 | | 0.9183 | 50.0 | 37500 | 0.9349 | 0.5591 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
andakm/cats_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # andakm/cats_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6069 - Train Accuracy: 0.7143 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Epoch | |:----------:|:--------------:|:-----:| | 1.8083 | 0.2857 | 0 | | 1.7613 | 0.5714 | 1 | | 1.7004 | 0.7143 | 2 | | 1.6459 | 0.7143 | 3 | | 1.6069 | 0.7143 | 4 | ### Framework versions - Transformers 4.36.2 - TensorFlow 2.15.0 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "britain", "persian_cats", "scottish fold", "siamese", "turkish angora", "bombeicats" ]
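The Keras card above configures a `PolynomialDecay` schedule with `initial_learning_rate: 3e-05`, `decay_steps: 400`, `end_learning_rate: 0.0`, `power: 1.0`, `cycle: False` — i.e. a plain linear decay to zero over 400 steps. A small sketch of that schedule (the function name is ours; the formula assumes Keras's documented `PolynomialDecay` semantics with `cycle=False`):

```python
def polynomial_decay(step, initial_lr=3e-05, decay_steps=400,
                     end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False (clamped at decay_steps)."""
    step = min(step, decay_steps)            # hold at end_lr past decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 3e-05 (initial LR)
print(polynomial_decay(200))  # 1.5e-05 (halfway; power=1 makes it linear)
print(polynomial_decay(400))  # 0.0 (fully decayed)
```

With `power=1.0` this is just linear decay; a `power` above 1 would front-load the decay instead.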
hkivancoral/smids_10x_deit_small_sgd_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_0001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3901 - Accuracy: 0.855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9824 | 1.0 | 750 | 1.0331 | 0.435 | | 0.9063 | 2.0 | 1500 | 0.9735 | 0.5233 | | 0.8503 | 3.0 | 2250 | 0.9109 | 0.59 | | 0.7679 | 4.0 | 3000 | 0.8466 | 0.645 | | 0.7248 | 5.0 | 3750 | 0.7860 | 0.69 | | 0.6585 | 6.0 | 4500 | 0.7311 | 0.7167 | | 0.6129 | 7.0 | 5250 | 0.6856 | 0.7283 | | 0.6082 | 8.0 | 6000 | 0.6417 | 0.7617 | | 0.581 | 9.0 | 6750 | 0.6068 | 0.7683 | | 0.5231 | 10.0 | 7500 | 0.5777 | 0.7767 | | 0.5113 | 11.0 | 8250 | 0.5554 | 0.7833 | | 0.4834 | 12.0 | 9000 | 0.5347 | 0.8 | | 0.5002 | 13.0 | 9750 | 0.5194 | 0.8067 | | 0.5244 | 14.0 | 10500 | 0.5049 | 0.8117 | | 0.478 | 15.0 | 11250 | 0.4926 | 0.8183 | | 0.4573 | 16.0 | 12000 | 0.4823 | 0.8183 | | 0.4332 | 17.0 | 12750 | 0.4737 | 0.8233 | | 0.4552 | 18.0 | 13500 | 0.4642 | 0.8283 | | 0.4717 | 19.0 | 14250 | 0.4573 | 0.8283 | | 0.4284 | 20.0 | 15000 | 0.4511 | 0.8283 | | 0.418 | 21.0 | 15750 | 0.4442 | 0.835 | | 0.4355 | 
22.0 | 16500 | 0.4394 | 0.8417 | | 0.442 | 23.0 | 17250 | 0.4349 | 0.84 | | 0.4592 | 24.0 | 18000 | 0.4307 | 0.845 | | 0.4174 | 25.0 | 18750 | 0.4266 | 0.8483 | | 0.4133 | 26.0 | 19500 | 0.4227 | 0.8483 | | 0.3538 | 27.0 | 20250 | 0.4190 | 0.8517 | | 0.4061 | 28.0 | 21000 | 0.4159 | 0.8533 | | 0.4077 | 29.0 | 21750 | 0.4132 | 0.8517 | | 0.4051 | 30.0 | 22500 | 0.4109 | 0.8533 | | 0.3404 | 31.0 | 23250 | 0.4086 | 0.8517 | | 0.353 | 32.0 | 24000 | 0.4061 | 0.855 | | 0.3864 | 33.0 | 24750 | 0.4039 | 0.8567 | | 0.3572 | 34.0 | 25500 | 0.4020 | 0.8567 | | 0.3431 | 35.0 | 26250 | 0.4002 | 0.8567 | | 0.3693 | 36.0 | 27000 | 0.3992 | 0.8567 | | 0.3706 | 37.0 | 27750 | 0.3978 | 0.8567 | | 0.423 | 38.0 | 28500 | 0.3964 | 0.855 | | 0.3909 | 39.0 | 29250 | 0.3953 | 0.855 | | 0.41 | 40.0 | 30000 | 0.3943 | 0.855 | | 0.3387 | 41.0 | 30750 | 0.3933 | 0.855 | | 0.3698 | 42.0 | 31500 | 0.3927 | 0.855 | | 0.3644 | 43.0 | 32250 | 0.3919 | 0.855 | | 0.3722 | 44.0 | 33000 | 0.3914 | 0.8567 | | 0.3269 | 45.0 | 33750 | 0.3910 | 0.8567 | | 0.3532 | 46.0 | 34500 | 0.3906 | 0.8567 | | 0.3899 | 47.0 | 35250 | 0.3904 | 0.8567 | | 0.3783 | 48.0 | 36000 | 0.3902 | 0.855 | | 0.3767 | 49.0 | 36750 | 0.3901 | 0.855 | | 0.3232 | 50.0 | 37500 | 0.3901 | 0.855 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_00001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9545 - Accuracy: 0.5483 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0617 | 1.0 | 750 | 1.0842 | 0.3733 | | 1.0585 | 2.0 | 1500 | 1.0794 | 0.37 | | 1.0424 | 3.0 | 2250 | 1.0744 | 0.3733 | | 1.0456 | 4.0 | 3000 | 1.0693 | 0.3767 | | 1.0291 | 5.0 | 3750 | 1.0643 | 0.3833 | | 1.0038 | 6.0 | 4500 | 1.0594 | 0.3917 | | 1.0218 | 7.0 | 5250 | 1.0545 | 0.4017 | | 1.0056 | 8.0 | 6000 | 1.0497 | 0.4083 | | 0.9993 | 9.0 | 6750 | 1.0451 | 0.4133 | | 0.9987 | 10.0 | 7500 | 1.0406 | 0.4233 | | 1.005 | 11.0 | 8250 | 1.0361 | 0.43 | | 0.9768 | 12.0 | 9000 | 1.0318 | 0.4367 | | 0.9767 | 13.0 | 9750 | 1.0276 | 0.4383 | | 0.9832 | 14.0 | 10500 | 1.0235 | 0.4417 | | 0.9795 | 15.0 | 11250 | 1.0196 | 0.4517 | | 0.9438 | 16.0 | 12000 | 1.0158 | 0.47 | | 0.9511 | 17.0 | 12750 | 1.0122 | 0.4733 | | 0.9685 | 18.0 | 13500 | 1.0086 | 0.475 | | 0.9616 | 19.0 | 14250 | 1.0051 | 0.4833 | | 0.9593 | 20.0 | 15000 | 1.0018 | 0.485 | | 0.9173 | 21.0 | 15750 | 0.9985 | 0.49 | | 0.9516 | 
22.0 | 16500 | 0.9954 | 0.5017 | | 0.9352 | 23.0 | 17250 | 0.9923 | 0.5033 | | 0.9563 | 24.0 | 18000 | 0.9894 | 0.5083 | | 0.9134 | 25.0 | 18750 | 0.9866 | 0.5117 | | 0.9284 | 26.0 | 19500 | 0.9839 | 0.515 | | 0.8974 | 27.0 | 20250 | 0.9813 | 0.52 | | 0.9371 | 28.0 | 21000 | 0.9789 | 0.52 | | 0.8946 | 29.0 | 21750 | 0.9765 | 0.5283 | | 0.9089 | 30.0 | 22500 | 0.9743 | 0.5317 | | 0.9026 | 31.0 | 23250 | 0.9722 | 0.5333 | | 0.9027 | 32.0 | 24000 | 0.9702 | 0.5317 | | 0.9034 | 33.0 | 24750 | 0.9683 | 0.5333 | | 0.9095 | 34.0 | 25500 | 0.9666 | 0.5333 | | 0.8767 | 35.0 | 26250 | 0.9650 | 0.5367 | | 0.8854 | 36.0 | 27000 | 0.9635 | 0.5367 | | 0.8862 | 37.0 | 27750 | 0.9621 | 0.5367 | | 0.9211 | 38.0 | 28500 | 0.9608 | 0.5367 | | 0.8993 | 39.0 | 29250 | 0.9597 | 0.535 | | 0.8897 | 40.0 | 30000 | 0.9587 | 0.5383 | | 0.8933 | 41.0 | 30750 | 0.9578 | 0.5417 | | 0.8954 | 42.0 | 31500 | 0.9571 | 0.5483 | | 0.887 | 43.0 | 32250 | 0.9564 | 0.5483 | | 0.902 | 44.0 | 33000 | 0.9558 | 0.5483 | | 0.8561 | 45.0 | 33750 | 0.9554 | 0.5483 | | 0.8814 | 46.0 | 34500 | 0.9551 | 0.5483 | | 0.8975 | 47.0 | 35250 | 0.9548 | 0.5483 | | 0.8624 | 48.0 | 36000 | 0.9546 | 0.5483 | | 0.8832 | 49.0 | 36750 | 0.9546 | 0.5483 | | 0.8754 | 50.0 | 37500 | 0.9545 | 0.5483 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
andakm/cats_new_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # andakm/cats_new_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.7028 - Train Accuracy: 0.625 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 470, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Epoch | |:----------:|:--------------:|:-----:| | 1.9593 | 0.125 | 0 | | 1.8944 | 0.25 | 1 | | 1.8400 | 0.375 | 2 | | 1.7575 | 0.625 | 3 | | 1.7028 | 0.625 | 4 | ### Framework versions - Transformers 4.36.2 - TensorFlow 2.15.0 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "bengal cats", "britain", "persian_cats", "scottish fold", "siamese", "turkish angora", "bombeicats" ]
hkivancoral/smids_5x_deit_small_sgd_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_sgd_0001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4899 - Accuracy: 0.8117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0575 | 1.0 | 375 | 1.0409 | 0.4667 | | 0.9896 | 2.0 | 750 | 1.0031 | 0.5117 | | 0.9428 | 3.0 | 1125 | 0.9645 | 0.5567 | | 0.9186 | 4.0 | 1500 | 0.9265 | 0.615 | | 0.8922 | 5.0 | 1875 | 0.8895 | 0.6483 | | 0.8541 | 6.0 | 2250 | 0.8539 | 0.6717 | | 0.7885 | 7.0 | 2625 | 0.8194 | 0.69 | | 0.7714 | 8.0 | 3000 | 0.7879 | 0.705 | | 0.758 | 9.0 | 3375 | 0.7592 | 0.7133 | | 0.7212 | 10.0 | 3750 | 0.7334 | 0.7217 | | 0.6793 | 11.0 | 4125 | 0.7102 | 0.7333 | | 0.6484 | 12.0 | 4500 | 0.6895 | 0.7367 | | 0.6765 | 13.0 | 4875 | 0.6713 | 0.7467 | | 0.664 | 14.0 | 5250 | 0.6548 | 0.7533 | | 0.6332 | 15.0 | 5625 | 0.6395 | 0.7617 | | 0.5983 | 16.0 | 6000 | 0.6261 | 0.77 | | 0.6122 | 17.0 | 6375 | 0.6142 | 0.77 | | 0.5912 | 18.0 | 6750 | 0.6024 | 0.7733 | | 0.5764 | 19.0 | 7125 | 0.5918 | 0.775 | | 0.5461 | 20.0 | 7500 | 0.5824 | 0.7783 | | 0.5245 | 21.0 | 7875 | 0.5733 | 0.7833 | | 0.5339 | 22.0 | 
8250 | 0.5654 | 0.7867 | | 0.5651 | 23.0 | 8625 | 0.5584 | 0.7867 | | 0.5365 | 24.0 | 9000 | 0.5518 | 0.7933 | | 0.4982 | 25.0 | 9375 | 0.5457 | 0.795 | | 0.5274 | 26.0 | 9750 | 0.5402 | 0.7933 | | 0.5167 | 27.0 | 10125 | 0.5353 | 0.795 | | 0.53 | 28.0 | 10500 | 0.5303 | 0.7967 | | 0.5404 | 29.0 | 10875 | 0.5260 | 0.7967 | | 0.4414 | 30.0 | 11250 | 0.5222 | 0.8017 | | 0.5269 | 31.0 | 11625 | 0.5183 | 0.8017 | | 0.5299 | 32.0 | 12000 | 0.5150 | 0.8017 | | 0.5311 | 33.0 | 12375 | 0.5120 | 0.8033 | | 0.499 | 34.0 | 12750 | 0.5091 | 0.8033 | | 0.4712 | 35.0 | 13125 | 0.5065 | 0.8033 | | 0.4169 | 36.0 | 13500 | 0.5042 | 0.8017 | | 0.4803 | 37.0 | 13875 | 0.5020 | 0.8017 | | 0.4796 | 38.0 | 14250 | 0.5001 | 0.805 | | 0.4865 | 39.0 | 14625 | 0.4984 | 0.8067 | | 0.5122 | 40.0 | 15000 | 0.4967 | 0.8083 | | 0.4785 | 41.0 | 15375 | 0.4953 | 0.8067 | | 0.4562 | 42.0 | 15750 | 0.4941 | 0.8083 | | 0.5248 | 43.0 | 16125 | 0.4930 | 0.8117 | | 0.4817 | 44.0 | 16500 | 0.4922 | 0.8117 | | 0.4662 | 45.0 | 16875 | 0.4914 | 0.8117 | | 0.4968 | 46.0 | 17250 | 0.4908 | 0.8117 | | 0.5157 | 47.0 | 17625 | 0.4904 | 0.8117 | | 0.4378 | 48.0 | 18000 | 0.4901 | 0.8117 | | 0.4668 | 49.0 | 18375 | 0.4899 | 0.8117 | | 0.4722 | 50.0 | 18750 | 0.4899 | 0.8117 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_0001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4063 - Accuracy: 0.8417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9715 | 1.0 | 750 | 1.0172 | 0.455 | | 0.9076 | 2.0 | 1500 | 0.9524 | 0.5267 | | 0.8403 | 3.0 | 2250 | 0.8812 | 0.625 | | 0.7987 | 4.0 | 3000 | 0.8125 | 0.6817 | | 0.7256 | 5.0 | 3750 | 0.7521 | 0.7183 | | 0.6364 | 6.0 | 4500 | 0.7018 | 0.7483 | | 0.5752 | 7.0 | 5250 | 0.6571 | 0.775 | | 0.63 | 8.0 | 6000 | 0.6211 | 0.7817 | | 0.6197 | 9.0 | 6750 | 0.5901 | 0.79 | | 0.5118 | 10.0 | 7500 | 0.5651 | 0.7983 | | 0.5006 | 11.0 | 8250 | 0.5449 | 0.8017 | | 0.5617 | 12.0 | 9000 | 0.5276 | 0.8033 | | 0.4842 | 13.0 | 9750 | 0.5134 | 0.8083 | | 0.5031 | 14.0 | 10500 | 0.5016 | 0.81 | | 0.4417 | 15.0 | 11250 | 0.4908 | 0.8083 | | 0.4457 | 16.0 | 12000 | 0.4818 | 0.8083 | | 0.3768 | 17.0 | 12750 | 0.4743 | 0.8117 | | 0.4232 | 18.0 | 13500 | 0.4671 | 0.8167 | | 0.4491 | 19.0 | 14250 | 0.4614 | 0.8167 | | 0.4472 | 20.0 | 15000 | 0.4557 | 0.8233 | | 0.3954 | 21.0 | 15750 | 0.4506 | 0.8267 | | 0.405 
| 22.0 | 16500 | 0.4463 | 0.83 | | 0.4169 | 23.0 | 17250 | 0.4425 | 0.8317 | | 0.4563 | 24.0 | 18000 | 0.4389 | 0.8333 | | 0.3987 | 25.0 | 18750 | 0.4356 | 0.8333 | | 0.39 | 26.0 | 19500 | 0.4325 | 0.8317 | | 0.4056 | 27.0 | 20250 | 0.4297 | 0.8317 | | 0.3872 | 28.0 | 21000 | 0.4272 | 0.8317 | | 0.3817 | 29.0 | 21750 | 0.4249 | 0.835 | | 0.4035 | 30.0 | 22500 | 0.4229 | 0.8367 | | 0.3636 | 31.0 | 23250 | 0.4211 | 0.835 | | 0.4122 | 32.0 | 24000 | 0.4193 | 0.8367 | | 0.3917 | 33.0 | 24750 | 0.4176 | 0.8383 | | 0.3839 | 34.0 | 25500 | 0.4161 | 0.84 | | 0.3217 | 35.0 | 26250 | 0.4147 | 0.84 | | 0.3641 | 36.0 | 27000 | 0.4136 | 0.84 | | 0.3379 | 37.0 | 27750 | 0.4124 | 0.84 | | 0.3959 | 38.0 | 28500 | 0.4115 | 0.84 | | 0.3972 | 39.0 | 29250 | 0.4106 | 0.84 | | 0.3899 | 40.0 | 30000 | 0.4098 | 0.84 | | 0.3662 | 41.0 | 30750 | 0.4090 | 0.84 | | 0.3473 | 42.0 | 31500 | 0.4084 | 0.8417 | | 0.3905 | 43.0 | 32250 | 0.4078 | 0.8417 | | 0.3794 | 44.0 | 33000 | 0.4074 | 0.8417 | | 0.3783 | 45.0 | 33750 | 0.4070 | 0.8417 | | 0.3309 | 46.0 | 34500 | 0.4067 | 0.8417 | | 0.3086 | 47.0 | 35250 | 0.4065 | 0.8417 | | 0.3454 | 48.0 | 36000 | 0.4063 | 0.8417 | | 0.3559 | 49.0 | 36750 | 0.4063 | 0.8417 | | 0.323 | 50.0 | 37500 | 0.4063 | 0.8417 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
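The cards in this file all train with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`. A minimal sketch of what that schedule does over the 37,500-step runs logged above (my own re-implementation for illustration, not the Transformers source):

```python
def linear_lr_with_warmup(step, total_steps, peak_lr, warmup_ratio=0.1):
    """Linear warmup from 0 to peak_lr, then linear decay back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    return peak_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

total_steps = 37500   # 50 epochs x 750 steps per epoch, as in the fold logs above
peak_lr = 1e-4        # learning_rate from the hyperparameter list
print(linear_lr_with_warmup(0, total_steps, peak_lr))      # 0.0 (start of warmup)
print(linear_lr_with_warmup(3750, total_steps, peak_lr))   # 0.0001 (warmup done)
print(linear_lr_with_warmup(37500, total_steps, peak_lr))  # 0.0 (fully decayed)
```

With a 0.1 warmup ratio the peak rate is reached after 3,750 steps, i.e. the end of epoch 5, which matches where the validation curves above stop improving fastest.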
hkivancoral/smids_10x_deit_small_sgd_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_00001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9309 - Accuracy: 0.5667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.049 | 1.0 | 750 | 1.0657 | 0.4217 | | 1.0271 | 2.0 | 1500 | 1.0619 | 0.4233 | | 1.0309 | 3.0 | 2250 | 1.0577 | 0.4233 | | 1.0685 | 4.0 | 3000 | 1.0531 | 0.4233 | | 1.0213 | 5.0 | 3750 | 1.0484 | 0.425 | | 1.0075 | 6.0 | 4500 | 1.0438 | 0.4267 | | 1.0135 | 7.0 | 5250 | 1.0390 | 0.4283 | | 1.0193 | 8.0 | 6000 | 1.0343 | 0.43 | | 1.0172 | 9.0 | 6750 | 1.0296 | 0.4383 | | 0.995 | 10.0 | 7500 | 1.0249 | 0.4417 | | 0.9861 | 11.0 | 8250 | 1.0204 | 0.4467 | | 0.9925 | 12.0 | 9000 | 1.0158 | 0.4533 | | 0.9841 | 13.0 | 9750 | 1.0115 | 0.465 | | 0.9738 | 14.0 | 10500 | 1.0072 | 0.4733 | | 0.9779 | 15.0 | 11250 | 1.0030 | 0.4783 | | 0.9393 | 16.0 | 12000 | 0.9988 | 0.485 | | 0.968 | 17.0 | 12750 | 0.9949 | 0.485 | | 0.9542 | 18.0 | 13500 | 0.9909 | 0.4883 | | 0.9456 | 19.0 | 14250 | 0.9871 | 0.4917 | | 0.9805 | 20.0 | 15000 | 0.9834 | 0.4967 | | 0.9272 | 21.0 | 15750 | 0.9798 | 0.5 | | 0.9402 | 
22.0 | 16500 | 0.9763 | 0.5083 | | 0.9463 | 23.0 | 17250 | 0.9729 | 0.5133 | | 0.9349 | 24.0 | 18000 | 0.9697 | 0.515 | | 0.9212 | 25.0 | 18750 | 0.9666 | 0.5167 | | 0.9115 | 26.0 | 19500 | 0.9636 | 0.5183 | | 0.9201 | 27.0 | 20250 | 0.9607 | 0.5217 | | 0.9475 | 28.0 | 21000 | 0.9580 | 0.525 | | 0.9135 | 29.0 | 21750 | 0.9554 | 0.5267 | | 0.9341 | 30.0 | 22500 | 0.9529 | 0.53 | | 0.9173 | 31.0 | 23250 | 0.9505 | 0.5317 | | 0.9276 | 32.0 | 24000 | 0.9483 | 0.535 | | 0.9211 | 33.0 | 24750 | 0.9462 | 0.5417 | | 0.9232 | 34.0 | 25500 | 0.9443 | 0.5467 | | 0.9171 | 35.0 | 26250 | 0.9425 | 0.5483 | | 0.9007 | 36.0 | 27000 | 0.9408 | 0.5483 | | 0.9143 | 37.0 | 27750 | 0.9393 | 0.555 | | 0.8916 | 38.0 | 28500 | 0.9379 | 0.5567 | | 0.8951 | 39.0 | 29250 | 0.9366 | 0.5567 | | 0.9014 | 40.0 | 30000 | 0.9355 | 0.5567 | | 0.8889 | 41.0 | 30750 | 0.9345 | 0.5583 | | 0.8953 | 42.0 | 31500 | 0.9336 | 0.5583 | | 0.9154 | 43.0 | 32250 | 0.9329 | 0.5583 | | 0.8836 | 44.0 | 33000 | 0.9323 | 0.5633 | | 0.8961 | 45.0 | 33750 | 0.9318 | 0.5667 | | 0.8837 | 46.0 | 34500 | 0.9314 | 0.5667 | | 0.8621 | 47.0 | 35250 | 0.9311 | 0.5667 | | 0.8982 | 48.0 | 36000 | 0.9310 | 0.5667 | | 0.8793 | 49.0 | 36750 | 0.9309 | 0.5667 | | 0.8813 | 50.0 | 37500 | 0.9309 | 0.5667 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_0001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4025 - Accuracy: 0.835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.999 | 1.0 | 750 | 1.0177 | 0.4867 | | 0.9125 | 2.0 | 1500 | 0.9538 | 0.56 | | 0.8354 | 3.0 | 2250 | 0.8848 | 0.64 | | 0.7909 | 4.0 | 3000 | 0.8172 | 0.685 | | 0.7315 | 5.0 | 3750 | 0.7535 | 0.7183 | | 0.6641 | 6.0 | 4500 | 0.7023 | 0.7433 | | 0.61 | 7.0 | 5250 | 0.6582 | 0.755 | | 0.5883 | 8.0 | 6000 | 0.6232 | 0.7783 | | 0.6057 | 9.0 | 6750 | 0.5936 | 0.79 | | 0.5434 | 10.0 | 7500 | 0.5693 | 0.795 | | 0.5298 | 11.0 | 8250 | 0.5500 | 0.7917 | | 0.4881 | 12.0 | 9000 | 0.5324 | 0.8 | | 0.5014 | 13.0 | 9750 | 0.5180 | 0.8 | | 0.4862 | 14.0 | 10500 | 0.5060 | 0.8083 | | 0.4712 | 15.0 | 11250 | 0.4949 | 0.81 | | 0.4371 | 16.0 | 12000 | 0.4864 | 0.8117 | | 0.4626 | 17.0 | 12750 | 0.4789 | 0.815 | | 0.4294 | 18.0 | 13500 | 0.4706 | 0.815 | | 0.4498 | 19.0 | 14250 | 0.4650 | 0.815 | | 0.425 | 20.0 | 15000 | 0.4594 | 0.815 | | 0.4212 | 21.0 | 15750 | 0.4532 | 0.8167 | | 0.4517 | 22.0 | 16500 | 
0.4489 | 0.82 | | 0.4104 | 23.0 | 17250 | 0.4443 | 0.8167 | | 0.4051 | 24.0 | 18000 | 0.4407 | 0.82 | | 0.4019 | 25.0 | 18750 | 0.4371 | 0.8217 | | 0.3884 | 26.0 | 19500 | 0.4338 | 0.825 | | 0.3154 | 27.0 | 20250 | 0.4302 | 0.825 | | 0.3994 | 28.0 | 21000 | 0.4273 | 0.8283 | | 0.4061 | 29.0 | 21750 | 0.4246 | 0.83 | | 0.4059 | 30.0 | 22500 | 0.4225 | 0.8283 | | 0.3637 | 31.0 | 23250 | 0.4202 | 0.8267 | | 0.3501 | 32.0 | 24000 | 0.4181 | 0.8283 | | 0.4209 | 33.0 | 24750 | 0.4163 | 0.8317 | | 0.3255 | 34.0 | 25500 | 0.4145 | 0.8317 | | 0.3933 | 35.0 | 26250 | 0.4127 | 0.8317 | | 0.3766 | 36.0 | 27000 | 0.4115 | 0.8317 | | 0.3145 | 37.0 | 27750 | 0.4102 | 0.8317 | | 0.3874 | 38.0 | 28500 | 0.4090 | 0.83 | | 0.3898 | 39.0 | 29250 | 0.4079 | 0.83 | | 0.365 | 40.0 | 30000 | 0.4069 | 0.8317 | | 0.3728 | 41.0 | 30750 | 0.4059 | 0.8317 | | 0.3865 | 42.0 | 31500 | 0.4051 | 0.8317 | | 0.3813 | 43.0 | 32250 | 0.4045 | 0.8317 | | 0.3607 | 44.0 | 33000 | 0.4040 | 0.8317 | | 0.3955 | 45.0 | 33750 | 0.4034 | 0.8333 | | 0.3317 | 46.0 | 34500 | 0.4031 | 0.835 | | 0.4022 | 47.0 | 35250 | 0.4028 | 0.835 | | 0.3888 | 48.0 | 36000 | 0.4026 | 0.835 | | 0.3745 | 49.0 | 36750 | 0.4025 | 0.835 | | 0.3 | 50.0 | 37500 | 0.4025 | 0.835 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_sgd_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_sgd_00001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9331 - Accuracy: 0.5933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0687 | 1.0 | 750 | 1.0724 | 0.425 | | 1.0435 | 2.0 | 1500 | 1.0669 | 0.42 | | 1.0439 | 3.0 | 2250 | 1.0614 | 0.4283 | | 1.0595 | 4.0 | 3000 | 1.0559 | 0.435 | | 1.0216 | 5.0 | 3750 | 1.0506 | 0.4383 | | 1.0179 | 6.0 | 4500 | 1.0454 | 0.4467 | | 1.0048 | 7.0 | 5250 | 1.0402 | 0.4517 | | 1.0171 | 8.0 | 6000 | 1.0351 | 0.4533 | | 1.0075 | 9.0 | 6750 | 1.0302 | 0.4567 | | 0.9942 | 10.0 | 7500 | 1.0255 | 0.4683 | | 0.9968 | 11.0 | 8250 | 1.0209 | 0.4783 | | 0.9853 | 12.0 | 9000 | 1.0163 | 0.485 | | 0.9829 | 13.0 | 9750 | 1.0118 | 0.4933 | | 0.9676 | 14.0 | 10500 | 1.0075 | 0.4933 | | 0.9869 | 15.0 | 11250 | 1.0033 | 0.5 | | 0.9385 | 16.0 | 12000 | 0.9992 | 0.5117 | | 0.9422 | 17.0 | 12750 | 0.9953 | 0.5167 | | 0.9475 | 18.0 | 13500 | 0.9914 | 0.5233 | | 0.9706 | 19.0 | 14250 | 0.9876 | 0.5267 | | 0.9823 | 20.0 | 15000 | 0.9840 | 0.53 | | 0.9281 | 21.0 | 15750 | 0.9805 | 0.535 | | 0.9429 | 
22.0 | 16500 | 0.9770 | 0.54 | | 0.9545 | 23.0 | 17250 | 0.9738 | 0.545 | | 0.9266 | 24.0 | 18000 | 0.9706 | 0.545 | | 0.943 | 25.0 | 18750 | 0.9675 | 0.545 | | 0.9362 | 26.0 | 19500 | 0.9646 | 0.55 | | 0.9017 | 27.0 | 20250 | 0.9618 | 0.5517 | | 0.9415 | 28.0 | 21000 | 0.9592 | 0.555 | | 0.9141 | 29.0 | 21750 | 0.9566 | 0.555 | | 0.9329 | 30.0 | 22500 | 0.9543 | 0.5567 | | 0.931 | 31.0 | 23250 | 0.9520 | 0.5617 | | 0.9115 | 32.0 | 24000 | 0.9498 | 0.5633 | | 0.9251 | 33.0 | 24750 | 0.9478 | 0.565 | | 0.8996 | 34.0 | 25500 | 0.9460 | 0.5717 | | 0.9232 | 35.0 | 26250 | 0.9442 | 0.5717 | | 0.8817 | 36.0 | 27000 | 0.9427 | 0.5717 | | 0.8794 | 37.0 | 27750 | 0.9412 | 0.575 | | 0.8813 | 38.0 | 28500 | 0.9398 | 0.5767 | | 0.8952 | 39.0 | 29250 | 0.9386 | 0.58 | | 0.8846 | 40.0 | 30000 | 0.9375 | 0.5817 | | 0.8967 | 41.0 | 30750 | 0.9366 | 0.5867 | | 0.9065 | 42.0 | 31500 | 0.9358 | 0.5883 | | 0.9123 | 43.0 | 32250 | 0.9351 | 0.59 | | 0.8878 | 44.0 | 33000 | 0.9345 | 0.59 | | 0.8772 | 45.0 | 33750 | 0.9340 | 0.59 | | 0.9035 | 46.0 | 34500 | 0.9336 | 0.5933 | | 0.9152 | 47.0 | 35250 | 0.9334 | 0.5933 | | 0.8837 | 48.0 | 36000 | 0.9332 | 0.5933 | | 0.8879 | 49.0 | 36750 | 0.9331 | 0.5933 | | 0.8918 | 50.0 | 37500 | 0.9331 | 0.5933 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_rms_0001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0119 - Accuracy: 0.9015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3304 | 1.0 | 751 | 0.3159 | 0.8881 | | 0.1492 | 2.0 | 1502 | 0.3656 | 0.8798 | | 0.137 | 3.0 | 2253 | 0.4619 | 0.8831 | | 0.0713 | 4.0 | 3004 | 0.5428 | 0.8798 | | 0.0396 | 5.0 | 3755 | 0.4807 | 0.9098 | | 0.041 | 6.0 | 4506 | 0.6616 | 0.8881 | | 0.0505 | 7.0 | 5257 | 0.6229 | 0.9032 | | 0.016 | 8.0 | 6008 | 0.6188 | 0.8865 | | 0.0115 | 9.0 | 6759 | 0.6016 | 0.8915 | | 0.0369 | 10.0 | 7510 | 0.5294 | 0.8965 | | 0.0298 | 11.0 | 8261 | 0.5896 | 0.8865 | | 0.0282 | 12.0 | 9012 | 0.6477 | 0.8815 | | 0.0159 | 13.0 | 9763 | 0.5597 | 0.9032 | | 0.0145 | 14.0 | 10514 | 0.5006 | 0.9032 | | 0.0274 | 15.0 | 11265 | 0.6365 | 0.8932 | | 0.0233 | 16.0 | 12016 | 0.5679 | 0.9032 | | 0.0005 | 17.0 | 12767 | 0.7523 | 0.8815 | | 0.0253 | 18.0 | 13518 | 0.6785 | 0.8915 | | 0.0256 | 19.0 | 14269 | 0.6246 | 0.8815 | | 0.0099 | 20.0 | 15020 | 0.6349 | 0.9082 | | 0.0038 | 21.0 | 15771 | 0.5512 | 0.9115 | | 
0.0001 | 22.0 | 16522 | 0.6204 | 0.9032 | | 0.0062 | 23.0 | 17273 | 0.7652 | 0.8932 | | 0.0 | 24.0 | 18024 | 0.6455 | 0.9048 | | 0.0 | 25.0 | 18775 | 0.8288 | 0.8932 | | 0.0004 | 26.0 | 19526 | 0.7865 | 0.8982 | | 0.0072 | 27.0 | 20277 | 0.8381 | 0.8965 | | 0.0124 | 28.0 | 21028 | 0.6706 | 0.9082 | | 0.0027 | 29.0 | 21779 | 0.7345 | 0.9115 | | 0.0001 | 30.0 | 22530 | 0.8086 | 0.9032 | | 0.0007 | 31.0 | 23281 | 0.9133 | 0.8948 | | 0.0 | 32.0 | 24032 | 0.9315 | 0.8948 | | 0.0005 | 33.0 | 24783 | 0.9041 | 0.8865 | | 0.0002 | 34.0 | 25534 | 0.7984 | 0.9082 | | 0.0 | 35.0 | 26285 | 0.7336 | 0.9199 | | 0.0 | 36.0 | 27036 | 0.7739 | 0.9082 | | 0.0 | 37.0 | 27787 | 0.7374 | 0.9149 | | 0.0 | 38.0 | 28538 | 0.8856 | 0.8998 | | 0.0 | 39.0 | 29289 | 0.7863 | 0.9115 | | 0.0 | 40.0 | 30040 | 0.8280 | 0.9065 | | 0.0 | 41.0 | 30791 | 0.8525 | 0.9065 | | 0.0 | 42.0 | 31542 | 0.8579 | 0.9015 | | 0.0 | 43.0 | 32293 | 0.9128 | 0.9048 | | 0.0 | 44.0 | 33044 | 0.9440 | 0.8998 | | 0.0 | 45.0 | 33795 | 0.9799 | 0.8998 | | 0.0 | 46.0 | 34546 | 0.9704 | 0.9015 | | 0.0 | 47.0 | 35297 | 0.9969 | 0.9015 | | 0.0 | 48.0 | 36048 | 1.0043 | 0.9015 | | 0.0 | 49.0 | 36799 | 1.0140 | 0.8998 | | 0.0 | 50.0 | 37550 | 1.0119 | 0.9015 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_rms_001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5007 - Accuracy: 0.8063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7661 | 1.0 | 751 | 0.8852 | 0.6060 | | 0.6485 | 2.0 | 1502 | 0.7308 | 0.6578 | | 0.696 | 3.0 | 2253 | 0.7036 | 0.6594 | | 0.6301 | 4.0 | 3004 | 0.7247 | 0.6761 | | 0.6536 | 5.0 | 3755 | 0.6760 | 0.6828 | | 0.6653 | 6.0 | 4506 | 0.6159 | 0.7095 | | 0.5636 | 7.0 | 5257 | 0.5571 | 0.7579 | | 0.5506 | 8.0 | 6008 | 0.6121 | 0.7329 | | 0.5582 | 9.0 | 6759 | 0.5862 | 0.7546 | | 0.5548 | 10.0 | 7510 | 0.5892 | 0.7329 | | 0.5549 | 11.0 | 8261 | 0.5848 | 0.7412 | | 0.5362 | 12.0 | 9012 | 0.6200 | 0.7396 | | 0.4966 | 13.0 | 9763 | 0.5530 | 0.7713 | | 0.4818 | 14.0 | 10514 | 0.5786 | 0.7529 | | 0.4746 | 15.0 | 11265 | 0.6115 | 0.7229 | | 0.4852 | 16.0 | 12016 | 0.6019 | 0.7362 | | 0.4634 | 17.0 | 12767 | 0.5783 | 0.7613 | | 0.453 | 18.0 | 13518 | 0.5821 | 0.7462 | | 0.4908 | 19.0 | 14269 | 0.5445 | 0.7629 | | 0.4881 | 20.0 | 15020 | 0.5377 | 0.7763 | | 0.4025 | 21.0 | 15771 | 0.5423 | 0.7813 | | 
0.4591 | 22.0 | 16522 | 0.5168 | 0.7813 | | 0.3695 | 23.0 | 17273 | 0.5306 | 0.7730 | | 0.4288 | 24.0 | 18024 | 0.5369 | 0.7997 | | 0.4022 | 25.0 | 18775 | 0.5176 | 0.7896 | | 0.3916 | 26.0 | 19526 | 0.5681 | 0.7830 | | 0.4188 | 27.0 | 20277 | 0.5488 | 0.7830 | | 0.4088 | 28.0 | 21028 | 0.5430 | 0.7947 | | 0.3236 | 29.0 | 21779 | 0.5528 | 0.7947 | | 0.3272 | 30.0 | 22530 | 0.5104 | 0.8164 | | 0.305 | 31.0 | 23281 | 0.5401 | 0.8080 | | 0.3925 | 32.0 | 24032 | 0.5133 | 0.8013 | | 0.3211 | 33.0 | 24783 | 0.5292 | 0.7980 | | 0.2648 | 34.0 | 25534 | 0.6583 | 0.7846 | | 0.2286 | 35.0 | 26285 | 0.6241 | 0.7896 | | 0.2863 | 36.0 | 27036 | 0.6657 | 0.7947 | | 0.2968 | 37.0 | 27787 | 0.5922 | 0.8214 | | 0.2233 | 38.0 | 28538 | 0.6706 | 0.7880 | | 0.1424 | 39.0 | 29289 | 0.6769 | 0.8097 | | 0.2253 | 40.0 | 30040 | 0.7552 | 0.7963 | | 0.1253 | 41.0 | 30791 | 0.7804 | 0.8164 | | 0.16 | 42.0 | 31542 | 0.8311 | 0.7980 | | 0.1962 | 43.0 | 32293 | 0.8198 | 0.8047 | | 0.0759 | 44.0 | 33044 | 0.9444 | 0.7997 | | 0.1175 | 45.0 | 33795 | 0.9448 | 0.8080 | | 0.1291 | 46.0 | 34546 | 1.0860 | 0.8080 | | 0.0879 | 47.0 | 35297 | 1.2492 | 0.7980 | | 0.0404 | 48.0 | 36048 | 1.3416 | 0.8047 | | 0.0466 | 49.0 | 36799 | 1.4861 | 0.8030 | | 0.0362 | 50.0 | 37550 | 1.5007 | 0.8063 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
MichalGas/instrument-0
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.05573770491803279 f1_micro: 0.20078740157480315 f1_weighted: 0.06714857364140958 precision_macro: 0.03346456692913386 precision_micro: 0.20078740157480315 precision_weighted: 0.040315580631161266 recall_macro: 0.16666666666666666 recall_micro: 0.20078740157480315 recall_weighted: 0.20078740157480315 accuracy: 0.20078740157480315
[ "bipolars", "clippers", "graspers", "hooks", "irrigators", "scissorss" ]
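The nan loss and the metric pattern above (every micro-averaged score equal to the accuracy, macro scores roughly one sixth of their micro counterparts, macro recall exactly 1/6) are the signature of a six-class model that collapsed to predicting a single class. A pure-Python sketch with a toy label distribution (the real validation split sizes are not reproduced here):

```python
def f1_per_class(y_true, y_pred, labels):
    """Per-class F1; classes that are never predicted score 0."""
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return scores

labels = ["bipolars", "clippers", "graspers", "hooks", "irrigators", "scissorss"]
y_true = [labels[0]] * 20 + [c for c in labels[1:] for _ in range(16)]
y_pred = [labels[0]] * len(y_true)   # degenerate model: one class for everything

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
f1_macro = sum(f1_per_class(y_true, y_pred, labels)) / len(labels)
f1_micro = accuracy   # micro F1 equals accuracy in single-label multiclass
print(accuracy, f1_macro)
```

Here accuracy and micro F1 are 0.2 while macro F1 is 1/18 ≈ 0.056, the same shape as the reported 0.2008 vs 0.0557.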
MichalGas/ycjn-unff-povv-0
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: nan f1_macro: 0.05573770491803279 f1_micro: 0.20078740157480315 f1_weighted: 0.06714857364140958 precision_macro: 0.03346456692913386 precision_micro: 0.20078740157480315 precision_weighted: 0.040315580631161266 recall_macro: 0.16666666666666666 recall_micro: 0.20078740157480315 recall_weighted: 0.20078740157480315 accuracy: 0.20078740157480315
[ "bipolars", "clippers", "graspers", "hooks", "irrigators", "scissorss" ]
MichalGas/vit-base-mgas
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-mgas This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ./mgr/dataset/HF_DS dataset. It achieves the following results on the evaluation set: - Loss: 0.8530 - Accuracy: 0.7323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.4331 | 1.0 | 143 | 0.4803 | 1.3804 | | 1.1653 | 2.0 | 286 | 0.6850 | 1.0843 | | 1.0919 | 3.0 | 429 | 0.7165 | 0.9539 | | 0.9689 | 4.0 | 572 | 0.7323 | 0.8724 | | 0.9175 | 5.0 | 715 | 0.7323 | 0.8530 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "bipolars", "clippers", "graspers", "hooks", "irrigators", "scissorss" ]
hkivancoral/smids_10x_deit_tiny_adamax_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_tiny_adamax_001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1377 - Accuracy: 0.8948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.387 | 1.0 | 751 | 0.3856 | 0.8447 | | 0.2275 | 2.0 | 1502 | 0.3799 | 0.8497 | | 0.1732 | 3.0 | 2253 | 0.3628 | 0.8898 | | 0.1418 | 4.0 | 3004 | 0.3720 | 0.8848 | | 0.1851 | 5.0 | 3755 | 0.4163 | 0.8497 | | 0.1168 | 6.0 | 4506 | 0.4228 | 0.8915 | | 0.1217 | 7.0 | 5257 | 0.4050 | 0.8965 | | 0.0972 | 8.0 | 6008 | 0.4659 | 0.8881 | | 0.0717 | 9.0 | 6759 | 0.4692 | 0.8848 | | 0.0615 | 10.0 | 7510 | 0.5939 | 0.8748 | | 0.0582 | 11.0 | 8261 | 0.5202 | 0.8898 | | 0.0569 | 12.0 | 9012 | 0.5681 | 0.8982 | | 0.0142 | 13.0 | 9763 | 0.7223 | 0.8815 | | 0.0849 | 14.0 | 10514 | 0.6292 | 0.8948 | | 0.0289 | 15.0 | 11265 | 0.7113 | 0.8898 | | 0.0438 | 16.0 | 12016 | 0.6702 | 0.8982 | | 0.0561 | 17.0 | 12767 | 0.7629 | 0.8765 | | 0.0013 | 18.0 | 13518 | 0.7639 | 0.8865 | | 0.0173 | 19.0 | 14269 | 0.6756 | 0.8965 | | 0.0044 | 20.0 | 15020 | 0.7365 | 0.8965 | | 0.013 | 21.0 | 15771 | 0.8044 | 0.8831 | | 
0.0056 | 22.0 | 16522 | 0.7938 | 0.8915 | | 0.0006 | 23.0 | 17273 | 0.8954 | 0.8848 | | 0.0157 | 24.0 | 18024 | 0.8083 | 0.8998 | | 0.0002 | 25.0 | 18775 | 0.8156 | 0.8965 | | 0.0001 | 26.0 | 19526 | 0.8204 | 0.8982 | | 0.0087 | 27.0 | 20277 | 0.8556 | 0.8948 | | 0.0001 | 28.0 | 21028 | 0.8189 | 0.9048 | | 0.0132 | 29.0 | 21779 | 0.8401 | 0.9065 | | 0.0001 | 30.0 | 22530 | 0.9274 | 0.8915 | | 0.0 | 31.0 | 23281 | 0.9668 | 0.8965 | | 0.0153 | 32.0 | 24032 | 0.9746 | 0.8932 | | 0.0 | 33.0 | 24783 | 1.0269 | 0.8881 | | 0.0 | 34.0 | 25534 | 1.0125 | 0.8948 | | 0.0 | 35.0 | 26285 | 1.0419 | 0.8898 | | 0.0003 | 36.0 | 27036 | 1.0764 | 0.8898 | | 0.0 | 37.0 | 27787 | 1.0824 | 0.8915 | | 0.0 | 38.0 | 28538 | 1.0882 | 0.8898 | | 0.0 | 39.0 | 29289 | 1.0563 | 0.8932 | | 0.0 | 40.0 | 30040 | 1.0771 | 0.8915 | | 0.0 | 41.0 | 30791 | 1.0705 | 0.8948 | | 0.0 | 42.0 | 31542 | 1.0752 | 0.8932 | | 0.0 | 43.0 | 32293 | 1.1011 | 0.8948 | | 0.0 | 44.0 | 33044 | 1.1049 | 0.8948 | | 0.0 | 45.0 | 33795 | 1.1132 | 0.8948 | | 0.0 | 46.0 | 34546 | 1.1208 | 0.8965 | | 0.0 | 47.0 | 35297 | 1.1280 | 0.8948 | | 0.0 | 48.0 | 36048 | 1.1328 | 0.8948 | | 0.0 | 49.0 | 36799 | 1.1361 | 0.8948 | | 0.0 | 50.0 | 37550 | 1.1377 | 0.8948 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.1+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
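The summary at the top of each card reports the final-epoch checkpoint, which is not always the best one: in the adamax card above, epoch 29 reaches 0.9065 accuracy against 0.8948 at epoch 50. A sketch of picking the best row, using a few (epoch, validation loss, accuracy) entries excerpted from that table:

```python
# (epoch, validation loss, accuracy) rows excerpted from the training results above
rows = [
    (1, 0.3856, 0.8447),
    (12, 0.5681, 0.8982),
    (29, 0.8401, 0.9065),
    (50, 1.1377, 0.8948),
]
best_epoch, best_loss, best_acc = max(rows, key=lambda r: r[2])
print(best_epoch, best_acc)  # the epoch-29 checkpoint beats the final epoch
```

Selecting by accuracy and by validation loss disagree here (loss is lowest at epoch 1), which is why the Trainer's `load_best_model_at_end` option asks for an explicit metric.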
hkivancoral/smids_10x_deit_small_rms_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_rms_001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7042 - Accuracy: 0.8369 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8864 | 1.0 | 750 | 0.8248 | 0.5474 | | 0.7722 | 2.0 | 1500 | 0.8988 | 0.5092 | | 0.7721 | 3.0 | 2250 | 0.7604 | 0.6456 | | 0.7086 | 4.0 | 3000 | 0.6560 | 0.7371 | | 0.6588 | 5.0 | 3750 | 0.6906 | 0.7088 | | 0.5657 | 6.0 | 4500 | 0.5964 | 0.7654 | | 0.5826 | 7.0 | 5250 | 0.5186 | 0.7854 | | 0.5637 | 8.0 | 6000 | 0.5513 | 0.7737 | | 0.5395 | 9.0 | 6750 | 0.5704 | 0.7537 | | 0.5342 | 10.0 | 7500 | 0.4931 | 0.7987 | | 0.5349 | 11.0 | 8250 | 0.5109 | 0.7937 | | 0.596 | 12.0 | 9000 | 0.5425 | 0.7804 | | 0.5878 | 13.0 | 9750 | 0.4766 | 0.8103 | | 0.4609 | 14.0 | 10500 | 0.7520 | 0.7022 | | 0.4491 | 15.0 | 11250 | 0.5442 | 0.7504 | | 0.4637 | 16.0 | 12000 | 0.5054 | 0.8136 | | 0.4699 | 17.0 | 12750 | 0.4927 | 0.8037 | | 0.4528 | 18.0 | 13500 | 0.4576 | 0.8120 | | 0.4797 | 19.0 | 14250 | 0.4748 | 0.7970 | | 0.4704 | 20.0 | 15000 | 0.4438 | 0.8070 | | 0.4406 | 21.0 | 15750 | 0.4383 | 0.8153 | | 
0.4289 | 22.0 | 16500 | 0.4522 | 0.8120 | | 0.4219 | 23.0 | 17250 | 0.4457 | 0.8286 | | 0.3979 | 24.0 | 18000 | 0.4791 | 0.8203 | | 0.476 | 25.0 | 18750 | 0.4867 | 0.8136 | | 0.4039 | 26.0 | 19500 | 0.4638 | 0.8319 | | 0.4302 | 27.0 | 20250 | 0.4222 | 0.8303 | | 0.4091 | 28.0 | 21000 | 0.4516 | 0.8270 | | 0.3603 | 29.0 | 21750 | 0.5085 | 0.8170 | | 0.4414 | 30.0 | 22500 | 0.4568 | 0.8353 | | 0.3768 | 31.0 | 23250 | 0.4984 | 0.8253 | | 0.3126 | 32.0 | 24000 | 0.4428 | 0.8436 | | 0.3269 | 33.0 | 24750 | 0.4871 | 0.8236 | | 0.3283 | 34.0 | 25500 | 0.4708 | 0.8253 | | 0.3471 | 35.0 | 26250 | 0.4869 | 0.8353 | | 0.3619 | 36.0 | 27000 | 0.5210 | 0.8153 | | 0.4176 | 37.0 | 27750 | 0.4744 | 0.8353 | | 0.3395 | 38.0 | 28500 | 0.5334 | 0.8386 | | 0.2458 | 39.0 | 29250 | 0.5218 | 0.8286 | | 0.3331 | 40.0 | 30000 | 0.5874 | 0.8186 | | 0.3063 | 41.0 | 30750 | 0.5488 | 0.8236 | | 0.2956 | 42.0 | 31500 | 0.5739 | 0.8220 | | 0.3105 | 43.0 | 32250 | 0.5441 | 0.8369 | | 0.2918 | 44.0 | 33000 | 0.6039 | 0.8303 | | 0.2418 | 45.0 | 33750 | 0.6214 | 0.8303 | | 0.2859 | 46.0 | 34500 | 0.6601 | 0.8286 | | 0.2507 | 47.0 | 35250 | 0.6435 | 0.8369 | | 0.2443 | 48.0 | 36000 | 0.6789 | 0.8336 | | 0.2825 | 49.0 | 36750 | 0.6931 | 0.8336 | | 0.1845 | 50.0 | 37500 | 0.7042 | 0.8369 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_10x_deit_small_rms_0001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0878 - Accuracy: 0.8968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2199 | 1.0 | 750 | 0.3087 | 0.8702 | | 0.1415 | 2.0 | 1500 | 0.3270 | 0.8952 | | 0.0884 | 3.0 | 2250 | 0.4634 | 0.8735 | | 0.1125 | 4.0 | 3000 | 0.5598 | 0.8735 | | 0.0577 | 5.0 | 3750 | 0.5910 | 0.8785 | | 0.0338 | 6.0 | 4500 | 0.6886 | 0.8802 | | 0.0535 | 7.0 | 5250 | 0.5872 | 0.8835 | | 0.0197 | 8.0 | 6000 | 0.6155 | 0.8952 | | 0.0062 | 9.0 | 6750 | 0.7198 | 0.8852 | | 0.019 | 10.0 | 7500 | 0.6897 | 0.8985 | | 0.011 | 11.0 | 8250 | 0.6348 | 0.8985 | | 0.0457 | 12.0 | 9000 | 0.6894 | 0.8835 | | 0.0048 | 13.0 | 9750 | 0.7305 | 0.8918 | | 0.0312 | 14.0 | 10500 | 0.8235 | 0.8835 | | 0.0002 | 15.0 | 11250 | 0.8886 | 0.8819 | | 0.0307 | 16.0 | 12000 | 0.8040 | 0.8769 | | 0.0004 | 17.0 | 12750 | 0.6236 | 0.9068 | | 0.0012 | 18.0 | 13500 | 0.8211 | 0.8785 | | 0.0004 | 19.0 | 14250 | 0.6599 | 0.8985 | | 0.0198 | 20.0 | 15000 | 0.6886 | 0.8985 | | 0.0006 | 21.0 | 15750 | 0.8044 | 0.8852 | | 
0.007 | 22.0 | 16500 | 0.7019 | 0.8885 | | 0.0333 | 23.0 | 17250 | 0.7287 | 0.8819 | | 0.0 | 24.0 | 18000 | 1.0716 | 0.8652 | | 0.0 | 25.0 | 18750 | 0.9627 | 0.8752 | | 0.0006 | 26.0 | 19500 | 1.0237 | 0.8686 | | 0.0027 | 27.0 | 20250 | 0.9748 | 0.8769 | | 0.0004 | 28.0 | 21000 | 0.9776 | 0.8902 | | 0.0 | 29.0 | 21750 | 0.9254 | 0.8785 | | 0.0001 | 30.0 | 22500 | 0.9772 | 0.8902 | | 0.0132 | 31.0 | 23250 | 0.7890 | 0.9035 | | 0.0002 | 32.0 | 24000 | 0.8329 | 0.8985 | | 0.0 | 33.0 | 24750 | 0.8259 | 0.9101 | | 0.0 | 34.0 | 25500 | 0.9870 | 0.8852 | | 0.0067 | 35.0 | 26250 | 1.0178 | 0.8918 | | 0.0 | 36.0 | 27000 | 0.9706 | 0.9035 | | 0.0039 | 37.0 | 27750 | 0.9405 | 0.8952 | | 0.0 | 38.0 | 28500 | 1.0909 | 0.8869 | | 0.0 | 39.0 | 29250 | 1.0161 | 0.9002 | | 0.0 | 40.0 | 30000 | 0.9672 | 0.9068 | | 0.0 | 41.0 | 30750 | 1.0373 | 0.8952 | | 0.0 | 42.0 | 31500 | 1.0954 | 0.8968 | | 0.0 | 43.0 | 32250 | 1.0521 | 0.9035 | | 0.0 | 44.0 | 33000 | 1.0547 | 0.8968 | | 0.0028 | 45.0 | 33750 | 1.0695 | 0.8952 | | 0.0 | 46.0 | 34500 | 1.0676 | 0.8968 | | 0.0 | 47.0 | 35250 | 1.0805 | 0.8968 | | 0.0 | 48.0 | 36000 | 1.0830 | 0.8968 | | 0.0 | 49.0 | 36750 | 1.0856 | 0.8968 | | 0.0 | 50.0 | 37500 | 1.0878 | 0.8968 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5590
- Accuracy: 0.7767

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8839 | 1.0 | 750 | 0.8956 | 0.4917 |
| 0.8402 | 2.0 | 1500 | 0.8459 | 0.5383 |
| 0.827 | 3.0 | 2250 | 0.8365 | 0.5417 |
| 0.7595 | 4.0 | 3000 | 0.8404 | 0.5617 |
| 0.8496 | 5.0 | 3750 | 0.9112 | 0.505 |
| 0.7825 | 6.0 | 4500 | 0.8246 | 0.6233 |
| 0.8185 | 7.0 | 5250 | 0.7843 | 0.6233 |
| 0.7863 | 8.0 | 6000 | 0.7862 | 0.6183 |
| 0.7304 | 9.0 | 6750 | 0.7478 | 0.6433 |
| 0.7486 | 10.0 | 7500 | 0.7941 | 0.625 |
| 0.7979 | 11.0 | 8250 | 0.7438 | 0.6817 |
| 0.6928 | 12.0 | 9000 | 0.8898 | 0.58 |
| 0.683 | 13.0 | 9750 | 0.7126 | 0.68 |
| 0.7194 | 14.0 | 10500 | 0.7634 | 0.6367 |
| 0.7001 | 15.0 | 11250 | 0.6906 | 0.68 |
| 0.7209 | 16.0 | 12000 | 0.6988 | 0.675 |
| 0.693 | 17.0 | 12750 | 0.7227 | 0.6733 |
| 0.6594 | 18.0 | 13500 | 0.7119 | 0.675 |
| 0.6733 | 19.0 | 14250 | 0.6769 | 0.695 |
| 0.6368 | 20.0 | 15000 | 0.6310 | 0.7183 |
| 0.5529 | 21.0 | 15750 | 0.6379 | 0.73 |
| 0.674 | 22.0 | 16500 | 0.6200 | 0.7233 |
| 0.6173 | 23.0 | 17250 | 0.6390 | 0.7117 |
| 0.7017 | 24.0 | 18000 | 0.6234 | 0.7217 |
| 0.6672 | 25.0 | 18750 | 0.6159 | 0.7117 |
| 0.6143 | 26.0 | 19500 | 0.6119 | 0.7133 |
| 0.5447 | 27.0 | 20250 | 0.6511 | 0.7 |
| 0.616 | 28.0 | 21000 | 0.5943 | 0.7317 |
| 0.6257 | 29.0 | 21750 | 0.6135 | 0.7417 |
| 0.5784 | 30.0 | 22500 | 0.6236 | 0.7383 |
| 0.5488 | 31.0 | 23250 | 0.5814 | 0.7483 |
| 0.5683 | 32.0 | 24000 | 0.6409 | 0.725 |
| 0.5657 | 33.0 | 24750 | 0.6193 | 0.7583 |
| 0.7061 | 34.0 | 25500 | 0.7958 | 0.6533 |
| 0.5815 | 35.0 | 26250 | 0.6092 | 0.7467 |
| 0.545 | 36.0 | 27000 | 0.5902 | 0.7567 |
| 0.574 | 37.0 | 27750 | 0.5865 | 0.7483 |
| 0.5654 | 38.0 | 28500 | 0.6161 | 0.7467 |
| 0.5393 | 39.0 | 29250 | 0.5677 | 0.7667 |
| 0.6213 | 40.0 | 30000 | 0.5702 | 0.7633 |
| 0.5565 | 41.0 | 30750 | 0.5675 | 0.75 |
| 0.5323 | 42.0 | 31500 | 0.5645 | 0.7583 |
| 0.5444 | 43.0 | 32250 | 0.5820 | 0.76 |
| 0.4988 | 44.0 | 33000 | 0.5588 | 0.765 |
| 0.5249 | 45.0 | 33750 | 0.5669 | 0.7583 |
| 0.5246 | 46.0 | 34500 | 0.5504 | 0.7733 |
| 0.4975 | 47.0 | 35250 | 0.5697 | 0.7717 |
| 0.5083 | 48.0 | 36000 | 0.5554 | 0.7717 |
| 0.4948 | 49.0 | 36750 | 0.5551 | 0.775 |
| 0.4147 | 50.0 | 37500 | 0.5590 | 0.7767 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
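The hyperparameters above describe a `linear` scheduler with a warmup ratio of 0.1 over 37,500 steps (750 steps × 50 epochs). As a rough sketch of how that learning rate evolves — assuming standard linear warmup to the base rate followed by linear decay to zero, not the exact Trainer internals:

```python
def linear_lr_with_warmup(step, base_lr=0.001, total_steps=37_500, warmup_ratio=0.1):
    """Sketch of a linear schedule with warmup: ramp up to base_lr over the
    first warmup_ratio * total_steps steps, then decay linearly to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 3750 steps for this run
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # after warmup, decay linearly from base_lr down to 0 at total_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_lr_with_warmup(3_750))   # peak LR, reached at the end of warmup
print(linear_lr_with_warmup(37_500))  # 0 at the final step
```

The peak rate of 0.001 is only held instantaneously at step 3,750; everything after that trains on a steadily shrinking rate.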
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_0001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0751
- Accuracy: 0.905

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2232 | 1.0 | 750 | 0.2644 | 0.9033 |
| 0.1749 | 2.0 | 1500 | 0.3888 | 0.87 |
| 0.0708 | 3.0 | 2250 | 0.3860 | 0.9083 |
| 0.0864 | 4.0 | 3000 | 0.4018 | 0.9017 |
| 0.027 | 5.0 | 3750 | 0.5540 | 0.905 |
| 0.0152 | 6.0 | 4500 | 0.5713 | 0.9133 |
| 0.0637 | 7.0 | 5250 | 0.5356 | 0.9017 |
| 0.0914 | 8.0 | 6000 | 0.5134 | 0.9117 |
| 0.0042 | 9.0 | 6750 | 0.5226 | 0.905 |
| 0.0385 | 10.0 | 7500 | 0.6176 | 0.9067 |
| 0.0198 | 11.0 | 8250 | 0.7633 | 0.89 |
| 0.0448 | 12.0 | 9000 | 0.7337 | 0.8783 |
| 0.0021 | 13.0 | 9750 | 0.6859 | 0.8917 |
| 0.0019 | 14.0 | 10500 | 0.7372 | 0.8933 |
| 0.0006 | 15.0 | 11250 | 0.7460 | 0.885 |
| 0.0195 | 16.0 | 12000 | 0.7805 | 0.8933 |
| 0.0202 | 17.0 | 12750 | 0.8243 | 0.895 |
| 0.016 | 18.0 | 13500 | 0.7845 | 0.89 |
| 0.0037 | 19.0 | 14250 | 0.7538 | 0.8883 |
| 0.0001 | 20.0 | 15000 | 0.6925 | 0.8967 |
| 0.0006 | 21.0 | 15750 | 0.8393 | 0.8933 |
| 0.0 | 22.0 | 16500 | 0.7236 | 0.9 |
| 0.0024 | 23.0 | 17250 | 0.8639 | 0.885 |
| 0.0014 | 24.0 | 18000 | 0.8799 | 0.8917 |
| 0.0236 | 25.0 | 18750 | 0.6893 | 0.9033 |
| 0.0001 | 26.0 | 19500 | 0.7435 | 0.9033 |
| 0.0001 | 27.0 | 20250 | 0.6829 | 0.89 |
| 0.0194 | 28.0 | 21000 | 0.8267 | 0.8967 |
| 0.0002 | 29.0 | 21750 | 0.8000 | 0.8983 |
| 0.0001 | 30.0 | 22500 | 0.8336 | 0.89 |
| 0.0 | 31.0 | 23250 | 0.8017 | 0.9 |
| 0.0 | 32.0 | 24000 | 0.8257 | 0.9117 |
| 0.0 | 33.0 | 24750 | 0.8456 | 0.905 |
| 0.0 | 34.0 | 25500 | 0.7637 | 0.91 |
| 0.0 | 35.0 | 26250 | 0.8426 | 0.9067 |
| 0.0219 | 36.0 | 27000 | 0.8594 | 0.9067 |
| 0.0 | 37.0 | 27750 | 0.8437 | 0.9083 |
| 0.0 | 38.0 | 28500 | 0.9026 | 0.9117 |
| 0.0 | 39.0 | 29250 | 0.9566 | 0.9067 |
| 0.0 | 40.0 | 30000 | 0.9200 | 0.915 |
| 0.0 | 41.0 | 30750 | 0.9067 | 0.92 |
| 0.0 | 42.0 | 31500 | 0.9289 | 0.91 |
| 0.0 | 43.0 | 32250 | 0.9815 | 0.91 |
| 0.0 | 44.0 | 33000 | 0.9712 | 0.91 |
| 0.0 | 45.0 | 33750 | 1.0254 | 0.9067 |
| 0.0 | 46.0 | 34500 | 1.0353 | 0.9083 |
| 0.0 | 47.0 | 35250 | 1.0450 | 0.905 |
| 0.0 | 48.0 | 36000 | 1.0661 | 0.905 |
| 0.0 | 49.0 | 36750 | 1.0715 | 0.905 |
| 0.0 | 50.0 | 37500 | 1.0751 | 0.905 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_tiny_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_tiny_adamax_001_fold2

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0691
- Accuracy: 0.9002

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3362 | 1.0 | 750 | 0.3519 | 0.8652 |
| 0.2971 | 2.0 | 1500 | 0.3131 | 0.8918 |
| 0.1771 | 3.0 | 2250 | 0.2717 | 0.8885 |
| 0.2985 | 4.0 | 3000 | 0.3652 | 0.8652 |
| 0.1399 | 5.0 | 3750 | 0.3216 | 0.9018 |
| 0.1317 | 6.0 | 4500 | 0.3948 | 0.8802 |
| 0.1309 | 7.0 | 5250 | 0.3860 | 0.8902 |
| 0.1165 | 8.0 | 6000 | 0.4557 | 0.8852 |
| 0.0308 | 9.0 | 6750 | 0.5032 | 0.8686 |
| 0.0315 | 10.0 | 7500 | 0.4981 | 0.8769 |
| 0.0974 | 11.0 | 8250 | 0.6363 | 0.8769 |
| 0.1017 | 12.0 | 9000 | 0.5021 | 0.8869 |
| 0.0475 | 13.0 | 9750 | 0.5896 | 0.8885 |
| 0.0086 | 14.0 | 10500 | 0.6931 | 0.8918 |
| 0.0301 | 15.0 | 11250 | 0.6531 | 0.8902 |
| 0.0049 | 16.0 | 12000 | 0.7157 | 0.8819 |
| 0.0307 | 17.0 | 12750 | 0.7054 | 0.8935 |
| 0.0113 | 18.0 | 13500 | 0.7646 | 0.8869 |
| 0.0492 | 19.0 | 14250 | 0.7424 | 0.8885 |
| 0.0093 | 20.0 | 15000 | 0.6366 | 0.8952 |
| 0.011 | 21.0 | 15750 | 0.8426 | 0.8885 |
| 0.0191 | 22.0 | 16500 | 0.7557 | 0.8952 |
| 0.0047 | 23.0 | 17250 | 0.7578 | 0.8885 |
| 0.0163 | 24.0 | 18000 | 0.8275 | 0.8902 |
| 0.0001 | 25.0 | 18750 | 0.8176 | 0.8935 |
| 0.0023 | 26.0 | 19500 | 0.8054 | 0.8968 |
| 0.0181 | 27.0 | 20250 | 0.8270 | 0.8952 |
| 0.0 | 28.0 | 21000 | 0.8173 | 0.9035 |
| 0.0001 | 29.0 | 21750 | 0.8348 | 0.9018 |
| 0.0 | 30.0 | 22500 | 0.8105 | 0.9101 |
| 0.0 | 31.0 | 23250 | 0.7837 | 0.9118 |
| 0.0 | 32.0 | 24000 | 0.9929 | 0.8935 |
| 0.0 | 33.0 | 24750 | 0.8103 | 0.9085 |
| 0.0 | 34.0 | 25500 | 0.8769 | 0.9035 |
| 0.0 | 35.0 | 26250 | 0.8987 | 0.8985 |
| 0.0 | 36.0 | 27000 | 1.0129 | 0.9002 |
| 0.0053 | 37.0 | 27750 | 0.9506 | 0.9068 |
| 0.0 | 38.0 | 28500 | 1.0495 | 0.8935 |
| 0.0 | 39.0 | 29250 | 0.9869 | 0.9018 |
| 0.0 | 40.0 | 30000 | 1.0087 | 0.8968 |
| 0.0 | 41.0 | 30750 | 1.0348 | 0.8985 |
| 0.0 | 42.0 | 31500 | 1.0299 | 0.8985 |
| 0.0 | 43.0 | 32250 | 1.0437 | 0.8968 |
| 0.0 | 44.0 | 33000 | 1.0468 | 0.8985 |
| 0.0028 | 45.0 | 33750 | 1.0539 | 0.9002 |
| 0.0 | 46.0 | 34500 | 1.0588 | 0.9002 |
| 0.0 | 47.0 | 35250 | 1.0567 | 0.9002 |
| 0.0 | 48.0 | 36000 | 1.0631 | 0.9002 |
| 0.0 | 49.0 | 36750 | 1.0673 | 0.9002 |
| 0.0 | 50.0 | 37500 | 1.0691 | 0.9002 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4181
- Accuracy: 0.7883

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.808 | 1.0 | 750 | 0.7526 | 0.6333 |
| 0.7422 | 2.0 | 1500 | 0.7345 | 0.58 |
| 0.7146 | 3.0 | 2250 | 0.7176 | 0.6433 |
| 0.701 | 4.0 | 3000 | 0.6566 | 0.6933 |
| 0.6799 | 5.0 | 3750 | 0.5734 | 0.745 |
| 0.6007 | 6.0 | 4500 | 0.6368 | 0.72 |
| 0.5519 | 7.0 | 5250 | 0.5547 | 0.7833 |
| 0.6119 | 8.0 | 6000 | 0.5502 | 0.7667 |
| 0.5599 | 9.0 | 6750 | 0.5552 | 0.75 |
| 0.508 | 10.0 | 7500 | 0.5666 | 0.7383 |
| 0.5209 | 11.0 | 8250 | 0.5288 | 0.7683 |
| 0.6053 | 12.0 | 9000 | 0.5408 | 0.765 |
| 0.4938 | 13.0 | 9750 | 0.5449 | 0.755 |
| 0.5012 | 14.0 | 10500 | 0.6211 | 0.7533 |
| 0.4544 | 15.0 | 11250 | 0.6619 | 0.725 |
| 0.4855 | 16.0 | 12000 | 0.5101 | 0.8083 |
| 0.3525 | 17.0 | 12750 | 0.5278 | 0.7833 |
| 0.4312 | 18.0 | 13500 | 0.5051 | 0.7917 |
| 0.4593 | 19.0 | 14250 | 0.5277 | 0.7867 |
| 0.3939 | 20.0 | 15000 | 0.5517 | 0.7833 |
| 0.5185 | 21.0 | 15750 | 0.5418 | 0.7667 |
| 0.424 | 22.0 | 16500 | 0.5465 | 0.7917 |
| 0.3637 | 23.0 | 17250 | 0.5971 | 0.785 |
| 0.4457 | 24.0 | 18000 | 0.5681 | 0.7933 |
| 0.4023 | 25.0 | 18750 | 0.5160 | 0.805 |
| 0.3012 | 26.0 | 19500 | 0.5373 | 0.8283 |
| 0.2933 | 27.0 | 20250 | 0.5885 | 0.8067 |
| 0.3104 | 28.0 | 21000 | 0.6014 | 0.8017 |
| 0.292 | 29.0 | 21750 | 0.6093 | 0.8033 |
| 0.3506 | 30.0 | 22500 | 0.6800 | 0.7633 |
| 0.2777 | 31.0 | 23250 | 0.6858 | 0.795 |
| 0.2396 | 32.0 | 24000 | 0.6442 | 0.805 |
| 0.3316 | 33.0 | 24750 | 0.6440 | 0.81 |
| 0.2958 | 34.0 | 25500 | 0.6532 | 0.8117 |
| 0.2121 | 35.0 | 26250 | 0.7494 | 0.8083 |
| 0.1764 | 36.0 | 27000 | 0.7942 | 0.7933 |
| 0.1963 | 37.0 | 27750 | 0.7817 | 0.7883 |
| 0.1829 | 38.0 | 28500 | 0.8010 | 0.7917 |
| 0.1937 | 39.0 | 29250 | 0.8544 | 0.795 |
| 0.1493 | 40.0 | 30000 | 0.9520 | 0.7967 |
| 0.1419 | 41.0 | 30750 | 0.9695 | 0.81 |
| 0.1784 | 42.0 | 31500 | 1.0763 | 0.8017 |
| 0.1485 | 43.0 | 32250 | 1.1404 | 0.825 |
| 0.0665 | 44.0 | 33000 | 1.3155 | 0.7933 |
| 0.1004 | 45.0 | 33750 | 1.5689 | 0.7933 |
| 0.0917 | 46.0 | 34500 | 1.5920 | 0.7917 |
| 0.0556 | 47.0 | 35250 | 1.8022 | 0.7967 |
| 0.0427 | 48.0 | 36000 | 2.0148 | 0.8067 |
| 0.032 | 49.0 | 36750 | 2.3253 | 0.795 |
| 0.012 | 50.0 | 37500 | 2.4181 | 0.7883 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_0001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4626
- Accuracy: 0.885

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2601 | 1.0 | 750 | 0.3716 | 0.8583 |
| 0.1216 | 2.0 | 1500 | 0.6123 | 0.855 |
| 0.1115 | 3.0 | 2250 | 0.5264 | 0.8667 |
| 0.066 | 4.0 | 3000 | 0.5528 | 0.8733 |
| 0.0447 | 5.0 | 3750 | 0.7645 | 0.8633 |
| 0.0487 | 6.0 | 4500 | 0.8232 | 0.8767 |
| 0.0554 | 7.0 | 5250 | 0.7277 | 0.865 |
| 0.0304 | 8.0 | 6000 | 0.8549 | 0.8667 |
| 0.0413 | 9.0 | 6750 | 0.8526 | 0.865 |
| 0.0142 | 10.0 | 7500 | 1.0441 | 0.8567 |
| 0.0561 | 11.0 | 8250 | 1.0164 | 0.8633 |
| 0.0352 | 12.0 | 9000 | 0.8537 | 0.875 |
| 0.0141 | 13.0 | 9750 | 0.9173 | 0.8617 |
| 0.0343 | 14.0 | 10500 | 1.0250 | 0.86 |
| 0.0048 | 15.0 | 11250 | 0.9231 | 0.8617 |
| 0.0038 | 16.0 | 12000 | 1.1476 | 0.8617 |
| 0.0308 | 17.0 | 12750 | 0.9914 | 0.885 |
| 0.0168 | 18.0 | 13500 | 1.0050 | 0.8783 |
| 0.0001 | 19.0 | 14250 | 1.0610 | 0.8667 |
| 0.0 | 20.0 | 15000 | 1.0251 | 0.86 |
| 0.0232 | 21.0 | 15750 | 1.1692 | 0.855 |
| 0.0026 | 22.0 | 16500 | 0.9562 | 0.8833 |
| 0.0001 | 23.0 | 17250 | 1.0914 | 0.8733 |
| 0.0042 | 24.0 | 18000 | 1.0684 | 0.8767 |
| 0.0 | 25.0 | 18750 | 0.9724 | 0.8833 |
| 0.0001 | 26.0 | 19500 | 1.0636 | 0.86 |
| 0.0001 | 27.0 | 20250 | 1.1239 | 0.86 |
| 0.0015 | 28.0 | 21000 | 1.1692 | 0.8683 |
| 0.0308 | 29.0 | 21750 | 1.1241 | 0.875 |
| 0.0263 | 30.0 | 22500 | 1.0816 | 0.8867 |
| 0.0 | 31.0 | 23250 | 0.9644 | 0.8867 |
| 0.0 | 32.0 | 24000 | 1.1653 | 0.875 |
| 0.0 | 33.0 | 24750 | 1.2370 | 0.8833 |
| 0.0117 | 34.0 | 25500 | 1.3585 | 0.8767 |
| 0.0014 | 35.0 | 26250 | 1.1826 | 0.885 |
| 0.0 | 36.0 | 27000 | 1.2030 | 0.8867 |
| 0.0 | 37.0 | 27750 | 1.4012 | 0.8717 |
| 0.0 | 38.0 | 28500 | 1.3242 | 0.8717 |
| 0.0 | 39.0 | 29250 | 1.2640 | 0.8833 |
| 0.0 | 40.0 | 30000 | 1.3613 | 0.8817 |
| 0.0 | 41.0 | 30750 | 1.3467 | 0.8883 |
| 0.0 | 42.0 | 31500 | 1.3505 | 0.8917 |
| 0.0 | 43.0 | 32250 | 1.3990 | 0.8867 |
| 0.0 | 44.0 | 33000 | 1.4081 | 0.8917 |
| 0.0 | 45.0 | 33750 | 1.4275 | 0.8883 |
| 0.0 | 46.0 | 34500 | 1.4373 | 0.8867 |
| 0.0 | 47.0 | 35250 | 1.4471 | 0.8867 |
| 0.0 | 48.0 | 36000 | 1.4562 | 0.885 |
| 0.0 | 49.0 | 36750 | 1.4611 | 0.885 |
| 0.0 | 50.0 | 37500 | 1.4626 | 0.885 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
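In this run validation loss climbs from roughly 0.37 to 1.46 over the 50 epochs while accuracy plateaus near 0.88, so the final checkpoint is not the best one. A small helper — a sketch, not part of the Trainer's output — can pick the epoch worth keeping from a (epoch, val_loss, val_accuracy) history:

```python
def best_epoch(history):
    """history: list of (epoch, val_loss, val_accuracy) tuples.
    Returns the epoch with the highest accuracy, breaking ties by lower loss."""
    return max(history, key=lambda row: (row[2], -row[1]))[0]

# a few epochs from the table above
history = [(1, 0.3716, 0.8583), (2, 0.6123, 0.855), (4, 0.5528, 0.8733)]
print(best_epoch(history))  # epoch 4 has the highest accuracy of these three
```

In practice the same effect is usually achieved by enabling `load_best_model_at_end` with a chosen `metric_for_best_model` in the Trainer configuration.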
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6302
- Accuracy: 0.7167

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1023 | 1.0 | 750 | 1.0958 | 0.34 |
| 0.9402 | 2.0 | 1500 | 0.9088 | 0.5033 |
| 0.9044 | 3.0 | 2250 | 0.8761 | 0.5383 |
| 0.8247 | 4.0 | 3000 | 0.8349 | 0.5233 |
| 0.7854 | 5.0 | 3750 | 0.8127 | 0.5633 |
| 0.7771 | 6.0 | 4500 | 0.8860 | 0.5383 |
| 0.773 | 7.0 | 5250 | 0.8230 | 0.575 |
| 0.8024 | 8.0 | 6000 | 0.7956 | 0.5883 |
| 0.8797 | 9.0 | 6750 | 0.8015 | 0.6183 |
| 0.7815 | 10.0 | 7500 | 0.7866 | 0.6083 |
| 0.7914 | 11.0 | 8250 | 0.7547 | 0.6267 |
| 0.7411 | 12.0 | 9000 | 0.7615 | 0.59 |
| 0.7343 | 13.0 | 9750 | 0.7214 | 0.6617 |
| 0.7764 | 14.0 | 10500 | 0.7295 | 0.6717 |
| 0.7555 | 15.0 | 11250 | 0.7012 | 0.6617 |
| 0.7373 | 16.0 | 12000 | 0.7948 | 0.6217 |
| 0.6985 | 17.0 | 12750 | 0.7396 | 0.6267 |
| 0.7821 | 18.0 | 13500 | 0.7384 | 0.66 |
| 0.7914 | 19.0 | 14250 | 0.7821 | 0.635 |
| 0.7863 | 20.0 | 15000 | 0.7254 | 0.655 |
| 0.6932 | 21.0 | 15750 | 0.7242 | 0.6633 |
| 0.6744 | 22.0 | 16500 | 0.7009 | 0.6817 |
| 0.6983 | 23.0 | 17250 | 0.6866 | 0.7133 |
| 0.6779 | 24.0 | 18000 | 0.6963 | 0.6983 |
| 0.6937 | 25.0 | 18750 | 0.6942 | 0.6817 |
| 0.6943 | 26.0 | 19500 | 0.6864 | 0.695 |
| 0.6231 | 27.0 | 20250 | 0.7126 | 0.665 |
| 0.6418 | 28.0 | 21000 | 0.6620 | 0.6983 |
| 0.72 | 29.0 | 21750 | 0.6656 | 0.7017 |
| 0.7042 | 30.0 | 22500 | 0.6697 | 0.6867 |
| 0.754 | 31.0 | 23250 | 0.6511 | 0.7033 |
| 0.6987 | 32.0 | 24000 | 0.6765 | 0.69 |
| 0.7166 | 33.0 | 24750 | 0.6802 | 0.7083 |
| 0.6725 | 34.0 | 25500 | 0.6763 | 0.7033 |
| 0.6612 | 35.0 | 26250 | 0.6382 | 0.7083 |
| 0.6967 | 36.0 | 27000 | 0.6445 | 0.705 |
| 0.6491 | 37.0 | 27750 | 0.6443 | 0.7133 |
| 0.7274 | 38.0 | 28500 | 0.6314 | 0.7333 |
| 0.6904 | 39.0 | 29250 | 0.6429 | 0.7267 |
| 0.6516 | 40.0 | 30000 | 0.6385 | 0.7167 |
| 0.6647 | 41.0 | 30750 | 0.6386 | 0.7 |
| 0.666 | 42.0 | 31500 | 0.6656 | 0.695 |
| 0.6901 | 43.0 | 32250 | 0.6568 | 0.715 |
| 0.6021 | 44.0 | 33000 | 0.6375 | 0.7117 |
| 0.6467 | 45.0 | 33750 | 0.6267 | 0.7117 |
| 0.6249 | 46.0 | 34500 | 0.6374 | 0.71 |
| 0.6161 | 47.0 | 35250 | 0.6354 | 0.71 |
| 0.6534 | 48.0 | 36000 | 0.6396 | 0.715 |
| 0.6031 | 49.0 | 36750 | 0.6326 | 0.7117 |
| 0.6145 | 50.0 | 37500 | 0.6302 | 0.7167 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_0001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1181
- Accuracy: 0.8983

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1916 | 1.0 | 750 | 0.2932 | 0.8983 |
| 0.1713 | 2.0 | 1500 | 0.3029 | 0.9083 |
| 0.1109 | 3.0 | 2250 | 0.5067 | 0.8767 |
| 0.0732 | 4.0 | 3000 | 0.4780 | 0.91 |
| 0.102 | 5.0 | 3750 | 0.4476 | 0.8983 |
| 0.0951 | 6.0 | 4500 | 0.5447 | 0.9017 |
| 0.0348 | 7.0 | 5250 | 0.5626 | 0.915 |
| 0.0144 | 8.0 | 6000 | 0.5893 | 0.9133 |
| 0.0592 | 9.0 | 6750 | 0.5568 | 0.91 |
| 0.0154 | 10.0 | 7500 | 0.5698 | 0.9017 |
| 0.0271 | 11.0 | 8250 | 0.6256 | 0.8983 |
| 0.0176 | 12.0 | 9000 | 0.7391 | 0.8917 |
| 0.012 | 13.0 | 9750 | 0.6256 | 0.8967 |
| 0.0094 | 14.0 | 10500 | 0.7473 | 0.895 |
| 0.0301 | 15.0 | 11250 | 0.6066 | 0.905 |
| 0.0069 | 16.0 | 12000 | 0.6970 | 0.9 |
| 0.0131 | 17.0 | 12750 | 0.6902 | 0.895 |
| 0.0022 | 18.0 | 13500 | 0.7962 | 0.8833 |
| 0.0162 | 19.0 | 14250 | 0.8033 | 0.8967 |
| 0.0045 | 20.0 | 15000 | 0.7612 | 0.8933 |
| 0.0063 | 21.0 | 15750 | 0.7939 | 0.9 |
| 0.0002 | 22.0 | 16500 | 0.7612 | 0.8933 |
| 0.0161 | 23.0 | 17250 | 0.8161 | 0.8867 |
| 0.0001 | 24.0 | 18000 | 0.8196 | 0.8933 |
| 0.038 | 25.0 | 18750 | 0.8702 | 0.8883 |
| 0.0257 | 26.0 | 19500 | 0.7862 | 0.8983 |
| 0.0004 | 27.0 | 20250 | 0.8138 | 0.8983 |
| 0.0001 | 28.0 | 21000 | 0.8830 | 0.905 |
| 0.0003 | 29.0 | 21750 | 1.0169 | 0.89 |
| 0.016 | 30.0 | 22500 | 0.8531 | 0.8883 |
| 0.0027 | 31.0 | 23250 | 0.9699 | 0.895 |
| 0.0379 | 32.0 | 24000 | 1.0313 | 0.89 |
| 0.0307 | 33.0 | 24750 | 0.8698 | 0.905 |
| 0.0 | 34.0 | 25500 | 0.8949 | 0.9017 |
| 0.0 | 35.0 | 26250 | 0.9260 | 0.8917 |
| 0.0 | 36.0 | 27000 | 0.9677 | 0.89 |
| 0.0036 | 37.0 | 27750 | 1.0175 | 0.9017 |
| 0.0 | 38.0 | 28500 | 1.0579 | 0.8967 |
| 0.0003 | 39.0 | 29250 | 1.1044 | 0.8917 |
| 0.0 | 40.0 | 30000 | 1.0357 | 0.895 |
| 0.0 | 41.0 | 30750 | 1.0321 | 0.9 |
| 0.0 | 42.0 | 31500 | 1.0802 | 0.895 |
| 0.0 | 43.0 | 32250 | 1.1064 | 0.895 |
| 0.0 | 44.0 | 33000 | 1.1154 | 0.9017 |
| 0.0 | 45.0 | 33750 | 1.1069 | 0.9017 |
| 0.0 | 46.0 | 34500 | 1.1200 | 0.895 |
| 0.0 | 47.0 | 35250 | 1.1205 | 0.8967 |
| 0.0 | 48.0 | 36000 | 1.1169 | 0.8983 |
| 0.0 | 49.0 | 36750 | 1.1164 | 0.8983 |
| 0.0 | 50.0 | 37500 | 1.1181 | 0.8983 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_tiny_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_tiny_adamax_001_fold3

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0576
- Accuracy: 0.9133

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.366 | 1.0 | 750 | 0.3478 | 0.86 |
| 0.3154 | 2.0 | 1500 | 0.3552 | 0.8633 |
| 0.2007 | 3.0 | 2250 | 0.4696 | 0.845 |
| 0.2296 | 4.0 | 3000 | 0.3387 | 0.8783 |
| 0.2607 | 5.0 | 3750 | 0.4239 | 0.865 |
| 0.2365 | 6.0 | 4500 | 0.3514 | 0.895 |
| 0.1244 | 7.0 | 5250 | 0.3538 | 0.8917 |
| 0.1356 | 8.0 | 6000 | 0.3984 | 0.895 |
| 0.0526 | 9.0 | 6750 | 0.4940 | 0.875 |
| 0.1195 | 10.0 | 7500 | 0.5287 | 0.8783 |
| 0.0559 | 11.0 | 8250 | 0.6054 | 0.8817 |
| 0.0512 | 12.0 | 9000 | 0.6374 | 0.8717 |
| 0.0498 | 13.0 | 9750 | 0.6405 | 0.8817 |
| 0.0296 | 14.0 | 10500 | 0.6601 | 0.895 |
| 0.0626 | 15.0 | 11250 | 0.7807 | 0.89 |
| 0.055 | 16.0 | 12000 | 0.7694 | 0.905 |
| 0.0444 | 17.0 | 12750 | 0.6413 | 0.905 |
| 0.0317 | 18.0 | 13500 | 0.7330 | 0.9033 |
| 0.0108 | 19.0 | 14250 | 0.7464 | 0.8917 |
| 0.0236 | 20.0 | 15000 | 0.7591 | 0.885 |
| 0.01 | 21.0 | 15750 | 0.8264 | 0.9067 |
| 0.0226 | 22.0 | 16500 | 0.7921 | 0.8933 |
| 0.0127 | 23.0 | 17250 | 0.7486 | 0.9033 |
| 0.0025 | 24.0 | 18000 | 0.8018 | 0.8983 |
| 0.0004 | 25.0 | 18750 | 0.7411 | 0.9083 |
| 0.0 | 26.0 | 19500 | 0.8554 | 0.895 |
| 0.0 | 27.0 | 20250 | 0.9122 | 0.9017 |
| 0.0 | 28.0 | 21000 | 0.8611 | 0.9067 |
| 0.0041 | 29.0 | 21750 | 0.8741 | 0.9033 |
| 0.0 | 30.0 | 22500 | 0.7969 | 0.9167 |
| 0.012 | 31.0 | 23250 | 0.8521 | 0.91 |
| 0.0058 | 32.0 | 24000 | 0.9974 | 0.8983 |
| 0.0 | 33.0 | 24750 | 0.9864 | 0.9 |
| 0.0 | 34.0 | 25500 | 0.8709 | 0.91 |
| 0.0 | 35.0 | 26250 | 0.9411 | 0.9117 |
| 0.0 | 36.0 | 27000 | 1.0050 | 0.9033 |
| 0.0 | 37.0 | 27750 | 0.9456 | 0.905 |
| 0.0 | 38.0 | 28500 | 0.9323 | 0.9083 |
| 0.0 | 39.0 | 29250 | 0.9349 | 0.9117 |
| 0.0 | 40.0 | 30000 | 0.9420 | 0.9117 |
| 0.0 | 41.0 | 30750 | 0.9601 | 0.9133 |
| 0.0 | 42.0 | 31500 | 0.9780 | 0.9133 |
| 0.0 | 43.0 | 32250 | 0.9953 | 0.9133 |
| 0.0 | 44.0 | 33000 | 1.0029 | 0.915 |
| 0.0 | 45.0 | 33750 | 1.0208 | 0.9133 |
| 0.0 | 46.0 | 34500 | 1.0335 | 0.9133 |
| 0.0 | 47.0 | 35250 | 1.0420 | 0.9133 |
| 0.0 | 48.0 | 36000 | 1.0522 | 0.9133 |
| 0.0 | 49.0 | 36750 | 1.0572 | 0.9117 |
| 0.0 | 50.0 | 37500 | 1.0576 | 0.9133 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_tiny_adamax_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_tiny_adamax_001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5843
- Accuracy: 0.8717

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3385 | 1.0 | 750 | 0.3848 | 0.84 |
| 0.2692 | 2.0 | 1500 | 0.3830 | 0.8633 |
| 0.2345 | 3.0 | 2250 | 0.4255 | 0.8617 |
| 0.1851 | 4.0 | 3000 | 0.4988 | 0.8517 |
| 0.1806 | 5.0 | 3750 | 0.5032 | 0.8433 |
| 0.1568 | 6.0 | 4500 | 0.5429 | 0.8633 |
| 0.0638 | 7.0 | 5250 | 0.6033 | 0.855 |
| 0.1397 | 8.0 | 6000 | 0.6990 | 0.845 |
| 0.1208 | 9.0 | 6750 | 0.6852 | 0.8483 |
| 0.0667 | 10.0 | 7500 | 0.8743 | 0.8383 |
| 0.0482 | 11.0 | 8250 | 0.7516 | 0.8667 |
| 0.0306 | 12.0 | 9000 | 0.8187 | 0.8783 |
| 0.0125 | 13.0 | 9750 | 0.8525 | 0.86 |
| 0.0512 | 14.0 | 10500 | 1.0441 | 0.8483 |
| 0.0023 | 15.0 | 11250 | 1.0562 | 0.85 |
| 0.0353 | 16.0 | 12000 | 1.1914 | 0.8583 |
| 0.0637 | 17.0 | 12750 | 1.1115 | 0.8667 |
| 0.025 | 18.0 | 13500 | 1.1677 | 0.865 |
| 0.0126 | 19.0 | 14250 | 1.0523 | 0.8833 |
| 0.0 | 20.0 | 15000 | 1.0935 | 0.8633 |
| 0.0359 | 21.0 | 15750 | 1.1791 | 0.8733 |
| 0.0003 | 22.0 | 16500 | 1.0630 | 0.87 |
| 0.0003 | 23.0 | 17250 | 1.0996 | 0.8667 |
| 0.0006 | 24.0 | 18000 | 1.0915 | 0.8817 |
| 0.0001 | 25.0 | 18750 | 1.1484 | 0.8617 |
| 0.0 | 26.0 | 19500 | 1.1656 | 0.875 |
| 0.0179 | 27.0 | 20250 | 1.2101 | 0.8717 |
| 0.0 | 28.0 | 21000 | 1.3179 | 0.86 |
| 0.0 | 29.0 | 21750 | 1.2425 | 0.8733 |
| 0.0 | 30.0 | 22500 | 1.3660 | 0.87 |
| 0.0 | 31.0 | 23250 | 1.3781 | 0.87 |
| 0.0 | 32.0 | 24000 | 1.4541 | 0.86 |
| 0.0003 | 33.0 | 24750 | 1.3447 | 0.8717 |
| 0.0 | 34.0 | 25500 | 1.3846 | 0.8633 |
| 0.0 | 35.0 | 26250 | 1.3907 | 0.8733 |
| 0.0 | 36.0 | 27000 | 1.4240 | 0.87 |
| 0.0 | 37.0 | 27750 | 1.3878 | 0.8717 |
| 0.0 | 38.0 | 28500 | 1.4082 | 0.87 |
| 0.0 | 39.0 | 29250 | 1.4530 | 0.8717 |
| 0.0 | 40.0 | 30000 | 1.4653 | 0.8717 |
| 0.0 | 41.0 | 30750 | 1.4878 | 0.87 |
| 0.0 | 42.0 | 31500 | 1.5011 | 0.8717 |
| 0.0 | 43.0 | 32250 | 1.5107 | 0.8717 |
| 0.0 | 44.0 | 33000 | 1.5209 | 0.8717 |
| 0.0 | 45.0 | 33750 | 1.5429 | 0.8717 |
| 0.0 | 46.0 | 34500 | 1.5577 | 0.8717 |
| 0.0 | 47.0 | 35250 | 1.5684 | 0.8717 |
| 0.0 | 48.0 | 36000 | 1.5772 | 0.8717 |
| 0.0 | 49.0 | 36750 | 1.5824 | 0.8717 |
| 0.0 | 50.0 | 37500 | 1.5843 | 0.8717 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_tiny_adamax_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_tiny_adamax_001_fold5

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8586
- Accuracy: 0.915

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3782 | 1.0 | 750 | 0.3344 | 0.8667 |
| 0.2904 | 2.0 | 1500 | 0.3574 | 0.8483 |
| 0.2048 | 3.0 | 2250 | 0.3230 | 0.8817 |
| 0.2 | 4.0 | 3000 | 0.3479 | 0.8933 |
| 0.2233 | 5.0 | 3750 | 0.3431 | 0.8883 |
| 0.1334 | 6.0 | 4500 | 0.3350 | 0.9017 |
| 0.1268 | 7.0 | 5250 | 0.3335 | 0.8967 |
| 0.077 | 8.0 | 6000 | 0.4549 | 0.8883 |
| 0.0723 | 9.0 | 6750 | 0.3771 | 0.9067 |
| 0.0426 | 10.0 | 7500 | 0.4455 | 0.9017 |
| 0.0977 | 11.0 | 8250 | 0.4334 | 0.9067 |
| 0.0237 | 12.0 | 9000 | 0.5437 | 0.9 |
| 0.0358 | 13.0 | 9750 | 0.5148 | 0.885 |
| 0.0032 | 14.0 | 10500 | 0.6045 | 0.9083 |
| 0.0293 | 15.0 | 11250 | 0.6394 | 0.8933 |
| 0.0156 | 16.0 | 12000 | 0.6836 | 0.89 |
| 0.0548 | 17.0 | 12750 | 0.5770 | 0.9017 |
| 0.0127 | 18.0 | 13500 | 0.6663 | 0.8983 |
| 0.0203 | 19.0 | 14250 | 0.6791 | 0.905 |
| 0.0154 | 20.0 | 15000 | 0.6990 | 0.905 |
| 0.0128 | 21.0 | 15750 | 0.7251 | 0.9017 |
| 0.0003 | 22.0 | 16500 | 0.7324 | 0.8933 |
| 0.0024 | 23.0 | 17250 | 0.7123 | 0.9017 |
| 0.0015 | 24.0 | 18000 | 0.6502 | 0.9133 |
| 0.0109 | 25.0 | 18750 | 0.6676 | 0.9117 |
| 0.0004 | 26.0 | 19500 | 0.6984 | 0.9033 |
| 0.0105 | 27.0 | 20250 | 0.8181 | 0.8967 |
| 0.0029 | 28.0 | 21000 | 0.7764 | 0.9 |
| 0.0304 | 29.0 | 21750 | 0.7986 | 0.8967 |
| 0.008 | 30.0 | 22500 | 0.8233 | 0.895 |
| 0.0008 | 31.0 | 23250 | 0.8494 | 0.9033 |
| 0.0 | 32.0 | 24000 | 0.8041 | 0.91 |
| 0.0 | 33.0 | 24750 | 0.8842 | 0.9167 |
| 0.0 | 34.0 | 25500 | 0.7437 | 0.9233 |
| 0.0 | 35.0 | 26250 | 0.7405 | 0.925 |
| 0.0 | 36.0 | 27000 | 0.7962 | 0.9083 |
| 0.0059 | 37.0 | 27750 | 0.7867 | 0.9233 |
| 0.0 | 38.0 | 28500 | 0.8151 | 0.92 |
| 0.0 | 39.0 | 29250 | 0.8010 | 0.91 |
| 0.0 | 40.0 | 30000 | 0.8483 | 0.9133 |
| 0.0 | 41.0 | 30750 | 0.8225 | 0.9167 |
| 0.0 | 42.0 | 31500 | 0.8207 | 0.9167 |
| 0.0 | 43.0 | 32250 | 0.8290 | 0.915 |
| 0.0 | 44.0 | 33000 | 0.8408 | 0.915 |
| 0.0 | 45.0 | 33750 | 0.8374 | 0.9183 |
| 0.0 | 46.0 | 34500 | 0.8446 | 0.9167 |
| 0.0 | 47.0 | 35250 | 0.8518 | 0.915 |
| 0.0 | 48.0 | 36000 | 0.8526 | 0.915 |
| 0.0 | 49.0 | 36750 | 0.8568 | 0.9167 |
| 0.0 | 50.0 | 37500 | 0.8586 | 0.915 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_00001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8415
- Accuracy: 0.9132

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2066 | 1.0 | 751 | 0.2884 | 0.8948 |
| 0.1058 | 2.0 | 1502 | 0.3397 | 0.8898 |
| 0.0676 | 3.0 | 2253 | 0.3801 | 0.9098 |
| 0.0667 | 4.0 | 3004 | 0.5510 | 0.8998 |
| 0.0253 | 5.0 | 3755 | 0.5205 | 0.9048 |
| 0.0216 | 6.0 | 4506 | 0.6002 | 0.9082 |
| 0.0362 | 7.0 | 5257 | 0.8486 | 0.8815 |
| 0.0357 | 8.0 | 6008 | 0.6658 | 0.9082 |
| 0.035 | 9.0 | 6759 | 0.5803 | 0.9065 |
| 0.001 | 10.0 | 7510 | 0.6719 | 0.9082 |
| 0.0231 | 11.0 | 8261 | 0.7324 | 0.9098 |
| 0.0001 | 12.0 | 9012 | 0.6703 | 0.9082 |
| 0.0008 | 13.0 | 9763 | 0.8412 | 0.8965 |
| 0.0582 | 14.0 | 10514 | 0.7418 | 0.9115 |
| 0.0077 | 15.0 | 11265 | 0.8736 | 0.8998 |
| 0.0 | 16.0 | 12016 | 0.7725 | 0.9115 |
| 0.0019 | 17.0 | 12767 | 0.7981 | 0.9065 |
| 0.0064 | 18.0 | 13518 | 1.0317 | 0.8848 |
| 0.0 | 19.0 | 14269 | 0.8686 | 0.9048 |
| 0.0 | 20.0 | 15020 | 0.8083 | 0.9082 |
| 0.0 | 21.0 | 15771 | 0.6862 | 0.9115 |
| 0.0 | 22.0 | 16522 | 0.7317 | 0.9132 |
| 0.0 | 23.0 | 17273 | 0.6937 | 0.9182 |
| 0.0 | 24.0 | 18024 | 0.7497 | 0.9199 |
| 0.0 | 25.0 | 18775 | 0.8180 | 0.9098 |
| 0.0001 | 26.0 | 19526 | 0.8680 | 0.9098 |
| 0.0 | 27.0 | 20277 | 0.8268 | 0.9115 |
| 0.0 | 28.0 | 21028 | 0.8126 | 0.9082 |
| 0.0042 | 29.0 | 21779 | 0.8397 | 0.9048 |
| 0.024 | 30.0 | 22530 | 0.8418 | 0.9098 |
| 0.0 | 31.0 | 23281 | 0.8800 | 0.9065 |
| 0.0 | 32.0 | 24032 | 0.8577 | 0.9065 |
| 0.0 | 33.0 | 24783 | 0.7988 | 0.9048 |
| 0.0 | 34.0 | 25534 | 0.8415 | 0.9032 |
| 0.0 | 35.0 | 26285 | 0.8311 | 0.9132 |
| 0.0 | 36.0 | 27036 | 0.8203 | 0.9149 |
| 0.0 | 37.0 | 27787 | 0.8136 | 0.9165 |
| 0.0 | 38.0 | 28538 | 0.8129 | 0.9199 |
| 0.0 | 39.0 | 29289 | 0.8165 | 0.9199 |
| 0.0 | 40.0 | 30040 | 0.8151 | 0.9149 |
| 0.0 | 41.0 | 30791 | 0.8242 | 0.9115 |
| 0.0 | 42.0 | 31542 | 0.8225 | 0.9115 |
| 0.0 | 43.0 | 32293 | 0.8294 | 0.9132 |
| 0.0 | 44.0 | 33044 | 0.8299 | 0.9132 |
| 0.0 | 45.0 | 33795 | 0.8337 | 0.9132 |
| 0.0 | 46.0 | 34546 | 0.8327 | 0.9132 |
| 0.0 | 47.0 | 35297 | 0.8358 | 0.9132 |
| 0.0 | 48.0 | 36048 | 0.8374 | 0.9132 |
| 0.0 | 49.0 | 36799 | 0.8388 | 0.9115 |
| 0.0 | 50.0 | 37550 | 0.8415 | 0.9132 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_base_adamax_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_base_adamax_0001_fold1

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7705
- Accuracy: 0.9199

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1576 | 1.0 | 751 | 0.2860 | 0.9048 |
| 0.0603 | 2.0 | 1502 | 0.3372 | 0.9032 |
| 0.0282 | 3.0 | 2253 | 0.4104 | 0.9032 |
| 0.0596 | 4.0 | 3004 | 0.4768 | 0.9032 |
| 0.0033 | 5.0 | 3755 | 0.5266 | 0.9149 |
| 0.0006 | 6.0 | 4506 | 0.5128 | 0.9115 |
| 0.0015 | 7.0 | 5257 | 0.5742 | 0.9082 |
| 0.0026 | 8.0 | 6008 | 0.5961 | 0.9098 |
| 0.0 | 9.0 | 6759 | 0.5031 | 0.9115 |
| 0.0036 | 10.0 | 7510 | 0.6399 | 0.9115 |
| 0.0 | 11.0 | 8261 | 0.5907 | 0.9232 |
| 0.0001 | 12.0 | 9012 | 0.6159 | 0.9232 |
| 0.0 | 13.0 | 9763 | 0.6128 | 0.9182 |
| 0.0 | 14.0 | 10514 | 0.6345 | 0.9182 |
| 0.0 | 15.0 | 11265 | 0.6774 | 0.9199 |
| 0.0 | 16.0 | 12016 | 0.6357 | 0.9282 |
| 0.0 | 17.0 | 12767 | 0.7378 | 0.9149 |
| 0.0 | 18.0 | 13518 | 0.7242 | 0.9165 |
| 0.0 | 19.0 | 14269 | 0.7112 | 0.9082 |
| 0.0 | 20.0 | 15020 | 0.7275 | 0.9098 |
| 0.0 | 21.0 | 15771 | 0.8349 | 0.8948 |
| 0.0 | 22.0 | 16522 | 0.7912 | 0.9132 |
| 0.0 | 23.0 | 17273 | 0.7309 | 0.9149 |
| 0.0 | 24.0 | 18024 | 0.6807 | 0.9115 |
| 0.0 | 25.0 | 18775 | 0.8169 | 0.9065 |
| 0.0 | 26.0 | 19526 | 0.7364 | 0.9165 |
| 0.0 | 27.0 | 20277 | 0.7319 | 0.9182 |
| 0.0 | 28.0 | 21028 | 0.7198 | 0.9182 |
| 0.0031 | 29.0 | 21779 | 0.7870 | 0.9149 |
| 0.0 | 30.0 | 22530 | 0.7127 | 0.9182 |
| 0.0 | 31.0 | 23281 | 0.7309 | 0.9215 |
| 0.0 | 32.0 | 24032 | 0.7557 | 0.9115 |
| 0.0 | 33.0 | 24783 | 0.7371 | 0.9182 |
| 0.0 | 34.0 | 25534 | 0.7301 | 0.9199 |
| 0.0 | 35.0 | 26285 | 0.7669 | 0.9132 |
| 0.0 | 36.0 | 27036 | 0.7428 | 0.9182 |
| 0.0 | 37.0 | 27787 | 0.7916 | 0.9098 |
| 0.0 | 38.0 | 28538 | 0.7540 | 0.9182 |
| 0.0 | 39.0 | 29289 | 0.7662 | 0.9199 |
| 0.0 | 40.0 | 30040 | 0.7637 | 0.9199 |
| 0.0 | 41.0 | 30791 | 0.7639 | 0.9215 |
| 0.0 | 42.0 | 31542 | 0.7613 | 0.9249 |
| 0.0 | 43.0 | 32293 | 0.7603 | 0.9215 |
| 0.0 | 44.0 | 33044 | 0.7633 | 0.9215 |
| 0.0 | 45.0 | 33795 | 0.7654 | 0.9215 |
| 0.0 | 46.0 | 34546 | 0.7636 | 0.9215 |
| 0.0 | 47.0 | 35297 | 0.7674 | 0.9215 |
| 0.0 | 48.0 | 36048 | 0.7672 | 0.9215 |
| 0.0 | 49.0 | 36799 | 0.7679 | 0.9199 |
| 0.0 | 50.0 | 37550 | 0.7705 | 0.9199 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
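The step counts in these tables follow directly from the fold size and batch size: 751 steps per epoch at `train_batch_size: 32` implies roughly 24,001–24,032 training images for this fold (an inference from the table, not a figure stated in the card). A minimal sketch of that arithmetic, with a hypothetical example count in that range:

```python
import math

def schedule_size(num_examples: int, batch_size: int, num_epochs: int):
    """Steps per epoch and total optimizer steps for a plain training loop
    (no gradient accumulation, last partial batch kept)."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    return steps_per_epoch, steps_per_epoch * num_epochs
```

For example, `schedule_size(24022, 32, 50)` gives 751 steps per epoch and 37,550 total steps, matching the final row of this card's table.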
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_00001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1223
- Accuracy: 0.8952

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2125 | 1.0 | 750 | 0.2971 | 0.8636 |
| 0.1007 | 2.0 | 1500 | 0.3569 | 0.8902 |
| 0.033 | 3.0 | 2250 | 0.4786 | 0.8852 |
| 0.0414 | 4.0 | 3000 | 0.6308 | 0.8719 |
| 0.0169 | 5.0 | 3750 | 0.7881 | 0.8769 |
| 0.0209 | 6.0 | 4500 | 0.8756 | 0.8802 |
| 0.0232 | 7.0 | 5250 | 0.7942 | 0.8785 |
| 0.0001 | 8.0 | 6000 | 0.8024 | 0.8885 |
| 0.0037 | 9.0 | 6750 | 0.9766 | 0.8852 |
| 0.0663 | 10.0 | 7500 | 0.9288 | 0.8785 |
| 0.0416 | 11.0 | 8250 | 1.0051 | 0.8835 |
| 0.0257 | 12.0 | 9000 | 1.1036 | 0.8752 |
| 0.0003 | 13.0 | 9750 | 0.9284 | 0.8835 |
| 0.0007 | 14.0 | 10500 | 0.9766 | 0.8752 |
| 0.0009 | 15.0 | 11250 | 1.0060 | 0.8869 |
| 0.024 | 16.0 | 12000 | 0.9566 | 0.8918 |
| 0.0002 | 17.0 | 12750 | 0.9308 | 0.8985 |
| 0.0226 | 18.0 | 13500 | 0.9878 | 0.8952 |
| 0.0002 | 19.0 | 14250 | 1.0344 | 0.8802 |
| 0.0 | 20.0 | 15000 | 1.0012 | 0.8902 |
| 0.0 | 21.0 | 15750 | 1.0757 | 0.8852 |
| 0.0197 | 22.0 | 16500 | 1.1327 | 0.8918 |
| 0.0059 | 23.0 | 17250 | 1.1959 | 0.8785 |
| 0.014 | 24.0 | 18000 | 0.9244 | 0.8918 |
| 0.0 | 25.0 | 18750 | 1.0134 | 0.8952 |
| 0.0001 | 26.0 | 19500 | 1.2273 | 0.8735 |
| 0.0081 | 27.0 | 20250 | 1.2216 | 0.8735 |
| 0.0 | 28.0 | 21000 | 1.1304 | 0.8769 |
| 0.0 | 29.0 | 21750 | 0.9499 | 0.8902 |
| 0.0 | 30.0 | 22500 | 1.0368 | 0.8885 |
| 0.0 | 31.0 | 23250 | 1.0392 | 0.8852 |
| 0.0038 | 32.0 | 24000 | 1.2288 | 0.8835 |
| 0.0 | 33.0 | 24750 | 1.1678 | 0.8952 |
| 0.0 | 34.0 | 25500 | 1.0162 | 0.8918 |
| 0.0 | 35.0 | 26250 | 1.0770 | 0.8918 |
| 0.0 | 36.0 | 27000 | 1.0678 | 0.8902 |
| 0.0067 | 37.0 | 27750 | 1.0739 | 0.8935 |
| 0.0 | 38.0 | 28500 | 1.1577 | 0.8935 |
| 0.0 | 39.0 | 29250 | 1.1277 | 0.8935 |
| 0.0 | 40.0 | 30000 | 1.1396 | 0.8918 |
| 0.0 | 41.0 | 30750 | 1.1296 | 0.8952 |
| 0.0 | 42.0 | 31500 | 1.1324 | 0.8935 |
| 0.0 | 43.0 | 32250 | 1.1390 | 0.8918 |
| 0.0 | 44.0 | 33000 | 1.1307 | 0.8952 |
| 0.0025 | 45.0 | 33750 | 1.1302 | 0.8918 |
| 0.0 | 46.0 | 34500 | 1.1293 | 0.8935 |
| 0.0 | 47.0 | 35250 | 1.1264 | 0.8935 |
| 0.0 | 48.0 | 36000 | 1.1267 | 0.8952 |
| 0.0 | 49.0 | 36750 | 1.1233 | 0.8952 |
| 0.0 | 50.0 | 37500 | 1.1223 | 0.8952 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_10x_deit_small_rms_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# smids_10x_deit_small_rms_00001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0298
- Accuracy: 0.9117

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.17 | 1.0 | 750 | 0.2376 | 0.9167 |
| 0.1586 | 2.0 | 1500 | 0.3127 | 0.9117 |
| 0.0361 | 3.0 | 2250 | 0.4163 | 0.9033 |
| 0.0337 | 4.0 | 3000 | 0.5484 | 0.89 |
| 0.0423 | 5.0 | 3750 | 0.5723 | 0.91 |
| 0.0287 | 6.0 | 4500 | 0.6189 | 0.905 |
| 0.0936 | 7.0 | 5250 | 0.8112 | 0.895 |
| 0.0008 | 8.0 | 6000 | 0.7783 | 0.8967 |
| 0.0117 | 9.0 | 6750 | 0.8326 | 0.8983 |
| 0.0553 | 10.0 | 7500 | 0.8619 | 0.8967 |
| 0.0 | 11.0 | 8250 | 0.7664 | 0.895 |
| 0.0 | 12.0 | 9000 | 0.9064 | 0.8917 |
| 0.022 | 13.0 | 9750 | 0.8767 | 0.905 |
| 0.0003 | 14.0 | 10500 | 0.8256 | 0.8967 |
| 0.0 | 15.0 | 11250 | 0.8920 | 0.8983 |
| 0.0 | 16.0 | 12000 | 0.8919 | 0.8883 |
| 0.0005 | 17.0 | 12750 | 0.7873 | 0.91 |
| 0.0002 | 18.0 | 13500 | 1.0358 | 0.8833 |
| 0.0 | 19.0 | 14250 | 0.9585 | 0.8933 |
| 0.0 | 20.0 | 15000 | 0.9183 | 0.8933 |
| 0.02 | 21.0 | 15750 | 1.0608 | 0.8867 |
| 0.0 | 22.0 | 16500 | 0.9497 | 0.8983 |
| 0.0 | 23.0 | 17250 | 0.9676 | 0.895 |
| 0.0 | 24.0 | 18000 | 0.9490 | 0.8983 |
| 0.001 | 25.0 | 18750 | 1.0068 | 0.8983 |
| 0.0001 | 26.0 | 19500 | 0.9409 | 0.9017 |
| 0.0 | 27.0 | 20250 | 0.9205 | 0.8933 |
| 0.0006 | 28.0 | 21000 | 0.9294 | 0.9033 |
| 0.0 | 29.0 | 21750 | 0.9650 | 0.8917 |
| 0.0 | 30.0 | 22500 | 1.0551 | 0.8933 |
| 0.0 | 31.0 | 23250 | 0.9687 | 0.895 |
| 0.0 | 32.0 | 24000 | 0.9869 | 0.8933 |
| 0.0 | 33.0 | 24750 | 0.9708 | 0.905 |
| 0.0 | 34.0 | 25500 | 0.9496 | 0.9067 |
| 0.0 | 35.0 | 26250 | 0.9626 | 0.91 |
| 0.0 | 36.0 | 27000 | 1.0150 | 0.9033 |
| 0.0 | 37.0 | 27750 | 0.9930 | 0.9017 |
| 0.0 | 38.0 | 28500 | 0.9861 | 0.905 |
| 0.0 | 39.0 | 29250 | 1.0163 | 0.9033 |
| 0.0 | 40.0 | 30000 | 1.0159 | 0.9017 |
| 0.0 | 41.0 | 30750 | 1.0242 | 0.9033 |
| 0.0 | 42.0 | 31500 | 1.0278 | 0.905 |
| 0.0 | 43.0 | 32250 | 1.0282 | 0.9033 |
| 0.0 | 44.0 | 33000 | 1.0251 | 0.9083 |
| 0.0 | 45.0 | 33750 | 1.0218 | 0.91 |
| 0.0 | 46.0 | 34500 | 1.0275 | 0.91 |
| 0.0 | 47.0 | 35250 | 1.0270 | 0.9083 |
| 0.0 | 48.0 | 36000 | 1.0297 | 0.91 |
| 0.0 | 49.0 | 36750 | 1.0292 | 0.91 |
| 0.0 | 50.0 | 37500 | 1.0298 | 0.9117 |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
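A pattern worth noting across these folds: the training loss collapses to 0.0 long before epoch 50 while the validation loss keeps climbing, so the final checkpoint these cards report is generally not the one with the lowest validation loss (for this fold, epoch 1 at 0.2376). The cards give no indication that best-model selection was used; if one were to add it when post-processing such logs, a hypothetical helper might look like:

```python
def best_epoch(val_losses):
    """Return (1-based epoch, loss) for the checkpoint with the lowest validation loss."""
    i = min(range(len(val_losses)), key=val_losses.__getitem__)
    return i + 1, val_losses[i]
```

For example, feeding it the first four validation losses of this fold's table (0.2376, 0.3127, 0.4163, 0.5484) selects epoch 1.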
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]