model_id (string, 7–105 chars) | model_card (string, 1–130k chars) | model_labels (list, 2–80k items)
---|---|---|
hkivancoral/smids_5x_deit_tiny_sgd_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4623
- Accuracy: 0.8217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
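The step counts in the results table below follow directly from these hyperparameters. A minimal sketch of that arithmetic, assuming the training split holds roughly 12,000 images (inferred from 375 steps/epoch × batch size 32 — the card itself does not state the split size):

```python
import math

# Hyperparameters taken from the list above.
train_batch_size = 32
num_epochs = 50
warmup_ratio = 0.1

# Assumed training-set size, inferred from the step table (375 * 32).
num_train_images = 12000

steps_per_epoch = math.ceil(num_train_images / train_batch_size)
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(total_steps * warmup_ratio)

print(steps_per_epoch)  # 375, matching the per-epoch step increments below
print(total_steps)      # 18750, matching the final step at epoch 50
print(warmup_steps)     # 1875, the linear-warmup portion of the schedule
```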
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1763 | 1.0 | 375 | 1.1547 | 0.3917 |
| 1.0966 | 2.0 | 750 | 1.0897 | 0.42 |
| 1.0223 | 3.0 | 1125 | 1.0444 | 0.46 |
| 0.9886 | 4.0 | 1500 | 1.0052 | 0.4917 |
| 0.9546 | 5.0 | 1875 | 0.9693 | 0.515 |
| 0.932 | 6.0 | 2250 | 0.9344 | 0.54 |
| 0.8619 | 7.0 | 2625 | 0.9000 | 0.57 |
| 0.857 | 8.0 | 3000 | 0.8647 | 0.5967 |
| 0.8079 | 9.0 | 3375 | 0.8304 | 0.62 |
| 0.7619 | 10.0 | 3750 | 0.7976 | 0.645 |
| 0.7316 | 11.0 | 4125 | 0.7657 | 0.665 |
| 0.6666 | 12.0 | 4500 | 0.7355 | 0.68 |
| 0.6961 | 13.0 | 4875 | 0.7078 | 0.69 |
| 0.6607 | 14.0 | 5250 | 0.6819 | 0.7083 |
| 0.6448 | 15.0 | 5625 | 0.6579 | 0.725 |
| 0.6031 | 16.0 | 6000 | 0.6371 | 0.7333 |
| 0.633 | 17.0 | 6375 | 0.6195 | 0.7433 |
| 0.6177 | 18.0 | 6750 | 0.6022 | 0.7533 |
| 0.5854 | 19.0 | 7125 | 0.5875 | 0.765 |
| 0.5213 | 20.0 | 7500 | 0.5748 | 0.77 |
| 0.5296 | 21.0 | 7875 | 0.5628 | 0.7833 |
| 0.5226 | 22.0 | 8250 | 0.5527 | 0.7917 |
| 0.5777 | 23.0 | 8625 | 0.5439 | 0.795 |
| 0.5616 | 24.0 | 9000 | 0.5354 | 0.8017 |
| 0.5254 | 25.0 | 9375 | 0.5279 | 0.8067 |
| 0.5443 | 26.0 | 9750 | 0.5213 | 0.8067 |
| 0.5349 | 27.0 | 10125 | 0.5152 | 0.8133 |
| 0.5476 | 28.0 | 10500 | 0.5090 | 0.8133 |
| 0.5198 | 29.0 | 10875 | 0.5041 | 0.815 |
| 0.4665 | 30.0 | 11250 | 0.4997 | 0.8167 |
| 0.5013 | 31.0 | 11625 | 0.4955 | 0.8167 |
| 0.5242 | 32.0 | 12000 | 0.4917 | 0.8167 |
| 0.5162 | 33.0 | 12375 | 0.4881 | 0.8167 |
| 0.5094 | 34.0 | 12750 | 0.4847 | 0.815 |
| 0.4537 | 35.0 | 13125 | 0.4817 | 0.8167 |
| 0.4056 | 36.0 | 13500 | 0.4788 | 0.8167 |
| 0.4566 | 37.0 | 13875 | 0.4763 | 0.8167 |
| 0.4864 | 38.0 | 14250 | 0.4740 | 0.8183 |
| 0.4572 | 39.0 | 14625 | 0.4721 | 0.82 |
| 0.5272 | 40.0 | 15000 | 0.4702 | 0.82 |
| 0.4662 | 41.0 | 15375 | 0.4685 | 0.82 |
| 0.4598 | 42.0 | 15750 | 0.4671 | 0.82 |
| 0.4764 | 43.0 | 16125 | 0.4660 | 0.82 |
| 0.4497 | 44.0 | 16500 | 0.4650 | 0.82 |
| 0.4734 | 45.0 | 16875 | 0.4641 | 0.82 |
| 0.4953 | 46.0 | 17250 | 0.4634 | 0.82 |
| 0.4817 | 47.0 | 17625 | 0.4629 | 0.8217 |
| 0.4691 | 48.0 | 18000 | 0.4625 | 0.8217 |
| 0.4502 | 49.0 | 18375 | 0.4623 | 0.8217 |
| 0.4257 | 50.0 | 18750 | 0.4623 | 0.8217 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
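The `model_labels` list above gives the class names in index order. A minimal sketch of the `id2label`/`label2id` mappings such a classifier's config would typically carry (the mapping construction here is illustrative, not copied from the model's actual config):

```python
# Class names from the model_labels column, in index order.
labels = ["abnormal_sperm", "non-sperm", "normal_sperm"]

# Forward and reverse mappings as used by image-classification configs.
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in id2label.items()}

print(id2label[2])              # normal_sperm
print(label2id["non-sperm"])    # 1
```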
hkivancoral/smids_5x_beit_base_sgd_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_sgd_0001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4817
- Accuracy: 0.7983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1553 | 1.0 | 375 | 1.2548 | 0.355 |
| 1.0974 | 2.0 | 750 | 1.1671 | 0.3783 |
| 0.9812 | 3.0 | 1125 | 1.0817 | 0.4167 |
| 0.9358 | 4.0 | 1500 | 1.0016 | 0.455 |
| 0.8746 | 5.0 | 1875 | 0.9264 | 0.53 |
| 0.8021 | 6.0 | 2250 | 0.8621 | 0.5983 |
| 0.7908 | 7.0 | 2625 | 0.8069 | 0.6717 |
| 0.763 | 8.0 | 3000 | 0.7629 | 0.69 |
| 0.6997 | 9.0 | 3375 | 0.7270 | 0.7133 |
| 0.6962 | 10.0 | 3750 | 0.6950 | 0.7167 |
| 0.6391 | 11.0 | 4125 | 0.6689 | 0.7283 |
| 0.6231 | 12.0 | 4500 | 0.6467 | 0.735 |
| 0.6127 | 13.0 | 4875 | 0.6274 | 0.7483 |
| 0.6297 | 14.0 | 5250 | 0.6106 | 0.7517 |
| 0.6056 | 15.0 | 5625 | 0.5959 | 0.7633 |
| 0.5383 | 16.0 | 6000 | 0.5842 | 0.76 |
| 0.5862 | 17.0 | 6375 | 0.5727 | 0.76 |
| 0.5466 | 18.0 | 6750 | 0.5631 | 0.7683 |
| 0.6063 | 19.0 | 7125 | 0.5554 | 0.77 |
| 0.5382 | 20.0 | 7500 | 0.5477 | 0.7733 |
| 0.5719 | 21.0 | 7875 | 0.5406 | 0.7733 |
| 0.5194 | 22.0 | 8250 | 0.5342 | 0.7833 |
| 0.5408 | 23.0 | 8625 | 0.5290 | 0.7833 |
| 0.5327 | 24.0 | 9000 | 0.5248 | 0.7817 |
| 0.5341 | 25.0 | 9375 | 0.5207 | 0.7833 |
| 0.5248 | 26.0 | 9750 | 0.5168 | 0.7833 |
| 0.5823 | 27.0 | 10125 | 0.5128 | 0.7867 |
| 0.4919 | 28.0 | 10500 | 0.5098 | 0.79 |
| 0.4902 | 29.0 | 10875 | 0.5067 | 0.7933 |
| 0.5047 | 30.0 | 11250 | 0.5038 | 0.795 |
| 0.4943 | 31.0 | 11625 | 0.5008 | 0.7983 |
| 0.5058 | 32.0 | 12000 | 0.4990 | 0.7983 |
| 0.4976 | 33.0 | 12375 | 0.4965 | 0.7967 |
| 0.5168 | 34.0 | 12750 | 0.4952 | 0.795 |
| 0.5069 | 35.0 | 13125 | 0.4933 | 0.795 |
| 0.4844 | 36.0 | 13500 | 0.4915 | 0.7967 |
| 0.5181 | 37.0 | 13875 | 0.4900 | 0.7983 |
| 0.5125 | 38.0 | 14250 | 0.4886 | 0.7983 |
| 0.5414 | 39.0 | 14625 | 0.4875 | 0.7983 |
| 0.5265 | 40.0 | 15000 | 0.4865 | 0.7983 |
| 0.5089 | 41.0 | 15375 | 0.4855 | 0.7983 |
| 0.5041 | 42.0 | 15750 | 0.4845 | 0.7983 |
| 0.5029 | 43.0 | 16125 | 0.4836 | 0.7983 |
| 0.4723 | 44.0 | 16500 | 0.4830 | 0.7983 |
| 0.4754 | 45.0 | 16875 | 0.4827 | 0.7983 |
| 0.4906 | 46.0 | 17250 | 0.4823 | 0.7983 |
| 0.5249 | 47.0 | 17625 | 0.4820 | 0.7983 |
| 0.4858 | 48.0 | 18000 | 0.4818 | 0.7983 |
| 0.4635 | 49.0 | 18375 | 0.4818 | 0.7983 |
| 0.4753 | 50.0 | 18750 | 0.4817 | 0.7983 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_sgd_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_sgd_001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4156
- Accuracy: 0.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6893 | 1.0 | 375 | 0.6764 | 0.7333 |
| 0.5522 | 2.0 | 750 | 0.5194 | 0.7917 |
| 0.409 | 3.0 | 1125 | 0.4634 | 0.805 |
| 0.4507 | 4.0 | 1500 | 0.4337 | 0.81 |
| 0.416 | 5.0 | 1875 | 0.4157 | 0.8233 |
| 0.3067 | 6.0 | 2250 | 0.4110 | 0.825 |
| 0.3644 | 7.0 | 2625 | 0.3987 | 0.8333 |
| 0.3243 | 8.0 | 3000 | 0.3991 | 0.835 |
| 0.3062 | 9.0 | 3375 | 0.3919 | 0.8383 |
| 0.308 | 10.0 | 3750 | 0.3990 | 0.8383 |
| 0.2735 | 11.0 | 4125 | 0.3981 | 0.835 |
| 0.2384 | 12.0 | 4500 | 0.3880 | 0.8417 |
| 0.2357 | 13.0 | 4875 | 0.3907 | 0.845 |
| 0.3175 | 14.0 | 5250 | 0.3900 | 0.8417 |
| 0.2423 | 15.0 | 5625 | 0.3853 | 0.8517 |
| 0.1987 | 16.0 | 6000 | 0.3848 | 0.8433 |
| 0.2594 | 17.0 | 6375 | 0.3874 | 0.845 |
| 0.2225 | 18.0 | 6750 | 0.3883 | 0.8533 |
| 0.247 | 19.0 | 7125 | 0.3920 | 0.8383 |
| 0.2235 | 20.0 | 7500 | 0.3894 | 0.8433 |
| 0.2203 | 21.0 | 7875 | 0.3971 | 0.8417 |
| 0.2258 | 22.0 | 8250 | 0.3954 | 0.8533 |
| 0.2363 | 23.0 | 8625 | 0.3968 | 0.845 |
| 0.2288 | 24.0 | 9000 | 0.3993 | 0.8467 |
| 0.2646 | 25.0 | 9375 | 0.4039 | 0.84 |
| 0.1839 | 26.0 | 9750 | 0.3987 | 0.8433 |
| 0.2779 | 27.0 | 10125 | 0.4000 | 0.845 |
| 0.1848 | 28.0 | 10500 | 0.4019 | 0.8367 |
| 0.2029 | 29.0 | 10875 | 0.4110 | 0.84 |
| 0.2593 | 30.0 | 11250 | 0.4030 | 0.845 |
| 0.2187 | 31.0 | 11625 | 0.4051 | 0.8417 |
| 0.1821 | 32.0 | 12000 | 0.4072 | 0.8467 |
| 0.2095 | 33.0 | 12375 | 0.4076 | 0.8433 |
| 0.2109 | 34.0 | 12750 | 0.4087 | 0.8433 |
| 0.1759 | 35.0 | 13125 | 0.4129 | 0.84 |
| 0.1595 | 36.0 | 13500 | 0.4130 | 0.8433 |
| 0.2131 | 37.0 | 13875 | 0.4150 | 0.84 |
| 0.2036 | 38.0 | 14250 | 0.4132 | 0.85 |
| 0.247 | 39.0 | 14625 | 0.4135 | 0.8433 |
| 0.2148 | 40.0 | 15000 | 0.4147 | 0.8433 |
| 0.2333 | 41.0 | 15375 | 0.4120 | 0.8433 |
| 0.213 | 42.0 | 15750 | 0.4128 | 0.8433 |
| 0.1929 | 43.0 | 16125 | 0.4163 | 0.84 |
| 0.1822 | 44.0 | 16500 | 0.4161 | 0.845 |
| 0.2316 | 45.0 | 16875 | 0.4158 | 0.845 |
| 0.1873 | 46.0 | 17250 | 0.4147 | 0.845 |
| 0.2645 | 47.0 | 17625 | 0.4157 | 0.845 |
| 0.1954 | 48.0 | 18000 | 0.4157 | 0.845 |
| 0.1804 | 49.0 | 18375 | 0.4155 | 0.8467 |
| 0.1952 | 50.0 | 18750 | 0.4156 | 0.8467 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_sgd_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_sgd_0001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4509
- Accuracy: 0.8217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.119 | 1.0 | 375 | 1.2346 | 0.355 |
| 1.0823 | 2.0 | 750 | 1.1546 | 0.3883 |
| 0.9626 | 3.0 | 1125 | 1.0734 | 0.4317 |
| 0.8877 | 4.0 | 1500 | 0.9982 | 0.49 |
| 0.8704 | 5.0 | 1875 | 0.9307 | 0.5383 |
| 0.82 | 6.0 | 2250 | 0.8704 | 0.5833 |
| 0.7688 | 7.0 | 2625 | 0.8174 | 0.63 |
| 0.7442 | 8.0 | 3000 | 0.7715 | 0.6583 |
| 0.726 | 9.0 | 3375 | 0.7322 | 0.6833 |
| 0.6394 | 10.0 | 3750 | 0.6995 | 0.695 |
| 0.6379 | 11.0 | 4125 | 0.6704 | 0.7167 |
| 0.6169 | 12.0 | 4500 | 0.6463 | 0.735 |
| 0.5956 | 13.0 | 4875 | 0.6254 | 0.7483 |
| 0.617 | 14.0 | 5250 | 0.6064 | 0.7617 |
| 0.6108 | 15.0 | 5625 | 0.5908 | 0.7717 |
| 0.578 | 16.0 | 6000 | 0.5757 | 0.7767 |
| 0.591 | 17.0 | 6375 | 0.5636 | 0.7817 |
| 0.5966 | 18.0 | 6750 | 0.5528 | 0.785 |
| 0.6007 | 19.0 | 7125 | 0.5440 | 0.7883 |
| 0.5282 | 20.0 | 7500 | 0.5352 | 0.7983 |
| 0.5197 | 21.0 | 7875 | 0.5276 | 0.8033 |
| 0.5125 | 22.0 | 8250 | 0.5198 | 0.8067 |
| 0.5868 | 23.0 | 8625 | 0.5131 | 0.8083 |
| 0.5885 | 24.0 | 9000 | 0.5069 | 0.8117 |
| 0.5176 | 25.0 | 9375 | 0.5018 | 0.8133 |
| 0.5257 | 26.0 | 9750 | 0.4968 | 0.8167 |
| 0.563 | 27.0 | 10125 | 0.4923 | 0.8167 |
| 0.5177 | 28.0 | 10500 | 0.4880 | 0.815 |
| 0.5208 | 29.0 | 10875 | 0.4843 | 0.8183 |
| 0.4749 | 30.0 | 11250 | 0.4801 | 0.8183 |
| 0.5211 | 31.0 | 11625 | 0.4762 | 0.8167 |
| 0.5578 | 32.0 | 12000 | 0.4734 | 0.8167 |
| 0.5196 | 33.0 | 12375 | 0.4707 | 0.8183 |
| 0.5191 | 34.0 | 12750 | 0.4684 | 0.82 |
| 0.4852 | 35.0 | 13125 | 0.4662 | 0.82 |
| 0.4553 | 36.0 | 13500 | 0.4634 | 0.8183 |
| 0.4575 | 37.0 | 13875 | 0.4613 | 0.82 |
| 0.5121 | 38.0 | 14250 | 0.4600 | 0.82 |
| 0.4948 | 39.0 | 14625 | 0.4580 | 0.82 |
| 0.5112 | 40.0 | 15000 | 0.4566 | 0.82 |
| 0.5002 | 41.0 | 15375 | 0.4556 | 0.82 |
| 0.4865 | 42.0 | 15750 | 0.4545 | 0.82 |
| 0.5291 | 43.0 | 16125 | 0.4534 | 0.82 |
| 0.4479 | 44.0 | 16500 | 0.4529 | 0.82 |
| 0.4858 | 45.0 | 16875 | 0.4523 | 0.82 |
| 0.5195 | 46.0 | 17250 | 0.4518 | 0.82 |
| 0.5088 | 47.0 | 17625 | 0.4513 | 0.8217 |
| 0.4798 | 48.0 | 18000 | 0.4511 | 0.8217 |
| 0.4938 | 49.0 | 18375 | 0.4509 | 0.8217 |
| 0.4932 | 50.0 | 18750 | 0.4509 | 0.8217 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_sgd_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_sgd_001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2832
- Accuracy: 0.8867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6263 | 1.0 | 375 | 0.6777 | 0.725 |
| 0.5866 | 2.0 | 750 | 0.4870 | 0.81 |
| 0.4483 | 3.0 | 1125 | 0.4247 | 0.825 |
| 0.4271 | 4.0 | 1500 | 0.3855 | 0.84 |
| 0.4066 | 5.0 | 1875 | 0.3633 | 0.8467 |
| 0.3828 | 6.0 | 2250 | 0.3474 | 0.8417 |
| 0.309 | 7.0 | 2625 | 0.3371 | 0.8583 |
| 0.3188 | 8.0 | 3000 | 0.3295 | 0.86 |
| 0.3147 | 9.0 | 3375 | 0.3210 | 0.8633 |
| 0.2842 | 10.0 | 3750 | 0.3163 | 0.8633 |
| 0.258 | 11.0 | 4125 | 0.3059 | 0.87 |
| 0.2796 | 12.0 | 4500 | 0.3036 | 0.8717 |
| 0.2552 | 13.0 | 4875 | 0.2994 | 0.87 |
| 0.2763 | 14.0 | 5250 | 0.2979 | 0.8633 |
| 0.2925 | 15.0 | 5625 | 0.3004 | 0.865 |
| 0.2222 | 16.0 | 6000 | 0.2915 | 0.8767 |
| 0.2839 | 17.0 | 6375 | 0.2879 | 0.8783 |
| 0.2546 | 18.0 | 6750 | 0.2876 | 0.88 |
| 0.2528 | 19.0 | 7125 | 0.2899 | 0.8817 |
| 0.1895 | 20.0 | 7500 | 0.2841 | 0.885 |
| 0.2366 | 21.0 | 7875 | 0.2901 | 0.8767 |
| 0.2149 | 22.0 | 8250 | 0.2831 | 0.8883 |
| 0.2987 | 23.0 | 8625 | 0.2845 | 0.8833 |
| 0.232 | 24.0 | 9000 | 0.2818 | 0.885 |
| 0.2416 | 25.0 | 9375 | 0.2809 | 0.8883 |
| 0.2147 | 26.0 | 9750 | 0.2789 | 0.8867 |
| 0.2824 | 27.0 | 10125 | 0.2796 | 0.8883 |
| 0.2229 | 28.0 | 10500 | 0.2814 | 0.8883 |
| 0.2625 | 29.0 | 10875 | 0.2884 | 0.8767 |
| 0.1908 | 30.0 | 11250 | 0.2826 | 0.885 |
| 0.2464 | 31.0 | 11625 | 0.2786 | 0.8867 |
| 0.2333 | 32.0 | 12000 | 0.2809 | 0.89 |
| 0.2568 | 33.0 | 12375 | 0.2768 | 0.8867 |
| 0.2444 | 34.0 | 12750 | 0.2777 | 0.8883 |
| 0.1971 | 35.0 | 13125 | 0.2787 | 0.8883 |
| 0.1586 | 36.0 | 13500 | 0.2808 | 0.8867 |
| 0.1628 | 37.0 | 13875 | 0.2838 | 0.8817 |
| 0.2206 | 38.0 | 14250 | 0.2772 | 0.8867 |
| 0.1707 | 39.0 | 14625 | 0.2818 | 0.8833 |
| 0.2328 | 40.0 | 15000 | 0.2820 | 0.8867 |
| 0.1705 | 41.0 | 15375 | 0.2828 | 0.89 |
| 0.1753 | 42.0 | 15750 | 0.2851 | 0.8867 |
| 0.2269 | 43.0 | 16125 | 0.2832 | 0.8933 |
| 0.1772 | 44.0 | 16500 | 0.2830 | 0.8883 |
| 0.235 | 45.0 | 16875 | 0.2841 | 0.8883 |
| 0.251 | 46.0 | 17250 | 0.2828 | 0.8867 |
| 0.2199 | 47.0 | 17625 | 0.2831 | 0.8883 |
| 0.1679 | 48.0 | 18000 | 0.2835 | 0.8867 |
| 0.2096 | 49.0 | 18375 | 0.2833 | 0.8867 |
| 0.22 | 50.0 | 18750 | 0.2832 | 0.8867 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_sgd_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0635
- Accuracy: 0.4541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.346 | 1.0 | 376 | 1.2991 | 0.3489 |
| 1.3817 | 2.0 | 752 | 1.2686 | 0.3589 |
| 1.3103 | 3.0 | 1128 | 1.2425 | 0.3656 |
| 1.3556 | 4.0 | 1504 | 1.2205 | 0.3656 |
| 1.2443 | 5.0 | 1880 | 1.2020 | 0.3723 |
| 1.1947 | 6.0 | 2256 | 1.1865 | 0.3806 |
| 1.184 | 7.0 | 2632 | 1.1737 | 0.3940 |
| 1.2121 | 8.0 | 3008 | 1.1630 | 0.3873 |
| 1.1793 | 9.0 | 3384 | 1.1540 | 0.3773 |
| 1.1564 | 10.0 | 3760 | 1.1464 | 0.3740 |
| 1.148 | 11.0 | 4136 | 1.1397 | 0.3756 |
| 1.1774 | 12.0 | 4512 | 1.1340 | 0.3756 |
| 1.1493 | 13.0 | 4888 | 1.1288 | 0.3790 |
| 1.1491 | 14.0 | 5264 | 1.1241 | 0.3790 |
| 1.1465 | 15.0 | 5640 | 1.1198 | 0.3856 |
| 1.1089 | 16.0 | 6016 | 1.1159 | 0.3990 |
| 1.1015 | 17.0 | 6392 | 1.1122 | 0.4057 |
| 1.1166 | 18.0 | 6768 | 1.1086 | 0.4073 |
| 1.1502 | 19.0 | 7144 | 1.1053 | 0.4124 |
| 1.124 | 20.0 | 7520 | 1.1022 | 0.4174 |
| 1.1102 | 21.0 | 7896 | 1.0992 | 0.4207 |
| 1.0904 | 22.0 | 8272 | 1.0964 | 0.4190 |
| 1.0897 | 23.0 | 8648 | 1.0937 | 0.4207 |
| 1.1449 | 24.0 | 9024 | 1.0912 | 0.4190 |
| 1.0609 | 25.0 | 9400 | 1.0888 | 0.4157 |
| 1.0747 | 26.0 | 9776 | 1.0865 | 0.4207 |
| 1.0631 | 27.0 | 10152 | 1.0844 | 0.4240 |
| 1.0872 | 28.0 | 10528 | 1.0823 | 0.4274 |
| 1.0811 | 29.0 | 10904 | 1.0804 | 0.4290 |
| 1.1082 | 30.0 | 11280 | 1.0786 | 0.4307 |
| 1.0863 | 31.0 | 11656 | 1.0769 | 0.4324 |
| 1.103 | 32.0 | 12032 | 1.0753 | 0.4290 |
| 1.0918 | 33.0 | 12408 | 1.0738 | 0.4324 |
| 1.06 | 34.0 | 12784 | 1.0725 | 0.4391 |
| 1.0723 | 35.0 | 13160 | 1.0712 | 0.4424 |
| 1.0366 | 36.0 | 13536 | 1.0701 | 0.4457 |
| 1.0655 | 37.0 | 13912 | 1.0690 | 0.4474 |
| 1.0787 | 38.0 | 14288 | 1.0681 | 0.4457 |
| 1.0751 | 39.0 | 14664 | 1.0672 | 0.4474 |
| 1.0508 | 40.0 | 15040 | 1.0665 | 0.4541 |
| 1.0565 | 41.0 | 15416 | 1.0658 | 0.4541 |
| 1.0404 | 42.0 | 15792 | 1.0652 | 0.4541 |
| 1.0767 | 43.0 | 16168 | 1.0648 | 0.4541 |
| 1.076 | 44.0 | 16544 | 1.0644 | 0.4541 |
| 1.0183 | 45.0 | 16920 | 1.0640 | 0.4541 |
| 1.0393 | 46.0 | 17296 | 1.0638 | 0.4541 |
| 1.065 | 47.0 | 17672 | 1.0636 | 0.4541 |
| 1.0432 | 48.0 | 18048 | 1.0635 | 0.4541 |
| 1.0432 | 49.0 | 18424 | 1.0635 | 0.4541 |
| 1.0255 | 50.0 | 18800 | 1.0635 | 0.4541 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_sgd_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0573
- Accuracy: 0.4476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3977 | 1.0 | 375 | 1.3189 | 0.3428 |
| 1.3124 | 2.0 | 750 | 1.2870 | 0.3411 |
| 1.2542 | 3.0 | 1125 | 1.2595 | 0.3378 |
| 1.2046 | 4.0 | 1500 | 1.2361 | 0.3478 |
| 1.2563 | 5.0 | 1875 | 1.2165 | 0.3544 |
| 1.2759 | 6.0 | 2250 | 1.1999 | 0.3561 |
| 1.1771 | 7.0 | 2625 | 1.1858 | 0.3527 |
| 1.1858 | 8.0 | 3000 | 1.1739 | 0.3710 |
| 1.1713 | 9.0 | 3375 | 1.1636 | 0.3644 |
| 1.1774 | 10.0 | 3750 | 1.1549 | 0.3760 |
| 1.1522 | 11.0 | 4125 | 1.1472 | 0.3760 |
| 1.1182 | 12.0 | 4500 | 1.1403 | 0.3744 |
| 1.1161 | 13.0 | 4875 | 1.1344 | 0.3827 |
| 1.1676 | 14.0 | 5250 | 1.1289 | 0.3827 |
| 1.1382 | 15.0 | 5625 | 1.1238 | 0.3860 |
| 1.129 | 16.0 | 6000 | 1.1191 | 0.3943 |
| 1.1144 | 17.0 | 6375 | 1.1146 | 0.3910 |
| 1.1043 | 18.0 | 6750 | 1.1105 | 0.3894 |
| 1.1008 | 19.0 | 7125 | 1.1065 | 0.3960 |
| 1.1097 | 20.0 | 7500 | 1.1028 | 0.4077 |
| 1.1084 | 21.0 | 7875 | 1.0993 | 0.4093 |
| 1.0777 | 22.0 | 8250 | 1.0960 | 0.4110 |
| 1.0857 | 23.0 | 8625 | 1.0928 | 0.4126 |
| 1.096 | 24.0 | 9000 | 1.0898 | 0.4126 |
| 1.1016 | 25.0 | 9375 | 1.0869 | 0.4176 |
| 1.0637 | 26.0 | 9750 | 1.0843 | 0.4226 |
| 1.0804 | 27.0 | 10125 | 1.0817 | 0.4226 |
| 1.0961 | 28.0 | 10500 | 1.0793 | 0.4226 |
| 1.0888 | 29.0 | 10875 | 1.0771 | 0.4293 |
| 1.0508 | 30.0 | 11250 | 1.0750 | 0.4293 |
| 1.0685 | 31.0 | 11625 | 1.0730 | 0.4326 |
| 1.1026 | 32.0 | 12000 | 1.0712 | 0.4309 |
| 1.0612 | 33.0 | 12375 | 1.0694 | 0.4359 |
| 1.0734 | 34.0 | 12750 | 1.0679 | 0.4393 |
| 1.0868 | 35.0 | 13125 | 1.0664 | 0.4393 |
| 1.0597 | 36.0 | 13500 | 1.0650 | 0.4393 |
| 1.0653 | 37.0 | 13875 | 1.0638 | 0.4409 |
| 1.0598 | 38.0 | 14250 | 1.0627 | 0.4443 |
| 1.0773 | 39.0 | 14625 | 1.0617 | 0.4443 |
| 1.0819 | 40.0 | 15000 | 1.0608 | 0.4443 |
| 1.0608 | 41.0 | 15375 | 1.0600 | 0.4459 |
| 1.0652 | 42.0 | 15750 | 1.0594 | 0.4459 |
| 1.04 | 43.0 | 16125 | 1.0588 | 0.4476 |
| 1.0518 | 44.0 | 16500 | 1.0583 | 0.4476 |
| 1.0814 | 45.0 | 16875 | 1.0580 | 0.4476 |
| 1.0536 | 46.0 | 17250 | 1.0577 | 0.4476 |
| 1.0612 | 47.0 | 17625 | 1.0575 | 0.4476 |
| 1.0833 | 48.0 | 18000 | 1.0574 | 0.4476 |
| 1.0816 | 49.0 | 18375 | 1.0573 | 0.4476 |
| 1.0754 | 50.0 | 18750 | 1.0573 | 0.4476 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_sgd_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_sgd_00001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1524
- Accuracy: 0.3706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2623 | 1.0 | 376 | 1.3473 | 0.3088 |
| 1.2409 | 2.0 | 752 | 1.3383 | 0.3122 |
| 1.2444 | 3.0 | 1128 | 1.3298 | 0.3105 |
| 1.1982 | 4.0 | 1504 | 1.3215 | 0.3105 |
| 1.2082 | 5.0 | 1880 | 1.3136 | 0.3105 |
| 1.1814 | 6.0 | 2256 | 1.3057 | 0.3139 |
| 1.1633 | 7.0 | 2632 | 1.2980 | 0.3172 |
| 1.2064 | 8.0 | 3008 | 1.2906 | 0.3205 |
| 1.1298 | 9.0 | 3384 | 1.2834 | 0.3205 |
| 1.1231 | 10.0 | 3760 | 1.2765 | 0.3322 |
| 1.171 | 11.0 | 4136 | 1.2696 | 0.3372 |
| 1.1505 | 12.0 | 4512 | 1.2632 | 0.3439 |
| 1.1137 | 13.0 | 4888 | 1.2569 | 0.3472 |
| 1.1229 | 14.0 | 5264 | 1.2508 | 0.3456 |
| 1.1641 | 15.0 | 5640 | 1.2450 | 0.3439 |
| 1.1335 | 16.0 | 6016 | 1.2391 | 0.3439 |
| 1.0584 | 17.0 | 6392 | 1.2337 | 0.3439 |
| 1.1251 | 18.0 | 6768 | 1.2284 | 0.3456 |
| 1.105 | 19.0 | 7144 | 1.2233 | 0.3506 |
| 1.0972 | 20.0 | 7520 | 1.2186 | 0.3506 |
| 1.0751 | 21.0 | 7896 | 1.2139 | 0.3539 |
| 1.0864 | 22.0 | 8272 | 1.2094 | 0.3539 |
| 1.1021 | 23.0 | 8648 | 1.2051 | 0.3539 |
| 1.1159 | 24.0 | 9024 | 1.2011 | 0.3539 |
| 1.0862 | 25.0 | 9400 | 1.1972 | 0.3556 |
| 1.0706 | 26.0 | 9776 | 1.1934 | 0.3589 |
| 1.0809 | 27.0 | 10152 | 1.1898 | 0.3589 |
| 1.0663 | 28.0 | 10528 | 1.1865 | 0.3589 |
| 1.0982 | 29.0 | 10904 | 1.1833 | 0.3606 |
| 1.1121 | 30.0 | 11280 | 1.1802 | 0.3623 |
| 1.0485 | 31.0 | 11656 | 1.1773 | 0.3656 |
| 1.0472 | 32.0 | 12032 | 1.1746 | 0.3656 |
| 1.0601 | 33.0 | 12408 | 1.1721 | 0.3689 |
| 1.0381 | 34.0 | 12784 | 1.1697 | 0.3689 |
| 1.072 | 35.0 | 13160 | 1.1675 | 0.3689 |
| 1.087 | 36.0 | 13536 | 1.1654 | 0.3689 |
| 1.0074 | 37.0 | 13912 | 1.1635 | 0.3706 |
| 1.0562 | 38.0 | 14288 | 1.1618 | 0.3689 |
| 1.0371 | 39.0 | 14664 | 1.1602 | 0.3689 |
| 1.0517 | 40.0 | 15040 | 1.1588 | 0.3706 |
| 1.0384 | 41.0 | 15416 | 1.1575 | 0.3706 |
| 1.0222 | 42.0 | 15792 | 1.1563 | 0.3706 |
| 1.0143 | 43.0 | 16168 | 1.1553 | 0.3706 |
| 0.9973 | 44.0 | 16544 | 1.1545 | 0.3689 |
| 1.0445 | 45.0 | 16920 | 1.1538 | 0.3689 |
| 1.0408 | 46.0 | 17296 | 1.1532 | 0.3706 |
| 1.0166 | 47.0 | 17672 | 1.1528 | 0.3706 |
| 1.0266 | 48.0 | 18048 | 1.1525 | 0.3706 |
| 1.0337 | 49.0 | 18424 | 1.1524 | 0.3706 |
| 1.0214 | 50.0 | 18800 | 1.1524 | 0.3706 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_rms_001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7105
- Accuracy: 0.7396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1123 | 1.0 | 376 | 1.1251 | 0.3356 |
| 0.9352 | 2.0 | 752 | 0.8920 | 0.5259 |
| 0.794 | 3.0 | 1128 | 0.7965 | 0.5526 |
| 0.8608 | 4.0 | 1504 | 0.8108 | 0.5643 |
| 0.8805 | 5.0 | 1880 | 0.7967 | 0.5543 |
| 0.7654 | 6.0 | 2256 | 0.8049 | 0.5693 |
| 0.7409 | 7.0 | 2632 | 0.7866 | 0.5726 |
| 0.7913 | 8.0 | 3008 | 0.7897 | 0.5893 |
| 0.7475 | 9.0 | 3384 | 0.7603 | 0.6127 |
| 0.7457 | 10.0 | 3760 | 0.7604 | 0.5810 |
| 0.6765 | 11.0 | 4136 | 0.7505 | 0.6060 |
| 0.7009 | 12.0 | 4512 | 0.7036 | 0.6628 |
| 0.682 | 13.0 | 4888 | 0.7131 | 0.6477 |
| 0.665 | 14.0 | 5264 | 0.7097 | 0.6628 |
| 0.6075 | 15.0 | 5640 | 0.7283 | 0.6361 |
| 0.6104 | 16.0 | 6016 | 0.7252 | 0.6845 |
| 0.5756 | 17.0 | 6392 | 0.7019 | 0.6761 |
| 0.6308 | 18.0 | 6768 | 0.7144 | 0.6678 |
| 0.6215 | 19.0 | 7144 | 0.6989 | 0.6644 |
| 0.5991 | 20.0 | 7520 | 0.6580 | 0.7262 |
| 0.644 | 21.0 | 7896 | 0.6509 | 0.7129 |
| 0.5561 | 22.0 | 8272 | 0.6219 | 0.7112 |
| 0.5743 | 23.0 | 8648 | 0.6259 | 0.7095 |
| 0.5779 | 24.0 | 9024 | 0.7360 | 0.6611 |
| 0.5703 | 25.0 | 9400 | 0.7402 | 0.6544 |
| 0.59 | 26.0 | 9776 | 0.6505 | 0.7179 |
| 0.4484 | 27.0 | 10152 | 0.7061 | 0.6945 |
| 0.5078 | 28.0 | 10528 | 0.6625 | 0.7012 |
| 0.4947 | 29.0 | 10904 | 0.7197 | 0.6878 |
| 0.4804 | 30.0 | 11280 | 0.6601 | 0.7129 |
| 0.571 | 31.0 | 11656 | 0.6610 | 0.6978 |
| 0.5506 | 32.0 | 12032 | 0.6726 | 0.7012 |
| 0.4066 | 33.0 | 12408 | 0.6633 | 0.7095 |
| 0.4713 | 34.0 | 12784 | 0.6198 | 0.7245 |
| 0.4603 | 35.0 | 13160 | 0.6655 | 0.7145 |
| 0.4936 | 36.0 | 13536 | 0.6620 | 0.7212 |
| 0.4422 | 37.0 | 13912 | 0.6199 | 0.7446 |
| 0.4404 | 38.0 | 14288 | 0.6881 | 0.7062 |
| 0.4643 | 39.0 | 14664 | 0.6209 | 0.7412 |
| 0.4403 | 40.0 | 15040 | 0.6524 | 0.7496 |
| 0.4197 | 41.0 | 15416 | 0.6575 | 0.7229 |
| 0.3846 | 42.0 | 15792 | 0.6496 | 0.7295 |
| 0.3794 | 43.0 | 16168 | 0.6583 | 0.7179 |
| 0.4461 | 44.0 | 16544 | 0.6644 | 0.7329 |
| 0.3616 | 45.0 | 16920 | 0.6911 | 0.7396 |
| 0.3764 | 46.0 | 17296 | 0.7023 | 0.7279 |
| 0.39 | 47.0 | 17672 | 0.6999 | 0.7379 |
| 0.3595 | 48.0 | 18048 | 0.7003 | 0.7379 |
| 0.3678 | 49.0 | 18424 | 0.6974 | 0.7379 |
| 0.2726 | 50.0 | 18800 | 0.7105 | 0.7396 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
Nubletz/msi-resnet-pretrain
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# msi-resnet-pretrain
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3514
- Accuracy: 0.8862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
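As a quick sanity check (not part of the original card), the reported `total_train_batch_size` follows directly from the per-device batch size and the gradient accumulation steps:

```python
# Effective (total) train batch size = per-device batch size x accumulation steps.
# Values taken from the hyperparameter list above.
train_batch_size = 16
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching the card's total_train_batch_size
```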
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4387 | 1.0 | 1562 | 0.3894 | 0.8795 |
| 0.2626 | 2.0 | 3125 | 0.3142 | 0.9024 |
| 0.2134 | 3.0 | 4687 | 0.3767 | 0.8694 |
| 0.1452 | 4.0 | 6250 | 0.3211 | 0.8947 |
| 0.1773 | 5.0 | 7810 | 0.3514 | 0.8862 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"adi",
"back",
"deb",
"lym",
"muc",
"mus",
"norm",
"str",
"tum"
] |
hkivancoral/smids_5x_deit_tiny_sgd_00001_fold3
|
# smids_5x_deit_tiny_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0782
- Accuracy: 0.445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
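With `lr_scheduler_warmup_ratio: 0.1`, the warmup length is a fixed fraction of the total optimization steps. A minimal sketch, assuming the 375 steps per epoch shown in the results table:

```python
# How the warmup length follows from the warmup ratio.
# 375 steps/epoch is taken from the results table; 50 epochs from the card.
steps_per_epoch = 375
num_epochs = 50
warmup_ratio = 0.1

total_steps = steps_per_epoch * num_epochs      # 18750 total optimization steps
warmup_steps = int(total_steps * warmup_ratio)  # 1875 warmup steps
print(total_steps, warmup_steps)
```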
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4131 | 1.0 | 375 | 1.3423 | 0.3433 |
| 1.3593 | 2.0 | 750 | 1.3099 | 0.3483 |
| 1.3082 | 3.0 | 1125 | 1.2818 | 0.3517 |
| 1.3385 | 4.0 | 1500 | 1.2580 | 0.36 |
| 1.2471 | 5.0 | 1875 | 1.2378 | 0.3633 |
| 1.2728 | 6.0 | 2250 | 1.2206 | 0.3667 |
| 1.2244 | 7.0 | 2625 | 1.2061 | 0.3767 |
| 1.1927 | 8.0 | 3000 | 1.1938 | 0.385 |
| 1.1353 | 9.0 | 3375 | 1.1833 | 0.39 |
| 1.1411 | 10.0 | 3750 | 1.1743 | 0.39 |
| 1.1528 | 11.0 | 4125 | 1.1664 | 0.395 |
| 1.1479 | 12.0 | 4500 | 1.1594 | 0.3917 |
| 1.1757 | 13.0 | 4875 | 1.1532 | 0.3917 |
| 1.1667 | 14.0 | 5250 | 1.1477 | 0.4017 |
| 1.1486 | 15.0 | 5625 | 1.1425 | 0.3967 |
| 1.0937 | 16.0 | 6000 | 1.1378 | 0.4017 |
| 1.1232 | 17.0 | 6375 | 1.1333 | 0.4133 |
| 1.1438 | 18.0 | 6750 | 1.1292 | 0.4183 |
| 1.0814 | 19.0 | 7125 | 1.1253 | 0.42 |
| 1.101 | 20.0 | 7500 | 1.1217 | 0.4183 |
| 1.0634 | 21.0 | 7875 | 1.1182 | 0.42 |
| 1.0937 | 22.0 | 8250 | 1.1150 | 0.4167 |
| 1.107 | 23.0 | 8625 | 1.1120 | 0.4183 |
| 1.1086 | 24.0 | 9000 | 1.1091 | 0.42 |
| 1.0802 | 25.0 | 9375 | 1.1064 | 0.4217 |
| 1.1004 | 26.0 | 9750 | 1.1038 | 0.4233 |
| 1.0865 | 27.0 | 10125 | 1.1014 | 0.4267 |
| 1.0686 | 28.0 | 10500 | 1.0991 | 0.425 |
| 1.0719 | 29.0 | 10875 | 1.0969 | 0.4267 |
| 1.0892 | 30.0 | 11250 | 1.0949 | 0.4267 |
| 1.0865 | 31.0 | 11625 | 1.0931 | 0.4233 |
| 1.1008 | 32.0 | 12000 | 1.0913 | 0.425 |
| 1.0834 | 33.0 | 12375 | 1.0897 | 0.4267 |
| 1.085 | 34.0 | 12750 | 1.0882 | 0.4317 |
| 1.0201 | 35.0 | 13125 | 1.0868 | 0.4367 |
| 1.043 | 36.0 | 13500 | 1.0855 | 0.4367 |
| 1.0791 | 37.0 | 13875 | 1.0844 | 0.4367 |
| 1.0443 | 38.0 | 14250 | 1.0833 | 0.4367 |
| 1.0648 | 39.0 | 14625 | 1.0824 | 0.4383 |
| 1.0415 | 40.0 | 15000 | 1.0816 | 0.4417 |
| 1.025 | 41.0 | 15375 | 1.0808 | 0.4417 |
| 1.0078 | 42.0 | 15750 | 1.0802 | 0.4417 |
| 1.0383 | 43.0 | 16125 | 1.0797 | 0.4433 |
| 1.061 | 44.0 | 16500 | 1.0792 | 0.4433 |
| 1.0733 | 45.0 | 16875 | 1.0789 | 0.4433 |
| 1.039 | 46.0 | 17250 | 1.0786 | 0.4433 |
| 1.091 | 47.0 | 17625 | 1.0784 | 0.445 |
| 1.0592 | 48.0 | 18000 | 1.0783 | 0.445 |
| 1.0783 | 49.0 | 18375 | 1.0782 | 0.445 |
| 1.066 | 50.0 | 18750 | 1.0782 | 0.445 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_sgd_00001_fold2
|
# smids_5x_beit_base_sgd_00001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1209
- Accuracy: 0.4010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1533 | 1.0 | 375 | 1.3089 | 0.3344 |
| 1.2124 | 2.0 | 750 | 1.2999 | 0.3344 |
| 1.1985 | 3.0 | 1125 | 1.2914 | 0.3428 |
| 1.1579 | 4.0 | 1500 | 1.2833 | 0.3461 |
| 1.1291 | 5.0 | 1875 | 1.2755 | 0.3461 |
| 1.194 | 6.0 | 2250 | 1.2681 | 0.3478 |
| 1.2016 | 7.0 | 2625 | 1.2608 | 0.3494 |
| 1.1347 | 8.0 | 3000 | 1.2537 | 0.3527 |
| 1.1472 | 9.0 | 3375 | 1.2468 | 0.3577 |
| 1.15 | 10.0 | 3750 | 1.2403 | 0.3611 |
| 1.1134 | 11.0 | 4125 | 1.2339 | 0.3661 |
| 1.1681 | 12.0 | 4500 | 1.2277 | 0.3694 |
| 1.1002 | 13.0 | 4875 | 1.2218 | 0.3677 |
| 1.1221 | 14.0 | 5250 | 1.2161 | 0.3677 |
| 1.0969 | 15.0 | 5625 | 1.2104 | 0.3694 |
| 1.1378 | 16.0 | 6000 | 1.2051 | 0.3694 |
| 1.0509 | 17.0 | 6375 | 1.1999 | 0.3727 |
| 1.0539 | 18.0 | 6750 | 1.1948 | 0.3727 |
| 1.1469 | 19.0 | 7125 | 1.1900 | 0.3760 |
| 1.0806 | 20.0 | 7500 | 1.1853 | 0.3760 |
| 1.1095 | 21.0 | 7875 | 1.1807 | 0.3760 |
| 1.0474 | 22.0 | 8250 | 1.1764 | 0.3760 |
| 1.0756 | 23.0 | 8625 | 1.1722 | 0.3810 |
| 1.1044 | 24.0 | 9000 | 1.1682 | 0.3794 |
| 1.1189 | 25.0 | 9375 | 1.1645 | 0.3844 |
| 1.0607 | 26.0 | 9750 | 1.1609 | 0.3844 |
| 1.1097 | 27.0 | 10125 | 1.1574 | 0.3844 |
| 1.0713 | 28.0 | 10500 | 1.1541 | 0.3860 |
| 1.0338 | 29.0 | 10875 | 1.1510 | 0.3877 |
| 1.0753 | 30.0 | 11250 | 1.1479 | 0.3910 |
| 1.0493 | 31.0 | 11625 | 1.1452 | 0.3910 |
| 1.0423 | 32.0 | 12000 | 1.1425 | 0.3910 |
| 1.0585 | 33.0 | 12375 | 1.1400 | 0.3943 |
| 1.0104 | 34.0 | 12750 | 1.1377 | 0.3960 |
| 1.0421 | 35.0 | 13125 | 1.1356 | 0.3960 |
| 1.0328 | 36.0 | 13500 | 1.1336 | 0.3977 |
| 1.0499 | 37.0 | 13875 | 1.1317 | 0.3993 |
| 1.0006 | 38.0 | 14250 | 1.1300 | 0.4010 |
| 1.0528 | 39.0 | 14625 | 1.1285 | 0.4010 |
| 1.0416 | 40.0 | 15000 | 1.1271 | 0.4010 |
| 1.0633 | 41.0 | 15375 | 1.1258 | 0.4010 |
| 1.0643 | 42.0 | 15750 | 1.1247 | 0.4027 |
| 1.0051 | 43.0 | 16125 | 1.1238 | 0.4027 |
| 1.0289 | 44.0 | 16500 | 1.1230 | 0.4027 |
| 0.9766 | 45.0 | 16875 | 1.1223 | 0.4010 |
| 1.0401 | 46.0 | 17250 | 1.1218 | 0.4010 |
| 1.0257 | 47.0 | 17625 | 1.1214 | 0.4010 |
| 1.0309 | 48.0 | 18000 | 1.1211 | 0.4010 |
| 1.0074 | 49.0 | 18375 | 1.1210 | 0.4010 |
| 1.0327 | 50.0 | 18750 | 1.1209 | 0.4010 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_001_fold2
|
# smids_5x_beit_base_rms_001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5865
- Accuracy: 0.7953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0449 | 1.0 | 375 | 0.9946 | 0.4576 |
| 0.9062 | 2.0 | 750 | 0.8678 | 0.5341 |
| 0.8013 | 3.0 | 1125 | 1.1322 | 0.4709 |
| 0.7159 | 4.0 | 1500 | 0.7319 | 0.6373 |
| 0.717 | 5.0 | 1875 | 0.7090 | 0.6672 |
| 0.6942 | 6.0 | 2250 | 0.6958 | 0.6356 |
| 0.7767 | 7.0 | 2625 | 0.6812 | 0.7022 |
| 0.7025 | 8.0 | 3000 | 0.6844 | 0.6406 |
| 0.731 | 9.0 | 3375 | 0.6703 | 0.6872 |
| 0.712 | 10.0 | 3750 | 0.7094 | 0.6789 |
| 0.6865 | 11.0 | 4125 | 0.6498 | 0.6972 |
| 0.7524 | 12.0 | 4500 | 0.6865 | 0.6955 |
| 0.6624 | 13.0 | 4875 | 0.6872 | 0.6772 |
| 0.6979 | 14.0 | 5250 | 0.6496 | 0.6972 |
| 0.6174 | 15.0 | 5625 | 0.6736 | 0.6805 |
| 0.6379 | 16.0 | 6000 | 0.6464 | 0.6889 |
| 0.6532 | 17.0 | 6375 | 0.6449 | 0.7271 |
| 0.6218 | 18.0 | 6750 | 0.6026 | 0.7421 |
| 0.6018 | 19.0 | 7125 | 0.6684 | 0.6988 |
| 0.6058 | 20.0 | 7500 | 0.6198 | 0.7205 |
| 0.6269 | 21.0 | 7875 | 0.6185 | 0.7338 |
| 0.586 | 22.0 | 8250 | 0.5945 | 0.7571 |
| 0.6047 | 23.0 | 8625 | 0.5838 | 0.7404 |
| 0.5645 | 24.0 | 9000 | 0.5895 | 0.7304 |
| 0.5266 | 25.0 | 9375 | 0.6076 | 0.7554 |
| 0.5433 | 26.0 | 9750 | 0.6078 | 0.7205 |
| 0.6677 | 27.0 | 10125 | 0.5591 | 0.7837 |
| 0.5463 | 28.0 | 10500 | 0.6091 | 0.7488 |
| 0.5494 | 29.0 | 10875 | 0.5955 | 0.7471 |
| 0.4887 | 30.0 | 11250 | 0.5393 | 0.7987 |
| 0.5572 | 31.0 | 11625 | 0.5935 | 0.7537 |
| 0.5382 | 32.0 | 12000 | 0.6529 | 0.7288 |
| 0.5356 | 33.0 | 12375 | 0.5723 | 0.7787 |
| 0.5102 | 34.0 | 12750 | 0.5659 | 0.7720 |
| 0.5047 | 35.0 | 13125 | 0.5433 | 0.7887 |
| 0.4869 | 36.0 | 13500 | 0.5564 | 0.7687 |
| 0.4821 | 37.0 | 13875 | 0.5581 | 0.7754 |
| 0.455 | 38.0 | 14250 | 0.5595 | 0.7837 |
| 0.4345 | 39.0 | 14625 | 0.5481 | 0.7854 |
| 0.4695 | 40.0 | 15000 | 0.5459 | 0.8003 |
| 0.4129 | 41.0 | 15375 | 0.5458 | 0.8020 |
| 0.4369 | 42.0 | 15750 | 0.5508 | 0.7953 |
| 0.4043 | 43.0 | 16125 | 0.5495 | 0.7854 |
| 0.4715 | 44.0 | 16500 | 0.5470 | 0.7987 |
| 0.4036 | 45.0 | 16875 | 0.5777 | 0.7887 |
| 0.3786 | 46.0 | 17250 | 0.5867 | 0.8003 |
| 0.4177 | 47.0 | 17625 | 0.5806 | 0.7770 |
| 0.3538 | 48.0 | 18000 | 0.5857 | 0.7937 |
| 0.3987 | 49.0 | 18375 | 0.5813 | 0.8020 |
| 0.3452 | 50.0 | 18750 | 0.5865 | 0.7953 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_sgd_00001_fold4
|
# smids_5x_deit_tiny_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0682
- Accuracy: 0.4133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3789 | 1.0 | 375 | 1.3304 | 0.3467 |
| 1.3522 | 2.0 | 750 | 1.2980 | 0.3417 |
| 1.2851 | 3.0 | 1125 | 1.2700 | 0.3467 |
| 1.3268 | 4.0 | 1500 | 1.2462 | 0.35 |
| 1.2083 | 5.0 | 1875 | 1.2264 | 0.3533 |
| 1.2564 | 6.0 | 2250 | 1.2096 | 0.3667 |
| 1.2076 | 7.0 | 2625 | 1.1953 | 0.375 |
| 1.1738 | 8.0 | 3000 | 1.1833 | 0.375 |
| 1.1964 | 9.0 | 3375 | 1.1730 | 0.3767 |
| 1.1824 | 10.0 | 3750 | 1.1642 | 0.375 |
| 1.1746 | 11.0 | 4125 | 1.1567 | 0.375 |
| 1.0941 | 12.0 | 4500 | 1.1499 | 0.3783 |
| 1.1561 | 13.0 | 4875 | 1.1439 | 0.3817 |
| 1.1702 | 14.0 | 5250 | 1.1384 | 0.3817 |
| 1.1181 | 15.0 | 5625 | 1.1334 | 0.3867 |
| 1.149 | 16.0 | 6000 | 1.1288 | 0.3833 |
| 1.1131 | 17.0 | 6375 | 1.1244 | 0.3867 |
| 1.1335 | 18.0 | 6750 | 1.1203 | 0.39 |
| 1.105 | 19.0 | 7125 | 1.1164 | 0.3933 |
| 1.0655 | 20.0 | 7500 | 1.1128 | 0.3933 |
| 1.1098 | 21.0 | 7875 | 1.1094 | 0.395 |
| 1.0972 | 22.0 | 8250 | 1.1061 | 0.3933 |
| 1.112 | 23.0 | 8625 | 1.1030 | 0.3917 |
| 1.0932 | 24.0 | 9000 | 1.1001 | 0.395 |
| 1.0801 | 25.0 | 9375 | 1.0974 | 0.3933 |
| 1.1085 | 26.0 | 9750 | 1.0947 | 0.4 |
| 1.1153 | 27.0 | 10125 | 1.0922 | 0.4 |
| 1.0883 | 28.0 | 10500 | 1.0899 | 0.4 |
| 1.0621 | 29.0 | 10875 | 1.0877 | 0.4017 |
| 1.0559 | 30.0 | 11250 | 1.0856 | 0.4017 |
| 1.0795 | 31.0 | 11625 | 1.0837 | 0.4 |
| 1.1076 | 32.0 | 12000 | 1.0819 | 0.4017 |
| 1.1027 | 33.0 | 12375 | 1.0802 | 0.405 |
| 1.0471 | 34.0 | 12750 | 1.0787 | 0.41 |
| 1.032 | 35.0 | 13125 | 1.0772 | 0.4117 |
| 1.0529 | 36.0 | 13500 | 1.0759 | 0.4083 |
| 1.0365 | 37.0 | 13875 | 1.0747 | 0.4067 |
| 1.0659 | 38.0 | 14250 | 1.0736 | 0.4067 |
| 1.073 | 39.0 | 14625 | 1.0726 | 0.4117 |
| 1.1034 | 40.0 | 15000 | 1.0717 | 0.4117 |
| 1.0918 | 41.0 | 15375 | 1.0710 | 0.4117 |
| 1.0873 | 42.0 | 15750 | 1.0703 | 0.4133 |
| 1.0582 | 43.0 | 16125 | 1.0697 | 0.4133 |
| 1.0527 | 44.0 | 16500 | 1.0693 | 0.4133 |
| 1.0394 | 45.0 | 16875 | 1.0689 | 0.4133 |
| 1.0718 | 46.0 | 17250 | 1.0686 | 0.4133 |
| 1.0719 | 47.0 | 17625 | 1.0684 | 0.4133 |
| 1.0655 | 48.0 | 18000 | 1.0683 | 0.4133 |
| 1.0516 | 49.0 | 18375 | 1.0682 | 0.4133 |
| 1.0396 | 50.0 | 18750 | 1.0682 | 0.4133 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
zabir735/outputs
|
# outputs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0483
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
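This card uses a linear scheduler with no warmup ratio, so the learning rate decays linearly from its initial value to zero over training. A minimal sketch of that standard linear-decay behavior (the `total_steps` value is hypothetical; the card does not state a step count):

```python
# Linear decay with no warmup: lr(step) = base_lr * (1 - step / total_steps).
def linear_lr(step, base_lr=5e-05, total_steps=1000):
    # total_steps is a placeholder, not a value from this card.
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))      # 5e-05 at the start of training
print(linear_lr(500))    # 2.5e-05 halfway through
print(linear_lr(1000))   # 0.0 at the end
```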
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cpu
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"bad oil palm seed",
"good oil palm seed"
] |
hkivancoral/smids_5x_beit_base_sgd_00001_fold3
|
# smids_5x_beit_base_sgd_00001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1248
- Accuracy: 0.3967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2551 | 1.0 | 375 | 1.3211 | 0.3167 |
| 1.2561 | 2.0 | 750 | 1.3119 | 0.32 |
| 1.2134 | 3.0 | 1125 | 1.3028 | 0.325 |
| 1.226 | 4.0 | 1500 | 1.2942 | 0.325 |
| 1.1635 | 5.0 | 1875 | 1.2859 | 0.3267 |
| 1.2304 | 6.0 | 2250 | 1.2778 | 0.3333 |
| 1.1734 | 7.0 | 2625 | 1.2702 | 0.3383 |
| 1.1724 | 8.0 | 3000 | 1.2625 | 0.3417 |
| 1.1336 | 9.0 | 3375 | 1.2554 | 0.3467 |
| 1.1266 | 10.0 | 3750 | 1.2486 | 0.3517 |
| 1.1276 | 11.0 | 4125 | 1.2419 | 0.355 |
| 1.1538 | 12.0 | 4500 | 1.2355 | 0.355 |
| 1.1425 | 13.0 | 4875 | 1.2292 | 0.3567 |
| 1.1463 | 14.0 | 5250 | 1.2233 | 0.36 |
| 1.1661 | 15.0 | 5625 | 1.2174 | 0.3633 |
| 1.1118 | 16.0 | 6000 | 1.2118 | 0.365 |
| 1.123 | 17.0 | 6375 | 1.2063 | 0.3667 |
| 1.1065 | 18.0 | 6750 | 1.2010 | 0.3667 |
| 1.1074 | 19.0 | 7125 | 1.1959 | 0.365 |
| 1.0742 | 20.0 | 7500 | 1.1911 | 0.3717 |
| 1.0616 | 21.0 | 7875 | 1.1865 | 0.3717 |
| 1.0745 | 22.0 | 8250 | 1.1820 | 0.3717 |
| 1.0871 | 23.0 | 8625 | 1.1777 | 0.3717 |
| 1.031 | 24.0 | 9000 | 1.1737 | 0.3717 |
| 1.0843 | 25.0 | 9375 | 1.1697 | 0.375 |
| 1.0616 | 26.0 | 9750 | 1.1660 | 0.3767 |
| 1.0414 | 27.0 | 10125 | 1.1624 | 0.3783 |
| 1.0303 | 28.0 | 10500 | 1.1590 | 0.3783 |
| 0.9887 | 29.0 | 10875 | 1.1558 | 0.38 |
| 1.0267 | 30.0 | 11250 | 1.1528 | 0.38 |
| 1.0792 | 31.0 | 11625 | 1.1499 | 0.3833 |
| 1.0736 | 32.0 | 12000 | 1.1472 | 0.3883 |
| 1.0868 | 33.0 | 12375 | 1.1446 | 0.39 |
| 1.0257 | 34.0 | 12750 | 1.1422 | 0.3883 |
| 1.0237 | 35.0 | 13125 | 1.1400 | 0.39 |
| 1.0201 | 36.0 | 13500 | 1.1379 | 0.39 |
| 1.0769 | 37.0 | 13875 | 1.1360 | 0.3917 |
| 1.032 | 38.0 | 14250 | 1.1343 | 0.3933 |
| 1.0317 | 39.0 | 14625 | 1.1327 | 0.395 |
| 1.0402 | 40.0 | 15000 | 1.1312 | 0.395 |
| 0.957 | 41.0 | 15375 | 1.1300 | 0.395 |
| 1.0445 | 42.0 | 15750 | 1.1288 | 0.395 |
| 1.0399 | 43.0 | 16125 | 1.1278 | 0.395 |
| 1.0323 | 44.0 | 16500 | 1.1270 | 0.3967 |
| 1.0444 | 45.0 | 16875 | 1.1263 | 0.3967 |
| 0.9983 | 46.0 | 17250 | 1.1257 | 0.3967 |
| 1.042 | 47.0 | 17625 | 1.1253 | 0.3967 |
| 1.0685 | 48.0 | 18000 | 1.1250 | 0.3967 |
| 1.0486 | 49.0 | 18375 | 1.1249 | 0.3967 |
| 1.0457 | 50.0 | 18750 | 1.1248 | 0.3967 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_sgd_00001_fold5
|
# smids_5x_deit_tiny_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0692
- Accuracy: 0.45
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4013 | 1.0 | 375 | 1.3267 | 0.3533 |
| 1.3461 | 2.0 | 750 | 1.2947 | 0.36 |
| 1.2972 | 3.0 | 1125 | 1.2670 | 0.3617 |
| 1.31 | 4.0 | 1500 | 1.2433 | 0.3683 |
| 1.2221 | 5.0 | 1875 | 1.2236 | 0.3717 |
| 1.2656 | 6.0 | 2250 | 1.2067 | 0.375 |
| 1.2312 | 7.0 | 2625 | 1.1923 | 0.3817 |
| 1.1861 | 8.0 | 3000 | 1.1803 | 0.3817 |
| 1.1289 | 9.0 | 3375 | 1.1699 | 0.385 |
| 1.218 | 10.0 | 3750 | 1.1612 | 0.3867 |
| 1.1921 | 11.0 | 4125 | 1.1535 | 0.3983 |
| 1.1315 | 12.0 | 4500 | 1.1468 | 0.405 |
| 1.1732 | 13.0 | 4875 | 1.1407 | 0.4133 |
| 1.1412 | 14.0 | 5250 | 1.1354 | 0.41 |
| 1.1502 | 15.0 | 5625 | 1.1305 | 0.4133 |
| 1.126 | 16.0 | 6000 | 1.1259 | 0.4133 |
| 1.1098 | 17.0 | 6375 | 1.1217 | 0.4117 |
| 1.1197 | 18.0 | 6750 | 1.1177 | 0.4083 |
| 1.1329 | 19.0 | 7125 | 1.1140 | 0.4083 |
| 1.0741 | 20.0 | 7500 | 1.1105 | 0.4117 |
| 1.0617 | 21.0 | 7875 | 1.1072 | 0.4117 |
| 1.0917 | 22.0 | 8250 | 1.1041 | 0.41 |
| 1.0822 | 23.0 | 8625 | 1.1011 | 0.41 |
| 1.1336 | 24.0 | 9000 | 1.0984 | 0.4167 |
| 1.0665 | 25.0 | 9375 | 1.0958 | 0.415 |
| 1.1097 | 26.0 | 9750 | 1.0934 | 0.415 |
| 1.0499 | 27.0 | 10125 | 1.0911 | 0.4183 |
| 1.1202 | 28.0 | 10500 | 1.0889 | 0.4167 |
| 1.1038 | 29.0 | 10875 | 1.0869 | 0.4283 |
| 1.0838 | 30.0 | 11250 | 1.0850 | 0.4333 |
| 1.0717 | 31.0 | 11625 | 1.0832 | 0.4367 |
| 1.0773 | 32.0 | 12000 | 1.0816 | 0.4383 |
| 1.0858 | 33.0 | 12375 | 1.0800 | 0.44 |
| 1.0072 | 34.0 | 12750 | 1.0786 | 0.44 |
| 1.0435 | 35.0 | 13125 | 1.0773 | 0.4417 |
| 1.047 | 36.0 | 13500 | 1.0761 | 0.4433 |
| 1.0361 | 37.0 | 13875 | 1.0750 | 0.4483 |
| 1.0477 | 38.0 | 14250 | 1.0740 | 0.45 |
| 1.0658 | 39.0 | 14625 | 1.0731 | 0.4483 |
| 1.0711 | 40.0 | 15000 | 1.0723 | 0.445 |
| 1.0473 | 41.0 | 15375 | 1.0716 | 0.445 |
| 1.0521 | 42.0 | 15750 | 1.0711 | 0.445 |
| 1.0368 | 43.0 | 16125 | 1.0705 | 0.4467 |
| 1.0636 | 44.0 | 16500 | 1.0701 | 0.4483 |
| 1.0424 | 45.0 | 16875 | 1.0698 | 0.4483 |
| 1.0442 | 46.0 | 17250 | 1.0695 | 0.45 |
| 1.0667 | 47.0 | 17625 | 1.0694 | 0.45 |
| 1.0523 | 48.0 | 18000 | 1.0693 | 0.45 |
| 1.0135 | 49.0 | 18375 | 1.0692 | 0.45 |
| 1.0393 | 50.0 | 18750 | 1.0692 | 0.45 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_001_fold3
|
# smids_5x_beit_base_rms_001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2450
- Accuracy: 0.7883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8383 | 1.0 | 375 | 0.9251 | 0.4967 |
| 0.7811 | 2.0 | 750 | 0.8274 | 0.55 |
| 0.7757 | 3.0 | 1125 | 0.8322 | 0.55 |
| 0.774 | 4.0 | 1500 | 0.7903 | 0.5667 |
| 0.7988 | 5.0 | 1875 | 0.7818 | 0.59 |
| 0.7926 | 6.0 | 2250 | 0.7711 | 0.595 |
| 0.7549 | 7.0 | 2625 | 0.7682 | 0.6267 |
| 0.7997 | 8.0 | 3000 | 0.7569 | 0.61 |
| 0.6926 | 9.0 | 3375 | 0.7561 | 0.6417 |
| 0.7413 | 10.0 | 3750 | 0.7251 | 0.6567 |
| 0.6722 | 11.0 | 4125 | 0.7285 | 0.6533 |
| 0.7582 | 12.0 | 4500 | 0.7029 | 0.66 |
| 0.6728 | 13.0 | 4875 | 0.7283 | 0.6433 |
| 0.6373 | 14.0 | 5250 | 0.7252 | 0.6333 |
| 0.648 | 15.0 | 5625 | 0.7000 | 0.67 |
| 0.6675 | 16.0 | 6000 | 0.7072 | 0.6683 |
| 0.7316 | 17.0 | 6375 | 0.7063 | 0.6717 |
| 0.7151 | 18.0 | 6750 | 0.6856 | 0.6683 |
| 0.6082 | 19.0 | 7125 | 0.6800 | 0.6817 |
| 0.6879 | 20.0 | 7500 | 0.6816 | 0.6733 |
| 0.5586 | 21.0 | 7875 | 0.6735 | 0.695 |
| 0.6065 | 22.0 | 8250 | 0.6507 | 0.71 |
| 0.5783 | 23.0 | 8625 | 0.6597 | 0.69 |
| 0.6456 | 24.0 | 9000 | 0.6102 | 0.74 |
| 0.5238 | 25.0 | 9375 | 0.6683 | 0.7117 |
| 0.5326 | 26.0 | 9750 | 0.6240 | 0.7183 |
| 0.5499 | 27.0 | 10125 | 0.6403 | 0.7083 |
| 0.5607 | 28.0 | 10500 | 0.5945 | 0.7417 |
| 0.4887 | 29.0 | 10875 | 0.6536 | 0.71 |
| 0.5354 | 30.0 | 11250 | 0.5785 | 0.725 |
| 0.5136 | 31.0 | 11625 | 0.6072 | 0.7517 |
| 0.5448 | 32.0 | 12000 | 0.6265 | 0.7383 |
| 0.4542 | 33.0 | 12375 | 0.6265 | 0.7417 |
| 0.4208 | 34.0 | 12750 | 0.6113 | 0.745 |
| 0.3509 | 35.0 | 13125 | 0.6279 | 0.7467 |
| 0.4112 | 36.0 | 13500 | 0.6145 | 0.74 |
| 0.3719 | 37.0 | 13875 | 0.6674 | 0.745 |
| 0.3029 | 38.0 | 14250 | 0.6977 | 0.7583 |
| 0.3416 | 39.0 | 14625 | 0.6751 | 0.7717 |
| 0.3246 | 40.0 | 15000 | 0.6878 | 0.7633 |
| 0.2432 | 41.0 | 15375 | 0.6417 | 0.79 |
| 0.2014 | 42.0 | 15750 | 0.7882 | 0.78 |
| 0.2354 | 43.0 | 16125 | 0.8175 | 0.7817 |
| 0.1797 | 44.0 | 16500 | 0.8553 | 0.79 |
| 0.1419 | 45.0 | 16875 | 0.9481 | 0.765 |
| 0.1815 | 46.0 | 17250 | 1.0306 | 0.765 |
| 0.1604 | 47.0 | 17625 | 1.0263 | 0.765 |
| 0.103 | 48.0 | 18000 | 1.1281 | 0.7833 |
| 0.0441 | 49.0 | 18375 | 1.2055 | 0.79 |
| 0.0741 | 50.0 | 18750 | 1.2450 | 0.7883 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_sgd_00001_fold4
|
# smids_5x_beit_base_sgd_00001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1429
- Accuracy: 0.3883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2481 | 1.0 | 375 | 1.3387 | 0.3367 |
| 1.2853 | 2.0 | 750 | 1.3297 | 0.34 |
| 1.2245 | 3.0 | 1125 | 1.3209 | 0.34 |
| 1.2269 | 4.0 | 1500 | 1.3124 | 0.34 |
| 1.1881 | 5.0 | 1875 | 1.3044 | 0.34 |
| 1.1962 | 6.0 | 2250 | 1.2965 | 0.3417 |
| 1.1766 | 7.0 | 2625 | 1.2890 | 0.345 |
| 1.1422 | 8.0 | 3000 | 1.2816 | 0.3467 |
| 1.1265 | 9.0 | 3375 | 1.2745 | 0.3483 |
| 1.1532 | 10.0 | 3750 | 1.2676 | 0.3517 |
| 1.1456 | 11.0 | 4125 | 1.2609 | 0.3533 |
| 1.1221 | 12.0 | 4500 | 1.2545 | 0.355 |
| 1.1397 | 13.0 | 4875 | 1.2483 | 0.355 |
| 1.1323 | 14.0 | 5250 | 1.2422 | 0.3583 |
| 1.1113 | 15.0 | 5625 | 1.2362 | 0.3617 |
| 1.1197 | 16.0 | 6000 | 1.2307 | 0.3633 |
| 1.1175 | 17.0 | 6375 | 1.2251 | 0.3667 |
| 1.1137 | 18.0 | 6750 | 1.2199 | 0.3683 |
| 1.1317 | 19.0 | 7125 | 1.2149 | 0.3667 |
| 1.0985 | 20.0 | 7500 | 1.2099 | 0.37 |
| 1.1037 | 21.0 | 7875 | 1.2054 | 0.37 |
| 1.1051 | 22.0 | 8250 | 1.2008 | 0.3717 |
| 1.1012 | 23.0 | 8625 | 1.1965 | 0.3717 |
| 1.0418 | 24.0 | 9000 | 1.1925 | 0.375 |
| 1.0922 | 25.0 | 9375 | 1.1886 | 0.3767 |
| 1.0809 | 26.0 | 9750 | 1.1848 | 0.3767 |
| 1.096 | 27.0 | 10125 | 1.1812 | 0.3767 |
| 1.0328 | 28.0 | 10500 | 1.1778 | 0.375 |
| 1.0501 | 29.0 | 10875 | 1.1745 | 0.3767 |
| 1.065 | 30.0 | 11250 | 1.1714 | 0.3767 |
| 1.0717 | 31.0 | 11625 | 1.1685 | 0.3783 |
| 1.104 | 32.0 | 12000 | 1.1658 | 0.3767 |
| 1.0567 | 33.0 | 12375 | 1.1632 | 0.3783 |
| 1.0632 | 34.0 | 12750 | 1.1607 | 0.3783 |
| 1.0635 | 35.0 | 13125 | 1.1585 | 0.3783 |
| 1.0477 | 36.0 | 13500 | 1.1563 | 0.3833 |
| 1.0721 | 37.0 | 13875 | 1.1544 | 0.385 |
| 1.0594 | 38.0 | 14250 | 1.1526 | 0.385 |
| 1.0484 | 39.0 | 14625 | 1.1510 | 0.3867 |
| 1.0408 | 40.0 | 15000 | 1.1495 | 0.3883 |
| 1.0421 | 41.0 | 15375 | 1.1482 | 0.39 |
| 1.0561 | 42.0 | 15750 | 1.1470 | 0.3883 |
| 1.0338 | 43.0 | 16125 | 1.1460 | 0.3883 |
| 1.0224 | 44.0 | 16500 | 1.1451 | 0.3883 |
| 1.0269 | 45.0 | 16875 | 1.1444 | 0.3883 |
| 1.0608 | 46.0 | 17250 | 1.1438 | 0.3883 |
| 1.0652 | 47.0 | 17625 | 1.1434 | 0.3883 |
| 1.0189 | 48.0 | 18000 | 1.1431 | 0.3883 |
| 1.0225 | 49.0 | 18375 | 1.1429 | 0.3883 |
| 1.0356 | 50.0 | 18750 | 1.1429 | 0.3883 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_001_fold4
|
# smids_5x_beit_base_rms_001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7051
- Accuracy: 0.7983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7798 | 1.0 | 375 | 0.7672 | 0.5633 |
| 0.803 | 2.0 | 750 | 0.7688 | 0.58 |
| 0.7686 | 3.0 | 1125 | 0.7514 | 0.61 |
| 0.7375 | 4.0 | 1500 | 0.8350 | 0.5517 |
| 0.7507 | 5.0 | 1875 | 0.8001 | 0.595 |
| 0.7083 | 6.0 | 2250 | 0.7244 | 0.65 |
| 0.708 | 7.0 | 2625 | 0.7289 | 0.6467 |
| 0.7266 | 8.0 | 3000 | 0.7325 | 0.6633 |
| 0.6418 | 9.0 | 3375 | 0.6940 | 0.6917 |
| 0.673 | 10.0 | 3750 | 0.7042 | 0.6617 |
| 0.6803 | 11.0 | 4125 | 0.6907 | 0.6817 |
| 0.64 | 12.0 | 4500 | 0.6890 | 0.675 |
| 0.6467 | 13.0 | 4875 | 0.7095 | 0.67 |
| 0.6428 | 14.0 | 5250 | 0.6543 | 0.7083 |
| 0.6389 | 15.0 | 5625 | 0.5890 | 0.7383 |
| 0.5885 | 16.0 | 6000 | 0.5874 | 0.7383 |
| 0.5689 | 17.0 | 6375 | 0.6828 | 0.705 |
| 0.5988 | 18.0 | 6750 | 0.6153 | 0.74 |
| 0.5869 | 19.0 | 7125 | 0.5556 | 0.745 |
| 0.5829 | 20.0 | 7500 | 0.5816 | 0.7417 |
| 0.5202 | 21.0 | 7875 | 0.6299 | 0.7267 |
| 0.4671 | 22.0 | 8250 | 0.5955 | 0.7383 |
| 0.4713 | 23.0 | 8625 | 0.5489 | 0.7783 |
| 0.4814 | 24.0 | 9000 | 0.6063 | 0.76 |
| 0.4578 | 25.0 | 9375 | 0.6548 | 0.7367 |
| 0.4226 | 26.0 | 9750 | 0.5459 | 0.75 |
| 0.349 | 27.0 | 10125 | 0.6223 | 0.76 |
| 0.3499 | 28.0 | 10500 | 0.5682 | 0.7817 |
| 0.2869 | 29.0 | 10875 | 0.7135 | 0.7717 |
| 0.3419 | 30.0 | 11250 | 0.6094 | 0.7833 |
| 0.3402 | 31.0 | 11625 | 0.6473 | 0.785 |
| 0.3025 | 32.0 | 12000 | 0.6500 | 0.7783 |
| 0.2278 | 33.0 | 12375 | 0.7439 | 0.7633 |
| 0.2211 | 34.0 | 12750 | 0.7227 | 0.775 |
| 0.1813 | 35.0 | 13125 | 0.7187 | 0.8033 |
| 0.1887 | 36.0 | 13500 | 0.7980 | 0.7883 |
| 0.2308 | 37.0 | 13875 | 0.8180 | 0.8 |
| 0.1362 | 38.0 | 14250 | 0.8499 | 0.7867 |
| 0.1204 | 39.0 | 14625 | 0.8914 | 0.8033 |
| 0.1182 | 40.0 | 15000 | 0.9026 | 0.7933 |
| 0.1271 | 41.0 | 15375 | 1.1021 | 0.775 |
| 0.0646 | 42.0 | 15750 | 1.1489 | 0.7967 |
| 0.0428 | 43.0 | 16125 | 1.2387 | 0.8067 |
| 0.0277 | 44.0 | 16500 | 1.2320 | 0.81 |
| 0.0276 | 45.0 | 16875 | 1.3879 | 0.79 |
| 0.0246 | 46.0 | 17250 | 1.4881 | 0.8033 |
| 0.0344 | 47.0 | 17625 | 1.5278 | 0.7983 |
| 0.006 | 48.0 | 18000 | 1.5757 | 0.8017 |
| 0.0048 | 49.0 | 18375 | 1.6617 | 0.8033 |
| 0.0042 | 50.0 | 18750 | 1.7051 | 0.7983 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_sgd_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_sgd_00001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1301
- Accuracy: 0.4017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
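As a rough illustration of what the `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` implies for this run (18750 total steps at a base learning rate of 1e-05, per the table below), here is a dependency-free sketch; the function name and the closed-form shape are assumptions based on the standard linear-warmup/linear-decay schedule, not code from this repository:

```python
def linear_warmup_linear_decay(step, total_steps=18750, base_lr=1e-05, warmup_ratio=0.1):
    """Learning rate at `step`: linear warmup to base_lr, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 1875 steps of warmup here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Linear decay from base_lr down to 0 over the remaining steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

peak = linear_warmup_linear_decay(1875)   # end of warmup: full base LR
end = linear_warmup_linear_decay(18750)   # final step: decayed to 0
```

With `num_epochs: 50` and 375 steps per epoch, warmup spans the whole first 5 epochs, which is worth keeping in mind when reading the early rows of the results table.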
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2159 | 1.0 | 375 | 1.3104 | 0.3133 |
| 1.2415 | 2.0 | 750 | 1.3020 | 0.3233 |
| 1.2057 | 3.0 | 1125 | 1.2939 | 0.3233 |
| 1.176 | 4.0 | 1500 | 1.2863 | 0.3267 |
| 1.2191 | 5.0 | 1875 | 1.2790 | 0.3267 |
| 1.1863 | 6.0 | 2250 | 1.2719 | 0.3333 |
| 1.2037 | 7.0 | 2625 | 1.2651 | 0.34 |
| 1.177 | 8.0 | 3000 | 1.2586 | 0.3483 |
| 1.1576 | 9.0 | 3375 | 1.2521 | 0.35 |
| 1.0865 | 10.0 | 3750 | 1.2459 | 0.3517 |
| 1.1578 | 11.0 | 4125 | 1.2399 | 0.3533 |
| 1.1516 | 12.0 | 4500 | 1.2341 | 0.355 |
| 1.1216 | 13.0 | 4875 | 1.2282 | 0.355 |
| 1.1365 | 14.0 | 5250 | 1.2228 | 0.3583 |
| 1.1282 | 15.0 | 5625 | 1.2175 | 0.3583 |
| 1.1187 | 16.0 | 6000 | 1.2123 | 0.3633 |
| 1.1048 | 17.0 | 6375 | 1.2074 | 0.365 |
| 1.1548 | 18.0 | 6750 | 1.2025 | 0.365 |
| 1.1271 | 19.0 | 7125 | 1.1978 | 0.3683 |
| 1.1003 | 20.0 | 7500 | 1.1934 | 0.3717 |
| 1.0771 | 21.0 | 7875 | 1.1891 | 0.3733 |
| 1.0833 | 22.0 | 8250 | 1.1849 | 0.3767 |
| 1.1002 | 23.0 | 8625 | 1.1809 | 0.3783 |
| 1.0994 | 24.0 | 9000 | 1.1772 | 0.3833 |
| 1.0715 | 25.0 | 9375 | 1.1735 | 0.385 |
| 1.1029 | 26.0 | 9750 | 1.1700 | 0.3867 |
| 1.1056 | 27.0 | 10125 | 1.1666 | 0.3867 |
| 1.022 | 28.0 | 10500 | 1.1633 | 0.3883 |
| 1.0343 | 29.0 | 10875 | 1.1602 | 0.3867 |
| 1.0325 | 30.0 | 11250 | 1.1573 | 0.3883 |
| 1.0378 | 31.0 | 11625 | 1.1546 | 0.3883 |
| 1.0659 | 32.0 | 12000 | 1.1519 | 0.3867 |
| 1.0282 | 33.0 | 12375 | 1.1495 | 0.3867 |
| 1.0519 | 34.0 | 12750 | 1.1472 | 0.3883 |
| 1.0399 | 35.0 | 13125 | 1.1451 | 0.3883 |
| 1.0632 | 36.0 | 13500 | 1.1430 | 0.39 |
| 1.015 | 37.0 | 13875 | 1.1411 | 0.39 |
| 1.0714 | 38.0 | 14250 | 1.1394 | 0.39 |
| 0.9921 | 39.0 | 14625 | 1.1379 | 0.3917 |
| 1.0391 | 40.0 | 15000 | 1.1365 | 0.3917 |
| 1.0121 | 41.0 | 15375 | 1.1352 | 0.395 |
| 1.0675 | 42.0 | 15750 | 1.1341 | 0.3967 |
| 1.0815 | 43.0 | 16125 | 1.1331 | 0.3967 |
| 1.0054 | 44.0 | 16500 | 1.1322 | 0.3967 |
| 1.0674 | 45.0 | 16875 | 1.1316 | 0.3983 |
| 1.0115 | 46.0 | 17250 | 1.1310 | 0.4 |
| 1.0426 | 47.0 | 17625 | 1.1306 | 0.4017 |
| 1.0416 | 48.0 | 18000 | 1.1303 | 0.4017 |
| 1.0297 | 49.0 | 18375 | 1.1302 | 0.4017 |
| 1.0431 | 50.0 | 18750 | 1.1301 | 0.4017 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
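The label list above maps to class indices in order. A minimal sketch of decoding a prediction, assuming the checkpoint follows the usual `id2label` convention (this is an illustration, not code shipped with the model):

```python
labels = ["abnormal_sperm", "non-sperm", "normal_sperm"]
id2label = {i: name for i, name in enumerate(labels)}

def decode(logits):
    """Return the label whose logit is largest (argmax over the 3 classes)."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

decode([0.1, 2.3, -0.4])  # -> "non-sperm"
```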
hkivancoral/smids_5x_beit_base_rms_001_fold5
|
# smids_5x_beit_base_rms_001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3262
- Accuracy: 0.8217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.931 | 1.0 | 375 | 0.8668 | 0.5083 |
| 0.8597 | 2.0 | 750 | 0.7892 | 0.6017 |
| 0.7587 | 3.0 | 1125 | 0.7350 | 0.6383 |
| 0.7046 | 4.0 | 1500 | 0.7282 | 0.65 |
| 0.6817 | 5.0 | 1875 | 0.7027 | 0.6567 |
| 0.6292 | 6.0 | 2250 | 0.6987 | 0.6683 |
| 0.6024 | 7.0 | 2625 | 0.5984 | 0.7267 |
| 0.6528 | 8.0 | 3000 | 0.5956 | 0.7267 |
| 0.5546 | 9.0 | 3375 | 0.5629 | 0.765 |
| 0.4767 | 10.0 | 3750 | 0.5576 | 0.75 |
| 0.4967 | 11.0 | 4125 | 0.4703 | 0.8017 |
| 0.3904 | 12.0 | 4500 | 0.4630 | 0.8083 |
| 0.395 | 13.0 | 4875 | 0.4837 | 0.8 |
| 0.4102 | 14.0 | 5250 | 0.4887 | 0.815 |
| 0.4425 | 15.0 | 5625 | 0.4472 | 0.8317 |
| 0.269 | 16.0 | 6000 | 0.4817 | 0.8133 |
| 0.3554 | 17.0 | 6375 | 0.4030 | 0.8483 |
| 0.3667 | 18.0 | 6750 | 0.4187 | 0.83 |
| 0.2943 | 19.0 | 7125 | 0.4575 | 0.8333 |
| 0.2361 | 20.0 | 7500 | 0.4670 | 0.8317 |
| 0.2672 | 21.0 | 7875 | 0.4447 | 0.8383 |
| 0.2065 | 22.0 | 8250 | 0.4671 | 0.8267 |
| 0.3036 | 23.0 | 8625 | 0.5659 | 0.8167 |
| 0.1998 | 24.0 | 9000 | 0.5359 | 0.8233 |
| 0.1813 | 25.0 | 9375 | 0.4898 | 0.85 |
| 0.16 | 26.0 | 9750 | 0.5701 | 0.835 |
| 0.1617 | 27.0 | 10125 | 0.5423 | 0.8333 |
| 0.1338 | 28.0 | 10500 | 0.5644 | 0.8483 |
| 0.1411 | 29.0 | 10875 | 0.5853 | 0.8267 |
| 0.0859 | 30.0 | 11250 | 0.6605 | 0.8217 |
| 0.101 | 31.0 | 11625 | 0.7234 | 0.8317 |
| 0.0828 | 32.0 | 12000 | 0.6563 | 0.8367 |
| 0.1039 | 33.0 | 12375 | 0.7913 | 0.82 |
| 0.0772 | 34.0 | 12750 | 0.8613 | 0.82 |
| 0.0737 | 35.0 | 13125 | 0.7477 | 0.8283 |
| 0.0714 | 36.0 | 13500 | 0.9064 | 0.83 |
| 0.0337 | 37.0 | 13875 | 0.8383 | 0.8367 |
| 0.094 | 38.0 | 14250 | 0.9398 | 0.8233 |
| 0.0203 | 39.0 | 14625 | 0.9121 | 0.8267 |
| 0.0289 | 40.0 | 15000 | 1.0830 | 0.8283 |
| 0.0242 | 41.0 | 15375 | 1.1069 | 0.825 |
| 0.0154 | 42.0 | 15750 | 1.1781 | 0.8117 |
| 0.009 | 43.0 | 16125 | 1.1755 | 0.8167 |
| 0.0144 | 44.0 | 16500 | 1.1730 | 0.8233 |
| 0.0239 | 45.0 | 16875 | 1.4682 | 0.8083 |
| 0.0221 | 46.0 | 17250 | 1.3105 | 0.82 |
| 0.0362 | 47.0 | 17625 | 1.3368 | 0.8317 |
| 0.0008 | 48.0 | 18000 | 1.2965 | 0.8317 |
| 0.0038 | 49.0 | 18375 | 1.2931 | 0.8317 |
| 0.0178 | 50.0 | 18750 | 1.3262 | 0.8217 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
aaa12963337/msi-dinat-mini-pretrain
|
# msi-dinat-mini-pretrain
This model is a fine-tuned version of [shi-labs/dinat-mini-in1k-224](https://huggingface.co/shi-labs/dinat-mini-in1k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5378
- Accuracy: 0.8937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
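The `total_train_batch_size` listed above follows from the per-device batch size and gradient accumulation; a one-line sanity check of that arithmetic:

```python
train_batch_size = 16
gradient_accumulation_steps = 4
# Effective batch size per optimizer step: 16 * 4 = 64,
# matching the reported total_train_batch_size.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```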
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1041 | 1.0 | 1562 | 0.4092 | 0.8756 |
| 0.0527 | 2.0 | 3125 | 0.6298 | 0.8765 |
| 0.0611 | 3.0 | 4686 | 0.5378 | 0.8937 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"adi",
"back",
"deb",
"lym",
"muc",
"mus",
"norm",
"str",
"tum"
] |
aaa12963337/msi-resnet-18-pretrain
|
# msi-resnet-18-pretrain
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4121
- Accuracy: 0.8675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1724 | 1.0 | 1562 | 0.3597 | 0.8806 |
| 0.0543 | 2.0 | 3125 | 0.3707 | 0.8875 |
| 0.0834 | 3.0 | 4686 | 0.4121 | 0.8675 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"adi",
"back",
"deb",
"lym",
"muc",
"mus",
"norm",
"str",
"tum"
] |
aaa12963337/msi-dinat-mini
|
# msi-dinat-mini
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8735
- Accuracy: 0.6308
- F1: 0.4532
- Precision: 0.6338
- Recall: 0.3526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5414 | 1.0 | 2015 | 0.7584 | 0.5874 | 0.3960 | 0.5427 | 0.3117 |
| 0.4715 | 2.0 | 4031 | 0.7695 | 0.6208 | 0.4593 | 0.6021 | 0.3712 |
| 0.4159 | 3.0 | 6047 | 0.7922 | 0.6230 | 0.4637 | 0.6056 | 0.3757 |
| 0.3774 | 4.0 | 8063 | 0.8166 | 0.6286 | 0.4589 | 0.6235 | 0.3630 |
| 0.3635 | 5.0 | 10078 | 0.8123 | 0.6349 | 0.4889 | 0.6225 | 0.4026 |
| 0.3471 | 6.0 | 12094 | 0.8481 | 0.6265 | 0.4575 | 0.6186 | 0.3630 |
| 0.3616 | 7.0 | 14110 | 0.8605 | 0.6284 | 0.4514 | 0.6279 | 0.3524 |
| 0.3517 | 8.0 | 16126 | 0.8661 | 0.6329 | 0.4600 | 0.6356 | 0.3604 |
| 0.3476 | 9.0 | 18141 | 0.8631 | 0.6330 | 0.4619 | 0.6346 | 0.3631 |
| 0.3469 | 10.0 | 20150 | 0.8735 | 0.6308 | 0.4532 | 0.6338 | 0.3526 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"0",
"1"
] |
aaa12963337/msi-resnet-18
|
# msi-resnet-18
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6854
- Accuracy: 0.6337
- F1: 0.5299
- Precision: 0.5977
- Recall: 0.4760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.499 | 1.0 | 2015 | 0.7028 | 0.6189 | 0.4730 | 0.5911 | 0.3942 |
| 0.4738 | 2.0 | 4031 | 0.7003 | 0.6268 | 0.4981 | 0.5979 | 0.4268 |
| 0.4788 | 3.0 | 6047 | 0.7195 | 0.6148 | 0.4517 | 0.5906 | 0.3657 |
| 0.4523 | 4.0 | 8060 | 0.6854 | 0.6337 | 0.5299 | 0.5977 | 0.4760 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"0",
"1"
] |
dima806/face_emotions_image_detection
|
Predicts face emotion based on facial image.
See https://www.kaggle.com/code/dima806/face-emotions-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
Ahegao 0.9738 0.9919 0.9828 1611
Angry 0.8439 0.6580 0.7394 1611
Happy 0.8939 0.9261 0.9098 1611
Neutral 0.6056 0.7635 0.6755 1611
Sad 0.6661 0.5140 0.5802 1611
Surprise 0.7704 0.8733 0.8186 1610
accuracy 0.7878 9665
macro avg 0.7923 0.7878 0.7844 9665
weighted avg 0.7923 0.7878 0.7844 9665
```
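In the report above, each per-class F1 score is the harmonic mean of that class's precision and recall. A small dependency-free check against one row (Ahegao: precision 0.9738, recall 0.9919):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

round(f1(0.9738, 0.9919), 4)  # matches the reported 0.9828
```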
|
[
"ahegao",
"angry",
"happy",
"neutral",
"sad",
"surprise"
] |
hkivancoral/smids_5x_beit_base_rms_0001_fold1
|
# smids_5x_beit_base_rms_0001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9099
- Accuracy: 0.9032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2877 | 1.0 | 376 | 0.3734 | 0.8397 |
| 0.263 | 2.0 | 752 | 0.2897 | 0.8898 |
| 0.214 | 3.0 | 1128 | 0.4392 | 0.8815 |
| 0.1401 | 4.0 | 1504 | 0.3640 | 0.8865 |
| 0.0913 | 5.0 | 1880 | 0.4071 | 0.8982 |
| 0.0792 | 6.0 | 2256 | 0.5796 | 0.8765 |
| 0.0326 | 7.0 | 2632 | 0.5828 | 0.8781 |
| 0.0613 | 8.0 | 3008 | 0.5485 | 0.8965 |
| 0.0614 | 9.0 | 3384 | 0.5394 | 0.8815 |
| 0.0378 | 10.0 | 3760 | 0.5802 | 0.8932 |
| 0.016 | 11.0 | 4136 | 0.5517 | 0.8998 |
| 0.1114 | 12.0 | 4512 | 0.5851 | 0.8798 |
| 0.0304 | 13.0 | 4888 | 0.5301 | 0.8731 |
| 0.0236 | 14.0 | 5264 | 0.6243 | 0.8965 |
| 0.0147 | 15.0 | 5640 | 0.5697 | 0.8998 |
| 0.0009 | 16.0 | 6016 | 0.5289 | 0.9098 |
| 0.003 | 17.0 | 6392 | 0.6450 | 0.8932 |
| 0.045 | 18.0 | 6768 | 0.7662 | 0.8915 |
| 0.0003 | 19.0 | 7144 | 0.6709 | 0.8898 |
| 0.0083 | 20.0 | 7520 | 0.7941 | 0.8865 |
| 0.0011 | 21.0 | 7896 | 0.8204 | 0.8831 |
| 0.0265 | 22.0 | 8272 | 0.7663 | 0.8798 |
| 0.0065 | 23.0 | 8648 | 0.7543 | 0.8865 |
| 0.0005 | 24.0 | 9024 | 0.8605 | 0.8881 |
| 0.0223 | 25.0 | 9400 | 0.7879 | 0.8815 |
| 0.0093 | 26.0 | 9776 | 0.8444 | 0.8748 |
| 0.0004 | 27.0 | 10152 | 0.7708 | 0.8781 |
| 0.0001 | 28.0 | 10528 | 0.7477 | 0.8948 |
| 0.0051 | 29.0 | 10904 | 0.7865 | 0.8831 |
| 0.0131 | 30.0 | 11280 | 0.8049 | 0.9098 |
| 0.0 | 31.0 | 11656 | 0.8832 | 0.8915 |
| 0.0001 | 32.0 | 12032 | 0.8723 | 0.8965 |
| 0.0007 | 33.0 | 12408 | 1.0043 | 0.8865 |
| 0.0001 | 34.0 | 12784 | 0.9137 | 0.8881 |
| 0.0019 | 35.0 | 13160 | 0.7356 | 0.9048 |
| 0.0 | 36.0 | 13536 | 0.7048 | 0.9032 |
| 0.0 | 37.0 | 13912 | 0.8706 | 0.9015 |
| 0.0 | 38.0 | 14288 | 0.7699 | 0.9032 |
| 0.0 | 39.0 | 14664 | 0.8383 | 0.8982 |
| 0.0 | 40.0 | 15040 | 0.8533 | 0.9048 |
| 0.0008 | 41.0 | 15416 | 0.8710 | 0.9015 |
| 0.0001 | 42.0 | 15792 | 0.9271 | 0.8898 |
| 0.0 | 43.0 | 16168 | 0.9308 | 0.8982 |
| 0.0 | 44.0 | 16544 | 0.9577 | 0.8982 |
| 0.0 | 45.0 | 16920 | 0.9412 | 0.8898 |
| 0.0033 | 46.0 | 17296 | 0.9423 | 0.8998 |
| 0.0 | 47.0 | 17672 | 0.9136 | 0.9048 |
| 0.0 | 48.0 | 18048 | 0.9005 | 0.9065 |
| 0.0001 | 49.0 | 18424 | 0.9138 | 0.9048 |
| 0.0023 | 50.0 | 18800 | 0.9099 | 0.9032 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_00001_fold1
|
# smids_5x_beit_base_rms_00001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0277
- Accuracy: 0.8965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.169 | 1.0 | 376 | 0.2845 | 0.8848 |
| 0.1527 | 2.0 | 752 | 0.2709 | 0.9132 |
| 0.1446 | 3.0 | 1128 | 0.3421 | 0.8998 |
| 0.0485 | 4.0 | 1504 | 0.4474 | 0.9065 |
| 0.0159 | 5.0 | 1880 | 0.4847 | 0.8965 |
| 0.0162 | 6.0 | 2256 | 0.6046 | 0.8982 |
| 0.0753 | 7.0 | 2632 | 0.6419 | 0.8898 |
| 0.0455 | 8.0 | 3008 | 0.7218 | 0.8965 |
| 0.0437 | 9.0 | 3384 | 0.8405 | 0.8815 |
| 0.007 | 10.0 | 3760 | 0.7349 | 0.9015 |
| 0.0254 | 11.0 | 4136 | 0.8461 | 0.8915 |
| 0.0214 | 12.0 | 4512 | 0.7638 | 0.8898 |
| 0.0283 | 13.0 | 4888 | 0.8735 | 0.8948 |
| 0.0331 | 14.0 | 5264 | 0.8577 | 0.8932 |
| 0.0029 | 15.0 | 5640 | 0.9013 | 0.8982 |
| 0.0041 | 16.0 | 6016 | 0.9992 | 0.8698 |
| 0.0007 | 17.0 | 6392 | 0.9147 | 0.8865 |
| 0.0019 | 18.0 | 6768 | 0.9339 | 0.8915 |
| 0.0002 | 19.0 | 7144 | 0.8625 | 0.8982 |
| 0.0341 | 20.0 | 7520 | 0.9287 | 0.8815 |
| 0.0 | 21.0 | 7896 | 1.0011 | 0.8831 |
| 0.0 | 22.0 | 8272 | 0.8805 | 0.8948 |
| 0.0028 | 23.0 | 8648 | 0.9347 | 0.8965 |
| 0.0001 | 24.0 | 9024 | 0.9930 | 0.8965 |
| 0.001 | 25.0 | 9400 | 1.0054 | 0.8982 |
| 0.029 | 26.0 | 9776 | 0.8994 | 0.8932 |
| 0.0028 | 27.0 | 10152 | 0.9209 | 0.8865 |
| 0.0009 | 28.0 | 10528 | 0.9409 | 0.8998 |
| 0.0018 | 29.0 | 10904 | 1.0441 | 0.8848 |
| 0.0163 | 30.0 | 11280 | 0.9017 | 0.9032 |
| 0.0 | 31.0 | 11656 | 0.8554 | 0.9015 |
| 0.0 | 32.0 | 12032 | 0.8702 | 0.9048 |
| 0.0001 | 33.0 | 12408 | 0.9551 | 0.8965 |
| 0.0 | 34.0 | 12784 | 0.9265 | 0.8982 |
| 0.0004 | 35.0 | 13160 | 1.0253 | 0.8865 |
| 0.0044 | 36.0 | 13536 | 0.9098 | 0.8948 |
| 0.0003 | 37.0 | 13912 | 0.9290 | 0.9032 |
| 0.0 | 38.0 | 14288 | 1.0072 | 0.8948 |
| 0.0 | 39.0 | 14664 | 1.0677 | 0.8948 |
| 0.0 | 40.0 | 15040 | 1.0064 | 0.8982 |
| 0.0 | 41.0 | 15416 | 0.9891 | 0.8982 |
| 0.0 | 42.0 | 15792 | 1.0628 | 0.8948 |
| 0.0 | 43.0 | 16168 | 1.0396 | 0.8915 |
| 0.0 | 44.0 | 16544 | 1.0033 | 0.8982 |
| 0.0 | 45.0 | 16920 | 1.0214 | 0.8998 |
| 0.0033 | 46.0 | 17296 | 1.0498 | 0.8898 |
| 0.0 | 47.0 | 17672 | 1.0375 | 0.8932 |
| 0.0 | 48.0 | 18048 | 1.0305 | 0.8898 |
| 0.0 | 49.0 | 18424 | 1.0285 | 0.8948 |
| 0.0028 | 50.0 | 18800 | 1.0277 | 0.8965 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_001_fold1
|
# smids_5x_deit_tiny_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6720
- Accuracy: 0.7563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9519 | 1.0 | 376 | 0.9699 | 0.4808 |
| 0.8617 | 2.0 | 752 | 0.8618 | 0.5392 |
| 0.8149 | 3.0 | 1128 | 0.8048 | 0.5893 |
| 0.8075 | 4.0 | 1504 | 0.7999 | 0.5609 |
| 0.9135 | 5.0 | 1880 | 0.7865 | 0.6160 |
| 0.783 | 6.0 | 2256 | 0.8586 | 0.5893 |
| 0.725 | 7.0 | 2632 | 0.8054 | 0.6227 |
| 0.6972 | 8.0 | 3008 | 0.7248 | 0.6444 |
| 0.72 | 9.0 | 3384 | 0.7167 | 0.6661 |
| 0.7292 | 10.0 | 3760 | 0.7657 | 0.6795 |
| 0.645 | 11.0 | 4136 | 0.6894 | 0.6861 |
| 0.7059 | 12.0 | 4512 | 0.7066 | 0.6928 |
| 0.7086 | 13.0 | 4888 | 0.7125 | 0.6995 |
| 0.6705 | 14.0 | 5264 | 0.6700 | 0.7078 |
| 0.6566 | 15.0 | 5640 | 0.6881 | 0.6861 |
| 0.5734 | 16.0 | 6016 | 0.7052 | 0.6694 |
| 0.5199 | 17.0 | 6392 | 0.7378 | 0.6628 |
| 0.659 | 18.0 | 6768 | 0.6486 | 0.7112 |
| 0.6288 | 19.0 | 7144 | 0.7161 | 0.6528 |
| 0.566 | 20.0 | 7520 | 0.6171 | 0.7212 |
| 0.6474 | 21.0 | 7896 | 0.6184 | 0.7262 |
| 0.5542 | 22.0 | 8272 | 0.6826 | 0.6861 |
| 0.5759 | 23.0 | 8648 | 0.6131 | 0.7229 |
| 0.6266 | 24.0 | 9024 | 0.6647 | 0.7112 |
| 0.6436 | 25.0 | 9400 | 0.6298 | 0.7078 |
| 0.5378 | 26.0 | 9776 | 0.6147 | 0.7229 |
| 0.534 | 27.0 | 10152 | 0.6258 | 0.7179 |
| 0.4794 | 28.0 | 10528 | 0.6515 | 0.7095 |
| 0.5282 | 29.0 | 10904 | 0.6735 | 0.6912 |
| 0.4828 | 30.0 | 11280 | 0.6279 | 0.7179 |
| 0.5597 | 31.0 | 11656 | 0.6003 | 0.7295 |
| 0.5931 | 32.0 | 12032 | 0.6323 | 0.7362 |
| 0.4604 | 33.0 | 12408 | 0.6185 | 0.7446 |
| 0.473 | 34.0 | 12784 | 0.6171 | 0.7396 |
| 0.5357 | 35.0 | 13160 | 0.6139 | 0.7279 |
| 0.5273 | 36.0 | 13536 | 0.6022 | 0.7379 |
| 0.446 | 37.0 | 13912 | 0.6164 | 0.7362 |
| 0.5051 | 38.0 | 14288 | 0.6160 | 0.7329 |
| 0.5127 | 39.0 | 14664 | 0.6147 | 0.7629 |
| 0.5424 | 40.0 | 15040 | 0.5988 | 0.7579 |
| 0.4672 | 41.0 | 15416 | 0.6152 | 0.7613 |
| 0.4259 | 42.0 | 15792 | 0.6298 | 0.7429 |
| 0.4313 | 43.0 | 16168 | 0.6086 | 0.7462 |
| 0.4716 | 44.0 | 16544 | 0.6307 | 0.7496 |
| 0.4303 | 45.0 | 16920 | 0.6176 | 0.7513 |
| 0.3889 | 46.0 | 17296 | 0.6198 | 0.7479 |
| 0.4191 | 47.0 | 17672 | 0.6340 | 0.7563 |
| 0.3752 | 48.0 | 18048 | 0.6420 | 0.7596 |
| 0.3744 | 49.0 | 18424 | 0.6614 | 0.7529 |
| 0.3137 | 50.0 | 18800 | 0.6720 | 0.7563 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_001_fold2
|
# smids_5x_deit_tiny_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4964
- Accuracy: 0.8486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0328 | 1.0 | 375 | 0.9569 | 0.4459 |
| 0.8929 | 2.0 | 750 | 0.8978 | 0.5374 |
| 0.8224 | 3.0 | 1125 | 0.7888 | 0.5574 |
| 0.8327 | 4.0 | 1500 | 0.8571 | 0.5641 |
| 0.7266 | 5.0 | 1875 | 1.1729 | 0.5025 |
| 0.6507 | 6.0 | 2250 | 0.7875 | 0.6456 |
| 0.6983 | 7.0 | 2625 | 0.6489 | 0.6972 |
| 0.6312 | 8.0 | 3000 | 0.7326 | 0.6789 |
| 0.641 | 9.0 | 3375 | 0.5505 | 0.7488 |
| 0.6354 | 10.0 | 3750 | 0.5766 | 0.7354 |
| 0.5813 | 11.0 | 4125 | 0.4910 | 0.7920 |
| 0.6084 | 12.0 | 4500 | 0.5458 | 0.7720 |
| 0.4944 | 13.0 | 4875 | 0.4657 | 0.8020 |
| 0.5555 | 14.0 | 5250 | 0.5401 | 0.7621 |
| 0.526 | 15.0 | 5625 | 0.4958 | 0.7837 |
| 0.3751 | 16.0 | 6000 | 0.4911 | 0.8037 |
| 0.4264 | 17.0 | 6375 | 0.5204 | 0.7837 |
| 0.4312 | 18.0 | 6750 | 0.5011 | 0.7953 |
| 0.3686 | 19.0 | 7125 | 0.4979 | 0.7970 |
| 0.3954 | 20.0 | 7500 | 0.4812 | 0.8120 |
| 0.3782 | 21.0 | 7875 | 0.4706 | 0.8120 |
| 0.3544 | 22.0 | 8250 | 0.4461 | 0.8353 |
| 0.3759 | 23.0 | 8625 | 0.4516 | 0.8319 |
| 0.3473 | 24.0 | 9000 | 0.4332 | 0.8270 |
| 0.2572 | 25.0 | 9375 | 0.5951 | 0.8203 |
| 0.3628 | 26.0 | 9750 | 0.5630 | 0.7887 |
| 0.2737 | 27.0 | 10125 | 0.5304 | 0.8336 |
| 0.2272 | 28.0 | 10500 | 0.5597 | 0.8319 |
| 0.2226 | 29.0 | 10875 | 0.5680 | 0.8419 |
| 0.1778 | 30.0 | 11250 | 0.6295 | 0.8170 |
| 0.2382 | 31.0 | 11625 | 0.6223 | 0.8270 |
| 0.1721 | 32.0 | 12000 | 0.6049 | 0.8469 |
| 0.219 | 33.0 | 12375 | 0.5556 | 0.8569 |
| 0.0972 | 34.0 | 12750 | 0.6389 | 0.8502 |
| 0.1781 | 35.0 | 13125 | 0.7873 | 0.8253 |
| 0.1052 | 36.0 | 13500 | 0.8815 | 0.8236 |
| 0.1087 | 37.0 | 13875 | 0.7444 | 0.8453 |
| 0.09 | 38.0 | 14250 | 0.9779 | 0.8253 |
| 0.0859 | 39.0 | 14625 | 0.8817 | 0.8386 |
| 0.0521 | 40.0 | 15000 | 0.9849 | 0.8453 |
| 0.081 | 41.0 | 15375 | 1.0555 | 0.8203 |
| 0.0225 | 42.0 | 15750 | 1.1081 | 0.8303 |
| 0.0521 | 43.0 | 16125 | 1.2294 | 0.8253 |
| 0.0259 | 44.0 | 16500 | 1.3035 | 0.8336 |
| 0.0403 | 45.0 | 16875 | 1.3613 | 0.8253 |
| 0.0225 | 46.0 | 17250 | 1.4500 | 0.8103 |
| 0.0235 | 47.0 | 17625 | 1.5096 | 0.8270 |
| 0.0002 | 48.0 | 18000 | 1.5022 | 0.8469 |
| 0.0101 | 49.0 | 18375 | 1.4968 | 0.8469 |
| 0.0029 | 50.0 | 18750 | 1.4964 | 0.8486 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_0001_fold2
|
# smids_5x_beit_base_rms_0001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9869
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3386 | 1.0 | 375 | 0.2631 | 0.8902 |
| 0.2769 | 2.0 | 750 | 0.2812 | 0.8852 |
| 0.1948 | 3.0 | 1125 | 0.4161 | 0.8686 |
| 0.1489 | 4.0 | 1500 | 0.3316 | 0.8852 |
| 0.1015 | 5.0 | 1875 | 0.3966 | 0.8835 |
| 0.0659 | 6.0 | 2250 | 0.5521 | 0.8686 |
| 0.0987 | 7.0 | 2625 | 0.4706 | 0.8852 |
| 0.0304 | 8.0 | 3000 | 0.6100 | 0.8835 |
| 0.0177 | 9.0 | 3375 | 0.5599 | 0.8835 |
| 0.0365 | 10.0 | 3750 | 0.5970 | 0.8902 |
| 0.07 | 11.0 | 4125 | 0.5587 | 0.8869 |
| 0.025 | 12.0 | 4500 | 0.6283 | 0.8885 |
| 0.013 | 13.0 | 4875 | 0.4540 | 0.9035 |
| 0.0155 | 14.0 | 5250 | 0.6593 | 0.8869 |
| 0.0612 | 15.0 | 5625 | 0.6571 | 0.8935 |
| 0.0058 | 16.0 | 6000 | 0.6333 | 0.8835 |
| 0.0564 | 17.0 | 6375 | 0.5490 | 0.8918 |
| 0.0204 | 18.0 | 6750 | 0.7225 | 0.8985 |
| 0.0128 | 19.0 | 7125 | 0.4844 | 0.9135 |
| 0.0241 | 20.0 | 7500 | 0.5085 | 0.9018 |
| 0.0042 | 21.0 | 7875 | 0.5500 | 0.9135 |
| 0.0209 | 22.0 | 8250 | 0.6987 | 0.8869 |
| 0.0277 | 23.0 | 8625 | 0.7227 | 0.8902 |
| 0.027 | 24.0 | 9000 | 0.8023 | 0.8769 |
| 0.0061 | 25.0 | 9375 | 0.7219 | 0.8985 |
| 0.0004 | 26.0 | 9750 | 0.7303 | 0.8935 |
| 0.0002 | 27.0 | 10125 | 0.6194 | 0.9118 |
| 0.0002 | 28.0 | 10500 | 0.7358 | 0.9085 |
| 0.0068 | 29.0 | 10875 | 0.7598 | 0.9002 |
| 0.0002 | 30.0 | 11250 | 0.7703 | 0.8935 |
| 0.0136 | 31.0 | 11625 | 0.7951 | 0.8902 |
| 0.0053 | 32.0 | 12000 | 0.8891 | 0.8918 |
| 0.0038 | 33.0 | 12375 | 0.7625 | 0.9018 |
| 0.0002 | 34.0 | 12750 | 0.8776 | 0.9052 |
| 0.0 | 35.0 | 13125 | 0.9210 | 0.9002 |
| 0.0195 | 36.0 | 13500 | 0.7510 | 0.9151 |
| 0.0008 | 37.0 | 13875 | 0.7794 | 0.9135 |
| 0.0007 | 38.0 | 14250 | 0.8315 | 0.9085 |
| 0.0005 | 39.0 | 14625 | 0.7854 | 0.9151 |
| 0.0033 | 40.0 | 15000 | 0.8459 | 0.9101 |
| 0.0001 | 41.0 | 15375 | 0.9023 | 0.9002 |
| 0.0027 | 42.0 | 15750 | 1.0108 | 0.9018 |
| 0.0026 | 43.0 | 16125 | 1.0264 | 0.8952 |
| 0.0026 | 44.0 | 16500 | 0.9790 | 0.9035 |
| 0.0027 | 45.0 | 16875 | 0.9445 | 0.9101 |
| 0.0 | 46.0 | 17250 | 0.9135 | 0.9185 |
| 0.0057 | 47.0 | 17625 | 0.9222 | 0.9085 |
| 0.0 | 48.0 | 18000 | 0.9390 | 0.9085 |
| 0.0052 | 49.0 | 18375 | 0.9876 | 0.9052 |
| 0.0025 | 50.0 | 18750 | 0.9869 | 0.9068 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_00001_fold2
|
# smids_5x_beit_base_rms_00001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9950
- Accuracy: 0.9052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
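The schedule listed above (linear decay with a 0.1 warmup ratio over 50 epochs of 375 steps) can be sketched in plain Python. This is a minimal sketch assuming the usual behaviour of a linear-with-warmup scheduler (ramp from 0 to the base learning rate over the warmup steps, then decay linearly to 0); the function name and defaults are illustrative, not part of the card.

```python
# Minimal sketch (assumed behaviour of a linear lr scheduler with warmup):
# lr ramps 0 -> base_lr over the warmup steps, then decays linearly to 0.
def linear_lr_with_warmup(step, base_lr=1e-5, total_steps=18750, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)  # 1875 for 50 epochs x 375 steps
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr_with_warmup(0))       # 0.0 at the first step
print(linear_lr_with_warmup(1875))    # peak 1e-05 at the end of warmup
print(linear_lr_with_warmup(18750))   # 0.0 at the final step
```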
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2499 | 1.0 | 375 | 0.2304 | 0.9185 |
| 0.1775 | 2.0 | 750 | 0.2550 | 0.9018 |
| 0.1217 | 3.0 | 1125 | 0.4013 | 0.8869 |
| 0.0433 | 4.0 | 1500 | 0.5189 | 0.8819 |
| 0.0358 | 5.0 | 1875 | 0.4893 | 0.8985 |
| 0.0375 | 6.0 | 2250 | 0.5725 | 0.9052 |
| 0.0405 | 7.0 | 2625 | 0.5904 | 0.9101 |
| 0.012 | 8.0 | 3000 | 0.7002 | 0.9018 |
| 0.0053 | 9.0 | 3375 | 0.7065 | 0.9052 |
| 0.0037 | 10.0 | 3750 | 0.7485 | 0.8985 |
| 0.0126 | 11.0 | 4125 | 0.7919 | 0.8985 |
| 0.0005 | 12.0 | 4500 | 0.7919 | 0.8968 |
| 0.0159 | 13.0 | 4875 | 0.8564 | 0.8902 |
| 0.0001 | 14.0 | 5250 | 0.8426 | 0.8852 |
| 0.0002 | 15.0 | 5625 | 0.8433 | 0.8918 |
| 0.0148 | 16.0 | 6000 | 0.7634 | 0.9018 |
| 0.0208 | 17.0 | 6375 | 0.8403 | 0.8952 |
| 0.0001 | 18.0 | 6750 | 0.8471 | 0.9018 |
| 0.0491 | 19.0 | 7125 | 0.8371 | 0.9035 |
| 0.0 | 20.0 | 7500 | 0.7423 | 0.9052 |
| 0.0126 | 21.0 | 7875 | 0.8759 | 0.8935 |
| 0.0008 | 22.0 | 8250 | 0.8648 | 0.9002 |
| 0.0 | 23.0 | 8625 | 0.9554 | 0.9002 |
| 0.0005 | 24.0 | 9000 | 0.9755 | 0.8918 |
| 0.0184 | 25.0 | 9375 | 0.9160 | 0.8918 |
| 0.0 | 26.0 | 9750 | 0.9691 | 0.8918 |
| 0.0 | 27.0 | 10125 | 0.8701 | 0.8968 |
| 0.0002 | 28.0 | 10500 | 0.7677 | 0.9035 |
| 0.0001 | 29.0 | 10875 | 0.9258 | 0.9035 |
| 0.0033 | 30.0 | 11250 | 0.9080 | 0.9002 |
| 0.0045 | 31.0 | 11625 | 1.0210 | 0.8935 |
| 0.0045 | 32.0 | 12000 | 0.9883 | 0.8985 |
| 0.0017 | 33.0 | 12375 | 0.8984 | 0.9035 |
| 0.0 | 34.0 | 12750 | 0.8844 | 0.9101 |
| 0.0007 | 35.0 | 13125 | 0.9085 | 0.8918 |
| 0.0002 | 36.0 | 13500 | 0.9790 | 0.9035 |
| 0.0 | 37.0 | 13875 | 1.0705 | 0.8985 |
| 0.0 | 38.0 | 14250 | 1.0172 | 0.9035 |
| 0.0 | 39.0 | 14625 | 1.0259 | 0.9052 |
| 0.0032 | 40.0 | 15000 | 1.0712 | 0.9018 |
| 0.0 | 41.0 | 15375 | 1.0107 | 0.9002 |
| 0.0025 | 42.0 | 15750 | 1.0002 | 0.9068 |
| 0.0023 | 43.0 | 16125 | 1.0032 | 0.9035 |
| 0.003 | 44.0 | 16500 | 0.9837 | 0.9052 |
| 0.0018 | 45.0 | 16875 | 1.0127 | 0.9035 |
| 0.0 | 46.0 | 17250 | 0.9843 | 0.9068 |
| 0.0056 | 47.0 | 17625 | 1.0283 | 0.9002 |
| 0.0033 | 48.0 | 18000 | 1.0135 | 0.9052 |
| 0.0031 | 49.0 | 18375 | 0.9997 | 0.9052 |
| 0.0025 | 50.0 | 18750 | 0.9950 | 0.9052 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
aaa12963337/msi-vit-small-pretrain
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# msi-vit-small-pretrain
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4835
- Accuracy: 0.6394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
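The `total_train_batch_size: 128` above follows from the per-device batch size and gradient accumulation. A minimal sketch of that relation (assuming a single device, which is not stated in the card):

```python
# Minimal sketch (assumption: single device):
# total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
def effective_batch_size(per_device_batch=32, grad_accum_steps=4, num_devices=1):
    return per_device_batch * grad_accum_steps * num_devices

print(effective_batch_size())  # 128, matching total_train_batch_size above
```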
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0897 | 1.0 | 781 | 1.7652 | 0.6574 |
| 0.0539 | 2.0 | 1562 | 2.5512 | 0.6017 |
| 0.0127 | 3.0 | 2343 | 2.4835 | 0.6394 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"adi",
"back",
"deb",
"lym",
"muc",
"mus",
"norm",
"str",
"tum"
] |
hkivancoral/smids_5x_deit_tiny_rms_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8753
- Accuracy: 0.7767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7906 | 1.0 | 375 | 0.9128 | 0.4983 |
| 0.7765 | 2.0 | 750 | 0.9232 | 0.4617 |
| 0.7977 | 3.0 | 1125 | 0.8743 | 0.5267 |
| 0.8093 | 4.0 | 1500 | 0.7926 | 0.5767 |
| 0.8508 | 5.0 | 1875 | 0.7894 | 0.5733 |
| 0.7532 | 6.0 | 2250 | 0.7991 | 0.6117 |
| 0.7584 | 7.0 | 2625 | 0.7566 | 0.625 |
| 0.7398 | 8.0 | 3000 | 0.7364 | 0.6083 |
| 0.7009 | 9.0 | 3375 | 0.7452 | 0.64 |
| 0.7014 | 10.0 | 3750 | 0.7192 | 0.6433 |
| 0.7226 | 11.0 | 4125 | 0.7119 | 0.6383 |
| 0.7293 | 12.0 | 4500 | 0.7180 | 0.6467 |
| 0.6344 | 13.0 | 4875 | 0.7612 | 0.6117 |
| 0.6251 | 14.0 | 5250 | 0.7810 | 0.66 |
| 0.6301 | 15.0 | 5625 | 0.6950 | 0.6733 |
| 0.6252 | 16.0 | 6000 | 0.7106 | 0.6767 |
| 0.688 | 17.0 | 6375 | 0.7082 | 0.6883 |
| 0.7261 | 18.0 | 6750 | 0.6859 | 0.6883 |
| 0.5633 | 19.0 | 7125 | 0.6734 | 0.7033 |
| 0.6092 | 20.0 | 7500 | 0.6580 | 0.7283 |
| 0.4728 | 21.0 | 7875 | 0.6793 | 0.7033 |
| 0.5681 | 22.0 | 8250 | 0.6598 | 0.7217 |
| 0.5951 | 23.0 | 8625 | 0.6134 | 0.7533 |
| 0.6592 | 24.0 | 9000 | 0.5954 | 0.7467 |
| 0.5215 | 25.0 | 9375 | 0.5847 | 0.74 |
| 0.5272 | 26.0 | 9750 | 0.6243 | 0.7017 |
| 0.5866 | 27.0 | 10125 | 0.6339 | 0.7233 |
| 0.5766 | 28.0 | 10500 | 0.5466 | 0.765 |
| 0.463 | 29.0 | 10875 | 0.5734 | 0.7583 |
| 0.5041 | 30.0 | 11250 | 0.5320 | 0.775 |
| 0.5133 | 31.0 | 11625 | 0.5507 | 0.7683 |
| 0.5402 | 32.0 | 12000 | 0.5711 | 0.7517 |
| 0.4526 | 33.0 | 12375 | 0.5736 | 0.7483 |
| 0.4724 | 34.0 | 12750 | 0.5009 | 0.79 |
| 0.3951 | 35.0 | 13125 | 0.5483 | 0.77 |
| 0.3876 | 36.0 | 13500 | 0.5689 | 0.755 |
| 0.3627 | 37.0 | 13875 | 0.5639 | 0.7733 |
| 0.4378 | 38.0 | 14250 | 0.5663 | 0.765 |
| 0.3725 | 39.0 | 14625 | 0.5574 | 0.7867 |
| 0.3444 | 40.0 | 15000 | 0.5740 | 0.7733 |
| 0.3158 | 41.0 | 15375 | 0.5671 | 0.7717 |
| 0.29 | 42.0 | 15750 | 0.6455 | 0.78 |
| 0.3784 | 43.0 | 16125 | 0.6093 | 0.785 |
| 0.318 | 44.0 | 16500 | 0.6835 | 0.7683 |
| 0.2949 | 45.0 | 16875 | 0.7092 | 0.7733 |
| 0.2996 | 46.0 | 17250 | 0.6699 | 0.7767 |
| 0.2938 | 47.0 | 17625 | 0.7545 | 0.7917 |
| 0.2248 | 48.0 | 18000 | 0.8050 | 0.775 |
| 0.2309 | 49.0 | 18375 | 0.8518 | 0.7767 |
| 0.1878 | 50.0 | 18750 | 0.8753 | 0.7767 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_rms_0001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3488
- Accuracy: 0.855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5467 | 1.0 | 375 | 0.5969 | 0.7483 |
| 0.4112 | 2.0 | 750 | 0.6522 | 0.7967 |
| 0.3586 | 3.0 | 1125 | 0.4558 | 0.83 |
| 0.3318 | 4.0 | 1500 | 0.3669 | 0.8567 |
| 0.318 | 5.0 | 1875 | 0.4227 | 0.8267 |
| 0.2611 | 6.0 | 2250 | 0.4142 | 0.8467 |
| 0.2866 | 7.0 | 2625 | 0.4534 | 0.83 |
| 0.2297 | 8.0 | 3000 | 0.4296 | 0.8517 |
| 0.1623 | 9.0 | 3375 | 0.5359 | 0.835 |
| 0.1313 | 10.0 | 3750 | 0.5677 | 0.8433 |
| 0.1856 | 11.0 | 4125 | 0.5198 | 0.8667 |
| 0.087 | 12.0 | 4500 | 0.6463 | 0.8417 |
| 0.0974 | 13.0 | 4875 | 0.5874 | 0.8417 |
| 0.0478 | 14.0 | 5250 | 0.7058 | 0.84 |
| 0.0326 | 15.0 | 5625 | 0.7427 | 0.8283 |
| 0.0198 | 16.0 | 6000 | 0.8945 | 0.84 |
| 0.0746 | 17.0 | 6375 | 0.8489 | 0.8333 |
| 0.1024 | 18.0 | 6750 | 0.7564 | 0.8383 |
| 0.0499 | 19.0 | 7125 | 0.8028 | 0.8483 |
| 0.0808 | 20.0 | 7500 | 1.0400 | 0.8267 |
| 0.0495 | 21.0 | 7875 | 1.0596 | 0.83 |
| 0.0441 | 22.0 | 8250 | 0.9512 | 0.85 |
| 0.0385 | 23.0 | 8625 | 0.8380 | 0.8483 |
| 0.0162 | 24.0 | 9000 | 1.0671 | 0.8517 |
| 0.0061 | 25.0 | 9375 | 0.8747 | 0.86 |
| 0.0284 | 26.0 | 9750 | 1.0398 | 0.815 |
| 0.0446 | 27.0 | 10125 | 0.9748 | 0.845 |
| 0.0208 | 28.0 | 10500 | 1.0700 | 0.8517 |
| 0.0357 | 29.0 | 10875 | 1.0579 | 0.845 |
| 0.0301 | 30.0 | 11250 | 0.9043 | 0.8583 |
| 0.0099 | 31.0 | 11625 | 0.9420 | 0.8533 |
| 0.0327 | 32.0 | 12000 | 1.0192 | 0.8467 |
| 0.0502 | 33.0 | 12375 | 0.8952 | 0.8517 |
| 0.0352 | 34.0 | 12750 | 0.9041 | 0.8667 |
| 0.0188 | 35.0 | 13125 | 1.2059 | 0.8433 |
| 0.0229 | 36.0 | 13500 | 1.2761 | 0.84 |
| 0.0123 | 37.0 | 13875 | 1.1077 | 0.8583 |
| 0.0002 | 38.0 | 14250 | 1.1468 | 0.85 |
| 0.0009 | 39.0 | 14625 | 1.1590 | 0.8617 |
| 0.0211 | 40.0 | 15000 | 1.3901 | 0.8683 |
| 0.001 | 41.0 | 15375 | 1.2933 | 0.8533 |
| 0.0077 | 42.0 | 15750 | 1.1576 | 0.8583 |
| 0.0369 | 43.0 | 16125 | 1.3070 | 0.8433 |
| 0.0132 | 44.0 | 16500 | 1.0120 | 0.8633 |
| 0.0003 | 45.0 | 16875 | 1.2641 | 0.8633 |
| 0.0001 | 46.0 | 17250 | 1.2268 | 0.8633 |
| 0.0004 | 47.0 | 17625 | 1.1854 | 0.8583 |
| 0.0001 | 48.0 | 18000 | 1.3326 | 0.865 |
| 0.0187 | 49.0 | 18375 | 1.3505 | 0.8567 |
| 0.0011 | 50.0 | 18750 | 1.3488 | 0.855 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_rms_00001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9185
- Accuracy: 0.9133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2283 | 1.0 | 375 | 0.2339 | 0.9183 |
| 0.0993 | 2.0 | 750 | 0.2718 | 0.9117 |
| 0.0737 | 3.0 | 1125 | 0.3409 | 0.9133 |
| 0.0242 | 4.0 | 1500 | 0.3831 | 0.92 |
| 0.0583 | 5.0 | 1875 | 0.4502 | 0.915 |
| 0.0327 | 6.0 | 2250 | 0.4867 | 0.9183 |
| 0.0228 | 7.0 | 2625 | 0.5343 | 0.9333 |
| 0.0011 | 8.0 | 3000 | 0.6969 | 0.91 |
| 0.0064 | 9.0 | 3375 | 0.8807 | 0.89 |
| 0.0562 | 10.0 | 3750 | 0.8333 | 0.9117 |
| 0.0104 | 11.0 | 4125 | 0.6930 | 0.9083 |
| 0.0118 | 12.0 | 4500 | 0.8317 | 0.8933 |
| 0.0001 | 13.0 | 4875 | 0.8634 | 0.9 |
| 0.0154 | 14.0 | 5250 | 0.7424 | 0.9233 |
| 0.0231 | 15.0 | 5625 | 0.8048 | 0.9133 |
| 0.0 | 16.0 | 6000 | 0.8245 | 0.9083 |
| 0.0137 | 17.0 | 6375 | 0.7565 | 0.92 |
| 0.0069 | 18.0 | 6750 | 0.7751 | 0.9183 |
| 0.0115 | 19.0 | 7125 | 0.7824 | 0.9233 |
| 0.0051 | 20.0 | 7500 | 0.7691 | 0.9183 |
| 0.0 | 21.0 | 7875 | 0.8067 | 0.9117 |
| 0.0207 | 22.0 | 8250 | 0.7817 | 0.915 |
| 0.0053 | 23.0 | 8625 | 0.8276 | 0.9083 |
| 0.0001 | 24.0 | 9000 | 0.7978 | 0.9117 |
| 0.0169 | 25.0 | 9375 | 0.8806 | 0.9067 |
| 0.0007 | 26.0 | 9750 | 0.8278 | 0.9267 |
| 0.0038 | 27.0 | 10125 | 0.9428 | 0.9183 |
| 0.0 | 28.0 | 10500 | 0.8806 | 0.9167 |
| 0.0 | 29.0 | 10875 | 0.8180 | 0.91 |
| 0.0029 | 30.0 | 11250 | 0.9090 | 0.9117 |
| 0.0 | 31.0 | 11625 | 0.8537 | 0.9133 |
| 0.0002 | 32.0 | 12000 | 0.8596 | 0.915 |
| 0.0003 | 33.0 | 12375 | 0.8995 | 0.9183 |
| 0.0 | 34.0 | 12750 | 0.8853 | 0.925 |
| 0.0 | 35.0 | 13125 | 0.8638 | 0.9133 |
| 0.0 | 36.0 | 13500 | 0.8296 | 0.9167 |
| 0.0 | 37.0 | 13875 | 0.8113 | 0.9217 |
| 0.0 | 38.0 | 14250 | 0.8781 | 0.92 |
| 0.027 | 39.0 | 14625 | 0.8890 | 0.92 |
| 0.0038 | 40.0 | 15000 | 0.8330 | 0.925 |
| 0.0 | 41.0 | 15375 | 0.9306 | 0.9167 |
| 0.0 | 42.0 | 15750 | 0.8569 | 0.9183 |
| 0.0 | 43.0 | 16125 | 0.9060 | 0.9133 |
| 0.0 | 44.0 | 16500 | 0.8854 | 0.9167 |
| 0.0 | 45.0 | 16875 | 0.9021 | 0.91 |
| 0.0001 | 46.0 | 17250 | 0.9154 | 0.9133 |
| 0.0 | 47.0 | 17625 | 0.8802 | 0.915 |
| 0.0 | 48.0 | 18000 | 0.8999 | 0.915 |
| 0.0 | 49.0 | 18375 | 0.9100 | 0.9117 |
| 0.0 | 50.0 | 18750 | 0.9185 | 0.9133 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2524
- Accuracy: 0.7767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.818 | 1.0 | 375 | 0.8019 | 0.5417 |
| 0.7912 | 2.0 | 750 | 0.8025 | 0.57 |
| 0.7276 | 3.0 | 1125 | 0.7672 | 0.6083 |
| 0.7922 | 4.0 | 1500 | 0.6983 | 0.6533 |
| 0.7335 | 5.0 | 1875 | 0.6685 | 0.6917 |
| 0.6959 | 6.0 | 2250 | 0.6471 | 0.7233 |
| 0.623 | 7.0 | 2625 | 0.6073 | 0.7233 |
| 0.6887 | 8.0 | 3000 | 0.6966 | 0.6667 |
| 0.6552 | 9.0 | 3375 | 0.5957 | 0.74 |
| 0.6126 | 10.0 | 3750 | 0.6205 | 0.7 |
| 0.5793 | 11.0 | 4125 | 0.5808 | 0.7567 |
| 0.6219 | 12.0 | 4500 | 0.5874 | 0.745 |
| 0.5436 | 13.0 | 4875 | 0.6140 | 0.7317 |
| 0.6012 | 14.0 | 5250 | 0.5834 | 0.7417 |
| 0.6043 | 15.0 | 5625 | 0.5539 | 0.75 |
| 0.5011 | 16.0 | 6000 | 0.5531 | 0.7383 |
| 0.5057 | 17.0 | 6375 | 0.5890 | 0.75 |
| 0.5517 | 18.0 | 6750 | 0.5510 | 0.7583 |
| 0.5553 | 19.0 | 7125 | 0.5435 | 0.76 |
| 0.5674 | 20.0 | 7500 | 0.4957 | 0.7933 |
| 0.4667 | 21.0 | 7875 | 0.5150 | 0.7867 |
| 0.4405 | 22.0 | 8250 | 0.5576 | 0.7867 |
| 0.4436 | 23.0 | 8625 | 0.4866 | 0.7967 |
| 0.454 | 24.0 | 9000 | 0.5354 | 0.775 |
| 0.4111 | 25.0 | 9375 | 0.5789 | 0.7717 |
| 0.4049 | 26.0 | 9750 | 0.5450 | 0.7817 |
| 0.397 | 27.0 | 10125 | 0.5808 | 0.7883 |
| 0.3436 | 28.0 | 10500 | 0.5933 | 0.7817 |
| 0.3249 | 29.0 | 10875 | 0.5969 | 0.7633 |
| 0.3897 | 30.0 | 11250 | 0.5739 | 0.7817 |
| 0.3938 | 31.0 | 11625 | 0.5794 | 0.7883 |
| 0.2714 | 32.0 | 12000 | 0.6582 | 0.775 |
| 0.2808 | 33.0 | 12375 | 0.6348 | 0.775 |
| 0.321 | 34.0 | 12750 | 0.7200 | 0.7567 |
| 0.2202 | 35.0 | 13125 | 0.6917 | 0.7817 |
| 0.1634 | 36.0 | 13500 | 0.7700 | 0.7733 |
| 0.3232 | 37.0 | 13875 | 0.7503 | 0.785 |
| 0.1845 | 38.0 | 14250 | 0.8724 | 0.7567 |
| 0.1357 | 39.0 | 14625 | 1.0521 | 0.7683 |
| 0.0994 | 40.0 | 15000 | 1.0716 | 0.77 |
| 0.0743 | 41.0 | 15375 | 1.1704 | 0.7717 |
| 0.1059 | 42.0 | 15750 | 1.2031 | 0.7783 |
| 0.0494 | 43.0 | 16125 | 1.3921 | 0.7633 |
| 0.0147 | 44.0 | 16500 | 1.5250 | 0.77 |
| 0.0663 | 45.0 | 16875 | 1.6538 | 0.7667 |
| 0.0618 | 46.0 | 17250 | 1.8210 | 0.765 |
| 0.0041 | 47.0 | 17625 | 1.9243 | 0.7617 |
| 0.0018 | 48.0 | 18000 | 2.1515 | 0.7717 |
| 0.0025 | 49.0 | 18375 | 2.2407 | 0.7683 |
| 0.0002 | 50.0 | 18750 | 2.2524 | 0.7767 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6094
- Accuracy: 0.7433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8822 | 1.0 | 375 | 0.8719 | 0.52 |
| 0.8936 | 2.0 | 750 | 0.8470 | 0.535 |
| 0.8252 | 3.0 | 1125 | 0.8071 | 0.595 |
| 0.8333 | 4.0 | 1500 | 0.7970 | 0.6017 |
| 0.8046 | 5.0 | 1875 | 0.8070 | 0.5633 |
| 0.8082 | 6.0 | 2250 | 0.9208 | 0.5167 |
| 0.7481 | 7.0 | 2625 | 0.7984 | 0.5633 |
| 0.8409 | 8.0 | 3000 | 0.7900 | 0.5783 |
| 0.7673 | 9.0 | 3375 | 0.7551 | 0.62 |
| 0.7321 | 10.0 | 3750 | 0.7485 | 0.6133 |
| 0.8282 | 11.0 | 4125 | 0.7517 | 0.6083 |
| 0.7206 | 12.0 | 4500 | 0.7745 | 0.6 |
| 0.6841 | 13.0 | 4875 | 0.8307 | 0.5917 |
| 0.7738 | 14.0 | 5250 | 0.7274 | 0.6683 |
| 0.8416 | 15.0 | 5625 | 0.7353 | 0.67 |
| 0.704 | 16.0 | 6000 | 0.7258 | 0.65 |
| 0.6873 | 17.0 | 6375 | 0.7174 | 0.68 |
| 0.714 | 18.0 | 6750 | 0.7557 | 0.6483 |
| 0.7105 | 19.0 | 7125 | 0.6868 | 0.6917 |
| 0.6559 | 20.0 | 7500 | 0.6845 | 0.6783 |
| 0.6717 | 21.0 | 7875 | 0.7043 | 0.67 |
| 0.7139 | 22.0 | 8250 | 0.6944 | 0.68 |
| 0.6633 | 23.0 | 8625 | 0.7071 | 0.6667 |
| 0.6888 | 24.0 | 9000 | 0.6979 | 0.6883 |
| 0.6621 | 25.0 | 9375 | 0.6468 | 0.7117 |
| 0.6157 | 26.0 | 9750 | 0.6767 | 0.6833 |
| 0.6777 | 27.0 | 10125 | 0.7097 | 0.67 |
| 0.7108 | 28.0 | 10500 | 0.6811 | 0.6917 |
| 0.8139 | 29.0 | 10875 | 0.6750 | 0.7067 |
| 0.6291 | 30.0 | 11250 | 0.6415 | 0.7133 |
| 0.5725 | 31.0 | 11625 | 0.6769 | 0.6833 |
| 0.6243 | 32.0 | 12000 | 0.6733 | 0.7267 |
| 0.6311 | 33.0 | 12375 | 0.6227 | 0.7217 |
| 0.6254 | 34.0 | 12750 | 0.6222 | 0.72 |
| 0.567 | 35.0 | 13125 | 0.6040 | 0.735 |
| 0.5363 | 36.0 | 13500 | 0.5935 | 0.7533 |
| 0.6308 | 37.0 | 13875 | 0.6047 | 0.7267 |
| 0.5334 | 38.0 | 14250 | 0.6481 | 0.7217 |
| 0.5951 | 39.0 | 14625 | 0.6059 | 0.7317 |
| 0.6325 | 40.0 | 15000 | 0.6172 | 0.735 |
| 0.5905 | 41.0 | 15375 | 0.6255 | 0.7233 |
| 0.6095 | 42.0 | 15750 | 0.5896 | 0.7433 |
| 0.49 | 43.0 | 16125 | 0.5925 | 0.7367 |
| 0.4891 | 44.0 | 16500 | 0.5937 | 0.7367 |
| 0.4867 | 45.0 | 16875 | 0.5918 | 0.7583 |
| 0.5178 | 46.0 | 17250 | 0.6030 | 0.735 |
| 0.561 | 47.0 | 17625 | 0.6183 | 0.74 |
| 0.4632 | 48.0 | 18000 | 0.5943 | 0.7517 |
| 0.4666 | 49.0 | 18375 | 0.6107 | 0.7417 |
| 0.4901 | 50.0 | 18750 | 0.6094 | 0.7433 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_rms_0001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5932
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3203 | 1.0 | 375 | 0.4417 | 0.8283 |
| 0.2426 | 2.0 | 750 | 0.3953 | 0.8467 |
| 0.1231 | 3.0 | 1125 | 0.5794 | 0.86 |
| 0.1615 | 4.0 | 1500 | 0.5639 | 0.8417 |
| 0.0891 | 5.0 | 1875 | 0.5564 | 0.8583 |
| 0.0575 | 6.0 | 2250 | 0.6614 | 0.8617 |
| 0.0358 | 7.0 | 2625 | 0.6797 | 0.8667 |
| 0.0806 | 8.0 | 3000 | 0.6344 | 0.865 |
| 0.0789 | 9.0 | 3375 | 0.6957 | 0.8717 |
| 0.0241 | 10.0 | 3750 | 0.9064 | 0.8567 |
| 0.0385 | 11.0 | 4125 | 0.7288 | 0.865 |
| 0.0734 | 12.0 | 4500 | 0.9419 | 0.8617 |
| 0.0068 | 13.0 | 4875 | 1.1180 | 0.8417 |
| 0.0709 | 14.0 | 5250 | 0.8470 | 0.8517 |
| 0.0043 | 15.0 | 5625 | 0.9006 | 0.8733 |
| 0.0276 | 16.0 | 6000 | 0.9685 | 0.8617 |
| 0.0864 | 17.0 | 6375 | 0.9433 | 0.865 |
| 0.0263 | 18.0 | 6750 | 1.0594 | 0.8667 |
| 0.0288 | 19.0 | 7125 | 0.9857 | 0.8617 |
| 0.0437 | 20.0 | 7500 | 0.9535 | 0.8617 |
| 0.0182 | 21.0 | 7875 | 1.0851 | 0.8633 |
| 0.0247 | 22.0 | 8250 | 0.9263 | 0.8683 |
| 0.0006 | 23.0 | 8625 | 0.9868 | 0.8717 |
| 0.0399 | 24.0 | 9000 | 1.0128 | 0.86 |
| 0.0299 | 25.0 | 9375 | 0.9782 | 0.8867 |
| 0.0002 | 26.0 | 9750 | 1.2403 | 0.8567 |
| 0.0051 | 27.0 | 10125 | 1.1638 | 0.8567 |
| 0.0126 | 28.0 | 10500 | 1.0178 | 0.87 |
| 0.0001 | 29.0 | 10875 | 1.0674 | 0.8717 |
| 0.008 | 30.0 | 11250 | 1.0641 | 0.8633 |
| 0.0072 | 31.0 | 11625 | 1.1590 | 0.8667 |
| 0.0003 | 32.0 | 12000 | 1.3669 | 0.8567 |
| 0.0001 | 33.0 | 12375 | 1.2305 | 0.8733 |
| 0.0072 | 34.0 | 12750 | 1.1660 | 0.865 |
| 0.0 | 35.0 | 13125 | 1.3099 | 0.8683 |
| 0.001 | 36.0 | 13500 | 1.3895 | 0.8617 |
| 0.0003 | 37.0 | 13875 | 1.2632 | 0.8633 |
| 0.0099 | 38.0 | 14250 | 1.2864 | 0.8583 |
| 0.0 | 39.0 | 14625 | 1.2372 | 0.87 |
| 0.0 | 40.0 | 15000 | 1.3431 | 0.85 |
| 0.0 | 41.0 | 15375 | 1.3348 | 0.86 |
| 0.0 | 42.0 | 15750 | 1.4149 | 0.8633 |
| 0.0 | 43.0 | 16125 | 1.5031 | 0.86 |
| 0.0 | 44.0 | 16500 | 1.5165 | 0.8667 |
| 0.0 | 45.0 | 16875 | 1.5362 | 0.8633 |
| 0.0 | 46.0 | 17250 | 1.5276 | 0.865 |
| 0.0 | 47.0 | 17625 | 1.5857 | 0.8667 |
| 0.0 | 48.0 | 18000 | 1.5842 | 0.8667 |
| 0.0 | 49.0 | 18375 | 1.5927 | 0.8683 |
| 0.0 | 50.0 | 18750 | 1.5932 | 0.87 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_rms_00001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3109
- Accuracy: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1968 | 1.0 | 375 | 0.3151 | 0.8917 |
| 0.1474 | 2.0 | 750 | 0.4474 | 0.8667 |
| 0.0741 | 3.0 | 1125 | 0.4618 | 0.89 |
| 0.0638 | 4.0 | 1500 | 0.5553 | 0.9083 |
| 0.0357 | 5.0 | 1875 | 0.7199 | 0.8767 |
| 0.027 | 6.0 | 2250 | 0.8598 | 0.8783 |
| 0.0037 | 7.0 | 2625 | 1.0235 | 0.8817 |
| 0.0229 | 8.0 | 3000 | 1.0021 | 0.8817 |
| 0.0002 | 9.0 | 3375 | 1.0533 | 0.88 |
| 0.0003 | 10.0 | 3750 | 1.0170 | 0.8917 |
| 0.0047 | 11.0 | 4125 | 1.0274 | 0.885 |
| 0.0161 | 12.0 | 4500 | 0.9972 | 0.8883 |
| 0.0395 | 13.0 | 4875 | 1.1208 | 0.8817 |
| 0.0195 | 14.0 | 5250 | 1.1819 | 0.8833 |
| 0.0231 | 15.0 | 5625 | 1.2063 | 0.8867 |
| 0.002 | 16.0 | 6000 | 1.1906 | 0.8817 |
| 0.0189 | 17.0 | 6375 | 1.3367 | 0.8683 |
| 0.006 | 18.0 | 6750 | 1.3216 | 0.8767 |
| 0.0201 | 19.0 | 7125 | 1.2482 | 0.8883 |
| 0.0004 | 20.0 | 7500 | 1.3064 | 0.88 |
| 0.0 | 21.0 | 7875 | 1.2624 | 0.8833 |
| 0.0332 | 22.0 | 8250 | 1.2916 | 0.8783 |
| 0.0001 | 23.0 | 8625 | 1.2718 | 0.875 |
| 0.0134 | 24.0 | 9000 | 1.2861 | 0.8767 |
| 0.0091 | 25.0 | 9375 | 1.2558 | 0.8867 |
| 0.0 | 26.0 | 9750 | 1.1412 | 0.875 |
| 0.0003 | 27.0 | 10125 | 1.1757 | 0.8883 |
| 0.0 | 28.0 | 10500 | 1.1969 | 0.885 |
| 0.0001 | 29.0 | 10875 | 1.2159 | 0.8833 |
| 0.0439 | 30.0 | 11250 | 1.2112 | 0.885 |
| 0.0 | 31.0 | 11625 | 1.1996 | 0.8867 |
| 0.0011 | 32.0 | 12000 | 1.2726 | 0.8917 |
| 0.0 | 33.0 | 12375 | 1.2290 | 0.895 |
| 0.003 | 34.0 | 12750 | 1.2689 | 0.885 |
| 0.0001 | 35.0 | 13125 | 1.2685 | 0.8833 |
| 0.0 | 36.0 | 13500 | 1.2338 | 0.89 |
| 0.0 | 37.0 | 13875 | 1.2931 | 0.8817 |
| 0.0037 | 38.0 | 14250 | 1.2980 | 0.8833 |
| 0.0 | 39.0 | 14625 | 1.3078 | 0.895 |
| 0.0 | 40.0 | 15000 | 1.3295 | 0.8867 |
| 0.0 | 41.0 | 15375 | 1.2988 | 0.8917 |
| 0.0 | 42.0 | 15750 | 1.3679 | 0.8817 |
| 0.0 | 43.0 | 16125 | 1.3182 | 0.8833 |
| 0.0 | 44.0 | 16500 | 1.3785 | 0.885 |
| 0.0 | 45.0 | 16875 | 1.3130 | 0.8833 |
| 0.0008 | 46.0 | 17250 | 1.3226 | 0.8917 |
| 0.028 | 47.0 | 17625 | 1.3211 | 0.89 |
| 0.0214 | 48.0 | 18000 | 1.3040 | 0.8933 |
| 0.0 | 49.0 | 18375 | 1.3082 | 0.8917 |
| 0.0 | 50.0 | 18750 | 1.3109 | 0.8933 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
aaa12963337/msi-vit-small
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# msi-vit-small
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5796
- Accuracy: 0.6000
- F1: 0.2863
- Precision: 0.6336
- Recall: 0.1849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3142 | 1.0 | 1008 | 0.8965 | 0.6329 | 0.5060 | 0.6079 | 0.4333 |
| 0.2063 | 2.0 | 2016 | 1.5189 | 0.6062 | 0.3005 | 0.6550 | 0.1950 |
| 0.19 | 3.0 | 3024 | 1.4818 | 0.6270 | 0.3399 | 0.7318 | 0.2213 |
| 0.1718 | 4.0 | 4032 | 1.2353 | 0.6046 | 0.4096 | 0.5816 | 0.3161 |
| 0.161 | 5.0 | 5040 | 1.5953 | 0.6342 | 0.3508 | 0.7623 | 0.2278 |
| 0.1805 | 6.0 | 6048 | 1.0789 | 0.6552 | 0.4647 | 0.7119 | 0.3449 |
| 0.1619 | 7.0 | 7056 | 1.2646 | 0.5479 | 0.2591 | 0.4484 | 0.1822 |
| 0.1655 | 8.0 | 8064 | 1.7155 | 0.5910 | 0.2654 | 0.6011 | 0.1703 |
| 0.17 | 9.0 | 9072 | 2.1142 | 0.5797 | 0.1729 | 0.5913 | 0.1012 |
| 0.1703 | 10.0 | 10080 | 1.5796 | 0.6000 | 0.2863 | 0.6336 | 0.1849 |
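The F1 column above is consistent with the harmonic mean of the reported precision and recall, F1 = 2PR / (P + R). A minimal sketch checking the final-epoch row (the helper name is illustrative):

```python
# F1 as the harmonic mean of precision and recall: F1 = 2PR / (P + R)
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.6336, 0.1849), 4))  # 0.2863, matching the table's last row
```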
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"adi",
"back",
"deb",
"lym",
"muc",
"mus",
"norm",
"str",
"tum"
] |
hkivancoral/smids_5x_deit_tiny_rms_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9972
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2949 | 1.0 | 376 | 0.4792 | 0.7896 |
| 0.1877 | 2.0 | 752 | 0.3869 | 0.8631 |
| 0.1943 | 3.0 | 1128 | 0.4273 | 0.8514 |
| 0.1151 | 4.0 | 1504 | 0.4170 | 0.8932 |
| 0.1309 | 5.0 | 1880 | 0.4159 | 0.8748 |
| 0.0937 | 6.0 | 2256 | 0.5222 | 0.8831 |
| 0.0299 | 7.0 | 2632 | 0.5974 | 0.8932 |
| 0.0659 | 8.0 | 3008 | 0.6171 | 0.8715 |
| 0.0586 | 9.0 | 3384 | 0.7200 | 0.8781 |
| 0.0715 | 10.0 | 3760 | 0.9149 | 0.8664 |
| 0.0752 | 11.0 | 4136 | 0.7964 | 0.8765 |
| 0.0401 | 12.0 | 4512 | 0.6968 | 0.8831 |
| 0.0094 | 13.0 | 4888 | 0.6898 | 0.8865 |
| 0.0111 | 14.0 | 5264 | 0.7411 | 0.8932 |
| 0.0334 | 15.0 | 5640 | 0.8411 | 0.8798 |
| 0.0369 | 16.0 | 6016 | 0.7849 | 0.8798 |
| 0.0017 | 17.0 | 6392 | 0.7191 | 0.8898 |
| 0.0026 | 18.0 | 6768 | 0.8047 | 0.8815 |
| 0.0265 | 19.0 | 7144 | 0.6550 | 0.8982 |
| 0.0527 | 20.0 | 7520 | 0.7590 | 0.8798 |
| 0.0052 | 21.0 | 7896 | 0.7860 | 0.8881 |
| 0.001 | 22.0 | 8272 | 0.8487 | 0.8965 |
| 0.0432 | 23.0 | 8648 | 0.8524 | 0.8865 |
| 0.0032 | 24.0 | 9024 | 0.8174 | 0.9015 |
| 0.0001 | 25.0 | 9400 | 0.8214 | 0.8815 |
| 0.0146 | 26.0 | 9776 | 0.9080 | 0.8765 |
| 0.0 | 27.0 | 10152 | 0.8028 | 0.9032 |
| 0.0001 | 28.0 | 10528 | 0.9579 | 0.8915 |
| 0.0043 | 29.0 | 10904 | 0.8349 | 0.8982 |
| 0.0053 | 30.0 | 11280 | 0.9140 | 0.8831 |
| 0.0204 | 31.0 | 11656 | 0.9273 | 0.8898 |
| 0.0001 | 32.0 | 12032 | 0.9480 | 0.8848 |
| 0.0006 | 33.0 | 12408 | 1.0366 | 0.8865 |
| 0.0042 | 34.0 | 12784 | 1.0682 | 0.8798 |
| 0.0025 | 35.0 | 13160 | 0.9542 | 0.8932 |
| 0.0006 | 36.0 | 13536 | 0.8930 | 0.9048 |
| 0.0001 | 37.0 | 13912 | 0.9451 | 0.8932 |
| 0.0112 | 38.0 | 14288 | 1.0303 | 0.8848 |
| 0.0 | 39.0 | 14664 | 1.0298 | 0.8932 |
| 0.0 | 40.0 | 15040 | 0.9996 | 0.8932 |
| 0.0 | 41.0 | 15416 | 0.9909 | 0.8998 |
| 0.0 | 42.0 | 15792 | 0.9652 | 0.9015 |
| 0.0 | 43.0 | 16168 | 0.9547 | 0.9032 |
| 0.0 | 44.0 | 16544 | 0.9994 | 0.8982 |
| 0.0 | 45.0 | 16920 | 0.9802 | 0.9015 |
| 0.003 | 46.0 | 17296 | 0.9911 | 0.9032 |
| 0.0 | 47.0 | 17672 | 0.9936 | 0.9048 |
| 0.0 | 48.0 | 18048 | 0.9937 | 0.9048 |
| 0.0 | 49.0 | 18424 | 0.9932 | 0.9048 |
| 0.0025 | 50.0 | 18800 | 0.9972 | 0.9048 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
aaa12963337/msi-efficientnet-pretrain
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# msi-efficientnet-pretrain
This model is a fine-tuned version of [google/efficientnet-b0](https://huggingface.co/google/efficientnet-b0) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4941
- Accuracy: 0.8613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
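The `total_train_batch_size: 128` above is not set directly; it follows from the per-device batch size and gradient accumulation, since gradients from 4 micro-batches of 32 are accumulated before each optimizer step. A small sketch of that relationship (the helper name is illustrative):

```python
def effective_batch_size(per_device_batch, accumulation_steps, num_devices=1):
    """Effective (total) train batch size when gradients are accumulated
    over several micro-batches before each optimizer step."""
    return per_device_batch * accumulation_steps * num_devices

# Matches the hyperparameters above: 32 per device x 4 accumulation steps.
print(effective_batch_size(32, 4))  # prints 128
```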
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.317 | 1.0 | 781 | 0.9029 | 0.7535 |
| 0.2009 | 2.0 | 1562 | 0.4094 | 0.8840 |
| 0.1405 | 3.0 | 2343 | 0.4941 | 0.8613 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"adi",
"back",
"deb",
"lym",
"muc",
"mus",
"norm",
"str",
"tum"
] |
hkivancoral/smids_5x_beit_base_rms_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_rms_0001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0211
- Accuracy: 0.905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3069 | 1.0 | 375 | 0.3920 | 0.8783 |
| 0.2324 | 2.0 | 750 | 0.2994 | 0.8883 |
| 0.2111 | 3.0 | 1125 | 0.4025 | 0.8883 |
| 0.1469 | 4.0 | 1500 | 0.4730 | 0.8933 |
| 0.1348 | 5.0 | 1875 | 0.5021 | 0.8667 |
| 0.1083 | 6.0 | 2250 | 0.5547 | 0.875 |
| 0.074 | 7.0 | 2625 | 0.8070 | 0.865 |
| 0.0264 | 8.0 | 3000 | 0.6666 | 0.8817 |
| 0.0566 | 9.0 | 3375 | 0.5845 | 0.8817 |
| 0.0498 | 10.0 | 3750 | 0.6165 | 0.8717 |
| 0.0562 | 11.0 | 4125 | 0.6616 | 0.9017 |
| 0.0419 | 12.0 | 4500 | 0.5768 | 0.9 |
| 0.042 | 13.0 | 4875 | 0.6169 | 0.89 |
| 0.0428 | 14.0 | 5250 | 0.6006 | 0.8967 |
| 0.065 | 15.0 | 5625 | 0.6268 | 0.875 |
| 0.0169 | 16.0 | 6000 | 0.6699 | 0.9017 |
| 0.0201 | 17.0 | 6375 | 0.7528 | 0.8933 |
| 0.0241 | 18.0 | 6750 | 0.6629 | 0.89 |
| 0.0027 | 19.0 | 7125 | 0.6425 | 0.9017 |
| 0.0221 | 20.0 | 7500 | 0.6769 | 0.8917 |
| 0.0018 | 21.0 | 7875 | 0.8187 | 0.8867 |
| 0.0303 | 22.0 | 8250 | 0.6653 | 0.8933 |
| 0.0112 | 23.0 | 8625 | 0.7146 | 0.88 |
| 0.002 | 24.0 | 9000 | 0.7847 | 0.8983 |
| 0.0001 | 25.0 | 9375 | 0.7706 | 0.8933 |
| 0.001 | 26.0 | 9750 | 0.8815 | 0.8933 |
| 0.0089 | 27.0 | 10125 | 0.9055 | 0.8833 |
| 0.0011 | 28.0 | 10500 | 0.8721 | 0.8883 |
| 0.0031 | 29.0 | 10875 | 0.8475 | 0.8917 |
| 0.0096 | 30.0 | 11250 | 0.7033 | 0.9067 |
| 0.0084 | 31.0 | 11625 | 0.7845 | 0.9033 |
| 0.0003 | 32.0 | 12000 | 0.8241 | 0.8967 |
| 0.0002 | 33.0 | 12375 | 0.7939 | 0.905 |
| 0.0 | 34.0 | 12750 | 0.8492 | 0.9117 |
| 0.0039 | 35.0 | 13125 | 0.7919 | 0.905 |
| 0.0 | 36.0 | 13500 | 0.9132 | 0.9017 |
| 0.001 | 37.0 | 13875 | 0.9026 | 0.91 |
| 0.0073 | 38.0 | 14250 | 0.9238 | 0.9 |
| 0.0 | 39.0 | 14625 | 1.0700 | 0.895 |
| 0.0 | 40.0 | 15000 | 1.0185 | 0.9083 |
| 0.0 | 41.0 | 15375 | 1.0113 | 0.9 |
| 0.0 | 42.0 | 15750 | 0.9606 | 0.9033 |
| 0.0 | 43.0 | 16125 | 1.0356 | 0.9 |
| 0.0 | 44.0 | 16500 | 1.0382 | 0.9017 |
| 0.0 | 45.0 | 16875 | 1.0522 | 0.9 |
| 0.0 | 46.0 | 17250 | 1.0733 | 0.8967 |
| 0.0031 | 47.0 | 17625 | 1.0418 | 0.9017 |
| 0.0 | 48.0 | 18000 | 1.0244 | 0.9067 |
| 0.0 | 49.0 | 18375 | 1.0206 | 0.905 |
| 0.0019 | 50.0 | 18750 | 1.0211 | 0.905 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_beit_base_rms_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_rms_00001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9550
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
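Each epoch in the table below covers 375 steps at batch size 32, which implies roughly 12,000 training images in this fold. The per-epoch step count follows directly from dataset size and batch size (a sketch; whether the last partial batch is dropped is an assumption about the dataloader):

```python
import math

def steps_per_epoch(num_examples, batch_size=32, drop_last=False):
    """Optimizer steps per epoch for a given dataset and batch size."""
    if drop_last:
        return num_examples // batch_size
    return math.ceil(num_examples / batch_size)

# 12,000 examples at batch size 32 gives the 375 steps/epoch seen below.
print(steps_per_epoch(12000, 32))  # prints 375
```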
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1871 | 1.0 | 375 | 0.3400 | 0.865 |
| 0.1042 | 2.0 | 750 | 0.2834 | 0.9017 |
| 0.1083 | 3.0 | 1125 | 0.3546 | 0.9033 |
| 0.0201 | 4.0 | 1500 | 0.3961 | 0.9133 |
| 0.0166 | 5.0 | 1875 | 0.6199 | 0.9083 |
| 0.0246 | 6.0 | 2250 | 0.6057 | 0.8967 |
| 0.0224 | 7.0 | 2625 | 0.7547 | 0.9117 |
| 0.0003 | 8.0 | 3000 | 0.7052 | 0.9133 |
| 0.0111 | 9.0 | 3375 | 0.7830 | 0.8983 |
| 0.0518 | 10.0 | 3750 | 0.8521 | 0.8967 |
| 0.02 | 11.0 | 4125 | 0.9299 | 0.8933 |
| 0.0138 | 12.0 | 4500 | 0.9525 | 0.8983 |
| 0.0001 | 13.0 | 4875 | 0.8824 | 0.9067 |
| 0.013 | 14.0 | 5250 | 0.9828 | 0.8833 |
| 0.0349 | 15.0 | 5625 | 0.8057 | 0.9033 |
| 0.0008 | 16.0 | 6000 | 0.9444 | 0.8983 |
| 0.0016 | 17.0 | 6375 | 0.8486 | 0.905 |
| 0.0093 | 18.0 | 6750 | 0.8485 | 0.9083 |
| 0.0057 | 19.0 | 7125 | 0.8351 | 0.895 |
| 0.0146 | 20.0 | 7500 | 0.8068 | 0.915 |
| 0.0088 | 21.0 | 7875 | 0.8372 | 0.9117 |
| 0.0074 | 22.0 | 8250 | 0.8780 | 0.905 |
| 0.0068 | 23.0 | 8625 | 0.9227 | 0.9067 |
| 0.0 | 24.0 | 9000 | 0.8408 | 0.9067 |
| 0.0 | 25.0 | 9375 | 0.8878 | 0.9067 |
| 0.0001 | 26.0 | 9750 | 0.6996 | 0.9217 |
| 0.0043 | 27.0 | 10125 | 0.7960 | 0.915 |
| 0.0021 | 28.0 | 10500 | 0.8288 | 0.91 |
| 0.002 | 29.0 | 10875 | 0.8059 | 0.9133 |
| 0.0055 | 30.0 | 11250 | 0.8992 | 0.9117 |
| 0.0001 | 31.0 | 11625 | 0.9502 | 0.9083 |
| 0.0001 | 32.0 | 12000 | 1.0009 | 0.9067 |
| 0.0047 | 33.0 | 12375 | 0.9429 | 0.91 |
| 0.0 | 34.0 | 12750 | 0.9564 | 0.905 |
| 0.0 | 35.0 | 13125 | 0.9119 | 0.9217 |
| 0.0 | 36.0 | 13500 | 1.0028 | 0.9033 |
| 0.0 | 37.0 | 13875 | 0.9150 | 0.91 |
| 0.0054 | 38.0 | 14250 | 0.9393 | 0.91 |
| 0.0 | 39.0 | 14625 | 1.0004 | 0.9067 |
| 0.0 | 40.0 | 15000 | 0.9733 | 0.9083 |
| 0.0001 | 41.0 | 15375 | 0.9774 | 0.9067 |
| 0.0 | 42.0 | 15750 | 0.9404 | 0.9133 |
| 0.0 | 43.0 | 16125 | 0.9724 | 0.9117 |
| 0.0 | 44.0 | 16500 | 0.9389 | 0.915 |
| 0.0 | 45.0 | 16875 | 0.9342 | 0.9167 |
| 0.0 | 46.0 | 17250 | 0.9815 | 0.9117 |
| 0.0058 | 47.0 | 17625 | 0.9724 | 0.9067 |
| 0.0 | 48.0 | 18000 | 0.9650 | 0.9067 |
| 0.0 | 49.0 | 18375 | 0.9572 | 0.9083 |
| 0.0012 | 50.0 | 18750 | 0.9550 | 0.91 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1259
- Accuracy: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3139 | 1.0 | 375 | 0.2920 | 0.8835 |
| 0.213 | 2.0 | 750 | 0.3450 | 0.8785 |
| 0.2004 | 3.0 | 1125 | 0.4306 | 0.8719 |
| 0.1151 | 4.0 | 1500 | 0.4856 | 0.8702 |
| 0.1363 | 5.0 | 1875 | 0.5483 | 0.8752 |
| 0.0415 | 6.0 | 2250 | 0.6014 | 0.8719 |
| 0.0888 | 7.0 | 2625 | 0.6594 | 0.8636 |
| 0.0129 | 8.0 | 3000 | 0.7394 | 0.8702 |
| 0.0606 | 9.0 | 3375 | 0.7551 | 0.8619 |
| 0.0273 | 10.0 | 3750 | 0.7977 | 0.8536 |
| 0.0575 | 11.0 | 4125 | 0.7927 | 0.8702 |
| 0.0142 | 12.0 | 4500 | 0.8285 | 0.8619 |
| 0.006 | 13.0 | 4875 | 0.8594 | 0.8819 |
| 0.0339 | 14.0 | 5250 | 0.8600 | 0.8686 |
| 0.0029 | 15.0 | 5625 | 0.9289 | 0.8719 |
| 0.0348 | 16.0 | 6000 | 0.7828 | 0.8819 |
| 0.0273 | 17.0 | 6375 | 0.7381 | 0.8885 |
| 0.029 | 18.0 | 6750 | 0.9087 | 0.8686 |
| 0.0306 | 19.0 | 7125 | 0.9194 | 0.8785 |
| 0.0034 | 20.0 | 7500 | 1.0978 | 0.8619 |
| 0.0052 | 21.0 | 7875 | 0.9530 | 0.8785 |
| 0.0001 | 22.0 | 8250 | 0.9575 | 0.8752 |
| 0.0447 | 23.0 | 8625 | 0.9869 | 0.8819 |
| 0.0122 | 24.0 | 9000 | 0.8869 | 0.8785 |
| 0.0018 | 25.0 | 9375 | 1.0324 | 0.8669 |
| 0.0117 | 26.0 | 9750 | 0.9387 | 0.8852 |
| 0.0206 | 27.0 | 10125 | 1.0468 | 0.8719 |
| 0.0002 | 28.0 | 10500 | 0.9421 | 0.8785 |
| 0.0001 | 29.0 | 10875 | 0.8621 | 0.8968 |
| 0.0027 | 30.0 | 11250 | 0.9653 | 0.8769 |
| 0.0116 | 31.0 | 11625 | 0.9958 | 0.8785 |
| 0.0019 | 32.0 | 12000 | 1.1300 | 0.8752 |
| 0.0084 | 33.0 | 12375 | 1.0346 | 0.8802 |
| 0.0 | 34.0 | 12750 | 1.0458 | 0.8719 |
| 0.0 | 35.0 | 13125 | 1.0740 | 0.8719 |
| 0.0001 | 36.0 | 13500 | 1.0706 | 0.8719 |
| 0.0 | 37.0 | 13875 | 1.2116 | 0.8735 |
| 0.0 | 38.0 | 14250 | 1.1598 | 0.8735 |
| 0.0 | 39.0 | 14625 | 1.1682 | 0.8785 |
| 0.0029 | 40.0 | 15000 | 1.0573 | 0.8835 |
| 0.0 | 41.0 | 15375 | 1.1307 | 0.8735 |
| 0.0028 | 42.0 | 15750 | 1.1484 | 0.8702 |
| 0.0032 | 43.0 | 16125 | 1.1289 | 0.8752 |
| 0.0031 | 44.0 | 16500 | 1.1224 | 0.8769 |
| 0.0027 | 45.0 | 16875 | 1.1287 | 0.8719 |
| 0.0 | 46.0 | 17250 | 1.1176 | 0.8752 |
| 0.006 | 47.0 | 17625 | 1.1207 | 0.8752 |
| 0.0 | 48.0 | 18000 | 1.1234 | 0.8752 |
| 0.0024 | 49.0 | 18375 | 1.1256 | 0.8752 |
| 0.0022 | 50.0 | 18750 | 1.1259 | 0.8735 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0235
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3072 | 1.0 | 375 | 0.3497 | 0.8733 |
| 0.1839 | 2.0 | 750 | 0.4255 | 0.87 |
| 0.1528 | 3.0 | 1125 | 0.4557 | 0.8567 |
| 0.1267 | 4.0 | 1500 | 0.3726 | 0.89 |
| 0.1353 | 5.0 | 1875 | 0.4467 | 0.8917 |
| 0.0943 | 6.0 | 2250 | 0.4927 | 0.91 |
| 0.1102 | 7.0 | 2625 | 0.6801 | 0.8833 |
| 0.1057 | 8.0 | 3000 | 0.6555 | 0.88 |
| 0.032 | 9.0 | 3375 | 0.7410 | 0.8783 |
| 0.0843 | 10.0 | 3750 | 0.8478 | 0.8667 |
| 0.0459 | 11.0 | 4125 | 0.6987 | 0.8917 |
| 0.0092 | 12.0 | 4500 | 0.7040 | 0.8917 |
| 0.0349 | 13.0 | 4875 | 0.7908 | 0.885 |
| 0.0111 | 14.0 | 5250 | 0.7260 | 0.8983 |
| 0.0286 | 15.0 | 5625 | 0.7556 | 0.89 |
| 0.0202 | 16.0 | 6000 | 0.7922 | 0.885 |
| 0.0017 | 17.0 | 6375 | 0.7780 | 0.89 |
| 0.0426 | 18.0 | 6750 | 0.7356 | 0.9033 |
| 0.0036 | 19.0 | 7125 | 0.7906 | 0.88 |
| 0.0088 | 20.0 | 7500 | 0.8591 | 0.8883 |
| 0.014 | 21.0 | 7875 | 0.9590 | 0.8867 |
| 0.0 | 22.0 | 8250 | 0.9929 | 0.8783 |
| 0.0363 | 23.0 | 8625 | 0.9559 | 0.89 |
| 0.0156 | 24.0 | 9000 | 0.9344 | 0.88 |
| 0.0345 | 25.0 | 9375 | 0.8898 | 0.8917 |
| 0.0005 | 26.0 | 9750 | 0.9066 | 0.9 |
| 0.0104 | 27.0 | 10125 | 0.9018 | 0.8983 |
| 0.0026 | 28.0 | 10500 | 0.8354 | 0.89 |
| 0.0098 | 29.0 | 10875 | 1.0679 | 0.885 |
| 0.0077 | 30.0 | 11250 | 0.8084 | 0.8933 |
| 0.007 | 31.0 | 11625 | 0.9761 | 0.8833 |
| 0.0079 | 32.0 | 12000 | 0.8798 | 0.8867 |
| 0.0211 | 33.0 | 12375 | 0.9152 | 0.8967 |
| 0.0205 | 34.0 | 12750 | 0.8595 | 0.8967 |
| 0.0 | 35.0 | 13125 | 0.9123 | 0.8983 |
| 0.0 | 36.0 | 13500 | 1.0918 | 0.8817 |
| 0.0001 | 37.0 | 13875 | 0.9598 | 0.8917 |
| 0.0 | 38.0 | 14250 | 0.9005 | 0.8933 |
| 0.0 | 39.0 | 14625 | 0.9817 | 0.895 |
| 0.003 | 40.0 | 15000 | 1.0214 | 0.8933 |
| 0.0 | 41.0 | 15375 | 1.0132 | 0.895 |
| 0.0012 | 42.0 | 15750 | 1.0443 | 0.8933 |
| 0.0 | 43.0 | 16125 | 1.0086 | 0.895 |
| 0.0 | 44.0 | 16500 | 1.0148 | 0.895 |
| 0.0 | 45.0 | 16875 | 1.0171 | 0.895 |
| 0.0 | 46.0 | 17250 | 1.0091 | 0.8967 |
| 0.0 | 47.0 | 17625 | 1.0118 | 0.8983 |
| 0.0 | 48.0 | 18000 | 1.0184 | 0.9017 |
| 0.0 | 49.0 | 18375 | 1.0213 | 0.9017 |
| 0.0 | 50.0 | 18750 | 1.0235 | 0.9017 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_adamax_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7497
- Accuracy: 0.9132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.329 | 1.0 | 376 | 0.4277 | 0.8464 |
| 0.2087 | 2.0 | 752 | 0.3407 | 0.8698 |
| 0.2485 | 3.0 | 1128 | 0.3788 | 0.8598 |
| 0.178 | 4.0 | 1504 | 0.4197 | 0.8531 |
| 0.1258 | 5.0 | 1880 | 0.4173 | 0.8648 |
| 0.1206 | 6.0 | 2256 | 0.3586 | 0.8848 |
| 0.1282 | 7.0 | 2632 | 0.3517 | 0.8865 |
| 0.0583 | 8.0 | 3008 | 0.5359 | 0.8765 |
| 0.0747 | 9.0 | 3384 | 0.5100 | 0.8731 |
| 0.0435 | 10.0 | 3760 | 0.5516 | 0.8781 |
| 0.06 | 11.0 | 4136 | 0.3933 | 0.8998 |
| 0.0257 | 12.0 | 4512 | 0.5267 | 0.8848 |
| 0.0686 | 13.0 | 4888 | 0.4896 | 0.9065 |
| 0.016 | 14.0 | 5264 | 0.5666 | 0.8881 |
| 0.011 | 15.0 | 5640 | 0.5612 | 0.8965 |
| 0.0019 | 16.0 | 6016 | 0.6453 | 0.8848 |
| 0.0015 | 17.0 | 6392 | 0.5726 | 0.8948 |
| 0.0354 | 18.0 | 6768 | 0.5332 | 0.9048 |
| 0.0037 | 19.0 | 7144 | 0.5726 | 0.8965 |
| 0.0094 | 20.0 | 7520 | 0.5926 | 0.9032 |
| 0.0008 | 21.0 | 7896 | 0.5520 | 0.8998 |
| 0.0004 | 22.0 | 8272 | 0.4436 | 0.9165 |
| 0.0006 | 23.0 | 8648 | 0.6077 | 0.8965 |
| 0.001 | 24.0 | 9024 | 0.6248 | 0.9132 |
| 0.0003 | 25.0 | 9400 | 0.6715 | 0.8982 |
| 0.0035 | 26.0 | 9776 | 0.6641 | 0.9082 |
| 0.0 | 27.0 | 10152 | 0.6982 | 0.9048 |
| 0.0 | 28.0 | 10528 | 0.7269 | 0.8982 |
| 0.0054 | 29.0 | 10904 | 0.6756 | 0.9098 |
| 0.0034 | 30.0 | 11280 | 0.6451 | 0.9065 |
| 0.0 | 31.0 | 11656 | 0.6535 | 0.9098 |
| 0.0 | 32.0 | 12032 | 0.6650 | 0.9065 |
| 0.0 | 33.0 | 12408 | 0.6759 | 0.9082 |
| 0.0 | 34.0 | 12784 | 0.6731 | 0.9048 |
| 0.0 | 35.0 | 13160 | 0.6782 | 0.9082 |
| 0.0001 | 36.0 | 13536 | 0.6755 | 0.9032 |
| 0.0 | 37.0 | 13912 | 0.7594 | 0.9098 |
| 0.0 | 38.0 | 14288 | 0.7065 | 0.9115 |
| 0.0 | 39.0 | 14664 | 0.7005 | 0.9082 |
| 0.0 | 40.0 | 15040 | 0.7058 | 0.9149 |
| 0.0 | 41.0 | 15416 | 0.6924 | 0.9115 |
| 0.0 | 42.0 | 15792 | 0.7078 | 0.9149 |
| 0.0 | 43.0 | 16168 | 0.7156 | 0.9149 |
| 0.0 | 44.0 | 16544 | 0.7204 | 0.9165 |
| 0.0 | 45.0 | 16920 | 0.7358 | 0.9149 |
| 0.003 | 46.0 | 17296 | 0.7278 | 0.9165 |
| 0.0 | 47.0 | 17672 | 0.7349 | 0.9149 |
| 0.0 | 48.0 | 18048 | 0.7414 | 0.9149 |
| 0.0 | 49.0 | 18424 | 0.7464 | 0.9149 |
| 0.0023 | 50.0 | 18800 | 0.7497 | 0.9132 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_adamax_0001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6997
- Accuracy: 0.9182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2162 | 1.0 | 376 | 0.2261 | 0.9082 |
| 0.1041 | 2.0 | 752 | 0.2663 | 0.8965 |
| 0.0837 | 3.0 | 1128 | 0.3441 | 0.9015 |
| 0.0172 | 4.0 | 1504 | 0.4099 | 0.9048 |
| 0.0131 | 5.0 | 1880 | 0.4724 | 0.9048 |
| 0.0004 | 6.0 | 2256 | 0.4925 | 0.9065 |
| 0.0017 | 7.0 | 2632 | 0.6831 | 0.8965 |
| 0.0006 | 8.0 | 3008 | 0.5273 | 0.9015 |
| 0.0324 | 9.0 | 3384 | 0.5755 | 0.8998 |
| 0.0 | 10.0 | 3760 | 0.6569 | 0.9048 |
| 0.0009 | 11.0 | 4136 | 0.5873 | 0.9082 |
| 0.0003 | 12.0 | 4512 | 0.6069 | 0.9065 |
| 0.0 | 13.0 | 4888 | 0.5862 | 0.9082 |
| 0.0063 | 14.0 | 5264 | 0.6445 | 0.9048 |
| 0.0 | 15.0 | 5640 | 0.6277 | 0.9132 |
| 0.0 | 16.0 | 6016 | 0.7053 | 0.9032 |
| 0.0 | 17.0 | 6392 | 0.6033 | 0.9098 |
| 0.0 | 18.0 | 6768 | 0.6638 | 0.9065 |
| 0.0 | 19.0 | 7144 | 0.6432 | 0.9082 |
| 0.004 | 20.0 | 7520 | 0.6467 | 0.9115 |
| 0.0 | 21.0 | 7896 | 0.7009 | 0.9115 |
| 0.0 | 22.0 | 8272 | 0.7221 | 0.9048 |
| 0.0 | 23.0 | 8648 | 0.6516 | 0.9149 |
| 0.0 | 24.0 | 9024 | 0.6399 | 0.9149 |
| 0.0 | 25.0 | 9400 | 0.6382 | 0.9182 |
| 0.0034 | 26.0 | 9776 | 0.6520 | 0.9098 |
| 0.0 | 27.0 | 10152 | 0.6761 | 0.9115 |
| 0.0 | 28.0 | 10528 | 0.6436 | 0.9182 |
| 0.003 | 29.0 | 10904 | 0.6339 | 0.9115 |
| 0.0041 | 30.0 | 11280 | 0.6392 | 0.9132 |
| 0.0 | 31.0 | 11656 | 0.6548 | 0.9182 |
| 0.0 | 32.0 | 12032 | 0.6680 | 0.9149 |
| 0.0 | 33.0 | 12408 | 0.6562 | 0.9115 |
| 0.0 | 34.0 | 12784 | 0.6705 | 0.9165 |
| 0.0 | 35.0 | 13160 | 0.6801 | 0.9098 |
| 0.0 | 36.0 | 13536 | 0.6653 | 0.9182 |
| 0.0 | 37.0 | 13912 | 0.6565 | 0.9165 |
| 0.0 | 38.0 | 14288 | 0.6618 | 0.9215 |
| 0.0 | 39.0 | 14664 | 0.6597 | 0.9149 |
| 0.0 | 40.0 | 15040 | 0.6689 | 0.9165 |
| 0.0 | 41.0 | 15416 | 0.6826 | 0.9149 |
| 0.0 | 42.0 | 15792 | 0.6835 | 0.9132 |
| 0.0 | 43.0 | 16168 | 0.6862 | 0.9149 |
| 0.0 | 44.0 | 16544 | 0.6860 | 0.9182 |
| 0.0 | 45.0 | 16920 | 0.6904 | 0.9182 |
| 0.0027 | 46.0 | 17296 | 0.6967 | 0.9132 |
| 0.0 | 47.0 | 17672 | 0.6971 | 0.9182 |
| 0.0 | 48.0 | 18048 | 0.6989 | 0.9182 |
| 0.0 | 49.0 | 18424 | 0.7000 | 0.9182 |
| 0.0022 | 50.0 | 18800 | 0.6997 | 0.9182 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_0001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4357
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3287 | 1.0 | 375 | 0.3863 | 0.85 |
| 0.2455 | 2.0 | 750 | 0.3649 | 0.8717 |
| 0.1213 | 3.0 | 1125 | 0.4642 | 0.8583 |
| 0.1727 | 4.0 | 1500 | 0.5805 | 0.8617 |
| 0.1128 | 5.0 | 1875 | 0.6371 | 0.8483 |
| 0.0689 | 6.0 | 2250 | 0.6331 | 0.8683 |
| 0.0983 | 7.0 | 2625 | 0.6829 | 0.865 |
| 0.1105 | 8.0 | 3000 | 0.6645 | 0.8617 |
| 0.0716 | 9.0 | 3375 | 0.9136 | 0.8583 |
| 0.0639 | 10.0 | 3750 | 0.7869 | 0.8867 |
| 0.0325 | 11.0 | 4125 | 0.8744 | 0.8733 |
| 0.0627 | 12.0 | 4500 | 0.9757 | 0.8567 |
| 0.0409 | 13.0 | 4875 | 0.9654 | 0.8633 |
| 0.0848 | 14.0 | 5250 | 0.8074 | 0.8667 |
| 0.0374 | 15.0 | 5625 | 0.9236 | 0.8667 |
| 0.037 | 16.0 | 6000 | 1.0898 | 0.8617 |
| 0.0497 | 17.0 | 6375 | 1.1236 | 0.8583 |
| 0.0095 | 18.0 | 6750 | 1.0183 | 0.87 |
| 0.0289 | 19.0 | 7125 | 1.0208 | 0.8783 |
| 0.0255 | 20.0 | 7500 | 1.1375 | 0.8667 |
| 0.0016 | 21.0 | 7875 | 1.1251 | 0.8617 |
| 0.0005 | 22.0 | 8250 | 1.0252 | 0.8717 |
| 0.015 | 23.0 | 8625 | 1.1223 | 0.865 |
| 0.0375 | 24.0 | 9000 | 1.0372 | 0.8733 |
| 0.0379 | 25.0 | 9375 | 0.9869 | 0.8667 |
| 0.0001 | 26.0 | 9750 | 1.0331 | 0.8733 |
| 0.0134 | 27.0 | 10125 | 0.9754 | 0.885 |
| 0.0 | 28.0 | 10500 | 1.0742 | 0.8583 |
| 0.0001 | 29.0 | 10875 | 1.0378 | 0.88 |
| 0.0 | 30.0 | 11250 | 1.1203 | 0.875 |
| 0.0077 | 31.0 | 11625 | 1.1471 | 0.8783 |
| 0.0003 | 32.0 | 12000 | 1.1437 | 0.8783 |
| 0.0 | 33.0 | 12375 | 1.1521 | 0.875 |
| 0.0003 | 34.0 | 12750 | 1.2362 | 0.865 |
| 0.0 | 35.0 | 13125 | 1.2535 | 0.8567 |
| 0.0 | 36.0 | 13500 | 1.2428 | 0.865 |
| 0.0002 | 37.0 | 13875 | 1.3504 | 0.8583 |
| 0.0191 | 38.0 | 14250 | 1.2705 | 0.87 |
| 0.0 | 39.0 | 14625 | 1.3466 | 0.8667 |
| 0.0 | 40.0 | 15000 | 1.3575 | 0.8617 |
| 0.0 | 41.0 | 15375 | 1.3681 | 0.8667 |
| 0.0 | 42.0 | 15750 | 1.3681 | 0.87 |
| 0.0 | 43.0 | 16125 | 1.3799 | 0.865 |
| 0.0 | 44.0 | 16500 | 1.3559 | 0.8667 |
| 0.0 | 45.0 | 16875 | 1.3770 | 0.865 |
| 0.0 | 46.0 | 17250 | 1.4044 | 0.8667 |
| 0.0 | 47.0 | 17625 | 1.4188 | 0.8683 |
| 0.0 | 48.0 | 18000 | 1.4286 | 0.8667 |
| 0.0 | 49.0 | 18375 | 1.4343 | 0.8667 |
| 0.0 | 50.0 | 18750 | 1.4357 | 0.8667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_tiny_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9895
- Accuracy: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3055 | 1.0 | 375 | 0.4440 | 0.825 |
| 0.2222 | 2.0 | 750 | 0.3224 | 0.8867 |
| 0.2186 | 3.0 | 1125 | 0.3702 | 0.8883 |
| 0.1592 | 4.0 | 1500 | 0.4759 | 0.85 |
| 0.0922 | 5.0 | 1875 | 0.4560 | 0.8767 |
| 0.0977 | 6.0 | 2250 | 0.5531 | 0.875 |
| 0.0567 | 7.0 | 2625 | 0.5054 | 0.8883 |
| 0.0612 | 8.0 | 3000 | 0.5016 | 0.9067 |
| 0.0471 | 9.0 | 3375 | 0.6558 | 0.895 |
| 0.0783 | 10.0 | 3750 | 0.7144 | 0.89 |
| 0.0337 | 11.0 | 4125 | 0.7483 | 0.8833 |
| 0.0522 | 12.0 | 4500 | 0.6408 | 0.8967 |
| 0.0122 | 13.0 | 4875 | 0.5578 | 0.8917 |
| 0.0456 | 14.0 | 5250 | 0.6886 | 0.9 |
| 0.0505 | 15.0 | 5625 | 0.6222 | 0.9067 |
| 0.0186 | 16.0 | 6000 | 0.7341 | 0.8867 |
| 0.0232 | 17.0 | 6375 | 0.6650 | 0.9083 |
| 0.0384 | 18.0 | 6750 | 0.6731 | 0.9133 |
| 0.0134 | 19.0 | 7125 | 0.7917 | 0.8883 |
| 0.0197 | 20.0 | 7500 | 0.7544 | 0.9033 |
| 0.006 | 21.0 | 7875 | 0.7694 | 0.8983 |
| 0.084 | 22.0 | 8250 | 0.7873 | 0.8917 |
| 0.0405 | 23.0 | 8625 | 0.7521 | 0.8967 |
| 0.0002 | 24.0 | 9000 | 0.9409 | 0.8883 |
| 0.0009 | 25.0 | 9375 | 0.8364 | 0.8967 |
| 0.0273 | 26.0 | 9750 | 0.7668 | 0.8933 |
| 0.001 | 27.0 | 10125 | 0.7995 | 0.88 |
| 0.0262 | 28.0 | 10500 | 0.8060 | 0.8883 |
| 0.0003 | 29.0 | 10875 | 0.7588 | 0.9083 |
| 0.0189 | 30.0 | 11250 | 0.9019 | 0.8867 |
| 0.0003 | 31.0 | 11625 | 1.0397 | 0.8867 |
| 0.0 | 32.0 | 12000 | 0.9253 | 0.895 |
| 0.0002 | 33.0 | 12375 | 0.8619 | 0.905 |
| 0.0003 | 34.0 | 12750 | 0.9328 | 0.9 |
| 0.0 | 35.0 | 13125 | 0.9364 | 0.905 |
| 0.0002 | 36.0 | 13500 | 0.9470 | 0.8967 |
| 0.0001 | 37.0 | 13875 | 0.9360 | 0.9033 |
| 0.0033 | 38.0 | 14250 | 1.0063 | 0.9033 |
| 0.0 | 39.0 | 14625 | 0.9618 | 0.9017 |
| 0.0 | 40.0 | 15000 | 0.9713 | 0.9083 |
| 0.0 | 41.0 | 15375 | 0.9440 | 0.9083 |
| 0.0 | 42.0 | 15750 | 0.9330 | 0.91 |
| 0.0 | 43.0 | 16125 | 0.9519 | 0.9083 |
| 0.0 | 44.0 | 16500 | 0.9407 | 0.905 |
| 0.0 | 45.0 | 16875 | 0.9804 | 0.9033 |
| 0.0 | 46.0 | 17250 | 0.9891 | 0.9033 |
| 0.0031 | 47.0 | 17625 | 0.9794 | 0.9033 |
| 0.0 | 48.0 | 18000 | 0.9842 | 0.9033 |
| 0.0 | 49.0 | 18375 | 0.9888 | 0.9067 |
| 0.0021 | 50.0 | 18750 | 0.9895 | 0.9067 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_adamax_001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1252
- Accuracy: 0.8852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3593 | 1.0 | 375 | 0.2868 | 0.8902 |
| 0.3012 | 2.0 | 750 | 0.2473 | 0.9085 |
| 0.3836 | 3.0 | 1125 | 0.3500 | 0.8619 |
| 0.1484 | 4.0 | 1500 | 0.3561 | 0.8819 |
| 0.142 | 5.0 | 1875 | 0.3496 | 0.8619 |
| 0.1054 | 6.0 | 2250 | 0.5030 | 0.8519 |
| 0.1132 | 7.0 | 2625 | 0.4021 | 0.8769 |
| 0.0387 | 8.0 | 3000 | 0.5600 | 0.8752 |
| 0.0412 | 9.0 | 3375 | 0.4804 | 0.8935 |
| 0.049 | 10.0 | 3750 | 0.4670 | 0.8902 |
| 0.0223 | 11.0 | 4125 | 0.5161 | 0.8852 |
| 0.0227 | 12.0 | 4500 | 0.5268 | 0.8802 |
| 0.029 | 13.0 | 4875 | 0.5511 | 0.8819 |
| 0.0101 | 14.0 | 5250 | 0.5655 | 0.8935 |
| 0.0239 | 15.0 | 5625 | 0.5903 | 0.8885 |
| 0.0204 | 16.0 | 6000 | 0.6826 | 0.8869 |
| 0.0387 | 17.0 | 6375 | 0.6581 | 0.8835 |
| 0.0045 | 18.0 | 6750 | 0.5940 | 0.8869 |
| 0.0004 | 19.0 | 7125 | 0.7563 | 0.8885 |
| 0.0271 | 20.0 | 7500 | 0.5791 | 0.9035 |
| 0.0211 | 21.0 | 7875 | 0.5981 | 0.8869 |
| 0.0086 | 22.0 | 8250 | 0.6990 | 0.8869 |
| 0.0146 | 23.0 | 8625 | 0.6527 | 0.8935 |
| 0.0006 | 24.0 | 9000 | 0.5903 | 0.8885 |
| 0.02 | 25.0 | 9375 | 0.6548 | 0.8952 |
| 0.0007 | 26.0 | 9750 | 0.7230 | 0.8952 |
| 0.0 | 27.0 | 10125 | 0.7646 | 0.9002 |
| 0.0 | 28.0 | 10500 | 0.8095 | 0.8852 |
| 0.0 | 29.0 | 10875 | 0.8926 | 0.8835 |
| 0.0 | 30.0 | 11250 | 0.8629 | 0.8819 |
| 0.0041 | 31.0 | 11625 | 0.8782 | 0.8819 |
| 0.0047 | 32.0 | 12000 | 0.8948 | 0.8819 |
| 0.0063 | 33.0 | 12375 | 0.9158 | 0.8752 |
| 0.0001 | 34.0 | 12750 | 0.9726 | 0.8918 |
| 0.0 | 35.0 | 13125 | 1.0164 | 0.8819 |
| 0.0 | 36.0 | 13500 | 1.0004 | 0.8869 |
| 0.0 | 37.0 | 13875 | 1.0193 | 0.8869 |
| 0.0 | 38.0 | 14250 | 1.0151 | 0.8935 |
| 0.0 | 39.0 | 14625 | 1.0231 | 0.8902 |
| 0.0035 | 40.0 | 15000 | 1.0298 | 0.8852 |
| 0.0 | 41.0 | 15375 | 1.0402 | 0.8902 |
| 0.0028 | 42.0 | 15750 | 1.0577 | 0.8869 |
| 0.0026 | 43.0 | 16125 | 1.0687 | 0.8819 |
| 0.0027 | 44.0 | 16500 | 1.0626 | 0.8852 |
| 0.0029 | 45.0 | 16875 | 1.0972 | 0.8835 |
| 0.0 | 46.0 | 17250 | 1.0976 | 0.8819 |
| 0.0055 | 47.0 | 17625 | 1.1056 | 0.8819 |
| 0.0 | 48.0 | 18000 | 1.1143 | 0.8852 |
| 0.0025 | 49.0 | 18375 | 1.1213 | 0.8835 |
| 0.0024 | 50.0 | 18750 | 1.1252 | 0.8852 |
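As a sanity check on the table's step counts (assuming each epoch covers the full training split once, with full batches of 32):

```python
batch_size = 32
steps_per_epoch = 375  # from the table: epoch 1.0 ends at step 375
epochs = 50

total_steps = steps_per_epoch * epochs
train_images = steps_per_epoch * batch_size  # assumes the last batch is full

print(total_steps)   # 18750, matching the final row of the table
print(train_images)  # about 12000 training images per epoch
```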
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_0001_fold2
|
# smids_5x_deit_base_adamax_0001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8969
- Accuracy: 0.8985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2012 | 1.0 | 375 | 0.2802 | 0.8835 |
| 0.1424 | 2.0 | 750 | 0.3530 | 0.8985 |
| 0.0484 | 3.0 | 1125 | 0.5841 | 0.8769 |
| 0.0171 | 4.0 | 1500 | 0.5393 | 0.8968 |
| 0.0016 | 5.0 | 1875 | 0.6879 | 0.8835 |
| 0.0133 | 6.0 | 2250 | 0.7421 | 0.8885 |
| 0.0004 | 7.0 | 2625 | 0.7382 | 0.8869 |
| 0.0224 | 8.0 | 3000 | 0.6881 | 0.8902 |
| 0.0004 | 9.0 | 3375 | 0.7760 | 0.8902 |
| 0.0002 | 10.0 | 3750 | 0.7986 | 0.8852 |
| 0.0045 | 11.0 | 4125 | 0.7173 | 0.8935 |
| 0.0002 | 12.0 | 4500 | 0.8875 | 0.8802 |
| 0.0106 | 13.0 | 4875 | 0.8591 | 0.8918 |
| 0.0009 | 14.0 | 5250 | 0.9035 | 0.8902 |
| 0.0101 | 15.0 | 5625 | 0.8626 | 0.8918 |
| 0.0 | 16.0 | 6000 | 0.9182 | 0.8852 |
| 0.0029 | 17.0 | 6375 | 0.7794 | 0.8952 |
| 0.0 | 18.0 | 6750 | 0.7848 | 0.8935 |
| 0.0001 | 19.0 | 7125 | 0.8673 | 0.8902 |
| 0.0 | 20.0 | 7500 | 0.8428 | 0.8918 |
| 0.0 | 21.0 | 7875 | 0.8282 | 0.8952 |
| 0.0 | 22.0 | 8250 | 0.8604 | 0.8918 |
| 0.0 | 23.0 | 8625 | 0.8223 | 0.8935 |
| 0.0 | 24.0 | 9000 | 0.8436 | 0.8952 |
| 0.0 | 25.0 | 9375 | 0.8078 | 0.8902 |
| 0.0 | 26.0 | 9750 | 0.8487 | 0.8968 |
| 0.0 | 27.0 | 10125 | 0.8273 | 0.8902 |
| 0.0 | 28.0 | 10500 | 0.8385 | 0.8902 |
| 0.0 | 29.0 | 10875 | 0.8210 | 0.8985 |
| 0.0 | 30.0 | 11250 | 0.8440 | 0.8918 |
| 0.0029 | 31.0 | 11625 | 0.8614 | 0.8852 |
| 0.0034 | 32.0 | 12000 | 0.8524 | 0.8935 |
| 0.0033 | 33.0 | 12375 | 0.8611 | 0.8918 |
| 0.0 | 34.0 | 12750 | 0.8778 | 0.8985 |
| 0.0 | 35.0 | 13125 | 0.8525 | 0.8952 |
| 0.0 | 36.0 | 13500 | 0.8763 | 0.8952 |
| 0.0 | 37.0 | 13875 | 0.8733 | 0.9002 |
| 0.0 | 38.0 | 14250 | 0.8847 | 0.8952 |
| 0.0 | 39.0 | 14625 | 0.8741 | 0.8952 |
| 0.0027 | 40.0 | 15000 | 0.8864 | 0.8952 |
| 0.0 | 41.0 | 15375 | 0.8807 | 0.8952 |
| 0.0025 | 42.0 | 15750 | 0.8886 | 0.8952 |
| 0.0024 | 43.0 | 16125 | 0.8857 | 0.8985 |
| 0.0024 | 44.0 | 16500 | 0.8867 | 0.8968 |
| 0.0023 | 45.0 | 16875 | 0.8921 | 0.8985 |
| 0.0 | 46.0 | 17250 | 0.8968 | 0.8985 |
| 0.0048 | 47.0 | 17625 | 0.8952 | 0.8985 |
| 0.0 | 48.0 | 18000 | 0.8977 | 0.8985 |
| 0.0023 | 49.0 | 18375 | 0.8974 | 0.8985 |
| 0.0023 | 50.0 | 18750 | 0.8969 | 0.8985 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_00001_fold1
|
# smids_5x_deit_tiny_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9495
- Accuracy: 0.8982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2593 | 1.0 | 376 | 0.3151 | 0.8765 |
| 0.1718 | 2.0 | 752 | 0.2623 | 0.8998 |
| 0.1615 | 3.0 | 1128 | 0.2861 | 0.8965 |
| 0.0982 | 4.0 | 1504 | 0.3444 | 0.8881 |
| 0.0553 | 5.0 | 1880 | 0.3784 | 0.9098 |
| 0.0747 | 6.0 | 2256 | 0.5204 | 0.8881 |
| 0.0183 | 7.0 | 2632 | 0.5683 | 0.8948 |
| 0.0068 | 8.0 | 3008 | 0.6428 | 0.8998 |
| 0.0727 | 9.0 | 3384 | 0.7962 | 0.8815 |
| 0.0001 | 10.0 | 3760 | 0.7940 | 0.8965 |
| 0.001 | 11.0 | 4136 | 0.9819 | 0.8681 |
| 0.0 | 12.0 | 4512 | 0.8908 | 0.8848 |
| 0.0018 | 13.0 | 4888 | 0.8621 | 0.8865 |
| 0.0198 | 14.0 | 5264 | 0.8948 | 0.8881 |
| 0.0291 | 15.0 | 5640 | 0.9361 | 0.8915 |
| 0.0001 | 16.0 | 6016 | 0.7825 | 0.8948 |
| 0.0 | 17.0 | 6392 | 0.8996 | 0.8815 |
| 0.0001 | 18.0 | 6768 | 0.8212 | 0.8948 |
| 0.0026 | 19.0 | 7144 | 0.8543 | 0.8831 |
| 0.0145 | 20.0 | 7520 | 0.8936 | 0.8881 |
| 0.004 | 21.0 | 7896 | 0.9825 | 0.8815 |
| 0.0 | 22.0 | 8272 | 0.9004 | 0.8932 |
| 0.0001 | 23.0 | 8648 | 0.8961 | 0.8965 |
| 0.0 | 24.0 | 9024 | 1.0000 | 0.8915 |
| 0.0 | 25.0 | 9400 | 0.9507 | 0.8865 |
| 0.079 | 26.0 | 9776 | 1.0040 | 0.8865 |
| 0.0 | 27.0 | 10152 | 0.9365 | 0.8998 |
| 0.0 | 28.0 | 10528 | 0.9689 | 0.8815 |
| 0.0089 | 29.0 | 10904 | 0.9542 | 0.8898 |
| 0.0105 | 30.0 | 11280 | 0.9853 | 0.8898 |
| 0.0 | 31.0 | 11656 | 0.9962 | 0.8965 |
| 0.0 | 32.0 | 12032 | 0.9324 | 0.8982 |
| 0.0 | 33.0 | 12408 | 1.0542 | 0.8881 |
| 0.0 | 34.0 | 12784 | 0.9887 | 0.8932 |
| 0.0 | 35.0 | 13160 | 0.8827 | 0.9082 |
| 0.0 | 36.0 | 13536 | 0.8957 | 0.8982 |
| 0.0 | 37.0 | 13912 | 0.9316 | 0.8932 |
| 0.0 | 38.0 | 14288 | 0.9562 | 0.8915 |
| 0.0 | 39.0 | 14664 | 0.9229 | 0.8982 |
| 0.0 | 40.0 | 15040 | 0.9352 | 0.8932 |
| 0.0 | 41.0 | 15416 | 0.9221 | 0.8915 |
| 0.0 | 42.0 | 15792 | 0.9253 | 0.8965 |
| 0.0 | 43.0 | 16168 | 0.9330 | 0.8881 |
| 0.0 | 44.0 | 16544 | 0.9447 | 0.8965 |
| 0.0 | 45.0 | 16920 | 0.9432 | 0.8965 |
| 0.0047 | 46.0 | 17296 | 0.9445 | 0.8965 |
| 0.0 | 47.0 | 17672 | 0.9464 | 0.8948 |
| 0.0 | 48.0 | 18048 | 0.9465 | 0.8948 |
| 0.0 | 49.0 | 18424 | 0.9475 | 0.8982 |
| 0.0039 | 50.0 | 18800 | 0.9495 | 0.8982 |
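Note that the reported final loss is not the minimum over training. A quick scan over a few (epoch, validation loss, accuracy) rows copied from the table above shows the best checkpoints occur early:

```python
# (epoch, val_loss, accuracy) rows copied from the table above:
rows = [
    (2, 0.2623, 0.8998),
    (5, 0.3784, 0.9098),
    (35, 0.8827, 0.9082),
    (50, 0.9495, 0.8982),
]

best_loss = min(rows, key=lambda r: r[1])
best_acc = max(rows, key=lambda r: r[2])

print(best_loss)  # (2, 0.2623, 0.8998): lowest validation loss at epoch 2
print(best_acc)   # (5, 0.3784, 0.9098): best accuracy at epoch 5
```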
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_001_fold3
|
# smids_5x_deit_base_adamax_001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1005
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.307 | 1.0 | 375 | 0.3243 | 0.88 |
| 0.1951 | 2.0 | 750 | 0.2848 | 0.8967 |
| 0.1756 | 3.0 | 1125 | 0.3260 | 0.8767 |
| 0.1301 | 4.0 | 1500 | 0.3461 | 0.8933 |
| 0.1724 | 5.0 | 1875 | 0.3433 | 0.8783 |
| 0.1105 | 6.0 | 2250 | 0.5327 | 0.8517 |
| 0.105 | 7.0 | 2625 | 0.4495 | 0.89 |
| 0.1373 | 8.0 | 3000 | 0.3477 | 0.8933 |
| 0.0545 | 9.0 | 3375 | 0.5403 | 0.8767 |
| 0.026 | 10.0 | 3750 | 0.6392 | 0.8717 |
| 0.0547 | 11.0 | 4125 | 0.6160 | 0.875 |
| 0.0385 | 12.0 | 4500 | 0.5572 | 0.885 |
| 0.0376 | 13.0 | 4875 | 0.6146 | 0.8967 |
| 0.0031 | 14.0 | 5250 | 0.6509 | 0.8883 |
| 0.0185 | 15.0 | 5625 | 0.6515 | 0.885 |
| 0.0353 | 16.0 | 6000 | 0.7637 | 0.885 |
| 0.0052 | 17.0 | 6375 | 0.7211 | 0.8817 |
| 0.011 | 18.0 | 6750 | 0.5915 | 0.9067 |
| 0.0053 | 19.0 | 7125 | 0.6576 | 0.89 |
| 0.0044 | 20.0 | 7500 | 0.6728 | 0.8983 |
| 0.0003 | 21.0 | 7875 | 0.7362 | 0.8817 |
| 0.0001 | 22.0 | 8250 | 0.7370 | 0.8817 |
| 0.0265 | 23.0 | 8625 | 0.6954 | 0.895 |
| 0.0011 | 24.0 | 9000 | 0.7244 | 0.8883 |
| 0.0056 | 25.0 | 9375 | 0.7383 | 0.8917 |
| 0.0 | 26.0 | 9750 | 0.6944 | 0.9033 |
| 0.0001 | 27.0 | 10125 | 0.8581 | 0.8933 |
| 0.0002 | 28.0 | 10500 | 0.7732 | 0.8917 |
| 0.0001 | 29.0 | 10875 | 0.9540 | 0.8867 |
| 0.005 | 30.0 | 11250 | 0.8145 | 0.8933 |
| 0.0003 | 31.0 | 11625 | 0.8223 | 0.8967 |
| 0.0 | 32.0 | 12000 | 0.8225 | 0.89 |
| 0.0 | 33.0 | 12375 | 0.8479 | 0.8933 |
| 0.0 | 34.0 | 12750 | 0.8571 | 0.895 |
| 0.0 | 35.0 | 13125 | 0.9119 | 0.8917 |
| 0.0 | 36.0 | 13500 | 0.9029 | 0.8917 |
| 0.0 | 37.0 | 13875 | 0.9226 | 0.8967 |
| 0.0 | 38.0 | 14250 | 0.9083 | 0.895 |
| 0.0 | 39.0 | 14625 | 1.0048 | 0.8933 |
| 0.0026 | 40.0 | 15000 | 1.0018 | 0.8883 |
| 0.0 | 41.0 | 15375 | 1.0177 | 0.8917 |
| 0.0 | 42.0 | 15750 | 1.0273 | 0.8917 |
| 0.0 | 43.0 | 16125 | 1.0393 | 0.8933 |
| 0.0 | 44.0 | 16500 | 1.0649 | 0.895 |
| 0.0 | 45.0 | 16875 | 1.0825 | 0.8883 |
| 0.0 | 46.0 | 17250 | 1.0743 | 0.895 |
| 0.0 | 47.0 | 17625 | 1.0848 | 0.8917 |
| 0.0 | 48.0 | 18000 | 1.0902 | 0.8917 |
| 0.0 | 49.0 | 18375 | 1.0954 | 0.89 |
| 0.0 | 50.0 | 18750 | 1.1005 | 0.89 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_0001_fold3
|
# smids_5x_deit_base_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9417
- Accuracy: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2147 | 1.0 | 375 | 0.4092 | 0.8317 |
| 0.1408 | 2.0 | 750 | 0.3269 | 0.915 |
| 0.0228 | 3.0 | 1125 | 0.4567 | 0.9067 |
| 0.0111 | 4.0 | 1500 | 0.5788 | 0.9033 |
| 0.0156 | 5.0 | 1875 | 0.6062 | 0.9 |
| 0.0288 | 6.0 | 2250 | 0.6656 | 0.8917 |
| 0.0007 | 7.0 | 2625 | 0.6456 | 0.9017 |
| 0.0002 | 8.0 | 3000 | 0.6407 | 0.8917 |
| 0.0002 | 9.0 | 3375 | 0.6824 | 0.9083 |
| 0.0084 | 10.0 | 3750 | 0.6593 | 0.905 |
| 0.0203 | 11.0 | 4125 | 0.7617 | 0.9017 |
| 0.0033 | 12.0 | 4500 | 0.7022 | 0.9167 |
| 0.0 | 13.0 | 4875 | 0.8023 | 0.9033 |
| 0.0 | 14.0 | 5250 | 0.8062 | 0.9083 |
| 0.0 | 15.0 | 5625 | 0.8735 | 0.905 |
| 0.0293 | 16.0 | 6000 | 0.8124 | 0.9133 |
| 0.0 | 17.0 | 6375 | 0.8110 | 0.915 |
| 0.0 | 18.0 | 6750 | 0.7934 | 0.9167 |
| 0.0 | 19.0 | 7125 | 0.8257 | 0.9117 |
| 0.0 | 20.0 | 7500 | 0.8169 | 0.905 |
| 0.0 | 21.0 | 7875 | 0.7971 | 0.9167 |
| 0.0 | 22.0 | 8250 | 0.8206 | 0.905 |
| 0.0031 | 23.0 | 8625 | 0.8887 | 0.9067 |
| 0.0 | 24.0 | 9000 | 0.8570 | 0.91 |
| 0.0 | 25.0 | 9375 | 0.9027 | 0.9017 |
| 0.0 | 26.0 | 9750 | 0.8809 | 0.9067 |
| 0.0 | 27.0 | 10125 | 0.8772 | 0.9083 |
| 0.0 | 28.0 | 10500 | 0.8815 | 0.9083 |
| 0.0 | 29.0 | 10875 | 0.8462 | 0.91 |
| 0.0028 | 30.0 | 11250 | 0.8854 | 0.9083 |
| 0.0 | 31.0 | 11625 | 0.8584 | 0.9083 |
| 0.0 | 32.0 | 12000 | 0.8933 | 0.905 |
| 0.0 | 33.0 | 12375 | 0.8718 | 0.9083 |
| 0.0 | 34.0 | 12750 | 0.8798 | 0.9067 |
| 0.0 | 35.0 | 13125 | 0.8653 | 0.9083 |
| 0.0 | 36.0 | 13500 | 0.8742 | 0.9133 |
| 0.0 | 37.0 | 13875 | 0.8914 | 0.9083 |
| 0.0 | 38.0 | 14250 | 0.8921 | 0.91 |
| 0.0 | 39.0 | 14625 | 0.9001 | 0.9083 |
| 0.0025 | 40.0 | 15000 | 0.9101 | 0.9083 |
| 0.0 | 41.0 | 15375 | 0.9161 | 0.9067 |
| 0.0 | 42.0 | 15750 | 0.9182 | 0.9083 |
| 0.0 | 43.0 | 16125 | 0.9246 | 0.905 |
| 0.0 | 44.0 | 16500 | 0.9291 | 0.9083 |
| 0.0 | 45.0 | 16875 | 0.9302 | 0.9067 |
| 0.0 | 46.0 | 17250 | 0.9341 | 0.9067 |
| 0.0 | 47.0 | 17625 | 0.9378 | 0.9067 |
| 0.0 | 48.0 | 18000 | 0.9402 | 0.9067 |
| 0.0 | 49.0 | 18375 | 0.9417 | 0.9067 |
| 0.0 | 50.0 | 18750 | 0.9417 | 0.9067 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_00001_fold2
|
# smids_5x_deit_tiny_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2381
- Accuracy: 0.8769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3659 | 1.0 | 375 | 0.3296 | 0.8686 |
| 0.2195 | 2.0 | 750 | 0.3046 | 0.8802 |
| 0.1328 | 3.0 | 1125 | 0.3414 | 0.8752 |
| 0.0957 | 4.0 | 1500 | 0.3842 | 0.8802 |
| 0.0592 | 5.0 | 1875 | 0.4781 | 0.8885 |
| 0.0554 | 6.0 | 2250 | 0.5329 | 0.8902 |
| 0.0561 | 7.0 | 2625 | 0.7030 | 0.8735 |
| 0.0111 | 8.0 | 3000 | 0.7077 | 0.8785 |
| 0.0138 | 9.0 | 3375 | 0.8845 | 0.8852 |
| 0.0035 | 10.0 | 3750 | 0.8403 | 0.8819 |
| 0.0539 | 11.0 | 4125 | 0.9586 | 0.8702 |
| 0.009 | 12.0 | 4500 | 0.9960 | 0.8802 |
| 0.0001 | 13.0 | 4875 | 1.0306 | 0.8719 |
| 0.0001 | 14.0 | 5250 | 1.0127 | 0.8835 |
| 0.0171 | 15.0 | 5625 | 1.0184 | 0.8885 |
| 0.0071 | 16.0 | 6000 | 0.9932 | 0.8869 |
| 0.0241 | 17.0 | 6375 | 1.0882 | 0.8752 |
| 0.0005 | 18.0 | 6750 | 1.0661 | 0.8902 |
| 0.0877 | 19.0 | 7125 | 1.0148 | 0.8785 |
| 0.0001 | 20.0 | 7500 | 1.0786 | 0.8735 |
| 0.0 | 21.0 | 7875 | 1.0833 | 0.8852 |
| 0.0 | 22.0 | 8250 | 1.1111 | 0.8785 |
| 0.0001 | 23.0 | 8625 | 1.2212 | 0.8752 |
| 0.0 | 24.0 | 9000 | 1.0341 | 0.8752 |
| 0.0 | 25.0 | 9375 | 1.1693 | 0.8752 |
| 0.0 | 26.0 | 9750 | 1.1184 | 0.8819 |
| 0.0 | 27.0 | 10125 | 1.0601 | 0.8785 |
| 0.0009 | 28.0 | 10500 | 1.1933 | 0.8702 |
| 0.0 | 29.0 | 10875 | 1.2058 | 0.8785 |
| 0.0 | 30.0 | 11250 | 1.1743 | 0.8735 |
| 0.0039 | 31.0 | 11625 | 1.2100 | 0.8785 |
| 0.0108 | 32.0 | 12000 | 1.2237 | 0.8769 |
| 0.0031 | 33.0 | 12375 | 1.2193 | 0.8735 |
| 0.0 | 34.0 | 12750 | 1.2009 | 0.8769 |
| 0.0 | 35.0 | 13125 | 1.1695 | 0.8802 |
| 0.0 | 36.0 | 13500 | 1.1623 | 0.8819 |
| 0.0 | 37.0 | 13875 | 1.2497 | 0.8702 |
| 0.0 | 38.0 | 14250 | 1.2770 | 0.8769 |
| 0.0 | 39.0 | 14625 | 1.2424 | 0.8769 |
| 0.0042 | 40.0 | 15000 | 1.2342 | 0.8819 |
| 0.0 | 41.0 | 15375 | 1.2571 | 0.8785 |
| 0.0026 | 42.0 | 15750 | 1.2422 | 0.8702 |
| 0.0032 | 43.0 | 16125 | 1.2321 | 0.8835 |
| 0.0033 | 44.0 | 16500 | 1.2366 | 0.8852 |
| 0.0026 | 45.0 | 16875 | 1.2353 | 0.8802 |
| 0.0 | 46.0 | 17250 | 1.2327 | 0.8785 |
| 0.0046 | 47.0 | 17625 | 1.2346 | 0.8785 |
| 0.0 | 48.0 | 18000 | 1.2359 | 0.8769 |
| 0.0024 | 49.0 | 18375 | 1.2368 | 0.8769 |
| 0.0026 | 50.0 | 18750 | 1.2381 | 0.8769 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_00001_fold3
|
# smids_5x_deit_tiny_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9537
- Accuracy: 0.905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2762 | 1.0 | 375 | 0.2896 | 0.895 |
| 0.1644 | 2.0 | 750 | 0.3438 | 0.8783 |
| 0.0979 | 3.0 | 1125 | 0.3168 | 0.89 |
| 0.0564 | 4.0 | 1500 | 0.3257 | 0.9033 |
| 0.0779 | 5.0 | 1875 | 0.3885 | 0.8983 |
| 0.0908 | 6.0 | 2250 | 0.5137 | 0.8917 |
| 0.0272 | 7.0 | 2625 | 0.6097 | 0.9 |
| 0.0302 | 8.0 | 3000 | 0.6164 | 0.905 |
| 0.0116 | 9.0 | 3375 | 0.6445 | 0.905 |
| 0.03 | 10.0 | 3750 | 0.8252 | 0.8983 |
| 0.0134 | 11.0 | 4125 | 0.8460 | 0.8933 |
| 0.0063 | 12.0 | 4500 | 0.8830 | 0.8933 |
| 0.0001 | 13.0 | 4875 | 0.7641 | 0.8967 |
| 0.0003 | 14.0 | 5250 | 0.6936 | 0.9067 |
| 0.0001 | 15.0 | 5625 | 0.8158 | 0.8933 |
| 0.0147 | 16.0 | 6000 | 0.8005 | 0.9 |
| 0.0 | 17.0 | 6375 | 0.8369 | 0.9083 |
| 0.0 | 18.0 | 6750 | 0.7801 | 0.9117 |
| 0.0003 | 19.0 | 7125 | 0.8314 | 0.9 |
| 0.0272 | 20.0 | 7500 | 0.7717 | 0.9067 |
| 0.0197 | 21.0 | 7875 | 0.7616 | 0.91 |
| 0.0022 | 22.0 | 8250 | 0.8297 | 0.9017 |
| 0.0082 | 23.0 | 8625 | 0.9830 | 0.8967 |
| 0.0 | 24.0 | 9000 | 0.8304 | 0.9033 |
| 0.0 | 25.0 | 9375 | 0.9362 | 0.8967 |
| 0.0 | 26.0 | 9750 | 1.0186 | 0.885 |
| 0.0 | 27.0 | 10125 | 0.9660 | 0.895 |
| 0.0 | 28.0 | 10500 | 0.8565 | 0.9067 |
| 0.0 | 29.0 | 10875 | 1.0314 | 0.9 |
| 0.004 | 30.0 | 11250 | 1.0325 | 0.8917 |
| 0.0 | 31.0 | 11625 | 0.9803 | 0.905 |
| 0.0 | 32.0 | 12000 | 0.9230 | 0.8967 |
| 0.0 | 33.0 | 12375 | 0.9577 | 0.8983 |
| 0.0055 | 34.0 | 12750 | 1.0448 | 0.89 |
| 0.0 | 35.0 | 13125 | 0.9488 | 0.9017 |
| 0.0 | 36.0 | 13500 | 0.9198 | 0.9083 |
| 0.0 | 37.0 | 13875 | 0.9232 | 0.905 |
| 0.0 | 38.0 | 14250 | 1.0528 | 0.89 |
| 0.0 | 39.0 | 14625 | 0.9516 | 0.8983 |
| 0.0039 | 40.0 | 15000 | 0.9539 | 0.8983 |
| 0.0 | 41.0 | 15375 | 0.9633 | 0.8983 |
| 0.0 | 42.0 | 15750 | 0.9250 | 0.9033 |
| 0.0 | 43.0 | 16125 | 0.9440 | 0.9017 |
| 0.0 | 44.0 | 16500 | 0.9475 | 0.905 |
| 0.0 | 45.0 | 16875 | 0.9408 | 0.905 |
| 0.0 | 46.0 | 17250 | 0.9488 | 0.905 |
| 0.0 | 47.0 | 17625 | 0.9474 | 0.905 |
| 0.0 | 48.0 | 18000 | 0.9524 | 0.905 |
| 0.0 | 49.0 | 18375 | 0.9540 | 0.9033 |
| 0.0 | 50.0 | 18750 | 0.9537 | 0.905 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_001_fold4
|
# smids_5x_deit_base_adamax_001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4274
- Accuracy: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3484 | 1.0 | 375 | 0.4288 | 0.8417 |
| 0.3018 | 2.0 | 750 | 0.4109 | 0.84 |
| 0.131 | 3.0 | 1125 | 0.4491 | 0.8367 |
| 0.167 | 4.0 | 1500 | 0.4912 | 0.8583 |
| 0.1356 | 5.0 | 1875 | 0.4970 | 0.8617 |
| 0.074 | 6.0 | 2250 | 0.5520 | 0.8617 |
| 0.126 | 7.0 | 2625 | 0.5266 | 0.8683 |
| 0.1043 | 8.0 | 3000 | 0.5883 | 0.86 |
| 0.0184 | 9.0 | 3375 | 0.7003 | 0.8583 |
| 0.0576 | 10.0 | 3750 | 0.6626 | 0.87 |
| 0.0647 | 11.0 | 4125 | 0.5819 | 0.8667 |
| 0.0295 | 12.0 | 4500 | 0.8380 | 0.855 |
| 0.0198 | 13.0 | 4875 | 0.7725 | 0.8667 |
| 0.0803 | 14.0 | 5250 | 0.7242 | 0.86 |
| 0.0028 | 15.0 | 5625 | 0.5735 | 0.88 |
| 0.018 | 16.0 | 6000 | 0.9546 | 0.855 |
| 0.0295 | 17.0 | 6375 | 0.8527 | 0.8683 |
| 0.0122 | 18.0 | 6750 | 0.8464 | 0.8617 |
| 0.0006 | 19.0 | 7125 | 0.8600 | 0.8683 |
| 0.0121 | 20.0 | 7500 | 0.8637 | 0.8667 |
| 0.0034 | 21.0 | 7875 | 0.8894 | 0.8783 |
| 0.0002 | 22.0 | 8250 | 0.9509 | 0.855 |
| 0.0032 | 23.0 | 8625 | 1.0099 | 0.865 |
| 0.0103 | 24.0 | 9000 | 1.0826 | 0.8783 |
| 0.0066 | 25.0 | 9375 | 1.2355 | 0.8367 |
| 0.0001 | 26.0 | 9750 | 1.1335 | 0.8683 |
| 0.0066 | 27.0 | 10125 | 0.8709 | 0.88 |
| 0.0 | 28.0 | 10500 | 1.0074 | 0.88 |
| 0.0 | 29.0 | 10875 | 1.1392 | 0.8633 |
| 0.0 | 30.0 | 11250 | 1.2579 | 0.8617 |
| 0.0009 | 31.0 | 11625 | 1.1228 | 0.87 |
| 0.0 | 32.0 | 12000 | 1.2029 | 0.8733 |
| 0.0 | 33.0 | 12375 | 1.1147 | 0.87 |
| 0.0 | 34.0 | 12750 | 1.1837 | 0.865 |
| 0.0 | 35.0 | 13125 | 1.2046 | 0.87 |
| 0.0 | 36.0 | 13500 | 1.2160 | 0.8717 |
| 0.0 | 37.0 | 13875 | 1.2236 | 0.8767 |
| 0.004 | 38.0 | 14250 | 1.2489 | 0.8767 |
| 0.0 | 39.0 | 14625 | 1.2705 | 0.8767 |
| 0.0 | 40.0 | 15000 | 1.2929 | 0.8767 |
| 0.0 | 41.0 | 15375 | 1.3044 | 0.8767 |
| 0.0 | 42.0 | 15750 | 1.3306 | 0.8733 |
| 0.0 | 43.0 | 16125 | 1.3359 | 0.875 |
| 0.0 | 44.0 | 16500 | 1.3566 | 0.8733 |
| 0.0 | 45.0 | 16875 | 1.3753 | 0.875 |
| 0.0 | 46.0 | 17250 | 1.3919 | 0.875 |
| 0.0 | 47.0 | 17625 | 1.4064 | 0.875 |
| 0.0 | 48.0 | 18000 | 1.4171 | 0.8733 |
| 0.0 | 49.0 | 18375 | 1.4242 | 0.8733 |
| 0.0 | 50.0 | 18750 | 1.4274 | 0.8733 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_0001_fold4
|
# smids_5x_deit_base_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3050
- Accuracy: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2254 | 1.0 | 375 | 0.3701 | 0.855 |
| 0.1315 | 2.0 | 750 | 0.4134 | 0.86 |
| 0.0805 | 3.0 | 1125 | 0.5790 | 0.8933 |
| 0.0294 | 4.0 | 1500 | 0.6055 | 0.8917 |
| 0.0147 | 5.0 | 1875 | 0.8763 | 0.8667 |
| 0.0107 | 6.0 | 2250 | 0.7925 | 0.8817 |
| 0.0094 | 7.0 | 2625 | 0.8429 | 0.8833 |
| 0.0086 | 8.0 | 3000 | 0.8991 | 0.89 |
| 0.0002 | 9.0 | 3375 | 0.9026 | 0.8933 |
| 0.0003 | 10.0 | 3750 | 1.0478 | 0.8683 |
| 0.0026 | 11.0 | 4125 | 1.0371 | 0.8817 |
| 0.0 | 12.0 | 4500 | 1.0179 | 0.88 |
| 0.0 | 13.0 | 4875 | 1.0263 | 0.8733 |
| 0.0038 | 14.0 | 5250 | 1.0099 | 0.8783 |
| 0.0 | 15.0 | 5625 | 1.0357 | 0.875 |
| 0.0 | 16.0 | 6000 | 1.0401 | 0.8733 |
| 0.0 | 17.0 | 6375 | 1.0642 | 0.8767 |
| 0.0051 | 18.0 | 6750 | 1.0754 | 0.875 |
| 0.0 | 19.0 | 7125 | 1.0660 | 0.8767 |
| 0.0 | 20.0 | 7500 | 1.0944 | 0.8783 |
| 0.0 | 21.0 | 7875 | 1.1121 | 0.88 |
| 0.0 | 22.0 | 8250 | 1.0926 | 0.8817 |
| 0.0 | 23.0 | 8625 | 1.0773 | 0.8767 |
| 0.0 | 24.0 | 9000 | 1.1261 | 0.875 |
| 0.0 | 25.0 | 9375 | 1.1126 | 0.8833 |
| 0.0 | 26.0 | 9750 | 1.1400 | 0.8867 |
| 0.0 | 27.0 | 10125 | 1.1471 | 0.8833 |
| 0.0 | 28.0 | 10500 | 1.1463 | 0.8833 |
| 0.0 | 29.0 | 10875 | 1.1486 | 0.885 |
| 0.0 | 30.0 | 11250 | 1.1954 | 0.8783 |
| 0.0 | 31.0 | 11625 | 1.1951 | 0.88 |
| 0.0 | 32.0 | 12000 | 1.2025 | 0.8833 |
| 0.0 | 33.0 | 12375 | 1.2060 | 0.8783 |
| 0.0 | 34.0 | 12750 | 1.2510 | 0.88 |
| 0.0 | 35.0 | 13125 | 1.2394 | 0.885 |
| 0.0 | 36.0 | 13500 | 1.2452 | 0.885 |
| 0.0 | 37.0 | 13875 | 1.2431 | 0.885 |
| 0.0025 | 38.0 | 14250 | 1.2453 | 0.8833 |
| 0.0 | 39.0 | 14625 | 1.2570 | 0.8867 |
| 0.0 | 40.0 | 15000 | 1.2692 | 0.885 |
| 0.0 | 41.0 | 15375 | 1.2782 | 0.885 |
| 0.0 | 42.0 | 15750 | 1.2837 | 0.8833 |
| 0.0 | 43.0 | 16125 | 1.2874 | 0.885 |
| 0.0 | 44.0 | 16500 | 1.2939 | 0.8833 |
| 0.0 | 45.0 | 16875 | 1.2976 | 0.885 |
| 0.0 | 46.0 | 17250 | 1.3011 | 0.885 |
| 0.0 | 47.0 | 17625 | 1.3035 | 0.885 |
| 0.0 | 48.0 | 18000 | 1.3049 | 0.885 |
| 0.0 | 49.0 | 18375 | 1.3052 | 0.8833 |
| 0.0 | 50.0 | 18750 | 1.3050 | 0.8833 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_00001_fold4
|
# smids_5x_deit_tiny_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4112
- Accuracy: 0.8783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2498 | 1.0 | 375 | 0.3750 | 0.8583 |
| 0.2126 | 2.0 | 750 | 0.3946 | 0.8617 |
| 0.0815 | 3.0 | 1125 | 0.3928 | 0.8817 |
| 0.1183 | 4.0 | 1500 | 0.4272 | 0.8733 |
| 0.1029 | 5.0 | 1875 | 0.5782 | 0.8833 |
| 0.0245 | 6.0 | 2250 | 0.6426 | 0.8867 |
| 0.0551 | 7.0 | 2625 | 0.8096 | 0.8733 |
| 0.0319 | 8.0 | 3000 | 0.8011 | 0.8733 |
| 0.0533 | 9.0 | 3375 | 0.8429 | 0.875 |
| 0.0056 | 10.0 | 3750 | 0.9672 | 0.8617 |
| 0.0136 | 11.0 | 4125 | 1.0120 | 0.8667 |
| 0.0031 | 12.0 | 4500 | 0.9881 | 0.87 |
| 0.0176 | 13.0 | 4875 | 1.1184 | 0.8767 |
| 0.0127 | 14.0 | 5250 | 1.1325 | 0.8583 |
| 0.0003 | 15.0 | 5625 | 1.2848 | 0.8683 |
| 0.0058 | 16.0 | 6000 | 1.1232 | 0.87 |
| 0.0002 | 17.0 | 6375 | 1.0571 | 0.8817 |
| 0.0421 | 18.0 | 6750 | 1.2079 | 0.8717 |
| 0.0004 | 19.0 | 7125 | 1.2753 | 0.87 |
| 0.0001 | 20.0 | 7500 | 1.3783 | 0.86 |
| 0.0 | 21.0 | 7875 | 1.3177 | 0.865 |
| 0.002 | 22.0 | 8250 | 1.3637 | 0.8633 |
| 0.0002 | 23.0 | 8625 | 1.4459 | 0.87 |
| 0.0005 | 24.0 | 9000 | 1.2813 | 0.875 |
| 0.0 | 25.0 | 9375 | 1.2487 | 0.88 |
| 0.0 | 26.0 | 9750 | 1.2405 | 0.875 |
| 0.0008 | 27.0 | 10125 | 1.3345 | 0.885 |
| 0.0001 | 28.0 | 10500 | 1.5106 | 0.865 |
| 0.0 | 29.0 | 10875 | 1.2765 | 0.8733 |
| 0.0 | 30.0 | 11250 | 1.2626 | 0.875 |
| 0.0332 | 31.0 | 11625 | 1.3653 | 0.8667 |
| 0.0 | 32.0 | 12000 | 1.3469 | 0.8683 |
| 0.0 | 33.0 | 12375 | 1.2524 | 0.8817 |
| 0.0 | 34.0 | 12750 | 1.2947 | 0.8767 |
| 0.0 | 35.0 | 13125 | 1.2962 | 0.8733 |
| 0.0 | 36.0 | 13500 | 1.3559 | 0.8783 |
| 0.0 | 37.0 | 13875 | 1.3878 | 0.8817 |
| 0.0033 | 38.0 | 14250 | 1.3553 | 0.8767 |
| 0.0 | 39.0 | 14625 | 1.4121 | 0.875 |
| 0.0 | 40.0 | 15000 | 1.4174 | 0.875 |
| 0.0 | 41.0 | 15375 | 1.4132 | 0.875 |
| 0.0 | 42.0 | 15750 | 1.4182 | 0.8767 |
| 0.0 | 43.0 | 16125 | 1.4186 | 0.8767 |
| 0.0 | 44.0 | 16500 | 1.4200 | 0.8767 |
| 0.0 | 45.0 | 16875 | 1.4125 | 0.8783 |
| 0.0 | 46.0 | 17250 | 1.4134 | 0.88 |
| 0.0 | 47.0 | 17625 | 1.4114 | 0.8783 |
| 0.0 | 48.0 | 18000 | 1.4108 | 0.8783 |
| 0.0 | 49.0 | 18375 | 1.4113 | 0.8783 |
| 0.0 | 50.0 | 18750 | 1.4112 | 0.8783 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_adamax_001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8666
- Accuracy: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3384 | 1.0 | 375 | 0.4257 | 0.8267 |
| 0.3243 | 2.0 | 750 | 0.3051 | 0.8883 |
| 0.2315 | 3.0 | 1125 | 0.3393 | 0.8783 |
| 0.1699 | 4.0 | 1500 | 0.4297 | 0.8583 |
| 0.1105 | 5.0 | 1875 | 0.3821 | 0.8983 |
| 0.1049 | 6.0 | 2250 | 0.3824 | 0.895 |
| 0.0625 | 7.0 | 2625 | 0.5340 | 0.8967 |
| 0.0706 | 8.0 | 3000 | 0.5827 | 0.8783 |
| 0.039 | 9.0 | 3375 | 0.4159 | 0.895 |
| 0.0887 | 10.0 | 3750 | 0.4518 | 0.905 |
| 0.042 | 11.0 | 4125 | 0.4385 | 0.91 |
| 0.0677 | 12.0 | 4500 | 0.5266 | 0.8983 |
| 0.0355 | 13.0 | 4875 | 0.4982 | 0.8883 |
| 0.0188 | 14.0 | 5250 | 0.5825 | 0.9083 |
| 0.0091 | 15.0 | 5625 | 0.4685 | 0.915 |
| 0.0008 | 16.0 | 6000 | 0.6661 | 0.8983 |
| 0.026 | 17.0 | 6375 | 0.5630 | 0.9 |
| 0.0121 | 18.0 | 6750 | 0.6999 | 0.8967 |
| 0.0069 | 19.0 | 7125 | 0.5495 | 0.9083 |
| 0.0011 | 20.0 | 7500 | 0.6260 | 0.9033 |
| 0.0026 | 21.0 | 7875 | 0.6616 | 0.91 |
| 0.0056 | 22.0 | 8250 | 0.6236 | 0.915 |
| 0.0072 | 23.0 | 8625 | 0.7060 | 0.905 |
| 0.0005 | 24.0 | 9000 | 0.7311 | 0.9067 |
| 0.0 | 25.0 | 9375 | 0.7450 | 0.91 |
| 0.0001 | 26.0 | 9750 | 0.7238 | 0.91 |
| 0.0019 | 27.0 | 10125 | 0.7673 | 0.9 |
| 0.0007 | 28.0 | 10500 | 0.7394 | 0.91 |
| 0.0072 | 29.0 | 10875 | 0.7457 | 0.91 |
| 0.0069 | 30.0 | 11250 | 0.9604 | 0.8883 |
| 0.0 | 31.0 | 11625 | 0.7446 | 0.91 |
| 0.0 | 32.0 | 12000 | 0.7855 | 0.905 |
| 0.0 | 33.0 | 12375 | 0.7691 | 0.905 |
| 0.0 | 34.0 | 12750 | 0.7719 | 0.9067 |
| 0.0 | 35.0 | 13125 | 0.7976 | 0.9017 |
| 0.0 | 36.0 | 13500 | 0.8067 | 0.9033 |
| 0.0 | 37.0 | 13875 | 0.7973 | 0.9067 |
| 0.0041 | 38.0 | 14250 | 0.8120 | 0.9067 |
| 0.0 | 39.0 | 14625 | 0.8149 | 0.9067 |
| 0.0 | 40.0 | 15000 | 0.7879 | 0.9067 |
| 0.0 | 41.0 | 15375 | 0.8013 | 0.9067 |
| 0.0 | 42.0 | 15750 | 0.8079 | 0.905 |
| 0.0 | 43.0 | 16125 | 0.8212 | 0.9017 |
| 0.0 | 44.0 | 16500 | 0.8180 | 0.905 |
| 0.0 | 45.0 | 16875 | 0.8381 | 0.9067 |
| 0.0 | 46.0 | 17250 | 0.8519 | 0.905 |
| 0.003 | 47.0 | 17625 | 0.8539 | 0.9067 |
| 0.0 | 48.0 | 18000 | 0.8604 | 0.9083 |
| 0.0 | 49.0 | 18375 | 0.8650 | 0.9067 |
| 0.0021 | 50.0 | 18750 | 0.8666 | 0.9067 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_0001_fold5
|
# smids_5x_deit_base_adamax_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9013
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1558 | 1.0 | 375 | 0.3339 | 0.8717 |
| 0.0801 | 2.0 | 750 | 0.2674 | 0.9 |
| 0.064 | 3.0 | 1125 | 0.4397 | 0.9033 |
| 0.0238 | 4.0 | 1500 | 0.5225 | 0.875 |
| 0.0319 | 5.0 | 1875 | 0.5721 | 0.905 |
| 0.0056 | 6.0 | 2250 | 0.5081 | 0.9083 |
| 0.0107 | 7.0 | 2625 | 0.5806 | 0.91 |
| 0.0108 | 8.0 | 3000 | 0.6004 | 0.9067 |
| 0.0023 | 9.0 | 3375 | 0.7259 | 0.895 |
| 0.0005 | 10.0 | 3750 | 0.7347 | 0.9033 |
| 0.0001 | 11.0 | 4125 | 0.7841 | 0.8967 |
| 0.0 | 12.0 | 4500 | 0.7216 | 0.9167 |
| 0.0 | 13.0 | 4875 | 0.7364 | 0.9083 |
| 0.0053 | 14.0 | 5250 | 0.7059 | 0.9067 |
| 0.0001 | 15.0 | 5625 | 0.7607 | 0.9017 |
| 0.0 | 16.0 | 6000 | 0.7546 | 0.9083 |
| 0.0001 | 17.0 | 6375 | 0.7848 | 0.9033 |
| 0.0042 | 18.0 | 6750 | 0.7392 | 0.8983 |
| 0.0 | 19.0 | 7125 | 0.7453 | 0.9183 |
| 0.0 | 20.0 | 7500 | 0.8298 | 0.9067 |
| 0.0 | 21.0 | 7875 | 0.8069 | 0.9067 |
| 0.0038 | 22.0 | 8250 | 0.7995 | 0.9067 |
| 0.0 | 23.0 | 8625 | 0.8015 | 0.91 |
| 0.0 | 24.0 | 9000 | 0.8099 | 0.9067 |
| 0.0 | 25.0 | 9375 | 0.7950 | 0.9117 |
| 0.0 | 26.0 | 9750 | 0.8272 | 0.91 |
| 0.0 | 27.0 | 10125 | 0.7940 | 0.905 |
| 0.0 | 28.0 | 10500 | 0.8281 | 0.915 |
| 0.0 | 29.0 | 10875 | 0.8337 | 0.9067 |
| 0.0031 | 30.0 | 11250 | 0.8245 | 0.9067 |
| 0.0 | 31.0 | 11625 | 0.8597 | 0.9033 |
| 0.0 | 32.0 | 12000 | 0.8445 | 0.9067 |
| 0.0 | 33.0 | 12375 | 0.8424 | 0.9033 |
| 0.0 | 34.0 | 12750 | 0.8455 | 0.9017 |
| 0.0 | 35.0 | 13125 | 0.8539 | 0.9017 |
| 0.0 | 36.0 | 13500 | 0.8610 | 0.8967 |
| 0.0 | 37.0 | 13875 | 0.8681 | 0.905 |
| 0.0026 | 38.0 | 14250 | 0.8625 | 0.9017 |
| 0.0 | 39.0 | 14625 | 0.8694 | 0.9067 |
| 0.0 | 40.0 | 15000 | 0.8718 | 0.9 |
| 0.0 | 41.0 | 15375 | 0.8794 | 0.905 |
| 0.0 | 42.0 | 15750 | 0.8824 | 0.9 |
| 0.0 | 43.0 | 16125 | 0.8842 | 0.905 |
| 0.0 | 44.0 | 16500 | 0.8874 | 0.9017 |
| 0.0 | 45.0 | 16875 | 0.8897 | 0.9017 |
| 0.0 | 46.0 | 17250 | 0.8954 | 0.9017 |
| 0.0025 | 47.0 | 17625 | 0.8975 | 0.9017 |
| 0.0 | 48.0 | 18000 | 0.9000 | 0.9017 |
| 0.0 | 49.0 | 18375 | 0.9014 | 0.9017 |
| 0.0023 | 50.0 | 18750 | 0.9013 | 0.9017 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_tiny_rms_00001_fold5
|
# smids_5x_deit_tiny_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8603
- Accuracy: 0.905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2338 | 1.0 | 375 | 0.3930 | 0.8433 |
| 0.1865 | 2.0 | 750 | 0.3259 | 0.8733 |
| 0.1356 | 3.0 | 1125 | 0.2805 | 0.9033 |
| 0.0896 | 4.0 | 1500 | 0.3878 | 0.88 |
| 0.0385 | 5.0 | 1875 | 0.4177 | 0.8883 |
| 0.0314 | 6.0 | 2250 | 0.4802 | 0.8967 |
| 0.0588 | 7.0 | 2625 | 0.6345 | 0.895 |
| 0.0141 | 8.0 | 3000 | 0.7091 | 0.9033 |
| 0.0524 | 9.0 | 3375 | 0.8142 | 0.8817 |
| 0.0425 | 10.0 | 3750 | 0.7582 | 0.8983 |
| 0.0006 | 11.0 | 4125 | 0.7258 | 0.9 |
| 0.0097 | 12.0 | 4500 | 0.7403 | 0.9 |
| 0.0104 | 13.0 | 4875 | 0.9310 | 0.89 |
| 0.0001 | 14.0 | 5250 | 0.7672 | 0.9 |
| 0.0 | 15.0 | 5625 | 0.9240 | 0.8917 |
| 0.0003 | 16.0 | 6000 | 0.8712 | 0.8983 |
| 0.0135 | 17.0 | 6375 | 0.7633 | 0.9033 |
| 0.0335 | 18.0 | 6750 | 1.0118 | 0.8917 |
| 0.0155 | 19.0 | 7125 | 0.8189 | 0.905 |
| 0.0 | 20.0 | 7500 | 0.8004 | 0.8983 |
| 0.0 | 21.0 | 7875 | 1.0772 | 0.88 |
| 0.0255 | 22.0 | 8250 | 0.7694 | 0.91 |
| 0.0019 | 23.0 | 8625 | 0.8682 | 0.8983 |
| 0.0 | 24.0 | 9000 | 0.8775 | 0.8933 |
| 0.0 | 25.0 | 9375 | 0.9259 | 0.9017 |
| 0.0 | 26.0 | 9750 | 0.8433 | 0.895 |
| 0.0119 | 27.0 | 10125 | 0.9223 | 0.8983 |
| 0.0 | 28.0 | 10500 | 0.7870 | 0.91 |
| 0.0 | 29.0 | 10875 | 0.9279 | 0.895 |
| 0.0131 | 30.0 | 11250 | 0.9531 | 0.8933 |
| 0.0 | 31.0 | 11625 | 0.8850 | 0.8967 |
| 0.0 | 32.0 | 12000 | 0.8772 | 0.8983 |
| 0.0 | 33.0 | 12375 | 0.8996 | 0.8917 |
| 0.0 | 34.0 | 12750 | 0.9022 | 0.8983 |
| 0.0 | 35.0 | 13125 | 0.8990 | 0.8933 |
| 0.0 | 36.0 | 13500 | 0.8690 | 0.9033 |
| 0.0 | 37.0 | 13875 | 0.8890 | 0.9 |
| 0.0071 | 38.0 | 14250 | 0.8769 | 0.9017 |
| 0.0 | 39.0 | 14625 | 0.8323 | 0.9067 |
| 0.0 | 40.0 | 15000 | 0.8920 | 0.9033 |
| 0.0 | 41.0 | 15375 | 0.8465 | 0.9083 |
| 0.0 | 42.0 | 15750 | 0.8536 | 0.905 |
| 0.0 | 43.0 | 16125 | 0.8497 | 0.905 |
| 0.0 | 44.0 | 16500 | 0.8492 | 0.905 |
| 0.0 | 45.0 | 16875 | 0.8481 | 0.9067 |
| 0.0 | 46.0 | 17250 | 0.8573 | 0.9067 |
| 0.0029 | 47.0 | 17625 | 0.8575 | 0.9067 |
| 0.0 | 48.0 | 18000 | 0.8605 | 0.905 |
| 0.0 | 49.0 | 18375 | 0.8627 | 0.905 |
| 0.0013 | 50.0 | 18750 | 0.8603 | 0.905 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_00001_fold1
|
# smids_5x_deit_base_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5548
- Accuracy: 0.9115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2913 | 1.0 | 376 | 0.3175 | 0.8815 |
| 0.197 | 2.0 | 752 | 0.2822 | 0.8865 |
| 0.1857 | 3.0 | 1128 | 0.2733 | 0.8915 |
| 0.0801 | 4.0 | 1504 | 0.2769 | 0.8965 |
| 0.062 | 5.0 | 1880 | 0.2932 | 0.9048 |
| 0.0559 | 6.0 | 2256 | 0.3199 | 0.9115 |
| 0.0331 | 7.0 | 2632 | 0.3568 | 0.9098 |
| 0.0024 | 8.0 | 3008 | 0.4098 | 0.8982 |
| 0.0164 | 9.0 | 3384 | 0.4359 | 0.9065 |
| 0.0004 | 10.0 | 3760 | 0.4455 | 0.9082 |
| 0.0127 | 11.0 | 4136 | 0.4881 | 0.9082 |
| 0.0001 | 12.0 | 4512 | 0.4919 | 0.9032 |
| 0.0001 | 13.0 | 4888 | 0.4968 | 0.9115 |
| 0.0037 | 14.0 | 5264 | 0.5278 | 0.9065 |
| 0.0001 | 15.0 | 5640 | 0.5316 | 0.9115 |
| 0.0001 | 16.0 | 6016 | 0.5363 | 0.9032 |
| 0.0001 | 17.0 | 6392 | 0.5212 | 0.9149 |
| 0.0001 | 18.0 | 6768 | 0.5353 | 0.9098 |
| 0.0 | 19.0 | 7144 | 0.5265 | 0.9098 |
| 0.0147 | 20.0 | 7520 | 0.5277 | 0.9115 |
| 0.0 | 21.0 | 7896 | 0.5565 | 0.9065 |
| 0.0 | 22.0 | 8272 | 0.5728 | 0.9098 |
| 0.0 | 23.0 | 8648 | 0.5461 | 0.9115 |
| 0.0 | 24.0 | 9024 | 0.5300 | 0.9065 |
| 0.0 | 25.0 | 9400 | 0.5373 | 0.9065 |
| 0.0042 | 26.0 | 9776 | 0.5315 | 0.9082 |
| 0.0 | 27.0 | 10152 | 0.5779 | 0.9065 |
| 0.0 | 28.0 | 10528 | 0.5457 | 0.9098 |
| 0.0079 | 29.0 | 10904 | 0.5511 | 0.9098 |
| 0.003 | 30.0 | 11280 | 0.5454 | 0.9048 |
| 0.0 | 31.0 | 11656 | 0.5479 | 0.9098 |
| 0.0 | 32.0 | 12032 | 0.5371 | 0.9082 |
| 0.0 | 33.0 | 12408 | 0.5701 | 0.9065 |
| 0.0 | 34.0 | 12784 | 0.5431 | 0.9032 |
| 0.0 | 35.0 | 13160 | 0.5470 | 0.9048 |
| 0.0 | 36.0 | 13536 | 0.5461 | 0.9015 |
| 0.0 | 37.0 | 13912 | 0.5481 | 0.9115 |
| 0.0 | 38.0 | 14288 | 0.5522 | 0.9098 |
| 0.0 | 39.0 | 14664 | 0.5539 | 0.9082 |
| 0.0 | 40.0 | 15040 | 0.5537 | 0.9115 |
| 0.0 | 41.0 | 15416 | 0.5471 | 0.9048 |
| 0.0 | 42.0 | 15792 | 0.5483 | 0.9115 |
| 0.0 | 43.0 | 16168 | 0.5497 | 0.9132 |
| 0.0 | 44.0 | 16544 | 0.5527 | 0.9115 |
| 0.0 | 45.0 | 16920 | 0.5532 | 0.9115 |
| 0.0053 | 46.0 | 17296 | 0.5512 | 0.9098 |
| 0.0 | 47.0 | 17672 | 0.5538 | 0.9115 |
| 0.0 | 48.0 | 18048 | 0.5539 | 0.9098 |
| 0.0 | 49.0 | 18424 | 0.5540 | 0.9115 |
| 0.0012 | 50.0 | 18800 | 0.5548 | 0.9115 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_001_fold1
|
# smids_5x_deit_base_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2661
- Accuracy: 0.8932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7985 | 1.0 | 376 | 0.8182 | 0.6978 |
| 0.594 | 2.0 | 752 | 0.5849 | 0.7746 |
| 0.4653 | 3.0 | 1128 | 0.4811 | 0.8197 |
| 0.4509 | 4.0 | 1504 | 0.4265 | 0.8264 |
| 0.406 | 5.0 | 1880 | 0.3929 | 0.8447 |
| 0.3758 | 6.0 | 2256 | 0.3696 | 0.8581 |
| 0.3147 | 7.0 | 2632 | 0.3531 | 0.8698 |
| 0.3421 | 8.0 | 3008 | 0.3417 | 0.8664 |
| 0.3606 | 9.0 | 3384 | 0.3307 | 0.8798 |
| 0.2866 | 10.0 | 3760 | 0.3222 | 0.8865 |
| 0.2912 | 11.0 | 4136 | 0.3153 | 0.8798 |
| 0.2629 | 12.0 | 4512 | 0.3094 | 0.8865 |
| 0.2464 | 13.0 | 4888 | 0.3048 | 0.8848 |
| 0.2413 | 14.0 | 5264 | 0.3005 | 0.8881 |
| 0.3125 | 15.0 | 5640 | 0.2955 | 0.8932 |
| 0.226 | 16.0 | 6016 | 0.2931 | 0.8865 |
| 0.2346 | 17.0 | 6392 | 0.2899 | 0.8915 |
| 0.2997 | 18.0 | 6768 | 0.2867 | 0.8881 |
| 0.2564 | 19.0 | 7144 | 0.2849 | 0.8898 |
| 0.1951 | 20.0 | 7520 | 0.2838 | 0.8898 |
| 0.2828 | 21.0 | 7896 | 0.2817 | 0.8898 |
| 0.2327 | 22.0 | 8272 | 0.2806 | 0.8898 |
| 0.2604 | 23.0 | 8648 | 0.2786 | 0.8865 |
| 0.2065 | 24.0 | 9024 | 0.2780 | 0.8881 |
| 0.2338 | 25.0 | 9400 | 0.2766 | 0.8881 |
| 0.2197 | 26.0 | 9776 | 0.2745 | 0.8898 |
| 0.1797 | 27.0 | 10152 | 0.2743 | 0.8898 |
| 0.199 | 28.0 | 10528 | 0.2732 | 0.8915 |
| 0.2002 | 29.0 | 10904 | 0.2724 | 0.8898 |
| 0.1586 | 30.0 | 11280 | 0.2714 | 0.8932 |
| 0.1861 | 31.0 | 11656 | 0.2710 | 0.8932 |
| 0.2539 | 32.0 | 12032 | 0.2706 | 0.8948 |
| 0.1906 | 33.0 | 12408 | 0.2700 | 0.8948 |
| 0.1642 | 34.0 | 12784 | 0.2697 | 0.8915 |
| 0.1856 | 35.0 | 13160 | 0.2694 | 0.8915 |
| 0.2084 | 36.0 | 13536 | 0.2691 | 0.8932 |
| 0.1812 | 37.0 | 13912 | 0.2681 | 0.8948 |
| 0.2073 | 38.0 | 14288 | 0.2680 | 0.8948 |
| 0.1854 | 39.0 | 14664 | 0.2677 | 0.8915 |
| 0.1953 | 40.0 | 15040 | 0.2671 | 0.8932 |
| 0.1912 | 41.0 | 15416 | 0.2672 | 0.8948 |
| 0.1646 | 42.0 | 15792 | 0.2669 | 0.8932 |
| 0.1689 | 43.0 | 16168 | 0.2666 | 0.8932 |
| 0.1894 | 44.0 | 16544 | 0.2664 | 0.8932 |
| 0.173 | 45.0 | 16920 | 0.2663 | 0.8932 |
| 0.2186 | 46.0 | 17296 | 0.2661 | 0.8932 |
| 0.1671 | 47.0 | 17672 | 0.2661 | 0.8932 |
| 0.1916 | 48.0 | 18048 | 0.2661 | 0.8932 |
| 0.1583 | 49.0 | 18424 | 0.2661 | 0.8932 |
| 0.137 | 50.0 | 18800 | 0.2661 | 0.8932 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_001_fold2
|
# smids_5x_deit_base_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3080
- Accuracy: 0.8835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.848 | 1.0 | 375 | 0.8270 | 0.6972 |
| 0.5847 | 2.0 | 750 | 0.5900 | 0.7787 |
| 0.4307 | 3.0 | 1125 | 0.4834 | 0.8087 |
| 0.3555 | 4.0 | 1500 | 0.4305 | 0.8170 |
| 0.3374 | 5.0 | 1875 | 0.3988 | 0.8386 |
| 0.2952 | 6.0 | 2250 | 0.3778 | 0.8419 |
| 0.2869 | 7.0 | 2625 | 0.3638 | 0.8469 |
| 0.3216 | 8.0 | 3000 | 0.3556 | 0.8519 |
| 0.2811 | 9.0 | 3375 | 0.3457 | 0.8502 |
| 0.2716 | 10.0 | 3750 | 0.3402 | 0.8536 |
| 0.2527 | 11.0 | 4125 | 0.3330 | 0.8586 |
| 0.2868 | 12.0 | 4500 | 0.3315 | 0.8586 |
| 0.2215 | 13.0 | 4875 | 0.3264 | 0.8569 |
| 0.2463 | 14.0 | 5250 | 0.3210 | 0.8652 |
| 0.2529 | 15.0 | 5625 | 0.3196 | 0.8702 |
| 0.2264 | 16.0 | 6000 | 0.3180 | 0.8702 |
| 0.2664 | 17.0 | 6375 | 0.3174 | 0.8686 |
| 0.2072 | 18.0 | 6750 | 0.3154 | 0.8719 |
| 0.2246 | 19.0 | 7125 | 0.3137 | 0.8735 |
| 0.2657 | 20.0 | 7500 | 0.3140 | 0.8785 |
| 0.2076 | 21.0 | 7875 | 0.3137 | 0.8752 |
| 0.2093 | 22.0 | 8250 | 0.3110 | 0.8752 |
| 0.2136 | 23.0 | 8625 | 0.3099 | 0.8802 |
| 0.1894 | 24.0 | 9000 | 0.3103 | 0.8752 |
| 0.1535 | 25.0 | 9375 | 0.3079 | 0.8802 |
| 0.2471 | 26.0 | 9750 | 0.3091 | 0.8785 |
| 0.1996 | 27.0 | 10125 | 0.3086 | 0.8785 |
| 0.1679 | 28.0 | 10500 | 0.3082 | 0.8802 |
| 0.2106 | 29.0 | 10875 | 0.3091 | 0.8819 |
| 0.1991 | 30.0 | 11250 | 0.3064 | 0.8852 |
| 0.2072 | 31.0 | 11625 | 0.3086 | 0.8869 |
| 0.1837 | 32.0 | 12000 | 0.3065 | 0.8852 |
| 0.1829 | 33.0 | 12375 | 0.3083 | 0.8869 |
| 0.146 | 34.0 | 12750 | 0.3077 | 0.8852 |
| 0.2024 | 35.0 | 13125 | 0.3064 | 0.8885 |
| 0.1685 | 36.0 | 13500 | 0.3081 | 0.8852 |
| 0.1849 | 37.0 | 13875 | 0.3080 | 0.8852 |
| 0.1672 | 38.0 | 14250 | 0.3074 | 0.8835 |
| 0.1673 | 39.0 | 14625 | 0.3082 | 0.8835 |
| 0.1726 | 40.0 | 15000 | 0.3082 | 0.8852 |
| 0.1673 | 41.0 | 15375 | 0.3089 | 0.8835 |
| 0.177 | 42.0 | 15750 | 0.3082 | 0.8835 |
| 0.1916 | 43.0 | 16125 | 0.3076 | 0.8835 |
| 0.1782 | 44.0 | 16500 | 0.3080 | 0.8835 |
| 0.1543 | 45.0 | 16875 | 0.3089 | 0.8835 |
| 0.2141 | 46.0 | 17250 | 0.3081 | 0.8835 |
| 0.1912 | 47.0 | 17625 | 0.3084 | 0.8835 |
| 0.1718 | 48.0 | 18000 | 0.3084 | 0.8835 |
| 0.1897 | 49.0 | 18375 | 0.3080 | 0.8835 |
| 0.1329 | 50.0 | 18750 | 0.3080 | 0.8835 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_00001_fold2
|
# smids_5x_deit_base_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8137
- Accuracy: 0.8885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2996 | 1.0 | 375 | 0.3210 | 0.8735 |
| 0.2177 | 2.0 | 750 | 0.2936 | 0.8902 |
| 0.1354 | 3.0 | 1125 | 0.3183 | 0.8852 |
| 0.0424 | 4.0 | 1500 | 0.3451 | 0.8902 |
| 0.0243 | 5.0 | 1875 | 0.3990 | 0.8852 |
| 0.0226 | 6.0 | 2250 | 0.4326 | 0.8869 |
| 0.0168 | 7.0 | 2625 | 0.4792 | 0.8985 |
| 0.0076 | 8.0 | 3000 | 0.5341 | 0.8885 |
| 0.0045 | 9.0 | 3375 | 0.5636 | 0.8852 |
| 0.0004 | 10.0 | 3750 | 0.5967 | 0.8902 |
| 0.0098 | 11.0 | 4125 | 0.6469 | 0.8819 |
| 0.0003 | 12.0 | 4500 | 0.6549 | 0.8835 |
| 0.0002 | 13.0 | 4875 | 0.6787 | 0.8819 |
| 0.0001 | 14.0 | 5250 | 0.6859 | 0.8902 |
| 0.0001 | 15.0 | 5625 | 0.6864 | 0.8902 |
| 0.0001 | 16.0 | 6000 | 0.6911 | 0.8852 |
| 0.0065 | 17.0 | 6375 | 0.7430 | 0.8852 |
| 0.0 | 18.0 | 6750 | 0.7047 | 0.8869 |
| 0.0 | 19.0 | 7125 | 0.7306 | 0.8835 |
| 0.0 | 20.0 | 7500 | 0.7458 | 0.8835 |
| 0.0 | 21.0 | 7875 | 0.7324 | 0.8752 |
| 0.0 | 22.0 | 8250 | 0.7468 | 0.8802 |
| 0.0 | 23.0 | 8625 | 0.7452 | 0.8869 |
| 0.0 | 24.0 | 9000 | 0.7743 | 0.8785 |
| 0.0 | 25.0 | 9375 | 0.7579 | 0.8852 |
| 0.0 | 26.0 | 9750 | 0.7668 | 0.8802 |
| 0.0 | 27.0 | 10125 | 0.7673 | 0.8852 |
| 0.0 | 28.0 | 10500 | 0.7789 | 0.8802 |
| 0.0 | 29.0 | 10875 | 0.7704 | 0.8902 |
| 0.0 | 30.0 | 11250 | 0.7833 | 0.8869 |
| 0.003 | 31.0 | 11625 | 0.7716 | 0.8819 |
| 0.0119 | 32.0 | 12000 | 0.7822 | 0.8902 |
| 0.0022 | 33.0 | 12375 | 0.7945 | 0.8869 |
| 0.0 | 34.0 | 12750 | 0.7824 | 0.8902 |
| 0.0 | 35.0 | 13125 | 0.8058 | 0.8835 |
| 0.0 | 36.0 | 13500 | 0.7871 | 0.8835 |
| 0.0 | 37.0 | 13875 | 0.7901 | 0.8852 |
| 0.0 | 38.0 | 14250 | 0.7886 | 0.8885 |
| 0.0 | 39.0 | 14625 | 0.8070 | 0.8885 |
| 0.0023 | 40.0 | 15000 | 0.8050 | 0.8935 |
| 0.0 | 41.0 | 15375 | 0.8034 | 0.8918 |
| 0.003 | 42.0 | 15750 | 0.8038 | 0.8918 |
| 0.0024 | 43.0 | 16125 | 0.8073 | 0.8902 |
| 0.0024 | 44.0 | 16500 | 0.8102 | 0.8869 |
| 0.0028 | 45.0 | 16875 | 0.8121 | 0.8885 |
| 0.0 | 46.0 | 17250 | 0.8095 | 0.8918 |
| 0.0035 | 47.0 | 17625 | 0.8116 | 0.8885 |
| 0.0 | 48.0 | 18000 | 0.8125 | 0.8885 |
| 0.0026 | 49.0 | 18375 | 0.8131 | 0.8885 |
| 0.0022 | 50.0 | 18750 | 0.8137 | 0.8885 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_001_fold3
|
# smids_5x_deit_base_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Accuracy: 0.9017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8095 | 1.0 | 375 | 0.8274 | 0.695 |
| 0.5875 | 2.0 | 750 | 0.5872 | 0.8033 |
| 0.4116 | 3.0 | 1125 | 0.4743 | 0.83 |
| 0.3737 | 4.0 | 1500 | 0.4186 | 0.855 |
| 0.4258 | 5.0 | 1875 | 0.3874 | 0.86 |
| 0.3489 | 6.0 | 2250 | 0.3641 | 0.8633 |
| 0.3464 | 7.0 | 2625 | 0.3477 | 0.8667 |
| 0.3297 | 8.0 | 3000 | 0.3357 | 0.8667 |
| 0.277 | 9.0 | 3375 | 0.3288 | 0.8683 |
| 0.2815 | 10.0 | 3750 | 0.3210 | 0.8733 |
| 0.2355 | 11.0 | 4125 | 0.3122 | 0.8767 |
| 0.2863 | 12.0 | 4500 | 0.3078 | 0.8833 |
| 0.2701 | 13.0 | 4875 | 0.3051 | 0.8833 |
| 0.2736 | 14.0 | 5250 | 0.3025 | 0.8833 |
| 0.2395 | 15.0 | 5625 | 0.2961 | 0.885 |
| 0.2031 | 16.0 | 6000 | 0.2952 | 0.885 |
| 0.2473 | 17.0 | 6375 | 0.2943 | 0.885 |
| 0.2814 | 18.0 | 6750 | 0.2910 | 0.8883 |
| 0.2316 | 19.0 | 7125 | 0.2893 | 0.89 |
| 0.2303 | 20.0 | 7500 | 0.2871 | 0.8883 |
| 0.1561 | 21.0 | 7875 | 0.2852 | 0.8883 |
| 0.1795 | 22.0 | 8250 | 0.2878 | 0.8933 |
| 0.2424 | 23.0 | 8625 | 0.2847 | 0.8933 |
| 0.234 | 24.0 | 9000 | 0.2843 | 0.89 |
| 0.2148 | 25.0 | 9375 | 0.2840 | 0.8883 |
| 0.2353 | 26.0 | 9750 | 0.2810 | 0.8967 |
| 0.2055 | 27.0 | 10125 | 0.2823 | 0.895 |
| 0.2361 | 28.0 | 10500 | 0.2802 | 0.8967 |
| 0.1834 | 29.0 | 10875 | 0.2796 | 0.895 |
| 0.2029 | 30.0 | 11250 | 0.2813 | 0.8967 |
| 0.2296 | 31.0 | 11625 | 0.2806 | 0.8983 |
| 0.2077 | 32.0 | 12000 | 0.2800 | 0.8983 |
| 0.2574 | 33.0 | 12375 | 0.2785 | 0.8983 |
| 0.1786 | 34.0 | 12750 | 0.2764 | 0.8967 |
| 0.1549 | 35.0 | 13125 | 0.2758 | 0.9 |
| 0.1665 | 36.0 | 13500 | 0.2763 | 0.8983 |
| 0.187 | 37.0 | 13875 | 0.2766 | 0.9 |
| 0.1745 | 38.0 | 14250 | 0.2765 | 0.9 |
| 0.1886 | 39.0 | 14625 | 0.2765 | 0.9 |
| 0.1628 | 40.0 | 15000 | 0.2755 | 0.9017 |
| 0.1471 | 41.0 | 15375 | 0.2749 | 0.9017 |
| 0.1795 | 42.0 | 15750 | 0.2759 | 0.9017 |
| 0.2305 | 43.0 | 16125 | 0.2754 | 0.9017 |
| 0.168 | 44.0 | 16500 | 0.2757 | 0.9017 |
| 0.1598 | 45.0 | 16875 | 0.2757 | 0.9033 |
| 0.1914 | 46.0 | 17250 | 0.2755 | 0.9017 |
| 0.2052 | 47.0 | 17625 | 0.2754 | 0.9017 |
| 0.1945 | 48.0 | 18000 | 0.2755 | 0.9017 |
| 0.1764 | 49.0 | 18375 | 0.2755 | 0.9017 |
| 0.1798 | 50.0 | 18750 | 0.2756 | 0.9017 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_00001_fold3
|
# smids_5x_deit_base_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6633
- Accuracy: 0.9183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2813 | 1.0 | 375 | 0.3216 | 0.8783 |
| 0.2111 | 2.0 | 750 | 0.2635 | 0.905 |
| 0.0915 | 3.0 | 1125 | 0.2534 | 0.92 |
| 0.0599 | 4.0 | 1500 | 0.2640 | 0.92 |
| 0.1269 | 5.0 | 1875 | 0.2938 | 0.9217 |
| 0.0531 | 6.0 | 2250 | 0.3570 | 0.9133 |
| 0.0327 | 7.0 | 2625 | 0.3536 | 0.9183 |
| 0.0024 | 8.0 | 3000 | 0.4103 | 0.915 |
| 0.0012 | 9.0 | 3375 | 0.4352 | 0.92 |
| 0.0003 | 10.0 | 3750 | 0.4932 | 0.9133 |
| 0.0002 | 11.0 | 4125 | 0.4821 | 0.9167 |
| 0.0002 | 12.0 | 4500 | 0.5091 | 0.9133 |
| 0.0001 | 13.0 | 4875 | 0.5337 | 0.9167 |
| 0.0001 | 14.0 | 5250 | 0.5297 | 0.9167 |
| 0.0001 | 15.0 | 5625 | 0.5462 | 0.9117 |
| 0.0 | 16.0 | 6000 | 0.5551 | 0.92 |
| 0.0001 | 17.0 | 6375 | 0.5844 | 0.915 |
| 0.0 | 18.0 | 6750 | 0.5622 | 0.9133 |
| 0.0 | 19.0 | 7125 | 0.5918 | 0.9167 |
| 0.0 | 20.0 | 7500 | 0.5875 | 0.915 |
| 0.0 | 21.0 | 7875 | 0.5930 | 0.915 |
| 0.0 | 22.0 | 8250 | 0.6046 | 0.915 |
| 0.0049 | 23.0 | 8625 | 0.6585 | 0.9083 |
| 0.0 | 24.0 | 9000 | 0.6134 | 0.9183 |
| 0.0 | 25.0 | 9375 | 0.6543 | 0.91 |
| 0.0 | 26.0 | 9750 | 0.6179 | 0.9183 |
| 0.0 | 27.0 | 10125 | 0.6159 | 0.9167 |
| 0.0 | 28.0 | 10500 | 0.6181 | 0.9183 |
| 0.0 | 29.0 | 10875 | 0.6318 | 0.9167 |
| 0.0036 | 30.0 | 11250 | 0.6693 | 0.9133 |
| 0.0 | 31.0 | 11625 | 0.6325 | 0.9183 |
| 0.0 | 32.0 | 12000 | 0.6427 | 0.9183 |
| 0.0 | 33.0 | 12375 | 0.6557 | 0.915 |
| 0.0 | 34.0 | 12750 | 0.6550 | 0.915 |
| 0.0 | 35.0 | 13125 | 0.6439 | 0.915 |
| 0.0 | 36.0 | 13500 | 0.6513 | 0.915 |
| 0.0 | 37.0 | 13875 | 0.6496 | 0.915 |
| 0.0 | 38.0 | 14250 | 0.6546 | 0.915 |
| 0.0 | 39.0 | 14625 | 0.6548 | 0.9167 |
| 0.0036 | 40.0 | 15000 | 0.6572 | 0.9167 |
| 0.0 | 41.0 | 15375 | 0.6550 | 0.9183 |
| 0.0 | 42.0 | 15750 | 0.6572 | 0.9167 |
| 0.0 | 43.0 | 16125 | 0.6583 | 0.9183 |
| 0.0 | 44.0 | 16500 | 0.6596 | 0.9183 |
| 0.0 | 45.0 | 16875 | 0.6608 | 0.9183 |
| 0.0 | 46.0 | 17250 | 0.6619 | 0.9183 |
| 0.0 | 47.0 | 17625 | 0.6626 | 0.9183 |
| 0.0 | 48.0 | 18000 | 0.6632 | 0.9183 |
| 0.0 | 49.0 | 18375 | 0.6634 | 0.9183 |
| 0.0 | 50.0 | 18750 | 0.6633 | 0.9183 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_001_fold4
|
# smids_5x_deit_base_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3268
- Accuracy: 0.8683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
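The step counts in the results table follow directly from these settings. Assuming roughly 12,000 training images per fold (an inference from 375 steps per epoch at batch size 32, not stated in the card), a quick sanity check:

```python
import math

def steps_per_epoch(num_samples: int, batch_size: int) -> int:
    """Optimizer steps in one epoch when the last partial batch is kept."""
    return math.ceil(num_samples / batch_size)

per_epoch = steps_per_epoch(12000, 32)  # assumed ~12,000 training images
print(per_epoch)        # 375, matching the step column
print(50 * per_epoch)   # 18750 total steps over 50 epochs
```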
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8132 | 1.0 | 375 | 0.8089 | 0.7267 |
| 0.5876 | 2.0 | 750 | 0.5635 | 0.8067 |
| 0.4083 | 3.0 | 1125 | 0.4614 | 0.8267 |
| 0.4267 | 4.0 | 1500 | 0.4160 | 0.8333 |
| 0.3511 | 5.0 | 1875 | 0.3904 | 0.845 |
| 0.2917 | 6.0 | 2250 | 0.3744 | 0.85 |
| 0.3158 | 7.0 | 2625 | 0.3647 | 0.8583 |
| 0.2749 | 8.0 | 3000 | 0.3555 | 0.86 |
| 0.3058 | 9.0 | 3375 | 0.3495 | 0.8583 |
| 0.2616 | 10.0 | 3750 | 0.3455 | 0.8667 |
| 0.2281 | 11.0 | 4125 | 0.3414 | 0.8683 |
| 0.2203 | 12.0 | 4500 | 0.3404 | 0.87 |
| 0.1983 | 13.0 | 4875 | 0.3366 | 0.8733 |
| 0.2917 | 14.0 | 5250 | 0.3335 | 0.875 |
| 0.2068 | 15.0 | 5625 | 0.3331 | 0.8767 |
| 0.1668 | 16.0 | 6000 | 0.3311 | 0.8767 |
| 0.2429 | 17.0 | 6375 | 0.3309 | 0.88 |
| 0.2255 | 18.0 | 6750 | 0.3295 | 0.8817 |
| 0.217 | 19.0 | 7125 | 0.3289 | 0.88 |
| 0.1858 | 20.0 | 7500 | 0.3267 | 0.875 |
| 0.1989 | 21.0 | 7875 | 0.3279 | 0.875 |
| 0.1971 | 22.0 | 8250 | 0.3262 | 0.8767 |
| 0.1898 | 23.0 | 8625 | 0.3266 | 0.875 |
| 0.1832 | 24.0 | 9000 | 0.3251 | 0.8767 |
| 0.2234 | 25.0 | 9375 | 0.3260 | 0.875 |
| 0.1745 | 26.0 | 9750 | 0.3248 | 0.8767 |
| 0.2157 | 27.0 | 10125 | 0.3261 | 0.875 |
| 0.1917 | 28.0 | 10500 | 0.3254 | 0.8717 |
| 0.169 | 29.0 | 10875 | 0.3264 | 0.87 |
| 0.238 | 30.0 | 11250 | 0.3268 | 0.8683 |
| 0.1975 | 31.0 | 11625 | 0.3288 | 0.87 |
| 0.139 | 32.0 | 12000 | 0.3247 | 0.8717 |
| 0.1975 | 33.0 | 12375 | 0.3257 | 0.8683 |
| 0.1914 | 34.0 | 12750 | 0.3239 | 0.875 |
| 0.1455 | 35.0 | 13125 | 0.3252 | 0.8717 |
| 0.1611 | 36.0 | 13500 | 0.3266 | 0.8683 |
| 0.2074 | 37.0 | 13875 | 0.3252 | 0.8767 |
| 0.1665 | 38.0 | 14250 | 0.3262 | 0.8683 |
| 0.2306 | 39.0 | 14625 | 0.3258 | 0.8683 |
| 0.1821 | 40.0 | 15000 | 0.3259 | 0.8683 |
| 0.154 | 41.0 | 15375 | 0.3262 | 0.8683 |
| 0.1817 | 42.0 | 15750 | 0.3261 | 0.8683 |
| 0.2063 | 43.0 | 16125 | 0.3258 | 0.8717 |
| 0.1391 | 44.0 | 16500 | 0.3266 | 0.8683 |
| 0.2228 | 45.0 | 16875 | 0.3270 | 0.8683 |
| 0.1476 | 46.0 | 17250 | 0.3269 | 0.87 |
| 0.2362 | 47.0 | 17625 | 0.3264 | 0.8683 |
| 0.1702 | 48.0 | 18000 | 0.3268 | 0.8683 |
| 0.1619 | 49.0 | 18375 | 0.3268 | 0.8683 |
| 0.1838 | 50.0 | 18750 | 0.3268 | 0.8683 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_adamax_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1380
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
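A `linear` scheduler with `warmup_ratio: 0.1` ramps the learning rate up over the first 10% of steps, then decays it linearly to zero. A simplified sketch of that schedule (an approximation of the behavior of `get_linear_schedule_with_warmup`, not the exact Transformers implementation):

```python
def linear_lr(step: int, total_steps: int, base_lr: float,
              warmup_ratio: float = 0.1) -> float:
    """Learning rate at a given step under linear warmup + linear decay."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

total = 18750                          # 50 epochs x 375 steps
print(linear_lr(0, total, 1e-5))       # 0.0 (start of warmup)
print(linear_lr(1875, total, 1e-5))    # 1e-05 (peak, at end of warmup)
print(linear_lr(total, total, 1e-5))   # 0.0 (fully decayed)
```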
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2793 | 1.0 | 375 | 0.3375 | 0.8733 |
| 0.2086 | 2.0 | 750 | 0.3220 | 0.8767 |
| 0.0773 | 3.0 | 1125 | 0.3501 | 0.87 |
| 0.1352 | 4.0 | 1500 | 0.3537 | 0.8883 |
| 0.0451 | 5.0 | 1875 | 0.4252 | 0.8733 |
| 0.0092 | 6.0 | 2250 | 0.5239 | 0.8683 |
| 0.0099 | 7.0 | 2625 | 0.6333 | 0.8617 |
| 0.0011 | 8.0 | 3000 | 0.6345 | 0.8783 |
| 0.0007 | 9.0 | 3375 | 0.6729 | 0.87 |
| 0.0003 | 10.0 | 3750 | 0.7305 | 0.87 |
| 0.0078 | 11.0 | 4125 | 0.8099 | 0.8533 |
| 0.0001 | 12.0 | 4500 | 0.8279 | 0.8633 |
| 0.0001 | 13.0 | 4875 | 0.8432 | 0.865 |
| 0.0241 | 14.0 | 5250 | 0.8671 | 0.8617 |
| 0.0001 | 15.0 | 5625 | 0.9017 | 0.8733 |
| 0.0 | 16.0 | 6000 | 0.9098 | 0.8683 |
| 0.0001 | 17.0 | 6375 | 0.9302 | 0.8683 |
| 0.021 | 18.0 | 6750 | 0.9362 | 0.865 |
| 0.0 | 19.0 | 7125 | 0.9767 | 0.8767 |
| 0.0 | 20.0 | 7500 | 0.9642 | 0.87 |
| 0.0 | 21.0 | 7875 | 0.9726 | 0.8683 |
| 0.0 | 22.0 | 8250 | 0.9924 | 0.87 |
| 0.0 | 23.0 | 8625 | 1.0407 | 0.8683 |
| 0.0 | 24.0 | 9000 | 0.9983 | 0.865 |
| 0.0 | 25.0 | 9375 | 1.0050 | 0.8767 |
| 0.0 | 26.0 | 9750 | 1.1029 | 0.8683 |
| 0.0 | 27.0 | 10125 | 1.0432 | 0.8633 |
| 0.0 | 28.0 | 10500 | 1.0623 | 0.87 |
| 0.0 | 29.0 | 10875 | 1.0507 | 0.8667 |
| 0.0 | 30.0 | 11250 | 1.0651 | 0.8717 |
| 0.0 | 31.0 | 11625 | 1.0597 | 0.8667 |
| 0.0 | 32.0 | 12000 | 1.0584 | 0.8633 |
| 0.0 | 33.0 | 12375 | 1.0660 | 0.8633 |
| 0.0 | 34.0 | 12750 | 1.0722 | 0.865 |
| 0.0 | 35.0 | 13125 | 1.0821 | 0.865 |
| 0.0 | 36.0 | 13500 | 1.0843 | 0.8617 |
| 0.0 | 37.0 | 13875 | 1.0957 | 0.8683 |
| 0.0089 | 38.0 | 14250 | 1.1135 | 0.8717 |
| 0.0 | 39.0 | 14625 | 1.1105 | 0.8683 |
| 0.0 | 40.0 | 15000 | 1.1158 | 0.8683 |
| 0.0 | 41.0 | 15375 | 1.1145 | 0.8667 |
| 0.0 | 42.0 | 15750 | 1.1172 | 0.8683 |
| 0.0 | 43.0 | 16125 | 1.1213 | 0.8683 |
| 0.0 | 44.0 | 16500 | 1.1274 | 0.8683 |
| 0.0 | 45.0 | 16875 | 1.1305 | 0.8683 |
| 0.0 | 46.0 | 17250 | 1.1329 | 0.8683 |
| 0.0 | 47.0 | 17625 | 1.1347 | 0.8683 |
| 0.0 | 48.0 | 18000 | 1.1363 | 0.8667 |
| 0.0 | 49.0 | 18375 | 1.1373 | 0.8667 |
| 0.0 | 50.0 | 18750 | 1.1380 | 0.8667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2653
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.816 | 1.0 | 375 | 0.8299 | 0.6983 |
| 0.6079 | 2.0 | 750 | 0.5776 | 0.79 |
| 0.4275 | 3.0 | 1125 | 0.4639 | 0.815 |
| 0.435 | 4.0 | 1500 | 0.4090 | 0.8217 |
| 0.3764 | 5.0 | 1875 | 0.3770 | 0.83 |
| 0.3648 | 6.0 | 2250 | 0.3546 | 0.8333 |
| 0.2762 | 7.0 | 2625 | 0.3403 | 0.8383 |
| 0.295 | 8.0 | 3000 | 0.3285 | 0.85 |
| 0.2923 | 9.0 | 3375 | 0.3205 | 0.8517 |
| 0.2438 | 10.0 | 3750 | 0.3147 | 0.855 |
| 0.2438 | 11.0 | 4125 | 0.3070 | 0.86 |
| 0.2431 | 12.0 | 4500 | 0.3020 | 0.8667 |
| 0.2192 | 13.0 | 4875 | 0.2993 | 0.865 |
| 0.2554 | 14.0 | 5250 | 0.2960 | 0.8717 |
| 0.2885 | 15.0 | 5625 | 0.2913 | 0.8783 |
| 0.1896 | 16.0 | 6000 | 0.2879 | 0.88 |
| 0.3057 | 17.0 | 6375 | 0.2870 | 0.8733 |
| 0.2444 | 18.0 | 6750 | 0.2845 | 0.88 |
| 0.2179 | 19.0 | 7125 | 0.2810 | 0.8783 |
| 0.1663 | 20.0 | 7500 | 0.2810 | 0.8783 |
| 0.2067 | 21.0 | 7875 | 0.2763 | 0.8833 |
| 0.197 | 22.0 | 8250 | 0.2779 | 0.8817 |
| 0.3036 | 23.0 | 8625 | 0.2762 | 0.8817 |
| 0.2123 | 24.0 | 9000 | 0.2743 | 0.8817 |
| 0.2471 | 25.0 | 9375 | 0.2741 | 0.8783 |
| 0.2004 | 26.0 | 9750 | 0.2742 | 0.88 |
| 0.2358 | 27.0 | 10125 | 0.2736 | 0.88 |
| 0.2033 | 28.0 | 10500 | 0.2701 | 0.88 |
| 0.2195 | 29.0 | 10875 | 0.2687 | 0.8833 |
| 0.1807 | 30.0 | 11250 | 0.2702 | 0.8817 |
| 0.2285 | 31.0 | 11625 | 0.2693 | 0.8833 |
| 0.2043 | 32.0 | 12000 | 0.2686 | 0.8883 |
| 0.2113 | 33.0 | 12375 | 0.2682 | 0.89 |
| 0.253 | 34.0 | 12750 | 0.2667 | 0.8883 |
| 0.1854 | 35.0 | 13125 | 0.2674 | 0.885 |
| 0.1505 | 36.0 | 13500 | 0.2668 | 0.8867 |
| 0.2105 | 37.0 | 13875 | 0.2666 | 0.885 |
| 0.184 | 38.0 | 14250 | 0.2661 | 0.8867 |
| 0.1486 | 39.0 | 14625 | 0.2664 | 0.8867 |
| 0.2231 | 40.0 | 15000 | 0.2660 | 0.8883 |
| 0.185 | 41.0 | 15375 | 0.2658 | 0.8867 |
| 0.166 | 42.0 | 15750 | 0.2657 | 0.8883 |
| 0.1944 | 43.0 | 16125 | 0.2649 | 0.8933 |
| 0.1752 | 44.0 | 16500 | 0.2656 | 0.8883 |
| 0.1859 | 45.0 | 16875 | 0.2655 | 0.89 |
| 0.2257 | 46.0 | 17250 | 0.2655 | 0.8867 |
| 0.2194 | 47.0 | 17625 | 0.2653 | 0.8883 |
| 0.1536 | 48.0 | 18000 | 0.2653 | 0.8883 |
| 0.1835 | 49.0 | 18375 | 0.2652 | 0.89 |
| 0.1852 | 50.0 | 18750 | 0.2653 | 0.89 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
Nubletz/msi-resnet-50
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# msi-resnet-50
This model is a fine-tuned version of [Nubletz/msi-resnet-pretrain](https://huggingface.co/Nubletz/msi-resnet-pretrain) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 29628148372356011655168.0000
- eval_accuracy: 0.5662
- eval_runtime: 362.9719
- eval_samples_per_second: 78.838
- eval_steps_per_second: 4.929
- epoch: 5.0
- step: 10078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
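Unlike the other cards here, this run uses gradient accumulation: gradients from 4 consecutive batches of 16 are accumulated before each optimizer step, giving the stated total train batch size of 64. A minimal sketch of the bookkeeping:

```python
def effective_batch_size(per_device_batch: int, accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Number of samples contributing to each optimizer step."""
    return per_device_batch * accumulation_steps * num_devices

print(effective_batch_size(16, 4))  # 64, the total_train_batch_size above
```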
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"0",
"1"
] |
hkivancoral/smids_5x_deit_base_adamax_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6879
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2538 | 1.0 | 375 | 0.3339 | 0.8467 |
| 0.1892 | 2.0 | 750 | 0.2698 | 0.8917 |
| 0.1183 | 3.0 | 1125 | 0.2626 | 0.9 |
| 0.1141 | 4.0 | 1500 | 0.2917 | 0.8883 |
| 0.071 | 5.0 | 1875 | 0.3136 | 0.9 |
| 0.0217 | 6.0 | 2250 | 0.3618 | 0.8933 |
| 0.0346 | 7.0 | 2625 | 0.4213 | 0.89 |
| 0.0047 | 8.0 | 3000 | 0.4689 | 0.8983 |
| 0.022 | 9.0 | 3375 | 0.5039 | 0.8983 |
| 0.0024 | 10.0 | 3750 | 0.5358 | 0.8917 |
| 0.0059 | 11.0 | 4125 | 0.5640 | 0.895 |
| 0.0001 | 12.0 | 4500 | 0.5647 | 0.8967 |
| 0.0001 | 13.0 | 4875 | 0.6088 | 0.895 |
| 0.0002 | 14.0 | 5250 | 0.5907 | 0.9017 |
| 0.0001 | 15.0 | 5625 | 0.6332 | 0.8967 |
| 0.0001 | 16.0 | 6000 | 0.6424 | 0.8833 |
| 0.0001 | 17.0 | 6375 | 0.6207 | 0.8983 |
| 0.0068 | 18.0 | 6750 | 0.6552 | 0.895 |
| 0.0 | 19.0 | 7125 | 0.6642 | 0.9017 |
| 0.0 | 20.0 | 7500 | 0.6453 | 0.8883 |
| 0.0001 | 21.0 | 7875 | 0.6986 | 0.895 |
| 0.0265 | 22.0 | 8250 | 0.7065 | 0.8883 |
| 0.0 | 23.0 | 8625 | 0.6670 | 0.8967 |
| 0.0 | 24.0 | 9000 | 0.6793 | 0.8967 |
| 0.0 | 25.0 | 9375 | 0.6516 | 0.9017 |
| 0.0 | 26.0 | 9750 | 0.6626 | 0.89 |
| 0.0 | 27.0 | 10125 | 0.6877 | 0.895 |
| 0.0 | 28.0 | 10500 | 0.6598 | 0.8967 |
| 0.0 | 29.0 | 10875 | 0.6682 | 0.8933 |
| 0.0145 | 30.0 | 11250 | 0.6761 | 0.8983 |
| 0.0 | 31.0 | 11625 | 0.6763 | 0.8983 |
| 0.0 | 32.0 | 12000 | 0.6749 | 0.8967 |
| 0.0 | 33.0 | 12375 | 0.6798 | 0.8983 |
| 0.0 | 34.0 | 12750 | 0.6830 | 0.8967 |
| 0.0 | 35.0 | 13125 | 0.6787 | 0.9 |
| 0.0 | 36.0 | 13500 | 0.6883 | 0.895 |
| 0.0 | 37.0 | 13875 | 0.6805 | 0.8983 |
| 0.003 | 38.0 | 14250 | 0.6825 | 0.895 |
| 0.0 | 39.0 | 14625 | 0.6851 | 0.8967 |
| 0.0 | 40.0 | 15000 | 0.6877 | 0.8983 |
| 0.0 | 41.0 | 15375 | 0.6804 | 0.8983 |
| 0.0 | 42.0 | 15750 | 0.6888 | 0.8983 |
| 0.0 | 43.0 | 16125 | 0.6877 | 0.8983 |
| 0.0 | 44.0 | 16500 | 0.6899 | 0.8983 |
| 0.0 | 45.0 | 16875 | 0.6911 | 0.8967 |
| 0.0 | 46.0 | 17250 | 0.6870 | 0.8983 |
| 0.0023 | 47.0 | 17625 | 0.6868 | 0.8983 |
| 0.0 | 48.0 | 18000 | 0.6892 | 0.9 |
| 0.0 | 49.0 | 18375 | 0.6892 | 0.9 |
| 0.0021 | 50.0 | 18750 | 0.6879 | 0.9 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5141
- Accuracy: 0.8063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.073 | 1.0 | 376 | 1.0746 | 0.4391 |
| 1.0668 | 2.0 | 752 | 1.0543 | 0.4775 |
| 1.0508 | 3.0 | 1128 | 1.0329 | 0.5326 |
| 1.0204 | 4.0 | 1504 | 1.0085 | 0.5676 |
| 0.9684 | 5.0 | 1880 | 0.9803 | 0.6110 |
| 0.9344 | 6.0 | 2256 | 0.9504 | 0.6344 |
| 0.9026 | 7.0 | 2632 | 0.9203 | 0.6511 |
| 0.9041 | 8.0 | 3008 | 0.8901 | 0.6694 |
| 0.8974 | 9.0 | 3384 | 0.8608 | 0.6828 |
| 0.8237 | 10.0 | 3760 | 0.8327 | 0.6945 |
| 0.7953 | 11.0 | 4136 | 0.8062 | 0.6962 |
| 0.7905 | 12.0 | 4512 | 0.7820 | 0.7028 |
| 0.7556 | 13.0 | 4888 | 0.7593 | 0.7129 |
| 0.7345 | 14.0 | 5264 | 0.7385 | 0.7262 |
| 0.7177 | 15.0 | 5640 | 0.7192 | 0.7346 |
| 0.6509 | 16.0 | 6016 | 0.7013 | 0.7412 |
| 0.6656 | 17.0 | 6392 | 0.6845 | 0.7479 |
| 0.6821 | 18.0 | 6768 | 0.6693 | 0.7496 |
| 0.6446 | 19.0 | 7144 | 0.6551 | 0.7546 |
| 0.6223 | 20.0 | 7520 | 0.6422 | 0.7629 |
| 0.6135 | 21.0 | 7896 | 0.6303 | 0.7679 |
| 0.5755 | 22.0 | 8272 | 0.6195 | 0.7713 |
| 0.5973 | 23.0 | 8648 | 0.6095 | 0.7730 |
| 0.6156 | 24.0 | 9024 | 0.6004 | 0.7746 |
| 0.6003 | 25.0 | 9400 | 0.5920 | 0.7780 |
| 0.5642 | 26.0 | 9776 | 0.5842 | 0.7796 |
| 0.5302 | 27.0 | 10152 | 0.5770 | 0.7813 |
| 0.5137 | 28.0 | 10528 | 0.5704 | 0.7863 |
| 0.5415 | 29.0 | 10904 | 0.5643 | 0.7880 |
| 0.5224 | 30.0 | 11280 | 0.5587 | 0.7880 |
| 0.5509 | 31.0 | 11656 | 0.5535 | 0.7930 |
| 0.5721 | 32.0 | 12032 | 0.5488 | 0.7913 |
| 0.5247 | 33.0 | 12408 | 0.5445 | 0.7930 |
| 0.4916 | 34.0 | 12784 | 0.5406 | 0.7963 |
| 0.5191 | 35.0 | 13160 | 0.5369 | 0.7980 |
| 0.5171 | 36.0 | 13536 | 0.5337 | 0.8013 |
| 0.4685 | 37.0 | 13912 | 0.5307 | 0.8063 |
| 0.5439 | 38.0 | 14288 | 0.5280 | 0.8063 |
| 0.4686 | 39.0 | 14664 | 0.5255 | 0.8080 |
| 0.4898 | 40.0 | 15040 | 0.5234 | 0.8063 |
| 0.509 | 41.0 | 15416 | 0.5214 | 0.8063 |
| 0.4464 | 42.0 | 15792 | 0.5198 | 0.8063 |
| 0.4635 | 43.0 | 16168 | 0.5183 | 0.8063 |
| 0.5321 | 44.0 | 16544 | 0.5171 | 0.8063 |
| 0.4939 | 45.0 | 16920 | 0.5161 | 0.8063 |
| 0.4922 | 46.0 | 17296 | 0.5153 | 0.8063 |
| 0.4738 | 47.0 | 17672 | 0.5147 | 0.8063 |
| 0.5216 | 48.0 | 18048 | 0.5143 | 0.8063 |
| 0.4726 | 49.0 | 18424 | 0.5141 | 0.8063 |
| 0.4528 | 50.0 | 18800 | 0.5141 | 0.8063 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0498
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1042 | 1.0 | 376 | 1.0929 | 0.3856 |
| 1.1057 | 2.0 | 752 | 1.0909 | 0.3923 |
| 1.1149 | 3.0 | 1128 | 1.0890 | 0.3940 |
| 1.1189 | 4.0 | 1504 | 1.0872 | 0.3907 |
| 1.1034 | 5.0 | 1880 | 1.0854 | 0.3973 |
| 1.0984 | 6.0 | 2256 | 1.0837 | 0.4023 |
| 1.1017 | 7.0 | 2632 | 1.0821 | 0.4073 |
| 1.0896 | 8.0 | 3008 | 1.0805 | 0.4157 |
| 1.0923 | 9.0 | 3384 | 1.0789 | 0.4240 |
| 1.0904 | 10.0 | 3760 | 1.0774 | 0.4257 |
| 1.0756 | 11.0 | 4136 | 1.0759 | 0.4324 |
| 1.0821 | 12.0 | 4512 | 1.0745 | 0.4357 |
| 1.0908 | 13.0 | 4888 | 1.0731 | 0.4424 |
| 1.0966 | 14.0 | 5264 | 1.0718 | 0.4441 |
| 1.0817 | 15.0 | 5640 | 1.0706 | 0.4441 |
| 1.0679 | 16.0 | 6016 | 1.0693 | 0.4457 |
| 1.0876 | 17.0 | 6392 | 1.0681 | 0.4457 |
| 1.064 | 18.0 | 6768 | 1.0670 | 0.4474 |
| 1.072 | 19.0 | 7144 | 1.0658 | 0.4474 |
| 1.09 | 20.0 | 7520 | 1.0648 | 0.4474 |
| 1.081 | 21.0 | 7896 | 1.0637 | 0.4508 |
| 1.0655 | 22.0 | 8272 | 1.0627 | 0.4558 |
| 1.0774 | 23.0 | 8648 | 1.0618 | 0.4574 |
| 1.0736 | 24.0 | 9024 | 1.0609 | 0.4608 |
| 1.0774 | 25.0 | 9400 | 1.0600 | 0.4691 |
| 1.055 | 26.0 | 9776 | 1.0591 | 0.4691 |
| 1.0689 | 27.0 | 10152 | 1.0583 | 0.4674 |
| 1.0612 | 28.0 | 10528 | 1.0576 | 0.4691 |
| 1.0701 | 29.0 | 10904 | 1.0568 | 0.4691 |
| 1.0631 | 30.0 | 11280 | 1.0561 | 0.4741 |
| 1.0623 | 31.0 | 11656 | 1.0555 | 0.4758 |
| 1.0571 | 32.0 | 12032 | 1.0549 | 0.4791 |
| 1.0769 | 33.0 | 12408 | 1.0543 | 0.4841 |
| 1.0511 | 34.0 | 12784 | 1.0537 | 0.4891 |
| 1.0652 | 35.0 | 13160 | 1.0532 | 0.4891 |
| 1.0631 | 36.0 | 13536 | 1.0527 | 0.4908 |
| 1.0446 | 37.0 | 13912 | 1.0523 | 0.4908 |
| 1.0591 | 38.0 | 14288 | 1.0519 | 0.4925 |
| 1.0589 | 39.0 | 14664 | 1.0516 | 0.4925 |
| 1.0552 | 40.0 | 15040 | 1.0512 | 0.4942 |
| 1.0353 | 41.0 | 15416 | 1.0509 | 0.4925 |
| 1.0348 | 42.0 | 15792 | 1.0507 | 0.4958 |
| 1.0561 | 43.0 | 16168 | 1.0505 | 0.4992 |
| 1.0679 | 44.0 | 16544 | 1.0503 | 0.4992 |
| 1.0611 | 45.0 | 16920 | 1.0501 | 0.5008 |
| 1.0413 | 46.0 | 17296 | 1.0500 | 0.5008 |
| 1.0517 | 47.0 | 17672 | 1.0499 | 0.5008 |
| 1.0644 | 48.0 | 18048 | 1.0499 | 0.5008 |
| 1.052 | 49.0 | 18424 | 1.0498 | 0.5008 |
| 1.0428 | 50.0 | 18800 | 1.0498 | 0.5008 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5171
- Accuracy: 0.7937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0836 | 1.0 | 375 | 1.0885 | 0.3760 |
| 1.0675 | 2.0 | 750 | 1.0687 | 0.4210 |
| 1.0197 | 3.0 | 1125 | 1.0472 | 0.4792 |
| 1.0069 | 4.0 | 1500 | 1.0227 | 0.5291 |
| 0.9845 | 5.0 | 1875 | 0.9951 | 0.5757 |
| 0.9186 | 6.0 | 2250 | 0.9649 | 0.6023 |
| 0.8937 | 7.0 | 2625 | 0.9335 | 0.6240 |
| 0.9019 | 8.0 | 3000 | 0.9023 | 0.6306 |
| 0.8424 | 9.0 | 3375 | 0.8722 | 0.6539 |
| 0.8294 | 10.0 | 3750 | 0.8441 | 0.6755 |
| 0.7985 | 11.0 | 4125 | 0.8177 | 0.7038 |
| 0.7783 | 12.0 | 4500 | 0.7929 | 0.7138 |
| 0.7207 | 13.0 | 4875 | 0.7700 | 0.7238 |
| 0.7333 | 14.0 | 5250 | 0.7486 | 0.7321 |
| 0.7101 | 15.0 | 5625 | 0.7288 | 0.7471 |
| 0.6525 | 16.0 | 6000 | 0.7103 | 0.7554 |
| 0.6829 | 17.0 | 6375 | 0.6933 | 0.7521 |
| 0.6709 | 18.0 | 6750 | 0.6776 | 0.7604 |
| 0.637 | 19.0 | 7125 | 0.6631 | 0.7654 |
| 0.6386 | 20.0 | 7500 | 0.6498 | 0.7704 |
| 0.6136 | 21.0 | 7875 | 0.6376 | 0.7704 |
| 0.6058 | 22.0 | 8250 | 0.6264 | 0.7704 |
| 0.5884 | 23.0 | 8625 | 0.6162 | 0.7704 |
| 0.5772 | 24.0 | 9000 | 0.6067 | 0.7737 |
| 0.5613 | 25.0 | 9375 | 0.5979 | 0.7754 |
| 0.5696 | 26.0 | 9750 | 0.5898 | 0.7787 |
| 0.5932 | 27.0 | 10125 | 0.5824 | 0.7804 |
| 0.5407 | 28.0 | 10500 | 0.5756 | 0.7837 |
| 0.561 | 29.0 | 10875 | 0.5693 | 0.7837 |
| 0.5293 | 30.0 | 11250 | 0.5636 | 0.7854 |
| 0.5123 | 31.0 | 11625 | 0.5583 | 0.7870 |
| 0.5488 | 32.0 | 12000 | 0.5534 | 0.7870 |
| 0.4911 | 33.0 | 12375 | 0.5488 | 0.7870 |
| 0.5069 | 34.0 | 12750 | 0.5447 | 0.7870 |
| 0.5187 | 35.0 | 13125 | 0.5409 | 0.7887 |
| 0.5044 | 36.0 | 13500 | 0.5375 | 0.7903 |
| 0.5147 | 37.0 | 13875 | 0.5344 | 0.7903 |
| 0.4787 | 38.0 | 14250 | 0.5316 | 0.7920 |
| 0.5048 | 39.0 | 14625 | 0.5290 | 0.7920 |
| 0.5066 | 40.0 | 15000 | 0.5268 | 0.7920 |
| 0.5145 | 41.0 | 15375 | 0.5248 | 0.7937 |
| 0.4726 | 42.0 | 15750 | 0.5230 | 0.7937 |
| 0.4727 | 43.0 | 16125 | 0.5215 | 0.7937 |
| 0.5222 | 44.0 | 16500 | 0.5202 | 0.7937 |
| 0.4881 | 45.0 | 16875 | 0.5192 | 0.7937 |
| 0.5004 | 46.0 | 17250 | 0.5184 | 0.7937 |
| 0.4865 | 47.0 | 17625 | 0.5178 | 0.7937 |
| 0.4651 | 48.0 | 18000 | 0.5174 | 0.7937 |
| 0.4822 | 49.0 | 18375 | 0.5172 | 0.7937 |
| 0.4937 | 50.0 | 18750 | 0.5171 | 0.7937 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0641
- Accuracy: 0.4459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1035 | 1.0 | 375 | 1.1062 | 0.3344 |
| 1.1126 | 2.0 | 750 | 1.1043 | 0.3344 |
| 1.104 | 3.0 | 1125 | 1.1024 | 0.3344 |
| 1.1172 | 4.0 | 1500 | 1.1007 | 0.3428 |
| 1.1218 | 5.0 | 1875 | 1.0990 | 0.3494 |
| 1.103 | 6.0 | 2250 | 1.0973 | 0.3544 |
| 1.0899 | 7.0 | 2625 | 1.0957 | 0.3594 |
| 1.1072 | 8.0 | 3000 | 1.0942 | 0.3661 |
| 1.0922 | 9.0 | 3375 | 1.0926 | 0.3744 |
| 1.0843 | 10.0 | 3750 | 1.0912 | 0.3727 |
| 1.081 | 11.0 | 4125 | 1.0898 | 0.3710 |
| 1.0891 | 12.0 | 4500 | 1.0884 | 0.3760 |
| 1.0709 | 13.0 | 4875 | 1.0871 | 0.3777 |
| 1.0708 | 14.0 | 5250 | 1.0858 | 0.3827 |
| 1.0647 | 15.0 | 5625 | 1.0846 | 0.3827 |
| 1.0675 | 16.0 | 6000 | 1.0834 | 0.3877 |
| 1.0777 | 17.0 | 6375 | 1.0822 | 0.3927 |
| 1.1021 | 18.0 | 6750 | 1.0811 | 0.3943 |
| 1.075 | 19.0 | 7125 | 1.0800 | 0.3993 |
| 1.08 | 20.0 | 7500 | 1.0789 | 0.3977 |
| 1.0665 | 21.0 | 7875 | 1.0779 | 0.4010 |
| 1.0636 | 22.0 | 8250 | 1.0769 | 0.4010 |
| 1.0724 | 23.0 | 8625 | 1.0760 | 0.4043 |
| 1.075 | 24.0 | 9000 | 1.0751 | 0.4093 |
| 1.0668 | 25.0 | 9375 | 1.0742 | 0.4077 |
| 1.0648 | 26.0 | 9750 | 1.0734 | 0.4160 |
| 1.0792 | 27.0 | 10125 | 1.0726 | 0.4176 |
| 1.068 | 28.0 | 10500 | 1.0718 | 0.4160 |
| 1.0536 | 29.0 | 10875 | 1.0711 | 0.4160 |
| 1.0571 | 30.0 | 11250 | 1.0704 | 0.4193 |
| 1.055 | 31.0 | 11625 | 1.0698 | 0.4226 |
| 1.0604 | 32.0 | 12000 | 1.0691 | 0.4226 |
| 1.0502 | 33.0 | 12375 | 1.0686 | 0.4260 |
| 1.0518 | 34.0 | 12750 | 1.0680 | 0.4243 |
| 1.0472 | 35.0 | 13125 | 1.0675 | 0.4276 |
| 1.0642 | 36.0 | 13500 | 1.0670 | 0.4309 |
| 1.052 | 37.0 | 13875 | 1.0666 | 0.4309 |
| 1.0617 | 38.0 | 14250 | 1.0662 | 0.4309 |
| 1.0473 | 39.0 | 14625 | 1.0658 | 0.4359 |
| 1.0678 | 40.0 | 15000 | 1.0655 | 0.4393 |
| 1.0397 | 41.0 | 15375 | 1.0652 | 0.4393 |
| 1.0482 | 42.0 | 15750 | 1.0650 | 0.4393 |
| 1.0333 | 43.0 | 16125 | 1.0647 | 0.4393 |
| 1.0512 | 44.0 | 16500 | 1.0645 | 0.4409 |
| 1.0593 | 45.0 | 16875 | 1.0644 | 0.4409 |
| 1.0581 | 46.0 | 17250 | 1.0643 | 0.4409 |
| 1.043 | 47.0 | 17625 | 1.0642 | 0.4426 |
| 1.0454 | 48.0 | 18000 | 1.0641 | 0.4443 |
| 1.0474 | 49.0 | 18375 | 1.0641 | 0.4459 |
| 1.0427 | 50.0 | 18750 | 1.0641 | 0.4459 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5127
- Accuracy: 0.8183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.086 | 1.0 | 375 | 1.0870 | 0.4083 |
| 1.0544 | 2.0 | 750 | 1.0666 | 0.4433 |
| 1.0309 | 3.0 | 1125 | 1.0448 | 0.48 |
| 0.9851 | 4.0 | 1500 | 1.0206 | 0.5333 |
| 0.9784 | 5.0 | 1875 | 0.9932 | 0.575 |
| 0.9454 | 6.0 | 2250 | 0.9632 | 0.6017 |
| 0.9054 | 7.0 | 2625 | 0.9323 | 0.6317 |
| 0.8859 | 8.0 | 3000 | 0.9017 | 0.6617 |
| 0.8212 | 9.0 | 3375 | 0.8722 | 0.675 |
| 0.846 | 10.0 | 3750 | 0.8441 | 0.69 |
| 0.7793 | 11.0 | 4125 | 0.8178 | 0.6967 |
| 0.7556 | 12.0 | 4500 | 0.7933 | 0.72 |
| 0.753 | 13.0 | 4875 | 0.7705 | 0.7333 |
| 0.7537 | 14.0 | 5250 | 0.7493 | 0.7433 |
| 0.7009 | 15.0 | 5625 | 0.7292 | 0.7517 |
| 0.6573 | 16.0 | 6000 | 0.7106 | 0.7517 |
| 0.6888 | 17.0 | 6375 | 0.6931 | 0.7583 |
| 0.6952 | 18.0 | 6750 | 0.6769 | 0.765 |
| 0.6154 | 19.0 | 7125 | 0.6622 | 0.7733 |
| 0.6418 | 20.0 | 7500 | 0.6487 | 0.7833 |
| 0.5745 | 21.0 | 7875 | 0.6362 | 0.7883 |
| 0.5966 | 22.0 | 8250 | 0.6248 | 0.7933 |
| 0.6053 | 23.0 | 8625 | 0.6144 | 0.7983 |
| 0.6111 | 24.0 | 9000 | 0.6047 | 0.8 |
| 0.5758 | 25.0 | 9375 | 0.5959 | 0.805 |
| 0.5916 | 26.0 | 9750 | 0.5876 | 0.8067 |
| 0.5804 | 27.0 | 10125 | 0.5800 | 0.81 |
| 0.5956 | 28.0 | 10500 | 0.5730 | 0.81 |
| 0.5266 | 29.0 | 10875 | 0.5666 | 0.81 |
| 0.5435 | 30.0 | 11250 | 0.5607 | 0.8117 |
| 0.566 | 31.0 | 11625 | 0.5553 | 0.8133 |
| 0.5691 | 32.0 | 12000 | 0.5503 | 0.8133 |
| 0.6036 | 33.0 | 12375 | 0.5456 | 0.815 |
| 0.5178 | 34.0 | 12750 | 0.5415 | 0.8133 |
| 0.4823 | 35.0 | 13125 | 0.5375 | 0.8167 |
| 0.4738 | 36.0 | 13500 | 0.5340 | 0.8183 |
| 0.5126 | 37.0 | 13875 | 0.5307 | 0.8183 |
| 0.4971 | 38.0 | 14250 | 0.5278 | 0.8183 |
| 0.5124 | 39.0 | 14625 | 0.5252 | 0.82 |
| 0.4766 | 40.0 | 15000 | 0.5229 | 0.82 |
| 0.4547 | 41.0 | 15375 | 0.5208 | 0.82 |
| 0.5009 | 42.0 | 15750 | 0.5190 | 0.82 |
| 0.5345 | 43.0 | 16125 | 0.5174 | 0.82 |
| 0.5032 | 44.0 | 16500 | 0.5161 | 0.8183 |
| 0.4932 | 45.0 | 16875 | 0.5150 | 0.8183 |
| 0.5003 | 46.0 | 17250 | 0.5141 | 0.8183 |
| 0.5538 | 47.0 | 17625 | 0.5134 | 0.8183 |
| 0.5053 | 48.0 | 18000 | 0.5130 | 0.8183 |
| 0.5053 | 49.0 | 18375 | 0.5128 | 0.8183 |
| 0.4871 | 50.0 | 18750 | 0.5127 | 0.8183 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0622
- Accuracy: 0.4483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
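For reference, the `linear` scheduler with a 0.1 warmup ratio listed above ramps the learning rate from 0 to its peak over the first 10% of steps, then decays it linearly back to 0. A minimal sketch (an approximation of the Trainer's behaviour, not the library code itself):

```python
def linear_warmup_lr(step, total_steps=18750, base_lr=1e-05, warmup_ratio=0.1):
    """Linear ramp from 0 to base_lr over the warmup steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 1875 steps for this run
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_warmup_lr(1875))  # peak learning rate at the end of warmup
```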
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1145 | 1.0 | 375 | 1.1057 | 0.3517 |
| 1.1046 | 2.0 | 750 | 1.1036 | 0.365 |
| 1.1155 | 3.0 | 1125 | 1.1017 | 0.3617 |
| 1.0962 | 4.0 | 1500 | 1.0998 | 0.3733 |
| 1.1039 | 5.0 | 1875 | 1.0980 | 0.375 |
| 1.1203 | 6.0 | 2250 | 1.0963 | 0.3833 |
| 1.072 | 7.0 | 2625 | 1.0946 | 0.3833 |
| 1.0855 | 8.0 | 3000 | 1.0930 | 0.3883 |
| 1.0787 | 9.0 | 3375 | 1.0914 | 0.395 |
| 1.1082 | 10.0 | 3750 | 1.0899 | 0.3933 |
| 1.0738 | 11.0 | 4125 | 1.0885 | 0.4033 |
| 1.0788 | 12.0 | 4500 | 1.0870 | 0.4067 |
| 1.0872 | 13.0 | 4875 | 1.0857 | 0.415 |
| 1.0927 | 14.0 | 5250 | 1.0844 | 0.4133 |
| 1.0769 | 15.0 | 5625 | 1.0831 | 0.4167 |
| 1.085 | 16.0 | 6000 | 1.0819 | 0.4217 |
| 1.0634 | 17.0 | 6375 | 1.0806 | 0.42 |
| 1.0703 | 18.0 | 6750 | 1.0795 | 0.42 |
| 1.0536 | 19.0 | 7125 | 1.0783 | 0.4233 |
| 1.0852 | 20.0 | 7500 | 1.0773 | 0.4267 |
| 1.076 | 21.0 | 7875 | 1.0762 | 0.43 |
| 1.0765 | 22.0 | 8250 | 1.0752 | 0.4333 |
| 1.0647 | 23.0 | 8625 | 1.0743 | 0.435 |
| 1.061 | 24.0 | 9000 | 1.0734 | 0.4367 |
| 1.0648 | 25.0 | 9375 | 1.0725 | 0.445 |
| 1.0459 | 26.0 | 9750 | 1.0716 | 0.4417 |
| 1.0405 | 27.0 | 10125 | 1.0708 | 0.4417 |
| 1.062 | 28.0 | 10500 | 1.0700 | 0.4433 |
| 1.0508 | 29.0 | 10875 | 1.0693 | 0.445 |
| 1.06 | 30.0 | 11250 | 1.0686 | 0.445 |
| 1.062 | 31.0 | 11625 | 1.0679 | 0.4433 |
| 1.0622 | 32.0 | 12000 | 1.0673 | 0.4417 |
| 1.056 | 33.0 | 12375 | 1.0667 | 0.44 |
| 1.0406 | 34.0 | 12750 | 1.0662 | 0.4417 |
| 1.0456 | 35.0 | 13125 | 1.0657 | 0.4433 |
| 1.0409 | 36.0 | 13500 | 1.0652 | 0.4483 |
| 1.0536 | 37.0 | 13875 | 1.0648 | 0.4483 |
| 1.0481 | 38.0 | 14250 | 1.0643 | 0.445 |
| 1.0612 | 39.0 | 14625 | 1.0640 | 0.445 |
| 1.0364 | 40.0 | 15000 | 1.0637 | 0.45 |
| 1.0455 | 41.0 | 15375 | 1.0634 | 0.45 |
| 1.0602 | 42.0 | 15750 | 1.0631 | 0.45 |
| 1.0611 | 43.0 | 16125 | 1.0629 | 0.45 |
| 1.0352 | 44.0 | 16500 | 1.0627 | 0.45 |
| 1.0491 | 45.0 | 16875 | 1.0625 | 0.45 |
| 1.0436 | 46.0 | 17250 | 1.0624 | 0.45 |
| 1.0657 | 47.0 | 17625 | 1.0623 | 0.45 |
| 1.0594 | 48.0 | 18000 | 1.0622 | 0.4483 |
| 1.0541 | 49.0 | 18375 | 1.0622 | 0.4483 |
| 1.0444 | 50.0 | 18750 | 1.0622 | 0.4483 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0653
- Accuracy: 0.475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1101 | 1.0 | 375 | 1.1105 | 0.3333 |
| 1.1112 | 2.0 | 750 | 1.1084 | 0.3483 |
| 1.1096 | 3.0 | 1125 | 1.1064 | 0.3533 |
| 1.1002 | 4.0 | 1500 | 1.1045 | 0.3633 |
| 1.1197 | 5.0 | 1875 | 1.1027 | 0.3667 |
| 1.0859 | 6.0 | 2250 | 1.1009 | 0.3733 |
| 1.0856 | 7.0 | 2625 | 1.0992 | 0.3833 |
| 1.0857 | 8.0 | 3000 | 1.0975 | 0.3883 |
| 1.0995 | 9.0 | 3375 | 1.0959 | 0.3933 |
| 1.1123 | 10.0 | 3750 | 1.0943 | 0.4 |
| 1.0849 | 11.0 | 4125 | 1.0928 | 0.405 |
| 1.0866 | 12.0 | 4500 | 1.0914 | 0.4067 |
| 1.08 | 13.0 | 4875 | 1.0899 | 0.42 |
| 1.0796 | 14.0 | 5250 | 1.0886 | 0.4217 |
| 1.082 | 15.0 | 5625 | 1.0872 | 0.425 |
| 1.0851 | 16.0 | 6000 | 1.0859 | 0.4283 |
| 1.0643 | 17.0 | 6375 | 1.0847 | 0.4283 |
| 1.0676 | 18.0 | 6750 | 1.0834 | 0.4283 |
| 1.0655 | 19.0 | 7125 | 1.0823 | 0.43 |
| 1.0761 | 20.0 | 7500 | 1.0811 | 0.4283 |
| 1.069 | 21.0 | 7875 | 1.0800 | 0.4333 |
| 1.0634 | 22.0 | 8250 | 1.0790 | 0.435 |
| 1.0576 | 23.0 | 8625 | 1.0780 | 0.4383 |
| 1.0581 | 24.0 | 9000 | 1.0770 | 0.4367 |
| 1.0635 | 25.0 | 9375 | 1.0761 | 0.44 |
| 1.0553 | 26.0 | 9750 | 1.0752 | 0.4417 |
| 1.0268 | 27.0 | 10125 | 1.0743 | 0.4433 |
| 1.0755 | 28.0 | 10500 | 1.0735 | 0.4483 |
| 1.0483 | 29.0 | 10875 | 1.0727 | 0.455 |
| 1.0533 | 30.0 | 11250 | 1.0720 | 0.455 |
| 1.0638 | 31.0 | 11625 | 1.0713 | 0.455 |
| 1.0617 | 32.0 | 12000 | 1.0707 | 0.4567 |
| 1.0624 | 33.0 | 12375 | 1.0700 | 0.46 |
| 1.052 | 34.0 | 12750 | 1.0695 | 0.4667 |
| 1.0462 | 35.0 | 13125 | 1.0689 | 0.4683 |
| 1.0582 | 36.0 | 13500 | 1.0684 | 0.4683 |
| 1.0432 | 37.0 | 13875 | 1.0680 | 0.4683 |
| 1.0468 | 38.0 | 14250 | 1.0675 | 0.47 |
| 1.0664 | 39.0 | 14625 | 1.0671 | 0.4717 |
| 1.0442 | 40.0 | 15000 | 1.0668 | 0.47 |
| 1.0443 | 41.0 | 15375 | 1.0665 | 0.4717 |
| 1.0372 | 42.0 | 15750 | 1.0662 | 0.4717 |
| 1.0389 | 43.0 | 16125 | 1.0660 | 0.4717 |
| 1.044 | 44.0 | 16500 | 1.0658 | 0.4717 |
| 1.0519 | 45.0 | 16875 | 1.0656 | 0.4717 |
| 1.0549 | 46.0 | 17250 | 1.0655 | 0.475 |
| 1.0727 | 47.0 | 17625 | 1.0654 | 0.475 |
| 1.0653 | 48.0 | 18000 | 1.0653 | 0.475 |
| 1.0698 | 49.0 | 18375 | 1.0653 | 0.475 |
| 1.0509 | 50.0 | 18750 | 1.0653 | 0.475 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4947
- Accuracy: 0.8217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0807 | 1.0 | 375 | 1.0913 | 0.4067 |
| 1.0601 | 2.0 | 750 | 1.0697 | 0.46 |
| 1.0228 | 3.0 | 1125 | 1.0461 | 0.515 |
| 0.9878 | 4.0 | 1500 | 1.0193 | 0.5683 |
| 1.0067 | 5.0 | 1875 | 0.9894 | 0.6 |
| 0.9245 | 6.0 | 2250 | 0.9567 | 0.6167 |
| 0.9152 | 7.0 | 2625 | 0.9222 | 0.6367 |
| 0.8904 | 8.0 | 3000 | 0.8874 | 0.665 |
| 0.8675 | 9.0 | 3375 | 0.8548 | 0.69 |
| 0.8528 | 10.0 | 3750 | 0.8243 | 0.7217 |
| 0.7884 | 11.0 | 4125 | 0.7961 | 0.7333 |
| 0.7678 | 12.0 | 4500 | 0.7701 | 0.7483 |
| 0.7258 | 13.0 | 4875 | 0.7462 | 0.75 |
| 0.7401 | 14.0 | 5250 | 0.7242 | 0.7633 |
| 0.7231 | 15.0 | 5625 | 0.7037 | 0.7683 |
| 0.6595 | 16.0 | 6000 | 0.6848 | 0.7817 |
| 0.6613 | 17.0 | 6375 | 0.6674 | 0.785 |
| 0.6469 | 18.0 | 6750 | 0.6514 | 0.7867 |
| 0.6638 | 19.0 | 7125 | 0.6367 | 0.79 |
| 0.6264 | 20.0 | 7500 | 0.6233 | 0.7917 |
| 0.615 | 21.0 | 7875 | 0.6112 | 0.795 |
| 0.6208 | 22.0 | 8250 | 0.6000 | 0.795 |
| 0.573 | 23.0 | 8625 | 0.5898 | 0.805 |
| 0.5775 | 24.0 | 9000 | 0.5804 | 0.8033 |
| 0.6018 | 25.0 | 9375 | 0.5718 | 0.8033 |
| 0.5747 | 26.0 | 9750 | 0.5639 | 0.8033 |
| 0.5711 | 27.0 | 10125 | 0.5567 | 0.805 |
| 0.5703 | 28.0 | 10500 | 0.5501 | 0.81 |
| 0.5047 | 29.0 | 10875 | 0.5441 | 0.81 |
| 0.5419 | 30.0 | 11250 | 0.5386 | 0.81 |
| 0.5562 | 31.0 | 11625 | 0.5335 | 0.8167 |
| 0.4909 | 32.0 | 12000 | 0.5288 | 0.8183 |
| 0.5437 | 33.0 | 12375 | 0.5245 | 0.82 |
| 0.5223 | 34.0 | 12750 | 0.5206 | 0.8183 |
| 0.4818 | 35.0 | 13125 | 0.5170 | 0.8183 |
| 0.4831 | 36.0 | 13500 | 0.5138 | 0.8183 |
| 0.5242 | 37.0 | 13875 | 0.5109 | 0.82 |
| 0.4897 | 38.0 | 14250 | 0.5083 | 0.8217 |
| 0.5618 | 39.0 | 14625 | 0.5059 | 0.8217 |
| 0.5176 | 40.0 | 15000 | 0.5038 | 0.8217 |
| 0.4753 | 41.0 | 15375 | 0.5019 | 0.8217 |
| 0.464 | 42.0 | 15750 | 0.5003 | 0.8217 |
| 0.5062 | 43.0 | 16125 | 0.4989 | 0.8217 |
| 0.4853 | 44.0 | 16500 | 0.4977 | 0.8217 |
| 0.5132 | 45.0 | 16875 | 0.4967 | 0.8217 |
| 0.4927 | 46.0 | 17250 | 0.4960 | 0.8217 |
| 0.5364 | 47.0 | 17625 | 0.4954 | 0.8217 |
| 0.5219 | 48.0 | 18000 | 0.4950 | 0.8217 |
| 0.4998 | 49.0 | 18375 | 0.4948 | 0.8217 |
| 0.495 | 50.0 | 18750 | 0.4947 | 0.8217 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
ndarocha/swin-tiny-patch4-window7-224-breastdensity
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-breastdensity
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0571
- Accuracy: 0.5236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
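The `total_train_batch_size` above is not a separate setting: gradient accumulation sums the gradients of 4 forward/backward passes of 32 images before each optimizer step, giving an effective batch of 128 without the memory cost of a single large batch:

```python
# Effective batch size under gradient accumulation, matching the values above.
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)
```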
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1872 | 0.99 | 49 | 1.2194 | 0.4320 |
| 1.0998 | 1.99 | 98 | 1.0917 | 0.4807 |
| 1.0623 | 2.98 | 147 | 1.0571 | 0.5236 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"breastdensity1",
"breastdensity2",
"breastdensity3",
"breastdensity4"
] |
hkivancoral/smids_5x_deit_base_sgd_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5006
- Accuracy: 0.8133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0891 | 1.0 | 375 | 1.0922 | 0.36 |
| 1.0632 | 2.0 | 750 | 1.0728 | 0.4133 |
| 1.0234 | 3.0 | 1125 | 1.0519 | 0.4667 |
| 1.0095 | 4.0 | 1500 | 1.0279 | 0.505 |
| 0.9691 | 5.0 | 1875 | 1.0014 | 0.54 |
| 0.9521 | 6.0 | 2250 | 0.9722 | 0.5683 |
| 0.9099 | 7.0 | 2625 | 0.9405 | 0.6033 |
| 0.8832 | 8.0 | 3000 | 0.9085 | 0.6267 |
| 0.8563 | 9.0 | 3375 | 0.8771 | 0.6533 |
| 0.8097 | 10.0 | 3750 | 0.8470 | 0.685 |
| 0.7629 | 11.0 | 4125 | 0.8186 | 0.705 |
| 0.7531 | 12.0 | 4500 | 0.7923 | 0.715 |
| 0.7082 | 13.0 | 4875 | 0.7677 | 0.7333 |
| 0.7318 | 14.0 | 5250 | 0.7449 | 0.7433 |
| 0.7243 | 15.0 | 5625 | 0.7237 | 0.7533 |
| 0.6668 | 16.0 | 6000 | 0.7041 | 0.7567 |
| 0.6939 | 17.0 | 6375 | 0.6860 | 0.76 |
| 0.6736 | 18.0 | 6750 | 0.6692 | 0.77 |
| 0.6795 | 19.0 | 7125 | 0.6538 | 0.78 |
| 0.6094 | 20.0 | 7500 | 0.6398 | 0.7833 |
| 0.5982 | 21.0 | 7875 | 0.6269 | 0.7817 |
| 0.5784 | 22.0 | 8250 | 0.6150 | 0.7867 |
| 0.6034 | 23.0 | 8625 | 0.6042 | 0.7933 |
| 0.6235 | 24.0 | 9000 | 0.5942 | 0.7967 |
| 0.5888 | 25.0 | 9375 | 0.5851 | 0.7933 |
| 0.5892 | 26.0 | 9750 | 0.5766 | 0.7933 |
| 0.5908 | 27.0 | 10125 | 0.5688 | 0.7983 |
| 0.5781 | 28.0 | 10500 | 0.5616 | 0.7983 |
| 0.5631 | 29.0 | 10875 | 0.5551 | 0.8 |
| 0.5055 | 30.0 | 11250 | 0.5492 | 0.8017 |
| 0.5168 | 31.0 | 11625 | 0.5436 | 0.805 |
| 0.5659 | 32.0 | 12000 | 0.5386 | 0.81 |
| 0.568 | 33.0 | 12375 | 0.5339 | 0.8083 |
| 0.5472 | 34.0 | 12750 | 0.5295 | 0.8117 |
| 0.5227 | 35.0 | 13125 | 0.5256 | 0.81 |
| 0.4679 | 36.0 | 13500 | 0.5220 | 0.81 |
| 0.5236 | 37.0 | 13875 | 0.5188 | 0.8117 |
| 0.5206 | 38.0 | 14250 | 0.5158 | 0.8117 |
| 0.5047 | 39.0 | 14625 | 0.5132 | 0.8133 |
| 0.5461 | 40.0 | 15000 | 0.5108 | 0.8133 |
| 0.495 | 41.0 | 15375 | 0.5087 | 0.8133 |
| 0.508 | 42.0 | 15750 | 0.5069 | 0.8133 |
| 0.5153 | 43.0 | 16125 | 0.5053 | 0.8133 |
| 0.4846 | 44.0 | 16500 | 0.5040 | 0.8133 |
| 0.5055 | 45.0 | 16875 | 0.5029 | 0.8133 |
| 0.5156 | 46.0 | 17250 | 0.5020 | 0.8133 |
| 0.525 | 47.0 | 17625 | 0.5013 | 0.8133 |
| 0.4795 | 48.0 | 18000 | 0.5009 | 0.8133 |
| 0.4888 | 49.0 | 18375 | 0.5006 | 0.8133 |
| 0.4989 | 50.0 | 18750 | 0.5006 | 0.8133 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_sgd_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0685
- Accuracy: 0.4283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1186 | 1.0 | 375 | 1.1098 | 0.3483 |
| 1.1102 | 2.0 | 750 | 1.1079 | 0.3417 |
| 1.1136 | 3.0 | 1125 | 1.1060 | 0.345 |
| 1.1131 | 4.0 | 1500 | 1.1043 | 0.35 |
| 1.0975 | 5.0 | 1875 | 1.1026 | 0.3433 |
| 1.1051 | 6.0 | 2250 | 1.1010 | 0.345 |
| 1.0892 | 7.0 | 2625 | 1.0994 | 0.3417 |
| 1.0802 | 8.0 | 3000 | 1.0978 | 0.3533 |
| 1.0951 | 9.0 | 3375 | 1.0964 | 0.3517 |
| 1.0929 | 10.0 | 3750 | 1.0949 | 0.3517 |
| 1.0628 | 11.0 | 4125 | 1.0935 | 0.3533 |
| 1.0809 | 12.0 | 4500 | 1.0922 | 0.36 |
| 1.0566 | 13.0 | 4875 | 1.0909 | 0.375 |
| 1.0849 | 14.0 | 5250 | 1.0897 | 0.38 |
| 1.0684 | 15.0 | 5625 | 1.0884 | 0.3817 |
| 1.0868 | 16.0 | 6000 | 1.0873 | 0.3817 |
| 1.0653 | 17.0 | 6375 | 1.0861 | 0.3817 |
| 1.0768 | 18.0 | 6750 | 1.0850 | 0.385 |
| 1.0758 | 19.0 | 7125 | 1.0839 | 0.385 |
| 1.0932 | 20.0 | 7500 | 1.0829 | 0.3883 |
| 1.072 | 21.0 | 7875 | 1.0819 | 0.39 |
| 1.06 | 22.0 | 8250 | 1.0809 | 0.3917 |
| 1.0521 | 23.0 | 8625 | 1.0800 | 0.3967 |
| 1.0558 | 24.0 | 9000 | 1.0792 | 0.3983 |
| 1.0773 | 25.0 | 9375 | 1.0783 | 0.4 |
| 1.0609 | 26.0 | 9750 | 1.0775 | 0.4 |
| 1.0495 | 27.0 | 10125 | 1.0767 | 0.4017 |
| 1.0658 | 28.0 | 10500 | 1.0760 | 0.4 |
| 1.0475 | 29.0 | 10875 | 1.0753 | 0.4083 |
| 1.0538 | 30.0 | 11250 | 1.0746 | 0.415 |
| 1.0455 | 31.0 | 11625 | 1.0740 | 0.4133 |
| 1.0741 | 32.0 | 12000 | 1.0734 | 0.415 |
| 1.0518 | 33.0 | 12375 | 1.0728 | 0.4167 |
| 1.04 | 34.0 | 12750 | 1.0723 | 0.4183 |
| 1.0566 | 35.0 | 13125 | 1.0718 | 0.4167 |
| 1.0416 | 36.0 | 13500 | 1.0714 | 0.4167 |
| 1.0546 | 37.0 | 13875 | 1.0709 | 0.4217 |
| 1.0514 | 38.0 | 14250 | 1.0706 | 0.4233 |
| 1.0481 | 39.0 | 14625 | 1.0702 | 0.425 |
| 1.0581 | 40.0 | 15000 | 1.0699 | 0.4267 |
| 1.0484 | 41.0 | 15375 | 1.0696 | 0.4267 |
| 1.0544 | 42.0 | 15750 | 1.0694 | 0.4267 |
| 1.0499 | 43.0 | 16125 | 1.0691 | 0.4267 |
| 1.0424 | 44.0 | 16500 | 1.0690 | 0.4267 |
| 1.0515 | 45.0 | 16875 | 1.0688 | 0.4283 |
| 1.0389 | 46.0 | 17250 | 1.0687 | 0.4283 |
| 1.0556 | 47.0 | 17625 | 1.0686 | 0.4283 |
| 1.0595 | 48.0 | 18000 | 1.0685 | 0.4283 |
| 1.0528 | 49.0 | 18375 | 1.0685 | 0.4283 |
| 1.0533 | 50.0 | 18750 | 1.0685 | 0.4283 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
MichalGas/swin-tiny-patch4-window7-224-finetuned-mgasior
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-mgasior
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 46788898816.0
- F1: 0.1890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 9 | 46788898816.0 | 0.1890 |
| 40845207142.4 | 2.0 | 18 | 46788898816.0 | 0.1732 |
| 41037873152.0 | 3.0 | 27 | 46788898816.0 | 0.1732 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"bipolars",
"clippers",
"graspers",
"hooks",
"irrigators",
"scissorss"
] |
andakm/bmw_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andakm/bmw_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1751
- Train Accuracy: 0.7941
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2040, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
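The `PolynomialDecay` schedule configured above uses `power=1.0`, which reduces to a straight-line decay from the initial learning rate to 0 over 2040 steps. A sketch of the formula (not the Keras implementation itself):

```python
def polynomial_decay(step, initial_lr=3e-05, decay_steps=2040, end_lr=0.0, power=1.0):
    """Keras-style polynomial decay; with power=1.0 this is a straight line to end_lr."""
    step = min(step, decay_steps)  # cycle=False: hold end_lr after decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(1020))  # halfway through the decay
```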
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.3531 | 0.7353 | 0 |
| 0.3083 | 0.7941 | 1 |
| 0.2895 | 0.6863 | 2 |
| 0.2210 | 0.7843 | 3 |
| 0.1751 | 0.7941 | 4 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"1-series",
"3-series",
"4-series",
"5-series",
"6-series",
"7-series",
"8-series",
"m3",
"m4",
"m5"
] |
OkabeRintaro/vit-base-patch16-224-finetuned-imagegpt
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-imagegpt
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2569
- Accuracy: 0.6296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7934 | 0.99 | 58 | 1.2569 | 0.6296 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.15.0
|
[
"adialer.c",
"agent.fyi",
"allaple.a",
"allaple.l",
"alueron.gen!j",
"autorun.k",
"c2lop.p",
"c2lop.gen!g",
"dialplatform.b",
"dontovo.a",
"fakerean",
"instantaccess",
"lolyda.aa1",
"lolyda.aa2",
"lolyda.aa3",
"lolyda.at",
"malex.gen!j",
"obfuscator.ad",
"rbot!gen",
"skintrim.n",
"swizzor.gen!e",
"swizzor.gen!i",
"vb.at",
"wintrim.bx",
"yuner.a"
] |
hkivancoral/smids_5x_deit_base_rms_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6839
- Accuracy: 0.7863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1051 | 1.0 | 376 | 1.0840 | 0.3356 |
| 0.8654 | 2.0 | 752 | 0.8754 | 0.4841 |
| 0.7982 | 3.0 | 1128 | 0.7992 | 0.5843 |
| 0.8215 | 4.0 | 1504 | 0.8640 | 0.5509 |
| 0.8937 | 5.0 | 1880 | 0.7446 | 0.6678 |
| 0.7292 | 6.0 | 2256 | 0.7760 | 0.6361 |
| 0.6914 | 7.0 | 2632 | 0.7052 | 0.6694 |
| 0.6499 | 8.0 | 3008 | 0.7542 | 0.6511 |
| 0.6981 | 9.0 | 3384 | 0.6919 | 0.6912 |
| 0.6852 | 10.0 | 3760 | 0.6488 | 0.6995 |
| 0.5929 | 11.0 | 4136 | 0.6360 | 0.7162 |
| 0.6018 | 12.0 | 4512 | 0.6410 | 0.7212 |
| 0.578 | 13.0 | 4888 | 0.6824 | 0.7078 |
| 0.5646 | 14.0 | 5264 | 0.6123 | 0.7546 |
| 0.5813 | 15.0 | 5640 | 0.6611 | 0.7479 |
| 0.5334 | 16.0 | 6016 | 0.6911 | 0.7012 |
| 0.4401 | 17.0 | 6392 | 0.6234 | 0.7362 |
| 0.5629 | 18.0 | 6768 | 0.5782 | 0.7412 |
| 0.5062 | 19.0 | 7144 | 0.6504 | 0.7329 |
| 0.444 | 20.0 | 7520 | 0.5828 | 0.7696 |
| 0.4995 | 21.0 | 7896 | 0.5919 | 0.7446 |
| 0.4251 | 22.0 | 8272 | 0.6276 | 0.7629 |
| 0.4812 | 23.0 | 8648 | 0.6155 | 0.7462 |
| 0.4775 | 24.0 | 9024 | 0.6984 | 0.7179 |
| 0.4597 | 25.0 | 9400 | 0.6577 | 0.7295 |
| 0.4394 | 26.0 | 9776 | 0.5934 | 0.7429 |
| 0.4129 | 27.0 | 10152 | 0.6066 | 0.7563 |
| 0.4098 | 28.0 | 10528 | 0.5792 | 0.7579 |
| 0.4483 | 29.0 | 10904 | 0.5708 | 0.7613 |
| 0.3862 | 30.0 | 11280 | 0.5970 | 0.7679 |
| 0.4253 | 31.0 | 11656 | 0.6053 | 0.7546 |
| 0.4815 | 32.0 | 12032 | 0.5808 | 0.7479 |
| 0.3892 | 33.0 | 12408 | 0.5698 | 0.7613 |
| 0.35 | 34.0 | 12784 | 0.5670 | 0.7563 |
| 0.3952 | 35.0 | 13160 | 0.5921 | 0.7696 |
| 0.4191 | 36.0 | 13536 | 0.5999 | 0.7863 |
| 0.3174 | 37.0 | 13912 | 0.5845 | 0.7679 |
| 0.3864 | 38.0 | 14288 | 0.6529 | 0.7496 |
| 0.4036 | 39.0 | 14664 | 0.6327 | 0.7679 |
| 0.4274 | 40.0 | 15040 | 0.5923 | 0.7646 |
| 0.357 | 41.0 | 15416 | 0.6017 | 0.7863 |
| 0.348 | 42.0 | 15792 | 0.6309 | 0.7763 |
| 0.2967 | 43.0 | 16168 | 0.6418 | 0.7679 |
| 0.3292 | 44.0 | 16544 | 0.6405 | 0.7780 |
| 0.3428 | 45.0 | 16920 | 0.6600 | 0.7813 |
| 0.3127 | 46.0 | 17296 | 0.6429 | 0.7780 |
| 0.2979 | 47.0 | 17672 | 0.6618 | 0.7813 |
| 0.3209 | 48.0 | 18048 | 0.6803 | 0.7796 |
| 0.2866 | 49.0 | 18424 | 0.6856 | 0.7880 |
| 0.2611 | 50.0 | 18800 | 0.6839 | 0.7863 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
merve/pokemon-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pokemon-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3367
- Accuracy: 0.0109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.7242 | 1.0 | 76 | 5.2859 | 0.0068 |
| 4.2781 | 1.99 | 152 | 5.3334 | 0.0109 |
| 4.0798 | 2.99 | 228 | 5.3367 | 0.0109 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"golbat",
"machoke",
"raichu",
"dragonite",
"fearow",
"slowpoke",
"weezing",
"beedrill",
"weedle",
"cloyster",
"vaporeon",
"gyarados",
"golduck",
"zapdos",
"machamp",
"hitmonlee",
"primeape",
"cubone",
"sandslash",
"scyther",
"haunter",
"metapod",
"tentacruel",
"aerodactyl",
"raticate",
"kabutops",
"ninetales",
"zubat",
"rhydon",
"mew",
"pinsir",
"ditto",
"victreebel",
"omanyte",
"horsea",
"magnemite",
"pikachu",
"blastoise",
"venomoth",
"charizard",
"seadra",
"muk",
"spearow",
"bulbasaur",
"bellsprout",
"electrode",
"ivysaur",
"gloom",
"poliwhirl",
"flareon",
"seaking",
"hypno",
"wartortle",
"mankey",
"tentacool",
"exeggcute",
"meowth",
"growlithe",
"tangela",
"drowzee",
"rapidash",
"venonat",
"omastar",
"pidgeot",
"nidorino",
"porygon",
"lickitung",
"rattata",
"machop",
"charmeleon",
"slowbro",
"parasect",
"eevee",
"diglett",
"starmie",
"staryu",
"psyduck",
"dragonair",
"magikarp",
"vileplume",
"marowak",
"pidgeotto",
"shellder",
"mewtwo",
"lapras",
"farfetchd",
"kingler",
"seel",
"kakuna",
"doduo",
"electabuzz",
"charmander",
"rhyhorn",
"tauros",
"dugtrio",
"kabuto",
"poliwrath",
"gengar",
"exeggutor",
"dewgong",
"jigglypuff",
"geodude",
"kadabra",
"nidorina",
"sandshrew",
"grimer",
"persian",
"mrmime",
"pidgey",
"koffing",
"ekans",
"alolan sandslash",
"venusaur",
"snorlax",
"paras",
"jynx",
"chansey",
"weepinbell",
"hitmonchan",
"gastly",
"kangaskhan",
"oddish",
"wigglytuff",
"graveler",
"arcanine",
"clefairy",
"articuno",
"poliwag",
"golem",
"abra",
"squirtle",
"voltorb",
"ponyta",
"moltres",
"nidoqueen",
"magmar",
"onix",
"vulpix",
"butterfree",
"dodrio",
"krabby",
"arbok",
"clefable",
"goldeen",
"magneton",
"dratini",
"caterpie",
"jolteon",
"nidoking",
"alakazam"
] |
hkivancoral/smids_5x_deit_base_rms_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6861
- Accuracy: 0.7953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9457 | 1.0 | 375 | 0.8476 | 0.5324 |
| 0.8382 | 2.0 | 750 | 0.8448 | 0.5391 |
| 0.809 | 3.0 | 1125 | 0.7879 | 0.5424 |
| 0.745 | 4.0 | 1500 | 0.7390 | 0.6323 |
| 0.7171 | 5.0 | 1875 | 0.7030 | 0.6622 |
| 0.6968 | 6.0 | 2250 | 0.7889 | 0.6456 |
| 0.6875 | 7.0 | 2625 | 0.6388 | 0.7072 |
| 0.6501 | 8.0 | 3000 | 0.6553 | 0.7038 |
| 0.651 | 9.0 | 3375 | 0.5748 | 0.7354 |
| 0.6283 | 10.0 | 3750 | 0.5788 | 0.7421 |
| 0.6238 | 11.0 | 4125 | 0.5592 | 0.7621 |
| 0.6557 | 12.0 | 4500 | 0.5567 | 0.7471 |
| 0.5365 | 13.0 | 4875 | 0.5457 | 0.7704 |
| 0.6238 | 14.0 | 5250 | 0.5542 | 0.7571 |
| 0.5702 | 15.0 | 5625 | 0.5827 | 0.7321 |
| 0.4879 | 16.0 | 6000 | 0.5598 | 0.7671 |
| 0.5127 | 17.0 | 6375 | 0.5330 | 0.7687 |
| 0.4935 | 18.0 | 6750 | 0.5680 | 0.7537 |
| 0.5245 | 19.0 | 7125 | 0.5110 | 0.7837 |
| 0.4876 | 20.0 | 7500 | 0.5252 | 0.7804 |
| 0.4771 | 21.0 | 7875 | 0.5233 | 0.7770 |
| 0.5009 | 22.0 | 8250 | 0.5096 | 0.8003 |
| 0.4963 | 23.0 | 8625 | 0.5344 | 0.7870 |
| 0.4814 | 24.0 | 9000 | 0.5289 | 0.8087 |
| 0.4304 | 25.0 | 9375 | 0.5706 | 0.7854 |
| 0.4721 | 26.0 | 9750 | 0.5627 | 0.7637 |
| 0.4743 | 27.0 | 10125 | 0.5566 | 0.7770 |
| 0.387 | 28.0 | 10500 | 0.5497 | 0.7953 |
| 0.4853 | 29.0 | 10875 | 0.5210 | 0.7887 |
| 0.347 | 30.0 | 11250 | 0.5276 | 0.8120 |
| 0.4668 | 31.0 | 11625 | 0.5138 | 0.8136 |
| 0.424 | 32.0 | 12000 | 0.5446 | 0.7787 |
| 0.4005 | 33.0 | 12375 | 0.5357 | 0.8020 |
| 0.4114 | 34.0 | 12750 | 0.5259 | 0.7903 |
| 0.3661 | 35.0 | 13125 | 0.5604 | 0.7903 |
| 0.3542 | 36.0 | 13500 | 0.5807 | 0.7987 |
| 0.3711 | 37.0 | 13875 | 0.5589 | 0.8037 |
| 0.3372 | 38.0 | 14250 | 0.5483 | 0.8003 |
| 0.3656 | 39.0 | 14625 | 0.6008 | 0.7704 |
| 0.3244 | 40.0 | 15000 | 0.5754 | 0.8120 |
| 0.3484 | 41.0 | 15375 | 0.5934 | 0.7970 |
| 0.3143 | 42.0 | 15750 | 0.6293 | 0.7937 |
| 0.3053 | 43.0 | 16125 | 0.5834 | 0.7920 |
| 0.3117 | 44.0 | 16500 | 0.5894 | 0.7970 |
| 0.3041 | 45.0 | 16875 | 0.6216 | 0.7887 |
| 0.284 | 46.0 | 17250 | 0.6533 | 0.7953 |
| 0.36 | 47.0 | 17625 | 0.6167 | 0.7987 |
| 0.2345 | 48.0 | 18000 | 0.6554 | 0.8003 |
| 0.2899 | 49.0 | 18375 | 0.6795 | 0.7937 |
| 0.264 | 50.0 | 18750 | 0.6861 | 0.7953 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_rms_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_base_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7864
- Accuracy: 0.775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8829 | 1.0 | 375 | 0.8947 | 0.4967 |
| 0.7778 | 2.0 | 750 | 0.8858 | 0.5283 |
| 0.8311 | 3.0 | 1125 | 0.8355 | 0.55 |
| 0.8294 | 4.0 | 1500 | 0.7887 | 0.6067 |
| 0.8799 | 5.0 | 1875 | 0.8054 | 0.6217 |
| 0.7828 | 6.0 | 2250 | 0.8032 | 0.5883 |
| 0.7591 | 7.0 | 2625 | 0.7342 | 0.6533 |
| 0.7195 | 8.0 | 3000 | 0.7320 | 0.6317 |
| 0.6862 | 9.0 | 3375 | 0.8798 | 0.5583 |
| 0.6609 | 10.0 | 3750 | 0.6983 | 0.6717 |
| 0.6813 | 11.0 | 4125 | 0.7308 | 0.67 |
| 0.7021 | 12.0 | 4500 | 0.7207 | 0.63 |
| 0.6223 | 13.0 | 4875 | 0.6947 | 0.685 |
| 0.5883 | 14.0 | 5250 | 0.6340 | 0.7217 |
| 0.6307 | 15.0 | 5625 | 0.6616 | 0.7033 |
| 0.6001 | 16.0 | 6000 | 0.6868 | 0.6983 |
| 0.6448 | 17.0 | 6375 | 0.6323 | 0.7233 |
| 0.6618 | 18.0 | 6750 | 0.6385 | 0.7233 |
| 0.5927 | 19.0 | 7125 | 0.6305 | 0.71 |
| 0.5637 | 20.0 | 7500 | 0.6279 | 0.7033 |
| 0.5052 | 21.0 | 7875 | 0.6248 | 0.7217 |
| 0.5232 | 22.0 | 8250 | 0.6213 | 0.7133 |
| 0.586 | 23.0 | 8625 | 0.6387 | 0.7217 |
| 0.6497 | 24.0 | 9000 | 0.5879 | 0.7367 |
| 0.5103 | 25.0 | 9375 | 0.6071 | 0.7317 |
| 0.5081 | 26.0 | 9750 | 0.6415 | 0.725 |
| 0.5448 | 27.0 | 10125 | 0.5744 | 0.725 |
| 0.5273 | 28.0 | 10500 | 0.6005 | 0.7333 |
| 0.5173 | 29.0 | 10875 | 0.6315 | 0.725 |
| 0.4436 | 30.0 | 11250 | 0.5604 | 0.7617 |
| 0.5135 | 31.0 | 11625 | 0.5861 | 0.7633 |
| 0.5481 | 32.0 | 12000 | 0.5892 | 0.7467 |
| 0.5389 | 33.0 | 12375 | 0.5940 | 0.7533 |
| 0.5388 | 34.0 | 12750 | 0.5721 | 0.74 |
| 0.4177 | 35.0 | 13125 | 0.6397 | 0.7317 |
| 0.4044 | 36.0 | 13500 | 0.6157 | 0.74 |
| 0.4195 | 37.0 | 13875 | 0.6245 | 0.75 |
| 0.4242 | 38.0 | 14250 | 0.5771 | 0.7567 |
| 0.4027 | 39.0 | 14625 | 0.5507 | 0.7533 |
| 0.3811 | 40.0 | 15000 | 0.6123 | 0.7483 |
| 0.3735 | 41.0 | 15375 | 0.6056 | 0.7583 |
| 0.3892 | 42.0 | 15750 | 0.6319 | 0.7633 |
| 0.3928 | 43.0 | 16125 | 0.6475 | 0.7683 |
| 0.3524 | 44.0 | 16500 | 0.6335 | 0.7783 |
| 0.2896 | 45.0 | 16875 | 0.6766 | 0.7683 |
| 0.3014 | 46.0 | 17250 | 0.7029 | 0.775 |
| 0.2555 | 47.0 | 17625 | 0.6873 | 0.7967 |
| 0.2601 | 48.0 | 18000 | 0.7571 | 0.78 |
| 0.2142 | 49.0 | 18375 | 0.7689 | 0.7833 |
| 0.2176 | 50.0 | 18750 | 0.7864 | 0.775 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
nicolasdupuisroy/vit-letter-identification-v2
|
# vit-letter-identification-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1135
- Accuracy: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 100
- eval_batch_size: 102
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 120.0
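A quick arithmetic check on the results below: with an eval batch size of 102, every reported accuracy should be a multiple of 1/102. This hypothetical helper (not part of the training code) recovers the implied number of correct predictions, e.g. 0.8627 ≈ 88/102:

```python
# Sanity-check sketch: accuracies in the table below should correspond to
# k/102 correct predictions for some integer k, given eval_batch_size=102.
# This helper is illustrative only and assumes a single 102-image eval set.

def closest_fraction(acc, n=102):
    """Return the correct-prediction count k minimizing |k/n - acc|."""
    k = round(acc * n)
    return k, k / n

k, frac = closest_fraction(0.8627)
print(k, round(frac, 4))  # 88 0.8627
```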
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 3.2331 | 0.0882 |
| 3.2363 | 2.0 | 12 | 3.2025 | 0.1373 |
| 3.2363 | 3.0 | 18 | 3.1761 | 0.1863 |
| 3.1622 | 4.0 | 24 | 3.1238 | 0.2255 |
| 3.0918 | 5.0 | 30 | 3.0789 | 0.3137 |
| 3.0918 | 6.0 | 36 | 3.0280 | 0.3235 |
| 3.0081 | 7.0 | 42 | 2.9878 | 0.3431 |
| 3.0081 | 8.0 | 48 | 2.9316 | 0.3824 |
| 2.9118 | 9.0 | 54 | 2.8864 | 0.4314 |
| 2.8231 | 10.0 | 60 | 2.8314 | 0.4510 |
| 2.8231 | 11.0 | 66 | 2.7817 | 0.5196 |
| 2.7149 | 12.0 | 72 | 2.7278 | 0.5196 |
| 2.7149 | 13.0 | 78 | 2.6796 | 0.5588 |
| 2.6202 | 14.0 | 84 | 2.6203 | 0.5882 |
| 2.5243 | 15.0 | 90 | 2.5674 | 0.5882 |
| 2.5243 | 16.0 | 96 | 2.5170 | 0.6078 |
| 2.4279 | 17.0 | 102 | 2.4672 | 0.6176 |
| 2.4279 | 18.0 | 108 | 2.4285 | 0.5980 |
| 2.3404 | 19.0 | 114 | 2.3784 | 0.6569 |
| 2.2633 | 20.0 | 120 | 2.3348 | 0.6471 |
| 2.2633 | 21.0 | 126 | 2.2872 | 0.6667 |
| 2.1838 | 22.0 | 132 | 2.2539 | 0.6569 |
| 2.1838 | 23.0 | 138 | 2.2232 | 0.6765 |
| 2.1022 | 24.0 | 144 | 2.1867 | 0.6471 |
| 2.0364 | 25.0 | 150 | 2.1489 | 0.6863 |
| 2.0364 | 26.0 | 156 | 2.1099 | 0.7255 |
| 1.96 | 27.0 | 162 | 2.0767 | 0.7157 |
| 1.96 | 28.0 | 168 | 2.0417 | 0.7157 |
| 1.9235 | 29.0 | 174 | 2.0162 | 0.7353 |
| 1.8484 | 30.0 | 180 | 1.9787 | 0.7451 |
| 1.8484 | 31.0 | 186 | 1.9548 | 0.7451 |
| 1.7971 | 32.0 | 192 | 1.9329 | 0.7549 |
| 1.7971 | 33.0 | 198 | 1.9052 | 0.7647 |
| 1.7409 | 34.0 | 204 | 1.8827 | 0.7549 |
| 1.7006 | 35.0 | 210 | 1.8589 | 0.7745 |
| 1.7006 | 36.0 | 216 | 1.8294 | 0.7843 |
| 1.6426 | 37.0 | 222 | 1.8098 | 0.7843 |
| 1.6426 | 38.0 | 228 | 1.7809 | 0.7647 |
| 1.6102 | 39.0 | 234 | 1.7643 | 0.7843 |
| 1.5704 | 40.0 | 240 | 1.7399 | 0.8039 |
| 1.5704 | 41.0 | 246 | 1.7193 | 0.8137 |
| 1.5264 | 42.0 | 252 | 1.6980 | 0.8333 |
| 1.5264 | 43.0 | 258 | 1.6840 | 0.8039 |
| 1.4821 | 44.0 | 264 | 1.6644 | 0.8235 |
| 1.4506 | 45.0 | 270 | 1.6467 | 0.8235 |
| 1.4506 | 46.0 | 276 | 1.6333 | 0.8235 |
| 1.4358 | 47.0 | 282 | 1.6095 | 0.8235 |
| 1.4358 | 48.0 | 288 | 1.5906 | 0.8235 |
| 1.3695 | 49.0 | 294 | 1.5720 | 0.8431 |
| 1.367 | 50.0 | 300 | 1.5610 | 0.8333 |
| 1.367 | 51.0 | 306 | 1.5440 | 0.8529 |
| 1.3299 | 52.0 | 312 | 1.5359 | 0.8333 |
| 1.3299 | 53.0 | 318 | 1.5129 | 0.8333 |
| 1.2765 | 54.0 | 324 | 1.5057 | 0.8235 |
| 1.2785 | 55.0 | 330 | 1.4867 | 0.8235 |
| 1.2785 | 56.0 | 336 | 1.4751 | 0.8333 |
| 1.2355 | 57.0 | 342 | 1.4553 | 0.8235 |
| 1.2355 | 58.0 | 348 | 1.4491 | 0.8235 |
| 1.2418 | 59.0 | 354 | 1.4289 | 0.8431 |
| 1.2058 | 60.0 | 360 | 1.4185 | 0.8235 |
| 1.2058 | 61.0 | 366 | 1.4104 | 0.8333 |
| 1.164 | 62.0 | 372 | 1.3968 | 0.8333 |
| 1.164 | 63.0 | 378 | 1.3846 | 0.8431 |
| 1.1529 | 64.0 | 384 | 1.3697 | 0.8431 |
| 1.1408 | 65.0 | 390 | 1.3633 | 0.8431 |
| 1.1408 | 66.0 | 396 | 1.3505 | 0.8431 |
| 1.1102 | 67.0 | 402 | 1.3371 | 0.8529 |
| 1.1102 | 68.0 | 408 | 1.3282 | 0.8529 |
| 1.0906 | 69.0 | 414 | 1.3240 | 0.8431 |
| 1.0759 | 70.0 | 420 | 1.3163 | 0.8431 |
| 1.0759 | 71.0 | 426 | 1.3044 | 0.8529 |
| 1.0651 | 72.0 | 432 | 1.2924 | 0.8431 |
| 1.0651 | 73.0 | 438 | 1.2867 | 0.8529 |
| 1.0501 | 74.0 | 444 | 1.2749 | 0.8529 |
| 1.0238 | 75.0 | 450 | 1.2688 | 0.8431 |
| 1.0238 | 76.0 | 456 | 1.2568 | 0.8529 |
| 1.0046 | 77.0 | 462 | 1.2502 | 0.8529 |
| 1.0046 | 78.0 | 468 | 1.2460 | 0.8529 |
| 0.9946 | 79.0 | 474 | 1.2455 | 0.8431 |
| 0.9998 | 80.0 | 480 | 1.2343 | 0.8529 |
| 0.9998 | 81.0 | 486 | 1.2286 | 0.8529 |
| 0.9709 | 82.0 | 492 | 1.2195 | 0.8431 |
| 0.9709 | 83.0 | 498 | 1.2126 | 0.8529 |
| 0.963 | 84.0 | 504 | 1.2102 | 0.8431 |
| 0.9499 | 85.0 | 510 | 1.2024 | 0.8431 |
| 0.9499 | 86.0 | 516 | 1.1980 | 0.8529 |
| 0.937 | 87.0 | 522 | 1.1912 | 0.8529 |
| 0.937 | 88.0 | 528 | 1.1883 | 0.8431 |
| 0.9389 | 89.0 | 534 | 1.1845 | 0.8529 |
| 0.9181 | 90.0 | 540 | 1.1811 | 0.8529 |
| 0.9181 | 91.0 | 546 | 1.1777 | 0.8431 |
| 0.9219 | 92.0 | 552 | 1.1743 | 0.8627 |
| 0.9219 | 93.0 | 558 | 1.1675 | 0.8627 |
| 0.9067 | 94.0 | 564 | 1.1598 | 0.8627 |
| 0.9009 | 95.0 | 570 | 1.1601 | 0.8627 |
| 0.9009 | 96.0 | 576 | 1.1564 | 0.8529 |
| 0.8914 | 97.0 | 582 | 1.1505 | 0.8529 |
| 0.8914 | 98.0 | 588 | 1.1487 | 0.8529 |
| 0.8739 | 99.0 | 594 | 1.1480 | 0.8627 |
| 0.8742 | 100.0 | 600 | 1.1413 | 0.8529 |
| 0.8742 | 101.0 | 606 | 1.1368 | 0.8627 |
| 0.8679 | 102.0 | 612 | 1.1361 | 0.8627 |
| 0.8679 | 103.0 | 618 | 1.1317 | 0.8627 |
| 0.8516 | 104.0 | 624 | 1.1296 | 0.8529 |
| 0.876 | 105.0 | 630 | 1.1288 | 0.8627 |
| 0.876 | 106.0 | 636 | 1.1264 | 0.8627 |
| 0.8591 | 107.0 | 642 | 1.1238 | 0.8627 |
| 0.8591 | 108.0 | 648 | 1.1227 | 0.8627 |
| 0.8586 | 109.0 | 654 | 1.1208 | 0.8627 |
| 0.8415 | 110.0 | 660 | 1.1194 | 0.8627 |
| 0.8415 | 111.0 | 666 | 1.1185 | 0.8627 |
| 0.8465 | 112.0 | 672 | 1.1178 | 0.8529 |
| 0.8465 | 113.0 | 678 | 1.1184 | 0.8529 |
| 0.8503 | 114.0 | 684 | 1.1183 | 0.8431 |
| 0.8332 | 115.0 | 690 | 1.1174 | 0.8431 |
| 0.8332 | 116.0 | 696 | 1.1165 | 0.8431 |
| 0.8476 | 117.0 | 702 | 1.1153 | 0.8529 |
| 0.8476 | 118.0 | 708 | 1.1142 | 0.8529 |
| 0.8382 | 119.0 | 714 | 1.1137 | 0.8627 |
| 0.8527 | 120.0 | 720 | 1.1135 | 0.8627 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"a",
"b",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"c",
"u",
"v",
"w",
"x",
"y",
"z",
"d",
"e",
"f",
"g",
"h",
"i",
"j"
] |
hkivancoral/smids_5x_deit_base_rms_001_fold4
|
# smids_5x_deit_base_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7070
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.11 | 1.0 | 375 | 1.0993 | 0.325 |
| 0.9143 | 2.0 | 750 | 0.8093 | 0.5633 |
| 0.8119 | 3.0 | 1125 | 0.7905 | 0.58 |
| 0.8256 | 4.0 | 1500 | 0.7664 | 0.5933 |
| 0.7841 | 5.0 | 1875 | 0.7359 | 0.6383 |
| 0.7919 | 6.0 | 2250 | 0.7445 | 0.6033 |
| 0.7749 | 7.0 | 2625 | 0.6951 | 0.6567 |
| 0.741 | 8.0 | 3000 | 0.7702 | 0.6017 |
| 0.6745 | 9.0 | 3375 | 0.6771 | 0.6967 |
| 0.6495 | 10.0 | 3750 | 0.6304 | 0.7267 |
| 0.6291 | 11.0 | 4125 | 0.6937 | 0.665 |
| 0.6399 | 12.0 | 4500 | 0.6515 | 0.7017 |
| 0.559 | 13.0 | 4875 | 0.7230 | 0.6783 |
| 0.6232 | 14.0 | 5250 | 0.5711 | 0.76 |
| 0.6438 | 15.0 | 5625 | 0.5759 | 0.725 |
| 0.5708 | 16.0 | 6000 | 0.5672 | 0.7433 |
| 0.5817 | 17.0 | 6375 | 0.5983 | 0.7367 |
| 0.5534 | 18.0 | 6750 | 0.5595 | 0.7633 |
| 0.5539 | 19.0 | 7125 | 0.5109 | 0.7767 |
| 0.5681 | 20.0 | 7500 | 0.5115 | 0.7667 |
| 0.481 | 21.0 | 7875 | 0.5451 | 0.765 |
| 0.5126 | 22.0 | 8250 | 0.5810 | 0.7383 |
| 0.5213 | 23.0 | 8625 | 0.5434 | 0.7467 |
| 0.4811 | 24.0 | 9000 | 0.5453 | 0.7833 |
| 0.4831 | 25.0 | 9375 | 0.5703 | 0.7617 |
| 0.4515 | 26.0 | 9750 | 0.5020 | 0.795 |
| 0.4363 | 27.0 | 10125 | 0.5213 | 0.7733 |
| 0.4104 | 28.0 | 10500 | 0.5450 | 0.76 |
| 0.4526 | 29.0 | 10875 | 0.5543 | 0.76 |
| 0.4959 | 30.0 | 11250 | 0.5151 | 0.7683 |
| 0.5018 | 31.0 | 11625 | 0.5269 | 0.7933 |
| 0.4267 | 32.0 | 12000 | 0.5138 | 0.7933 |
| 0.464 | 33.0 | 12375 | 0.5486 | 0.7883 |
| 0.4612 | 34.0 | 12750 | 0.5500 | 0.7867 |
| 0.3667 | 35.0 | 13125 | 0.5319 | 0.7983 |
| 0.3692 | 36.0 | 13500 | 0.6112 | 0.77 |
| 0.443 | 37.0 | 13875 | 0.5427 | 0.8 |
| 0.431 | 38.0 | 14250 | 0.5062 | 0.7967 |
| 0.3837 | 39.0 | 14625 | 0.5492 | 0.7867 |
| 0.421 | 40.0 | 15000 | 0.6140 | 0.7883 |
| 0.3456 | 41.0 | 15375 | 0.6505 | 0.7817 |
| 0.3528 | 42.0 | 15750 | 0.6246 | 0.7733 |
| 0.383 | 43.0 | 16125 | 0.6178 | 0.7933 |
| 0.3235 | 44.0 | 16500 | 0.6286 | 0.7933 |
| 0.3165 | 45.0 | 16875 | 0.6316 | 0.7933 |
| 0.3289 | 46.0 | 17250 | 0.6422 | 0.7933 |
| 0.3499 | 47.0 | 17625 | 0.6701 | 0.8 |
| 0.2803 | 48.0 | 18000 | 0.6802 | 0.7917 |
| 0.279 | 49.0 | 18375 | 0.6961 | 0.8017 |
| 0.2671 | 50.0 | 18750 | 0.7070 | 0.8 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
hkivancoral/smids_5x_deit_base_rms_001_fold5
|
# smids_5x_deit_base_rms_001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5070
- Accuracy: 0.8067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0986 | 1.0 | 375 | 1.0985 | 0.3517 |
| 0.9277 | 2.0 | 750 | 0.8963 | 0.5367 |
| 0.8868 | 3.0 | 1125 | 0.8248 | 0.5567 |
| 0.8432 | 4.0 | 1500 | 0.8168 | 0.5367 |
| 0.7756 | 5.0 | 1875 | 0.8234 | 0.555 |
| 0.7692 | 6.0 | 2250 | 0.7291 | 0.6617 |
| 0.7096 | 7.0 | 2625 | 0.7367 | 0.6533 |
| 0.8553 | 8.0 | 3000 | 0.7548 | 0.63 |
| 0.7339 | 9.0 | 3375 | 0.7547 | 0.6233 |
| 0.6755 | 10.0 | 3750 | 0.7150 | 0.665 |
| 0.7434 | 11.0 | 4125 | 0.7034 | 0.685 |
| 0.6966 | 12.0 | 4500 | 0.6998 | 0.6833 |
| 0.6638 | 13.0 | 4875 | 0.8188 | 0.615 |
| 0.7005 | 14.0 | 5250 | 0.6380 | 0.7233 |
| 0.7307 | 15.0 | 5625 | 0.6467 | 0.7017 |
| 0.6252 | 16.0 | 6000 | 0.6189 | 0.7317 |
| 0.6235 | 17.0 | 6375 | 0.5966 | 0.7267 |
| 0.6067 | 18.0 | 6750 | 0.5889 | 0.7367 |
| 0.6586 | 19.0 | 7125 | 0.5888 | 0.745 |
| 0.553 | 20.0 | 7500 | 0.5461 | 0.7583 |
| 0.5457 | 21.0 | 7875 | 0.5458 | 0.7717 |
| 0.535 | 22.0 | 8250 | 0.5661 | 0.745 |
| 0.5802 | 23.0 | 8625 | 0.5673 | 0.7633 |
| 0.585 | 24.0 | 9000 | 0.5456 | 0.7767 |
| 0.5034 | 25.0 | 9375 | 0.5600 | 0.7517 |
| 0.519 | 26.0 | 9750 | 0.5101 | 0.7767 |
| 0.578 | 27.0 | 10125 | 0.5562 | 0.7517 |
| 0.5681 | 28.0 | 10500 | 0.5592 | 0.7633 |
| 0.5613 | 29.0 | 10875 | 0.5207 | 0.7733 |
| 0.4923 | 30.0 | 11250 | 0.5540 | 0.7683 |
| 0.4514 | 31.0 | 11625 | 0.5170 | 0.795 |
| 0.4948 | 32.0 | 12000 | 0.5569 | 0.775 |
| 0.4729 | 33.0 | 12375 | 0.5006 | 0.7967 |
| 0.4583 | 34.0 | 12750 | 0.5008 | 0.7917 |
| 0.4376 | 35.0 | 13125 | 0.4986 | 0.815 |
| 0.3894 | 36.0 | 13500 | 0.5048 | 0.8033 |
| 0.4227 | 37.0 | 13875 | 0.5449 | 0.7883 |
| 0.4237 | 38.0 | 14250 | 0.4850 | 0.81 |
| 0.3609 | 39.0 | 14625 | 0.4881 | 0.8017 |
| 0.4451 | 40.0 | 15000 | 0.5131 | 0.8067 |
| 0.411 | 41.0 | 15375 | 0.5305 | 0.7983 |
| 0.4629 | 42.0 | 15750 | 0.4959 | 0.8 |
| 0.4034 | 43.0 | 16125 | 0.5125 | 0.8083 |
| 0.3681 | 44.0 | 16500 | 0.5034 | 0.8033 |
| 0.4332 | 45.0 | 16875 | 0.4946 | 0.8017 |
| 0.3808 | 46.0 | 17250 | 0.4987 | 0.8067 |
| 0.3828 | 47.0 | 17625 | 0.5113 | 0.8183 |
| 0.2902 | 48.0 | 18000 | 0.5081 | 0.8 |
| 0.3255 | 49.0 | 18375 | 0.5035 | 0.8083 |
| 0.3922 | 50.0 | 18750 | 0.5070 | 0.8067 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"abnormal_sperm",
"non-sperm",
"normal_sperm"
] |
ongkn/attraction-classifier-swin
|
# attraction-classifier-swin
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5367
- Accuracy: 0.7390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 69
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
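Unlike the other cards in this dump, this run uses a cosine scheduler. A minimal sketch of cosine decay with linear warmup, in the multiplier form used by schedulers such as `get_cosine_schedule_with_warmup`; the total step count here is an assumption extrapolated from the table (about 205 steps per epoch over 10 epochs), not a value stated in the card:

```python
import math

# Sketch of cosine decay with linear warmup (lr=5e-05, warmup_ratio=0.05).
# total_steps=2050 is an assumption inferred from the results table
# (~205 steps per epoch x 10 epochs); this is an illustration, not the
# library implementation.

def cosine_schedule_lr(step, base_lr=5e-05, total_steps=2050, warmup_ratio=0.05):
    warmup_steps = int(total_steps * warmup_ratio)  # 102 warmup steps
    if step < warmup_steps:
        return base_lr * step / warmup_steps        # linear warmup
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(cosine_schedule_lr(0))     # 0.0 (start of warmup)
print(cosine_schedule_lr(102))   # 5e-05 (peak, at end of warmup)
```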
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6207 | 0.49 | 100 | 0.5599 | 0.7115 |
| 0.6256 | 0.98 | 200 | 0.5238 | 0.7225 |
| 0.597 | 1.46 | 300 | 0.5003 | 0.7418 |
| 0.6121 | 1.95 | 400 | 0.5409 | 0.7610 |
| 0.5457 | 2.44 | 500 | 0.5123 | 0.7555 |
| 0.5258 | 2.93 | 600 | 0.4792 | 0.7637 |
| 0.504 | 3.41 | 700 | 0.5169 | 0.7390 |
| 0.541 | 3.9 | 800 | 0.4858 | 0.7582 |
| 0.5704 | 4.39 | 900 | 0.5367 | 0.7390 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"neg",
"pos"
] |
suncy13/Foot
|
# Foot
This model is a fine-tuned version of [facebook/dinov2-base-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-base-imagenet1k-1-layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0747
- Accuracy: 0.4865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
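The `total_train_batch_size` above is a derived value, not one set directly: it is the per-device batch size multiplied by the gradient accumulation steps. A minimal sketch of that bookkeeping, using the values from the list above:

```python
# Effective batch size under gradient accumulation (values from the
# hyperparameter list above): gradients from 2 forward/backward passes of
# 10 samples each are accumulated before one optimizer step.

per_device_batch_size = 10
gradient_accumulation_steps = 2
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 20
```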
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.2201 | 0.3378 |
| No log | 2.0 | 4 | 1.2243 | 0.2568 |
| No log | 3.0 | 6 | 1.2672 | 0.2703 |
| No log | 4.0 | 8 | 1.2501 | 0.2297 |
| 1.2006 | 5.0 | 10 | 1.1975 | 0.2973 |
| 1.2006 | 6.0 | 12 | 1.1270 | 0.3919 |
| 1.2006 | 7.0 | 14 | 1.0999 | 0.3243 |
| 1.2006 | 8.0 | 16 | 1.1497 | 0.3649 |
| 1.2006 | 9.0 | 18 | 1.1006 | 0.3108 |
| 1.1058 | 10.0 | 20 | 1.1271 | 0.3514 |
| 1.1058 | 11.0 | 22 | 1.1273 | 0.3784 |
| 1.1058 | 12.0 | 24 | 1.1639 | 0.2838 |
| 1.1058 | 13.0 | 26 | 1.1421 | 0.4054 |
| 1.1058 | 14.0 | 28 | 1.1190 | 0.3514 |
| 1.0489 | 15.0 | 30 | 1.1735 | 0.3243 |
| 1.0489 | 16.0 | 32 | 1.1422 | 0.3378 |
| 1.0489 | 17.0 | 34 | 1.1414 | 0.3649 |
| 1.0489 | 18.0 | 36 | 1.1033 | 0.4189 |
| 1.0489 | 19.0 | 38 | 1.0747 | 0.3919 |
| 0.9717 | 20.0 | 40 | 1.0952 | 0.3919 |
| 0.9717 | 21.0 | 42 | 1.1063 | 0.3784 |
| 0.9717 | 22.0 | 44 | 1.0822 | 0.3649 |
| 0.9717 | 23.0 | 46 | 1.0768 | 0.3784 |
| 0.9717 | 24.0 | 48 | 1.0753 | 0.4595 |
| 0.9816 | 25.0 | 50 | 1.0531 | 0.4054 |
| 0.9816 | 26.0 | 52 | 1.0624 | 0.4189 |
| 0.9816 | 27.0 | 54 | 1.0690 | 0.4459 |
| 0.9816 | 28.0 | 56 | 1.1392 | 0.3514 |
| 0.9816 | 29.0 | 58 | 1.0696 | 0.4054 |
| 0.9576 | 30.0 | 60 | 1.0747 | 0.4865 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"flat arch",
"high arch",
"normal arch"
] |
aaa12963337/msi-vit-pretrain_1218
|
# msi-vit-pretrain_1218
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7293
- Accuracy: 0.5866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1183 | 1.0 | 781 | 1.3771 | 0.6737 |
| 0.0548 | 2.0 | 1562 | 2.6272 | 0.5738 |
| 0.014 | 3.0 | 2343 | 2.7293 | 0.5866 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"adi",
"back",
"deb",
"lym",
"muc",
"mus",
"norm",
"str",
"tum"
] |