Columns: model_id (string, 7-105 chars) · model_card (string, 1-130k chars) · model_labels (list, 2-80k items)
hkivancoral/hushem_5x_beit_base_sgd_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_0001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4647 - Accuracy: 0.2791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5793 | 1.0 | 28 | 1.5789 | 0.2558 | | 1.5183 | 2.0 | 56 | 1.5712 | 0.2558 | | 1.5213 | 3.0 | 84 | 1.5641 | 0.2558 | | 1.4605 | 4.0 | 112 | 1.5574 | 0.2558 | | 1.4855 | 5.0 | 140 | 1.5511 | 0.2558 | | 1.4714 | 6.0 | 168 | 1.5448 | 0.2558 | | 1.5489 | 7.0 | 196 | 1.5392 | 0.2791 | | 1.4903 | 8.0 | 224 | 1.5342 | 0.2791 | | 1.4325 | 9.0 | 252 | 1.5290 | 0.2791 | | 1.4353 | 10.0 | 280 | 1.5246 | 0.2558 | | 1.4693 | 11.0 | 308 | 1.5207 | 0.2558 | | 1.4343 | 12.0 | 336 | 1.5162 | 0.2558 | | 1.4713 | 13.0 | 364 | 1.5122 | 0.2558 | | 1.4732 | 14.0 | 392 | 1.5085 | 0.2558 | | 1.517 | 15.0 | 420 | 1.5050 | 0.2558 | | 1.4521 | 16.0 | 448 | 1.5018 | 0.2558 | | 1.4309 | 17.0 | 476 | 1.4988 | 0.2558 | | 1.4246 | 18.0 | 504 | 1.4964 | 0.2558 | | 1.4231 | 19.0 | 532 | 1.4937 | 0.2558 | | 1.4691 | 20.0 | 560 | 1.4912 | 0.2558 | | 1.4305 | 21.0 | 588 | 1.4888 | 0.2558 | | 1.4575 | 22.0 | 616 | 1.4865 | 0.2558 | | 1.4268 | 23.0 | 644 | 1.4845 | 0.2558 | | 1.3904 | 24.0 | 672 | 1.4827 | 0.2558 | | 1.4432 | 25.0 | 700 | 1.4808 | 0.2558 | | 1.4078 | 26.0 | 728 | 1.4793 | 0.2558 | | 1.382 | 27.0 | 756 | 1.4777 | 0.2558 | | 1.3894 | 28.0 | 784 | 1.4764 | 0.2558 | | 1.4046 | 29.0 | 812 | 1.4751 | 0.2558 | | 1.4273 | 30.0 | 840 | 1.4741 | 0.2791 | | 1.3786 | 31.0 | 868 | 1.4730 | 0.2791 | | 1.3777 | 32.0 | 896 | 1.4719 | 0.2791 | | 1.3887 | 33.0 | 924 | 1.4708 | 0.2791 | | 1.3651 | 34.0 | 952 | 1.4700 | 0.2791 | | 1.4904 | 35.0 | 980 | 1.4692 | 0.2791 | | 1.3288 | 36.0 | 1008 | 1.4686 | 0.2791 | | 1.3653 | 37.0 | 1036 | 1.4680 | 0.2791 | | 1.3833 | 38.0 | 1064 | 1.4673 | 0.2791 | | 1.3973 | 39.0 | 1092 | 1.4668 | 0.2791 | | 1.4044 | 40.0 | 1120 | 1.4663 | 0.2791 | | 1.3896 | 41.0 | 1148 | 1.4659 | 0.2791 | | 1.3676 | 42.0 | 1176 | 1.4656 | 0.2791 | | 1.3444 | 43.0 | 1204 | 1.4654 | 0.2791 | | 1.3782 | 44.0 | 1232 | 1.4651 | 0.2791 | | 1.44 | 45.0 | 1260 | 1.4650 | 0.2791 | | 1.383 | 46.0 | 1288 | 1.4648 | 0.2791 | | 1.3752 | 47.0 | 1316 | 1.4648 | 0.2791 | | 1.343 | 48.0 | 1344 | 1.4647 | 0.2791 | | 1.3923 | 49.0 | 1372 | 1.4647 | 0.2791 | | 1.429 | 50.0 | 1400 | 1.4647 | 0.2791 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
paolinox/mobilenet-finetuned-food101
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilenet-finetuned-food101 This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.5518 - Accuracy: 0.821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.9575 | 0.153 | | 1.9536 | 2.0 | 12 | 1.8509 | 0.265 | | 1.9536 | 3.0 | 18 | 1.7003 | 0.451 | | 1.7915 | 4.0 | 24 | 1.5181 | 0.578 | | 1.4994 | 5.0 | 30 | 1.3609 | 0.631 | | 1.4994 | 6.0 | 36 | 1.2321 | 0.669 | | 1.2203 | 7.0 | 42 | 1.0696 | 0.69 | | 1.2203 | 8.0 | 48 | 0.9676 | 0.723 | | 1.0215 | 9.0 | 54 | 0.8888 | 0.729 | | 0.8462 | 10.0 | 60 | 0.8380 | 0.74 | | 0.8462 | 11.0 | 66 | 0.7461 | 0.778 | | 0.744 | 12.0 | 72 | 0.6724 | 0.792 | | 0.744 | 13.0 | 78 | 0.7314 | 0.769 | | 0.6496 | 14.0 | 84 | 0.6831 | 0.77 | | 0.6143 | 15.0 | 90 | 0.5937 | 0.81 | | 0.6143 | 16.0 | 96 | 0.6217 | 0.793 | | 0.5468 | 17.0 | 102 | 0.5965 | 0.788 | | 0.5468 | 18.0 | 108 | 0.5944 | 0.813 | | 0.5428 | 19.0 | 114 | 0.5869 | 0.812 | | 0.5193 | 20.0 | 120 | 0.5565 | 0.82 | | 0.5193 | 21.0 | 126 | 0.6155 | 0.803 | | 0.4902 | 22.0 | 132 | 0.5685 | 0.817 | | 0.4902 | 23.0 | 138 | 0.6097 | 0.789 | | 0.4869 | 24.0 | 144 | 0.6002 | 0.8 | | 0.4745 | 25.0 | 150 | 0.5569 | 0.814 | | 0.4745 | 26.0 | 156 | 0.5414 | 0.821 | | 0.4653 | 27.0 | 162 | 0.5806 | 0.807 | | 0.4653 | 28.0 | 168 | 0.5663 | 0.807 | | 0.4543 | 29.0 | 174 | 0.5412 | 0.825 | | 0.4575 | 30.0 | 180 | 0.5518 | 0.821 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "beignets", "bruschetta", "chicken_wings", "hamburger", "pork_chop", "prime_rib", "ramen" ]
paolinox/mobilevit-finetuned-food101
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilevit-finetuned-food101 This model is a fine-tuned version of [apple/mobilevitv2-1.0-imagenet1k-256](https://huggingface.co/apple/mobilevitv2-1.0-imagenet1k-256) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.4191 - Accuracy: 0.874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9487 | 0.98 | 23 | 1.9476 | 0.151 | | 1.9273 | 2.0 | 47 | 1.9070 | 0.24 | | 1.8561 | 2.98 | 70 | 1.8401 | 0.448 | | 1.7788 | 4.0 | 94 | 1.7301 | 0.612 | | 1.6586 | 4.98 | 117 | 1.5863 | 0.676 | | 1.4603 | 6.0 | 141 | 1.4199 | 0.72 | | 1.3027 | 6.98 | 164 | 1.2215 | 0.734 | | 1.1717 | 8.0 | 188 | 1.0581 | 0.759 | | 0.9601 | 8.98 | 211 | 0.9013 | 0.769 | | 0.8482 | 10.0 | 235 | 0.7866 | 0.791 | | 0.7276 | 10.98 | 258 | 0.7112 | 0.803 | | 0.6449 | 12.0 | 282 | 0.6132 | 0.835 | | 0.6279 | 12.98 | 305 | 0.6069 | 0.83 | | 0.5982 | 14.0 | 329 | 0.5637 | 0.832 | | 0.5766 | 14.98 | 352 | 0.5149 | 0.857 | | 0.5345 | 16.0 | 376 | 0.5392 | 0.837 | | 0.494 | 16.98 | 399 | 0.5017 | 0.848 | | 0.4953 | 18.0 | 423 | 0.5002 | 0.846 | | 0.5118 | 18.98 | 446 | 0.4782 | 0.856 | | 0.4708 | 20.0 | 470 | 0.4898 | 0.858 | | 0.4774 | 20.98 | 493 | 0.4769 | 0.851 | | 0.4848 | 22.0 | 517 | 0.4665 | 0.841 | | 0.4533 | 22.98 | 540 | 0.4890 | 0.837 | | 0.4449 | 24.0 | 564 | 0.4558 | 0.857 | | 0.4205 | 24.98 | 587 | 0.4767 | 0.857 | | 0.4417 | 26.0 | 611 | 0.4476 | 0.853 | | 0.4333 | 26.98 | 634 | 0.4853 | 0.834 | | 0.4545 | 28.0 | 658 | 0.4573 | 0.847 | | 0.4489 | 28.98 | 681 | 0.4659 | 0.845 | | 0.4172 | 29.36 | 690 | 0.4191 | 0.874 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "beignets", "bruschetta", "chicken_wings", "hamburger", "pork_chop", "prime_rib", "ramen" ]
Andron00e/ViTForImageClassification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViTForImageClassification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [CIFAR10](https://huggingface.co/datasets/Andron00e/CIFAR10-custom) dataset. It achieves the following results on the evaluation set: - Loss: 0.1199 - Accuracy: 0.9678 ## Model description [A detailed description of the model architecture can be found here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.py#L756) ## Training and evaluation data [CIFAR10](https://huggingface.co/datasets/Andron00e/CIFAR10-custom) ## Training procedure Straightforward tuning of all of the model's parameters. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2995 | 0.27 | 100 | 0.3419 | 0.9108 | | 0.2289 | 0.53 | 200 | 0.2482 | 0.9288 | | 0.1811 | 0.8 | 300 | 0.2139 | 0.9357 | | 0.0797 | 1.07 | 400 | 0.1813 | 0.946 | | 0.1128 | 1.33 | 500 | 0.1741 | 0.9452 | | 0.086 | 1.6 | 600 | 0.1659 | 0.9513 | | 0.0815 | 1.87 | 700 | 0.1468 | 0.9547 | | 0.048 | 2.13 | 800 | 0.1393 | 0.9592 | | 0.021 | 2.4 | 900 | 0.1399 | 0.9603 | | 0.0271 | 2.67 | 1000 | 0.1334 | 0.9642 | | 0.0231 | 2.93 | 1100 | 0.1228 | 0.9658 | | 0.0101 | 3.2 | 1200 | 0.1229 | 0.9673 | | 0.0041 | 3.47 | 1300 | 0.1189 | 0.9675 | | 0.0043 | 3.73 | 1400 | 0.1165 | 0.9683 | | 0.0067 | 4.0 | 1500 | 0.1145 | 0.9697 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.14.1
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
paolinox/segformer-finetuned-food101
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-finetuned-food101 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.3478 - Accuracy: 0.888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0272 | 0.98 | 23 | 1.8039 | 0.329 | | 1.5806 | 2.0 | 47 | 1.2465 | 0.608 | | 1.0564 | 2.98 | 70 | 0.7507 | 0.756 | | 0.7358 | 4.0 | 94 | 0.6263 | 0.784 | | 0.6482 | 4.98 | 117 | 0.5551 | 0.795 | | 0.5692 | 6.0 | 141 | 0.5849 | 0.794 | | 0.5552 | 6.98 | 164 | 0.4931 | 0.831 | | 0.4956 | 8.0 | 188 | 0.5166 | 0.83 | | 0.4748 | 8.98 | 211 | 0.4808 | 0.834 | | 0.424 | 10.0 | 235 | 0.4238 | 0.852 | | 0.4314 | 10.98 | 258 | 0.4858 | 0.838 | | 0.4071 | 12.0 | 282 | 0.4304 | 0.858 | | 0.3928 | 12.98 | 305 | 0.4621 | 0.851 | | 0.3695 | 14.0 | 329 | 0.4398 | 0.859 | | 0.3704 | 14.98 | 352 | 0.4172 | 0.855 | | 0.3299 | 16.0 | 376 | 0.4225 | 0.856 | | 0.3391 | 16.98 | 399 | 0.4165 | 0.855 | | 0.3023 | 18.0 | 423 | 0.3828 | 0.869 | | 0.3318 | 18.98 | 446 | 0.4190 | 0.861 | | 0.2994 | 20.0 | 470 | 0.4190 | 0.861 | | 0.323 | 20.98 | 493 | 0.4034 | 0.866 | | 0.2883 | 22.0 | 517 | 0.4083 | 0.874 | | 0.2959 | 22.98 | 540 | 0.4202 | 0.862 | | 0.2665 | 24.0 | 564 | 0.3740 | 0.881 | | 0.2765 | 24.98 | 587 | 0.4123 | 0.866 | | 0.2728 | 26.0 | 611 | 0.3763 | 0.868 | | 0.2817 | 26.98 | 634 | 0.3939 | 0.864 | | 0.2467 | 28.0 | 658 | 0.3938 | 0.87 | | 0.2772 | 28.98 | 681 | 0.4013 | 0.866 | | 0.2243 | 29.36 | 690 | 0.3478 | 0.888 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "beignets", "bruschetta", "chicken_wings", "hamburger", "pork_chop", "prime_rib", "ramen" ]
hkivancoral/hushem_5x_beit_base_sgd_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_0001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3804 - Accuracy: 0.4048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4632 | 1.0 | 28 | 1.5173 | 0.2143 | | 1.3741 | 2.0 | 56 | 1.5094 | 0.2381 | | 1.4021 | 3.0 | 84 | 1.5010 | 0.2381 | | 1.3681 | 4.0 | 112 | 1.4942 | 0.2381 | | 1.4122 | 5.0 | 140 | 1.4872 | 0.2381 | | 1.3657 | 6.0 | 168 | 1.4803 | 0.2619 | | 1.3993 | 7.0 | 196 | 1.4742 | 0.2619 | | 1.3652 | 8.0 | 224 | 1.4681 | 0.2619 | | 1.3615 | 9.0 | 252 | 1.4624 | 0.2619 | | 1.3492 | 10.0 | 280 | 1.4574 | 0.2619 | | 1.3205 | 11.0 | 308 | 1.4526 | 0.2619 | | 1.3552 | 12.0 | 336 | 1.4476 | 0.2619 | | 1.3393 | 13.0 | 364 | 1.4435 | 0.2619 | | 1.3397 | 14.0 | 392 | 1.4389 | 0.2619 | | 1.3561 | 15.0 | 420 | 1.4347 | 0.2619 | | 1.3361 | 16.0 | 448 | 1.4313 | 0.2619 | | 1.3287 | 17.0 | 476 | 1.4281 | 0.2857 | | 1.3138 | 18.0 | 504 | 1.4246 | 0.3095 | | 1.3241 | 19.0 | 532 | 1.4213 | 0.3095 | | 1.3033 | 20.0 | 560 | 1.4184 | 0.3095 | | 1.3163 | 21.0 | 588 | 1.4155 | 0.3095 | | 1.3116 | 22.0 | 616 | 1.4126 | 0.3095 | | 1.3228 | 23.0 | 644 | 1.4101 | 0.3095 | | 1.3214 | 24.0 | 672 | 1.4076 | 0.3333 | | 1.2818 | 25.0 | 700 | 1.4051 | 0.3333 | | 1.2948 | 26.0 | 728 | 1.4029 | 0.3333 | | 1.3231 | 27.0 | 756 | 1.4008 | 0.3333 | | 1.2969 | 28.0 | 784 | 1.3988 | 0.3333 | | 1.2659 | 29.0 | 812 | 1.3969 | 0.3333 | | 1.2426 | 30.0 | 840 | 1.3952 | 0.3571 | | 1.2934 | 31.0 | 868 | 1.3935 | 0.3810 | | 1.2777 | 32.0 | 896 | 1.3917 | 0.4048 | | 1.2767 | 33.0 | 924 | 1.3904 | 0.4048 | | 1.3162 | 34.0 | 952 | 1.3892 | 0.4048 | | 1.2726 | 35.0 | 980 | 1.3880 | 0.4048 | | 1.294 | 36.0 | 1008 | 1.3868 | 0.4048 | | 1.2554 | 37.0 | 1036 | 1.3858 | 0.4048 | | 1.2838 | 38.0 | 1064 | 1.3848 | 0.4048 | | 1.2842 | 39.0 | 1092 | 1.3839 | 0.4048 | | 1.2721 | 40.0 | 1120 | 1.3832 | 0.4048 | | 1.2562 | 41.0 | 1148 | 1.3826 | 0.4048 | | 1.2576 | 42.0 | 1176 | 1.3821 | 0.4048 | | 1.3 | 43.0 | 1204 | 1.3815 | 0.4048 | | 1.273 | 44.0 | 1232 | 1.3811 | 0.4048 | | 1.2913 | 45.0 | 1260 | 1.3808 | 0.4048 | | 1.2814 | 46.0 | 1288 | 1.3806 | 0.4048 | | 1.2272 | 47.0 | 1316 | 1.3805 | 0.4048 | | 1.2516 | 48.0 | 1344 | 1.3804 | 0.4048 | | 1.2555 | 49.0 | 1372 | 1.3804 | 0.4048 | | 1.3084 | 50.0 | 1400 | 1.3804 | 0.4048 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
edwinpalegre/ee8225-group4-vit-trashnet-enhanced
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ee8225-group4-vit-trashnet-enhanced This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the edwinpalegre/trashnet-enhanced dataset. It achieves the following results on the evaluation set: - Loss: 0.0793 - Accuracy: 0.9817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0603 | 0.4 | 100 | 0.1482 | 0.9623 | | 0.0363 | 0.8 | 200 | 0.1123 | 0.9698 | | 0.0306 | 1.2 | 300 | 0.1069 | 0.9721 | | 0.023 | 1.61 | 400 | 0.1188 | 0.9706 | | 0.0172 | 2.01 | 500 | 0.1019 | 0.9734 | | 0.0161 | 2.41 | 600 | 0.1112 | 0.9746 | | 0.0163 | 2.81 | 700 | 0.0874 | 0.9801 | | 0.0024 | 3.21 | 800 | 0.0793 | 0.9817 | | 0.0133 | 3.61 | 900 | 0.0831 | 0.9812 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "biodegradable", "cardboard", "glass", "metal", "paper", "plastic", "trash" ]
hkivancoral/hushem_5x_beit_base_sgd_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_0001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4856 - Accuracy: 0.3171 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5711 | 1.0 | 28 | 1.6258 | 0.2439 | | 1.5362 | 2.0 | 56 | 1.6161 | 0.2439 | | 1.5243 | 3.0 | 84 | 1.6077 | 0.2439 | | 1.5675 | 4.0 | 112 | 1.5988 | 0.2439 | | 1.5133 | 5.0 | 140 | 1.5920 | 0.2439 | | 1.5639 | 6.0 | 168 | 1.5854 | 0.2439 | | 1.555 | 7.0 | 196 | 1.5785 | 0.2439 | | 1.5064 | 8.0 | 224 | 1.5727 | 0.2439 | | 1.4878 | 9.0 | 252 | 1.5672 | 0.2439 | | 1.5121 | 10.0 | 280 | 1.5615 | 0.2439 | | 1.4492 | 11.0 | 308 | 1.5578 | 0.2439 | | 1.5023 | 12.0 | 336 | 1.5529 | 0.2439 | | 1.5035 | 13.0 | 364 | 1.5492 | 0.2439 | | 1.4801 | 14.0 | 392 | 1.5454 | 0.2439 | | 1.4838 | 15.0 | 420 | 1.5419 | 0.2683 | | 1.4587 | 16.0 | 448 | 1.5385 | 0.2683 | | 1.4655 | 17.0 | 476 | 1.5343 | 0.2683 | | 1.4244 | 18.0 | 504 | 1.5315 | 0.2927 | | 1.4339 | 19.0 | 532 | 1.5284 | 0.2927 | | 1.4266 | 20.0 | 560 | 1.5249 | 0.2927 | | 1.4474 | 21.0 | 588 | 1.5220 | 0.2927 | | 1.4652 | 22.0 | 616 | 1.5188 | 0.3171 | | 1.4621 | 23.0 | 644 | 1.5163 | 0.3171 | | 1.4655 | 24.0 | 672 | 1.5146 | 0.3171 | | 1.4192 | 25.0 | 700 | 1.5130 | 0.3171 | | 1.4459 | 26.0 | 728 | 1.5105 | 0.3171 | | 1.469 | 27.0 | 756 | 1.5090 | 0.3171 | | 1.3585 | 28.0 | 784 | 1.5067 | 0.3171 | | 1.4084 | 29.0 | 812 | 1.5049 | 0.3171 | | 1.4047 | 30.0 | 840 | 1.5031 | 0.3171 | | 1.4414 | 31.0 | 868 | 1.5013 | 0.3171 | | 1.3836 | 32.0 | 896 | 1.4995 | 0.3171 | | 1.3896 | 33.0 | 924 | 1.4979 | 0.3171 | | 1.4222 | 34.0 | 952 | 1.4964 | 0.3171 | | 1.4396 | 35.0 | 980 | 1.4952 | 0.3171 | | 1.3891 | 36.0 | 1008 | 1.4939 | 0.3171 | | 1.393 | 37.0 | 1036 | 1.4925 | 0.3171 | | 1.3697 | 38.0 | 1064 | 1.4914 | 0.3171 | | 1.4252 | 39.0 | 1092 | 1.4901 | 0.3171 | | 1.365 | 40.0 | 1120 | 1.4892 | 0.3171 | | 1.4164 | 41.0 | 1148 | 1.4883 | 0.3171 | | 1.3854 | 42.0 | 1176 | 1.4876 | 0.3171 | | 1.3744 | 43.0 | 1204 | 1.4870 | 0.3171 | | 1.4041 | 44.0 | 1232 | 1.4865 | 0.3171 | | 1.3952 | 45.0 | 1260 | 1.4861 | 0.3171 | | 1.3758 | 46.0 | 1288 | 1.4858 | 0.3171 | | 1.3986 | 47.0 | 1316 | 1.4857 | 0.3171 | | 1.3628 | 48.0 | 1344 | 1.4856 | 0.3171 | | 1.4108 | 49.0 | 1372 | 1.4856 | 0.3171 | | 1.4199 | 50.0 | 1400 | 1.4856 | 0.3171 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_sgd_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_00001_fold1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5922 - Accuracy: 0.2667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4867 | 1.0 | 27 | 1.6071 | 0.2667 | | 1.5392 | 2.0 | 54 | 1.6064 | 0.2667 | | 1.5844 | 3.0 | 81 | 1.6056 | 0.2667 | | 1.5797 | 4.0 | 108 | 1.6050 | 0.2667 | | 1.5108 | 5.0 | 135 | 1.6044 | 0.2667 | | 1.5236 | 6.0 | 162 | 1.6037 | 0.2667 | | 1.5199 | 7.0 | 189 | 1.6031 | 0.2667 | | 1.544 | 8.0 | 216 | 1.6026 | 0.2667 | | 1.5317 | 9.0 | 243 | 1.6020 | 0.2667 | | 1.537 | 10.0 | 270 | 1.6014 | 0.2667 | | 1.5415 | 11.0 | 297 | 1.6010 | 0.2667 | | 1.5478 | 12.0 | 324 | 1.6004 | 0.2667 | | 1.4666 | 13.0 | 351 | 1.6000 | 0.2667 | | 1.5352 | 14.0 | 378 | 1.5995 | 0.2667 | | 1.478 | 15.0 | 405 | 1.5990 | 0.2667 | | 1.5333 | 16.0 | 432 | 1.5986 | 0.2667 | | 1.5245 | 17.0 | 459 | 1.5982 | 0.2667 | | 1.5379 | 18.0 | 486 | 1.5978 | 0.2667 | | 1.52 | 19.0 | 513 | 1.5975 | 0.2667 | | 1.5508 | 20.0 | 540 | 1.5971 | 0.2667 | | 1.5421 | 21.0 | 567 | 1.5967 | 0.2667 | | 1.4919 | 22.0 | 594 | 1.5963 | 0.2667 | | 1.483 | 23.0 | 621 | 1.5960 | 0.2667 | | 1.5087 | 24.0 | 648 | 1.5957 | 0.2667 | | 1.5236 | 25.0 | 675 | 1.5954 | 0.2667 | | 1.5228 | 26.0 | 702 | 1.5951 | 0.2667 | | 1.5439 | 27.0 | 729 | 1.5949 | 0.2667 | | 1.5272 | 28.0 | 756 | 1.5946 | 0.2667 | | 1.5029 | 29.0 | 783 | 1.5943 | 0.2667 | | 1.5695 | 30.0 | 810 | 1.5941 | 0.2667 | | 1.5057 | 31.0 | 837 | 1.5939 | 0.2667 | | 1.5092 | 32.0 | 864 | 1.5937 | 0.2667 | | 1.575 | 33.0 | 891 | 1.5935 | 0.2667 | | 1.5175 | 34.0 | 918 | 1.5934 | 0.2667 | | 1.4801 | 35.0 | 945 | 1.5932 | 0.2667 | | 1.4771 | 36.0 | 972 | 1.5930 | 0.2667 | | 1.5042 | 37.0 | 999 | 1.5929 | 0.2667 | | 1.5372 | 38.0 | 1026 | 1.5928 | 0.2667 | | 1.5158 | 39.0 | 1053 | 1.5927 | 0.2667 | | 1.4902 | 40.0 | 1080 | 1.5926 | 0.2667 | | 1.4904 | 41.0 | 1107 | 1.5925 | 0.2667 | | 1.4817 | 42.0 | 1134 | 1.5924 | 0.2667 | | 1.5064 | 43.0 | 1161 | 1.5923 | 0.2667 | | 1.4625 | 44.0 | 1188 | 1.5923 | 0.2667 | | 1.5064 | 45.0 | 1215 | 1.5923 | 0.2667 | | 1.4956 | 46.0 | 1242 | 1.5922 | 0.2667 | | 1.502 | 47.0 | 1269 | 1.5922 | 0.2667 | | 1.495 | 48.0 | 1296 | 1.5922 | 0.2667 | | 1.4896 | 49.0 | 1323 | 1.5922 | 0.2667 | | 1.5118 | 50.0 | 1350 | 1.5922 | 0.2667 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_sgd_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_00001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5367 - Accuracy: 0.2667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5006 | 1.0 | 27 | 1.5552 | 0.2667 | | 1.5759 | 2.0 | 54 | 1.5543 | 0.2667 | | 1.5707 | 3.0 | 81 | 1.5535 | 0.2667 | | 1.578 | 4.0 | 108 | 1.5527 | 0.2667 | | 1.5119 | 5.0 | 135 | 1.5520 | 0.2667 | | 1.5352 | 6.0 | 162 | 1.5512 | 0.2667 | | 1.5348 | 7.0 | 189 | 1.5504 | 0.2667 | | 1.5693 | 8.0 | 216 | 1.5497 | 0.2667 | | 1.5386 | 9.0 | 243 | 1.5490 | 0.2667 | | 1.5189 | 10.0 | 270 | 1.5483 | 0.2667 | | 1.5597 | 11.0 | 297 | 1.5477 | 0.2667 | | 1.5706 | 12.0 | 324 | 1.5471 | 0.2667 | | 1.5157 | 13.0 | 351 | 1.5465 | 0.2667 | | 1.5457 | 14.0 | 378 | 1.5458 | 0.2667 | | 1.5087 | 15.0 | 405 | 1.5453 | 0.2667 | | 1.5323 | 16.0 | 432 | 1.5447 | 0.2667 | | 1.5363 | 17.0 | 459 | 1.5442 | 0.2667 | | 1.5615 | 18.0 | 486 | 1.5437 | 0.2667 | | 1.5236 | 19.0 | 513 | 1.5433 | 0.2667 | | 1.566 | 20.0 | 540 | 1.5428 | 0.2667 | | 1.5446 | 21.0 | 567 | 1.5424 | 0.2667 | | 1.5289 | 22.0 | 594 | 1.5419 | 0.2667 | | 1.4823 | 23.0 | 621 | 1.5415 | 0.2667 | | 1.5025 | 24.0 | 648 | 1.5411 | 0.2667 | | 1.5362 | 25.0 | 675 | 1.5407 | 0.2667 | | 1.5593 | 26.0 | 702 | 1.5404 | 0.2667 | | 1.5515 | 27.0 | 729 | 1.5401 | 0.2667 | | 1.5275 | 28.0 | 756 | 1.5397 | 0.2667 | | 1.5171 | 29.0 | 783 | 1.5394 | 0.2667 | | 1.5816 | 30.0 | 810 | 1.5391 | 0.2667 | | 1.5294 | 31.0 | 837 | 1.5389 | 0.2667 | | 1.5276 | 32.0 | 864 | 1.5386 | 0.2667 | | 1.5584 | 33.0 | 891 | 1.5384 | 0.2667 | | 1.5549 | 34.0 | 918 | 1.5382 | 0.2667 | | 1.4864 | 35.0 | 945 | 1.5380 | 0.2667 | | 1.4851 | 36.0 | 972 | 1.5378 | 0.2667 | | 1.4835 | 37.0 | 999 | 1.5376 | 0.2667 | | 1.5708 | 38.0 | 1026 | 1.5374 | 0.2667 | | 1.5448 | 39.0 | 1053 | 1.5373 | 0.2667 | | 1.4945 | 40.0 | 1080 | 1.5372 | 0.2667 | | 1.486 | 41.0 | 1107 | 1.5371 | 0.2667 | | 1.5082 | 42.0 | 1134 | 1.5370 | 0.2667 | | 1.5323 | 43.0 | 1161 | 1.5369 | 0.2667 | | 1.4965 | 44.0 | 1188 | 1.5368 | 0.2667 | | 1.5407 | 45.0 | 1215 | 1.5368 | 0.2667 | | 1.5084 | 46.0 | 1242 | 1.5368 | 0.2667 | | 1.5191 | 47.0 | 1269 | 1.5367 | 0.2667 | | 1.5617 | 48.0 | 1296 | 1.5367 | 0.2667 | | 1.4992 | 49.0 | 1323 | 1.5367 | 0.2667 | | 1.4782 | 50.0 | 1350 | 1.5367 | 0.2667 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_sgd_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_00001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5681 - Accuracy: 0.2558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5838 | 1.0 | 28 | 1.5858 | 0.2558 | | 1.5323 | 2.0 | 56 | 1.5850 | 0.2558 | | 1.5483 | 3.0 | 84 | 1.5842 | 0.2558 | | 1.4864 | 4.0 | 112 | 1.5834 | 0.2558 | | 1.5286 | 5.0 | 140 | 1.5827 | 0.2558 | | 1.5129 | 6.0 | 168 | 1.5819 | 0.2558 | | 1.6083 | 7.0 | 196 | 1.5812 | 0.2558 | | 1.5405 | 8.0 | 224 | 1.5806 | 0.2558 | | 1.5045 | 9.0 | 252 | 1.5799 | 0.2558 | | 1.4827 | 10.0 | 280 | 1.5793 | 0.2558 | | 1.5466 | 11.0 | 308 | 1.5787 | 0.2558 | | 1.502 | 12.0 | 336 | 1.5780 | 0.2558 | | 1.5701 | 13.0 | 364 | 1.5775 | 0.2558 | | 1.5522 | 14.0 | 392 | 1.5769 | 0.2558 | | 1.6273 | 15.0 | 420 | 1.5763 | 0.2558 | | 1.5496 | 16.0 | 448 | 1.5758 | 0.2558 | | 1.5263 | 17.0 | 476 | 1.5753 | 0.2558 | | 1.5326 | 18.0 | 504 | 1.5748 | 0.2558 | | 1.5229 | 19.0 | 532 | 1.5744 | 0.2558 | | 1.6308 | 20.0 | 560 | 1.5739 | 0.2558 | | 1.5402 | 21.0 | 588 | 1.5734 | 0.2558 | | 1.5767 | 22.0 | 616 | 1.5730 | 0.2558 | | 1.546 | 23.0 | 644 | 1.5726 | 0.2558 | | 1.4997 | 24.0 | 672 | 1.5722 | 0.2558 | | 1.5699 | 25.0 | 700 | 1.5719 | 0.2558 | | 1.5518 | 26.0 | 728 | 1.5715 | 0.2558 | | 1.5078 | 27.0 | 756 | 1.5712 | 0.2558 | | 1.509 | 28.0 | 784 | 1.5709 | 0.2558 | | 1.5496 | 29.0 | 812 | 1.5706 | 0.2558 | | 1.5569 | 30.0 | 840 | 1.5704 | 0.2558 | | 1.5113 | 31.0 | 868 | 1.5701 | 0.2558 | | 1.5157 | 32.0 | 896 | 1.5699 | 0.2558 | | 1.5362 | 33.0 | 924 | 1.5696 | 0.2558 | | 1.4946 | 34.0 | 952 | 1.5694 | 0.2558 | | 1.6128 | 35.0 | 980 | 1.5692 | 0.2558 | | 1.4515 | 36.0 | 1008 | 1.5691 | 0.2558 | | 1.4956 | 37.0 | 1036 | 1.5689 | 0.2558 | | 1.5189 | 38.0 | 1064 | 1.5688 | 0.2558 | | 1.571 | 39.0 | 1092 | 1.5687 | 0.2558 | | 1.549 | 40.0 | 1120 | 1.5685 | 0.2558 | | 1.524 | 41.0 | 1148 | 1.5684 | 0.2558 | | 1.5138 | 42.0 | 1176 | 1.5684 | 0.2558 | | 1.4952 | 43.0 | 1204 | 1.5683 | 0.2558 | | 1.5406 | 44.0 | 1232 | 1.5682 | 0.2558 | | 1.6126 | 45.0 | 1260 | 1.5682 | 0.2558 | | 1.5484 | 46.0 | 1288 | 1.5682 | 0.2558 | | 1.5268 | 47.0 | 1316 | 1.5681 | 0.2558 | | 1.4882 | 48.0 | 1344 | 1.5681 | 0.2558 | | 1.5345 | 49.0 | 1372 | 1.5681 | 0.2558 | | 1.5815 | 50.0 | 1400 | 1.5681 | 0.2558 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_sgd_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_00001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4856 - Accuracy: 0.3095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5648 | 1.0 | 28 | 1.5024 | 0.3095 | | 1.5958 | 2.0 | 56 | 1.5016 | 0.3095 | | 1.5478 | 3.0 | 84 | 1.5008 | 0.3095 | | 1.6175 | 4.0 | 112 | 1.5001 | 0.3095 | | 1.5019 | 5.0 | 140 | 1.4994 | 0.3095 | | 1.5612 | 6.0 | 168 | 1.4987 | 0.3095 | | 1.5556 | 7.0 | 196 | 1.4981 | 0.3095 | | 1.5275 | 8.0 | 224 | 1.4974 | 0.3095 | | 1.529 | 9.0 | 252 | 1.4968 | 0.3095 | | 1.5306 | 10.0 | 280 | 1.4962 | 0.3095 | | 1.5486 | 11.0 | 308 | 1.4956 | 0.3095 | | 1.5567 | 12.0 | 336 | 1.4950 | 0.3095 | | 1.5578 | 13.0 | 364 | 1.4945 | 0.3095 | | 1.5601 | 14.0 | 392 | 1.4939 | 0.3095 | | 1.5869 | 15.0 | 420 | 1.4934 | 0.3095 | | 1.5292 | 16.0 | 448 | 1.4929 | 0.3095 | | 1.584 | 17.0 | 476 | 1.4924 | 0.3095 | | 1.5709 | 18.0 | 504 | 1.4919 | 0.3095 | | 1.5246 | 19.0 | 532 | 1.4915 | 0.3095 | | 1.508 | 20.0 | 560 | 1.4911 | 0.3095 | | 1.5627 | 21.0 | 588 | 1.4907 | 0.3095 | | 1.543 | 22.0 | 616 | 1.4904 | 0.3095 | | 1.5306 | 23.0 | 644 | 1.4900 | 0.3095 | | 1.5347 | 24.0 | 672 | 1.4896 | 0.3095 | | 1.5296 | 25.0 | 700 | 1.4893 | 0.3095 | | 1.5722 | 26.0 | 728 | 1.4889 | 0.3095 | | 1.6103 | 27.0 | 756 | 1.4886 | 0.3095 | | 1.5352 | 28.0 | 784 | 1.4883 | 0.3095 | | 1.5133 | 29.0 | 812 | 1.4880 | 0.3095 | | 1.4677 | 30.0 | 840 | 1.4878 | 0.3095 | | 1.5424 | 31.0 | 868 | 1.4876 | 0.3095 | | 1.5132 | 32.0 | 896 | 1.4873 | 0.3095 | | 1.5611 | 33.0 | 924 | 1.4871 | 0.3095 | | 1.5494 | 34.0 | 952 | 1.4869 | 0.3095 | | 1.5087 | 35.0 | 980 | 1.4867 | 0.3095 | | 1.5719 | 36.0 | 1008 | 1.4865 | 0.3095 | | 1.5037 | 37.0 | 1036 | 1.4864 | 0.3095 | | 1.5457 | 38.0 | 1064 | 1.4863 | 0.3095 | | 1.5227 | 39.0 | 1092 | 1.4861 | 0.3095 | | 1.5024 | 40.0 | 1120 | 1.4860 | 0.3095 | | 1.5112 | 41.0 | 1148 | 1.4859 | 0.3095 | | 1.4872 | 42.0 | 1176 | 1.4858 | 0.3095 | | 1.5623 | 43.0 | 1204 | 1.4858 | 0.3095 | | 1.5147 | 44.0 | 1232 | 1.4857 | 0.3095 | | 1.5196 | 45.0 | 1260 | 1.4857 | 0.3095 | | 1.5574 | 46.0 | 1288 | 1.4856 | 0.3095 | | 1.5277 | 47.0 | 1316 | 1.4856 | 0.3095 | | 1.602 | 48.0 | 1344 | 1.4856 | 0.3095 | | 1.5259 | 49.0 | 1372 | 1.4856 | 0.3095 | | 1.5075 | 50.0 | 1400 | 1.4856 | 0.3095 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_sgd_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_sgd_00001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6137 - Accuracy: 0.2439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5748 | 1.0 | 28 | 1.6349 | 0.2439 | | 1.5498 | 2.0 | 56 | 1.6339 | 0.2439 | | 1.5458 | 3.0 | 84 | 1.6329 | 0.2439 | | 1.5997 | 4.0 | 112 | 1.6319 | 0.2439 | | 1.5518 | 5.0 | 140 | 1.6310 | 0.2439 | | 1.6078 | 6.0 | 168 | 1.6301 | 0.2439 | | 1.6054 | 7.0 | 196 | 1.6292 | 0.2439 | | 1.5635 | 8.0 | 224 | 1.6284 | 0.2439 | | 1.5412 | 9.0 | 252 | 1.6276 | 0.2439 | | 1.5684 | 10.0 | 280 | 1.6268 | 0.2439 | | 1.5211 | 11.0 | 308 | 1.6261 | 0.2439 | | 1.5857 | 12.0 | 336 | 1.6254 | 0.2439 | | 1.5804 | 13.0 | 364 | 1.6248 | 0.2439 | | 1.5778 | 14.0 | 392 | 1.6241 | 0.2439 | | 1.5905 | 15.0 | 420 | 1.6235 | 0.2439 | | 1.5552 | 16.0 | 448 | 1.6228 | 0.2439 | | 1.5712 | 17.0 | 476 | 1.6222 | 0.2439 | | 1.5113 | 18.0 | 504 | 1.6216 | 0.2439 | | 1.5441 | 19.0 | 532 | 1.6210 | 0.2439 | | 1.547 | 20.0 | 560 | 1.6205 | 0.2439 | | 1.5712 | 21.0 | 588 | 1.6200 | 0.2439 | | 1.595 | 22.0 | 616 | 1.6195 | 0.2439 | | 1.6001 | 23.0 | 644 | 1.6190 | 0.2439 | | 1.6008 | 24.0 | 672 | 1.6185 | 0.2439 | | 1.5469 | 25.0 | 700 | 1.6181 | 0.2439 | | 1.567 | 26.0 | 728 | 1.6177 | 0.2439 | | 1.618 | 27.0 | 756 | 1.6173 | 0.2439 | | 1.4849 | 28.0 | 784 | 1.6170 | 0.2439 | | 1.5706 | 29.0 | 812 | 1.6166 | 0.2439 | | 1.5269 | 30.0 | 840 | 1.6163 | 0.2439 | | 1.588 | 31.0 | 868 | 1.6160 | 0.2439 | | 1.5207 | 32.0 | 896 | 1.6157 | 0.2439 | | 1.5395 | 33.0 | 924 | 1.6155 | 0.2439 | | 1.5482 | 34.0 | 952 | 1.6152 | 0.2439 | | 1.6004 | 35.0 | 980 | 1.6150 | 0.2439 | | 1.5389 | 36.0 | 1008 | 1.6148 | 0.2439 | | 1.5566 | 37.0 | 1036 | 1.6146 | 0.2439 | | 1.54 | 38.0 | 1064 | 1.6145 | 0.2439 | | 1.5715 | 39.0 | 1092 | 1.6143 | 0.2439 | | 1.5148 | 40.0 | 1120 | 1.6142 | 0.2439 | | 1.5688 | 41.0 | 1148 | 1.6141 | 0.2439 | | 1.5803 | 42.0 | 1176 | 1.6140 | 0.2439 | | 1.5477 | 43.0 | 1204 | 1.6139 | 0.2439 | | 1.5623 | 44.0 | 1232 | 1.6138 | 0.2439 | | 1.5648 | 45.0 | 1260 | 1.6137 | 0.2439 | | 1.5331 | 46.0 | 1288 | 1.6137 | 0.2439 | | 1.5791 | 47.0 | 1316 | 1.6137 | 0.2439 | | 1.5282 | 48.0 | 1344 | 1.6137 | 0.2439 | | 1.5715 | 49.0 | 1372 | 1.6137 | 0.2439 | | 1.5955 | 50.0 | 1400 | 1.6137 | 0.2439 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_rms_001_fold1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.2430 - Accuracy: 0.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5782 | 1.0 | 27 | 1.4061 | 0.2444 | | 1.4004 | 2.0 | 54 | 1.4559 | 0.2444 | | 1.3873 | 3.0 | 81 | 1.4120 | 0.2444 | | 1.3666 | 4.0 | 108 | 1.6275 | 0.2444 | | 1.3597 | 5.0 | 135 | 1.4398 | 0.2444 | | 1.2814 | 6.0 | 162 | 1.5328 | 0.2444 | | 1.2056 | 7.0 | 189 | 1.5389 | 0.2 | | 1.1635 | 8.0 | 216 | 1.5332 | 0.2444 | | 1.1235 | 9.0 | 243 | 1.6681 | 0.2444 | | 1.1484 | 10.0 | 270 | 1.6176 | 0.2667 | | 1.1757 | 11.0 | 297 | 1.6312 | 0.2444 | | 1.1297 | 12.0 | 324 | 1.5067 | 0.2444 | | 1.1448 | 13.0 | 351 | 1.5657 | 0.2444 | | 1.1725 | 14.0 | 378 | 1.5184 | 0.1556 | | 1.1591 | 15.0 | 405 | 1.5790 | 0.2444 | | 1.1549 | 16.0 | 432 | 1.5501 | 0.2444 | | 1.0865 | 17.0 | 459 | 1.5776 | 0.2444 | | 1.1351 | 18.0 | 486 | 1.6195 | 0.3111 | | 1.0974 | 19.0 | 513 | 1.5360 | 0.2444 | | 1.0992 | 20.0 | 540 | 1.5742 | 0.3111 | | 1.0894 | 21.0 | 567 | 1.4918 | 0.3778 | | 1.0557 | 22.0 | 594 | 1.5742 | 0.2444 | | 1.0574 | 23.0 | 621 | 1.5043 | 0.4222 | | 1.0148 | 24.0 | 648 | 1.3535 | 0.4222 | | 1.1133 | 25.0 | 675 | 1.4897 | 0.4 | | 1.02 | 26.0 | 702 | 1.4554 | 0.4222 | | 1.0107 | 27.0 | 729 | 1.4238 | 0.4 | | 0.9307 | 28.0 | 756 | 1.7644 | 0.3556 | | 0.8335 | 29.0 | 783 | 2.0253 | 0.3556 | | 0.8203 | 30.0 | 810 | 1.7990 | 0.3556 | | 0.7263 | 31.0 | 837 | 1.6909 | 0.3778 | | 0.8387 | 32.0 | 864 | 1.4758 | 0.4 | | 0.6837 | 33.0 | 891 | 2.1584 | 0.3556 | | 0.7155 | 34.0 | 918 | 1.7102 | 0.3778 | | 0.6349 | 35.0 | 945 | 1.1875 | 0.4667 | | 0.6331 | 36.0 | 972 | 1.9965 | 0.4222 | | 0.5871 | 37.0 | 999 | 1.7881 | 0.4 | | 0.595 | 38.0 | 1026 | 1.7629 | 0.4 | | 0.5266 | 39.0 | 1053 | 1.6720 | 0.4222 | | 0.4985 | 40.0 | 1080 | 2.3229 | 0.4222 | | 0.4855 | 41.0 | 1107 | 1.6470 | 0.4444 | | 0.503 | 42.0 | 1134 | 1.7515 | 0.4667 | | 0.4432 | 43.0 | 1161 | 2.0538 | 0.4222 | | 0.3668 | 44.0 | 1188 | 2.1471 | 0.4444 | | 0.3654 | 45.0 | 1215 | 2.0004 | 0.4444 | | 0.3317 | 46.0 | 1242 | 2.1973 | 0.4444 | | 0.2413 | 47.0 | 1269 | 2.2882 | 0.4444 | | 0.2395 | 48.0 | 1296 | 2.2389 | 0.4444 | | 0.2502 | 49.0 | 1323 | 2.2430 | 0.4444 | | 0.237 | 50.0 | 1350 | 2.2430 | 0.4444 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_rms_001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.7933 - Accuracy: 0.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4827 | 1.0 | 27 | 1.4775 | 0.2444 | | 1.4158 | 2.0 | 54 | 1.4002 | 0.2667 | | 1.3654 | 3.0 | 81 | 1.4674 | 0.2444 | | 1.4175 | 4.0 | 108 | 1.4412 | 0.2444 | | 1.394 | 5.0 | 135 | 1.3951 | 0.2667 | | 1.2686 | 6.0 | 162 | 1.3983 | 0.2444 | | 1.2556 | 7.0 | 189 | 1.4175 | 0.2889 | | 1.2245 | 8.0 | 216 | 1.4754 | 0.2222 | | 1.1427 | 9.0 | 243 | 1.5387 | 0.2 | | 1.1659 | 10.0 | 270 | 1.3896 | 0.3333 | | 1.2047 | 11.0 | 297 | 1.6922 | 0.2444 | | 1.1384 | 12.0 | 324 | 1.4940 | 0.2667 | | 1.1563 | 13.0 | 351 | 1.3730 | 0.2889 | | 1.1141 | 14.0 | 378 | 1.4944 | 0.2222 | | 1.0922 | 15.0 | 405 | 1.4049 | 0.2222 | | 1.0475 | 16.0 | 432 | 1.2541 | 0.4 | | 0.9208 | 17.0 | 459 | 1.2993 | 0.4222 | | 0.9847 | 18.0 | 486 | 1.4111 | 0.4889 | | 0.9327 | 19.0 | 513 | 1.3175 | 0.2889 | | 0.8591 | 20.0 | 540 | 1.2892 | 0.3111 | | 0.7605 | 21.0 | 567 | 1.6440 | 0.2667 | | 0.7953 | 22.0 | 594 | 1.6915 | 0.3778 | | 0.7644 | 23.0 | 621 | 1.6017 | 0.4667 | | 0.7884 | 24.0 | 648 | 1.4064 | 0.2444 | | 0.6883 | 25.0 | 675 | 1.9722 | 0.3111 | | 0.7747 | 26.0 | 702 | 1.9209 | 0.4889 | | 0.7012 | 27.0 | 729 | 2.2074 | 0.5333 | | 0.6951 | 28.0 | 756 | 2.4602 | 0.3556 | | 0.6581 | 29.0 | 783 | 2.1544 | 0.4222 | | 0.6529 | 30.0 | 810 | 2.0677 | 0.3556 | | 0.533 | 31.0 | 837 | 2.1507 | 0.3778 | | 0.6648 | 32.0 | 864 | 2.1628 | 0.4222 | | 0.6094 | 33.0 | 891 | 2.5365 | 0.3778 | | 0.5601 | 34.0 | 918 | 2.8323 | 0.4222 | | 0.519 | 35.0 | 945 | 2.4166 | 0.4 | | 0.5988 | 36.0 | 972 | 2.6302 | 0.4444 | | 0.5359 | 37.0 | 999 | 2.9183 | 0.3778 | | 0.5451 | 38.0 | 1026 | 2.8746 | 0.5111 | | 0.5087 | 39.0 | 1053 | 2.7419 | 0.4667 | | 0.4563 | 40.0 | 1080 | 3.1565 | 0.4222 | | 0.5182 | 41.0 | 1107 | 3.1768 | 0.4444 | | 0.4348 | 42.0 | 1134 | 3.2761 | 0.4222 | | 0.4504 | 43.0 | 1161 | 3.4108 | 0.4667 | | 0.417 | 44.0 | 1188 | 3.5781 | 0.4444 | | 0.4297 | 45.0 | 1215 | 3.6284 | 0.4444 | | 0.3399 | 46.0 | 1242 | 3.7187 | 0.4444 | | 0.3846 | 47.0 | 1269 | 3.7298 | 0.4667 | | 0.3494 | 48.0 | 1296 | 3.7854 | 0.4444 | | 0.3468 | 49.0 | 1323 | 3.7933 | 0.4444 | | 0.3313 | 50.0 | 1350 | 3.7933 | 0.4444 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
sh-zheng/vit-base-patch16-224-in21k-fintuned-SurfaceRoughness
## Vision Transformer (Fine-Tuned Model) Refer to https://huggingface.co/google/vit-base-patch16-224 for model details and usage instructions. ## Model Description Predicts a surface roughness category from snips taken from the Google Maps aerial view. There are three categories: surface roughness B, surface roughness C, and surface roughness D, as defined in ASCE 7-16, Section 26.7.2.
[ "roughnessb", "roughnessc", "roughnessd" ]
hkivancoral/hushem_5x_beit_base_rms_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_rms_001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4768 - Accuracy: 0.6279 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.51 | 1.0 | 28 | 1.6351 | 0.2558 | | 1.3869 | 2.0 | 56 | 1.4127 | 0.2558 | | 1.3848 | 3.0 | 84 | 1.3895 | 0.2558 | | 1.4113 | 4.0 | 112 | 1.3824 | 0.2558 | | 1.3569 | 5.0 | 140 | 1.4121 | 0.2326 | | 1.4625 | 6.0 | 168 | 1.3739 | 0.2326 | | 1.3804 | 7.0 | 196 | 1.2185 | 0.5349 | | 1.1352 | 8.0 | 224 | 1.1411 | 0.4884 | | 1.0899 | 9.0 | 252 | 1.2426 | 0.3953 | | 1.0945 | 10.0 | 280 | 1.1820 | 0.3488 | | 1.1149 | 11.0 | 308 | 1.4574 | 0.3023 | | 0.9942 | 12.0 | 336 | 1.4728 | 0.3256 | | 1.0204 | 13.0 | 364 | 0.9801 | 0.5581 | | 0.9987 | 14.0 | 392 | 1.0096 | 0.5349 | | 1.0664 | 15.0 | 420 | 1.0007 | 0.5814 | | 0.9463 | 16.0 | 448 | 1.2188 | 0.3953 | | 0.9756 | 17.0 | 476 | 1.1284 | 0.5116 | | 0.9698 | 18.0 | 504 | 1.4394 | 0.4419 | | 1.061 | 19.0 | 532 | 1.1162 | 0.4884 | | 0.8426 | 20.0 | 560 | 1.9296 | 0.3721 | | 0.876 | 21.0 | 588 | 1.0070 | 0.5581 | | 0.8908 | 22.0 | 616 | 1.2196 | 0.5349 | | 0.8599 | 23.0 | 644 | 0.9502 | 0.6047 | | 0.8338 | 24.0 | 672 | 0.8737 | 0.6279 | | 0.785 | 25.0 | 700 | 1.1006 | 0.5814 | | 0.82 | 26.0 | 728 | 1.0398 | 0.5814 | | 0.8016 | 27.0 | 756 | 1.6671 | 0.3256 | | 0.8574 | 28.0 | 784 | 1.1704 | 0.6279 | | 0.8104 | 29.0 | 812 | 1.0502 | 0.6279 | | 0.7421 | 30.0 | 840 | 0.9270 | 0.5814 | | 0.7093 | 31.0 | 868 | 1.8057 | 0.4186 | | 0.7469 | 32.0 | 896 | 0.9665 | 0.5814 | | 0.7175 | 33.0 | 924 | 0.8190 | 0.6512 | | 0.7129 | 34.0 | 952 | 1.0680 | 0.6279 | | 0.7793 | 35.0 | 980 | 1.0966 | 0.5581 | | 0.6879 | 36.0 | 1008 | 0.9990 | 0.5814 | | 0.7016 | 37.0 | 1036 | 1.7556 | 0.4884 | | 0.6238 | 38.0 | 1064 | 1.5792 | 0.4651 | | 0.6025 | 39.0 | 1092 | 1.1502 | 0.6047 | | 0.7264 | 40.0 | 1120 | 1.3317 | 0.5349 | | 0.6063 | 41.0 | 1148 | 1.5492 | 0.5116 | | 0.5816 | 42.0 | 1176 | 1.5787 | 0.5814 | | 0.4627 | 43.0 | 1204 | 1.1301 | 0.6047 | | 0.4652 | 44.0 | 1232 | 1.5008 | 0.6279 | | 0.3885 | 45.0 | 1260 | 1.3167 | 0.6279 | | 0.4003 | 46.0 | 1288 | 1.3851 | 0.6512 | | 0.3882 | 47.0 | 1316 | 1.4601 | 0.6047 | | 0.353 | 48.0 | 1344 | 1.4699 | 0.6279 | | 0.3487 | 49.0 | 1372 | 1.4768 | 0.6279 | | 0.2789 | 50.0 | 1400 | 1.4768 | 0.6279 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_rms_001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5372 - Accuracy: 0.7619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4749 | 1.0 | 28 | 1.3999 | 0.2381 | | 1.39 | 2.0 | 56 | 1.4010 | 0.2619 | | 1.4057 | 3.0 | 84 | 1.3886 | 0.2381 | | 1.3953 | 4.0 | 112 | 1.3773 | 0.2381 | | 1.3855 | 5.0 | 140 | 1.3607 | 0.2619 | | 1.3721 | 6.0 | 168 | 1.1238 | 0.5 | | 1.2199 | 7.0 | 196 | 1.2305 | 0.4762 | | 1.1505 | 8.0 | 224 | 0.9832 | 0.4762 | | 1.1076 | 9.0 | 252 | 0.9145 | 0.5476 | | 1.04 | 10.0 | 280 | 0.9689 | 0.5476 | | 0.9947 | 11.0 | 308 | 0.8866 | 0.6429 | | 1.0266 | 12.0 | 336 | 0.8639 | 0.6905 | | 0.9955 | 13.0 | 364 | 0.8959 | 0.6190 | | 0.9564 | 14.0 | 392 | 0.8608 | 0.6667 | | 0.9123 | 15.0 | 420 | 0.7711 | 0.6905 | | 0.9391 | 16.0 | 448 | 0.7070 | 0.7619 | | 0.9117 | 17.0 | 476 | 0.7366 | 0.7619 | | 0.902 | 18.0 | 504 | 0.7650 | 0.7143 | | 0.8479 | 19.0 | 532 | 0.7181 | 0.7381 | | 0.8138 | 20.0 | 560 | 0.8337 | 0.6667 | | 0.7593 | 21.0 | 588 | 0.8325 | 0.6905 | | 0.8558 | 22.0 | 616 | 0.7211 | 0.8095 | | 0.8609 | 23.0 | 644 | 0.7758 | 0.7619 | | 0.7997 | 24.0 | 672 | 0.8535 | 0.7143 | | 0.6915 | 25.0 | 700 | 0.8962 | 0.7381 | | 0.7445 | 26.0 | 728 | 0.7116 | 0.7619 | | 0.6818 | 27.0 | 756 | 0.9464 | 0.5714 | | 0.6812 | 28.0 | 784 | 0.6802 | 0.7143 | | 0.662 | 29.0 | 812 | 1.0464 | 0.5476 | | 0.6161 | 30.0 | 840 | 0.7154 | 0.7857 | | 0.5942 | 31.0 | 868 | 0.6122 | 0.7619 | | 0.571 | 32.0 | 896 | 0.6263 | 0.7857 | | 0.5357 | 33.0 | 924 | 0.8564 | 0.8095 | | 0.4815 | 34.0 | 952 | 0.9986 | 0.7381 | | 0.5261 | 35.0 | 980 | 0.9173 | 0.8095 | | 0.3508 | 36.0 | 1008 | 1.0846 | 0.7619 | | 0.3469 | 37.0 | 1036 | 0.9412 | 0.8333 | | 0.3024 | 38.0 | 1064 | 0.9602 | 0.8333 | | 0.2908 | 39.0 | 1092 | 1.1234 | 0.8333 | | 0.2222 | 40.0 | 1120 | 1.1275 | 0.8095 | | 0.2149 | 41.0 | 1148 | 1.4618 | 0.7381 | | 0.2207 | 42.0 | 1176 | 1.3470 | 0.7857 | | 0.094 | 43.0 | 1204 | 1.5389 | 0.7619 | | 0.1227 | 44.0 | 1232 | 1.3819 | 0.7857 | | 0.0713 | 45.0 | 1260 | 1.5287 | 0.7619 | | 0.0383 | 46.0 | 1288 | 1.5676 | 0.8095 | | 0.0259 | 47.0 | 1316 | 1.4966 | 0.7857 | | 0.023 | 48.0 | 1344 | 1.5355 | 0.7619 | | 0.0304 | 49.0 | 1372 | 1.5372 | 0.7619 | | 0.0233 | 50.0 | 1400 | 1.5372 | 0.7619 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
aldogeova/isa-vit_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # isa-vit_model This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0370 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0947 | 3.85 | 500 | 0.0370 | 0.9850 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
hkivancoral/hushem_5x_beit_base_rms_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_5x_beit_base_rms_001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.4962 - Accuracy: 0.3415 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.501 | 1.0 | 28 | 1.3919 | 0.2439 | | 1.3993 | 2.0 | 56 | 1.4008 | 0.2683 | | 1.4258 | 3.0 | 84 | 1.4098 | 0.2439 | | 1.4011 | 4.0 | 112 | 1.3674 | 0.2683 | | 1.4153 | 5.0 | 140 | 1.3306 | 0.2683 | | 1.3649 | 6.0 | 168 | 1.4784 | 0.2195 | | 1.3525 | 7.0 | 196 | 1.2906 | 0.4390 | | 1.3374 | 8.0 | 224 | 1.1798 | 0.5122 | | 1.2661 | 9.0 | 252 | 1.3479 | 0.5122 | | 1.3011 | 10.0 | 280 | 1.3054 | 0.4878 | | 1.2212 | 11.0 | 308 | 1.1612 | 0.5122 | | 1.2579 | 12.0 | 336 | 1.2572 | 0.2683 | | 1.2438 | 13.0 | 364 | 1.1160 | 0.4634 | | 1.2218 | 14.0 | 392 | 1.1291 | 0.4878 | | 1.2455 | 15.0 | 420 | 1.4587 | 0.4390 | | 1.2528 | 16.0 | 448 | 1.3009 | 0.5122 | | 1.2445 | 17.0 | 476 | 1.1915 | 0.5122 | | 1.1729 | 18.0 | 504 | 1.3461 | 0.4390 | | 1.2917 | 19.0 | 532 | 1.3956 | 0.3659 | | 1.2335 | 20.0 | 560 | 1.1161 | 0.4146 | | 1.1787 | 21.0 | 588 | 1.4220 | 0.4390 | | 1.1076 | 22.0 | 616 | 1.2157 | 0.5122 | | 1.1837 | 23.0 | 644 | 1.2878 | 0.4634 | | 1.065 | 24.0 | 672 | 1.3373 | 0.3659 | | 1.0753 | 25.0 | 700 | 1.2968 | 0.4634 | | 1.0288 | 26.0 | 728 | 1.2996 | 0.4146 | | 1.0679 | 27.0 | 756 | 1.2975 | 0.3902 | | 1.0591 | 28.0 | 784 | 1.3051 | 0.4634 | | 1.0148 | 29.0 | 812 | 1.2575 | 0.5854 | | 1.0668 | 30.0 | 840 | 1.3174 | 0.3415 | | 0.9767 | 31.0 | 868 | 1.3259 | 0.4390 | | 0.9254 | 32.0 | 896 | 1.3236 | 0.4878 | | 0.9064 | 33.0 | 924 | 1.5265 | 0.3902 | | 0.9504 | 34.0 | 952 | 1.2456 | 0.4390 | | 0.8534 | 35.0 | 980 | 1.2811 | 0.5122 | | 0.8361 | 36.0 | 1008 | 1.2101 | 0.6098 | | 0.7846 | 37.0 | 1036 | 1.3727 | 0.4390 | | 0.7661 | 38.0 | 1064 | 1.4030 | 0.4878 | | 0.8237 | 39.0 | 1092 | 1.3385 | 0.4634 | | 0.7652 | 40.0 | 1120 | 1.6174 | 0.4146 | | 0.6764 | 41.0 | 1148 | 1.6358 | 0.4390 | | 0.5675 | 42.0 | 1176 | 1.7675 | 0.4390 | | 0.5777 | 43.0 | 1204 | 1.8573 | 0.4390 | | 0.5704 | 44.0 | 1232 | 2.0252 | 0.3902 | | 0.5677 | 45.0 | 1260 | 2.0725 | 0.3902 | | 0.4676 | 46.0 | 1288 | 2.4159 | 0.3171 | | 0.4167 | 47.0 | 1316 | 2.4083 | 0.3415 | | 0.416 | 48.0 | 1344 | 2.4826 | 0.3415 | | 0.3715 | 49.0 | 1372 | 2.4962 | 0.3415 | | 0.368 | 50.0 | 1400 | 2.4962 | 0.3415 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
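The hyperparameter bullets in these cards map almost one-to-one onto `transformers.TrainingArguments`. A minimal sketch for the run above, assuming a single device and per-epoch evaluation (the card logs one validation row per epoch); `output_dir` is an arbitrary choice, not taken from the card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="hushem_5x_beit_base_rms_001_fold5",  # assumed name
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: one validation row per epoch
)
```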
hkivancoral/hushem_5x_beit_base_rms_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_0001_fold1

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6811
- Accuracy: 0.3556

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4287 | 1.0 | 27 | 1.3926 | 0.2222 |
| 1.3907 | 2.0 | 54 | 1.3975 | 0.2667 |
| 1.358 | 3.0 | 81 | 1.5093 | 0.2444 |
| 1.3416 | 4.0 | 108 | 1.5118 | 0.2444 |
| 1.2056 | 5.0 | 135 | 1.4928 | 0.2444 |
| 1.1299 | 6.0 | 162 | 1.6562 | 0.2222 |
| 1.1641 | 7.0 | 189 | 1.5947 | 0.2444 |
| 1.1473 | 8.0 | 216 | 1.5964 | 0.2444 |
| 1.1298 | 9.0 | 243 | 1.7663 | 0.2444 |
| 1.1045 | 10.0 | 270 | 1.6309 | 0.3778 |
| 0.8985 | 11.0 | 297 | 1.6908 | 0.4 |
| 0.7744 | 12.0 | 324 | 1.3949 | 0.3556 |
| 0.7617 | 13.0 | 351 | 1.4646 | 0.3778 |
| 0.6843 | 14.0 | 378 | 1.5910 | 0.3778 |
| 0.6647 | 15.0 | 405 | 1.8050 | 0.4 |
| 0.6363 | 16.0 | 432 | 1.7016 | 0.3333 |
| 0.6362 | 17.0 | 459 | 1.8539 | 0.3778 |
| 0.6858 | 18.0 | 486 | 1.8678 | 0.3556 |
| 0.7039 | 19.0 | 513 | 1.5776 | 0.3556 |
| 0.6292 | 20.0 | 540 | 1.8552 | 0.3111 |
| 0.4567 | 21.0 | 567 | 1.7854 | 0.3556 |
| 0.5954 | 22.0 | 594 | 2.4822 | 0.3556 |
| 0.5737 | 23.0 | 621 | 2.0564 | 0.4 |
| 0.4941 | 24.0 | 648 | 1.9451 | 0.3111 |
| 0.523 | 25.0 | 675 | 2.0359 | 0.3778 |
| 0.5221 | 26.0 | 702 | 2.1184 | 0.4 |
| 0.4589 | 27.0 | 729 | 2.0471 | 0.3556 |
| 0.4473 | 28.0 | 756 | 2.5353 | 0.3556 |
| 0.4328 | 29.0 | 783 | 2.7479 | 0.3556 |
| 0.4259 | 30.0 | 810 | 2.2239 | 0.3778 |
| 0.3698 | 31.0 | 837 | 2.5363 | 0.3556 |
| 0.3577 | 32.0 | 864 | 2.5264 | 0.3556 |
| 0.3882 | 33.0 | 891 | 2.2649 | 0.3333 |
| 0.3526 | 34.0 | 918 | 2.6438 | 0.3556 |
| 0.2747 | 35.0 | 945 | 2.3584 | 0.3778 |
| 0.2842 | 36.0 | 972 | 2.8515 | 0.3556 |
| 0.2603 | 37.0 | 999 | 2.3416 | 0.3778 |
| 0.2268 | 38.0 | 1026 | 2.7485 | 0.3778 |
| 0.2 | 39.0 | 1053 | 3.3636 | 0.3333 |
| 0.2049 | 40.0 | 1080 | 3.1692 | 0.3333 |
| 0.1369 | 41.0 | 1107 | 3.3885 | 0.3556 |
| 0.1813 | 42.0 | 1134 | 3.3020 | 0.3333 |
| 0.1518 | 43.0 | 1161 | 2.8618 | 0.4 |
| 0.0986 | 44.0 | 1188 | 3.2902 | 0.3778 |
| 0.131 | 45.0 | 1215 | 3.3898 | 0.3333 |
| 0.0809 | 46.0 | 1242 | 3.5629 | 0.3333 |
| 0.048 | 47.0 | 1269 | 3.7516 | 0.3333 |
| 0.038 | 48.0 | 1296 | 3.6814 | 0.3556 |
| 0.0465 | 49.0 | 1323 | 3.6811 | 0.3556 |
| 0.0644 | 50.0 | 1350 | 3.6811 | 0.3556 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_0001_fold2

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9592
- Accuracy: 0.5111

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3984 | 1.0 | 27 | 1.3816 | 0.2889 |
| 1.3107 | 2.0 | 54 | 1.6609 | 0.2889 |
| 1.229 | 3.0 | 81 | 1.5449 | 0.2889 |
| 1.3814 | 4.0 | 108 | 1.6341 | 0.2889 |
| 1.2031 | 5.0 | 135 | 1.4184 | 0.2667 |
| 1.1619 | 6.0 | 162 | 1.4603 | 0.2889 |
| 1.1757 | 7.0 | 189 | 1.4200 | 0.2889 |
| 1.1575 | 8.0 | 216 | 1.3581 | 0.2889 |
| 1.0419 | 9.0 | 243 | 1.5164 | 0.4 |
| 1.0334 | 10.0 | 270 | 1.3939 | 0.4889 |
| 0.799 | 11.0 | 297 | 1.4216 | 0.5333 |
| 0.7589 | 12.0 | 324 | 1.5018 | 0.5111 |
| 0.7466 | 13.0 | 351 | 1.2714 | 0.3778 |
| 0.7077 | 14.0 | 378 | 1.2899 | 0.4 |
| 0.7022 | 15.0 | 405 | 1.4427 | 0.3333 |
| 0.6019 | 16.0 | 432 | 1.5793 | 0.4 |
| 0.6413 | 17.0 | 459 | 1.5251 | 0.3111 |
| 0.6003 | 18.0 | 486 | 2.0148 | 0.4889 |
| 0.5924 | 19.0 | 513 | 2.2670 | 0.4889 |
| 0.5357 | 20.0 | 540 | 2.0323 | 0.3556 |
| 0.5196 | 21.0 | 567 | 2.5285 | 0.4889 |
| 0.5137 | 22.0 | 594 | 3.7709 | 0.4222 |
| 0.4488 | 23.0 | 621 | 3.1001 | 0.5111 |
| 0.4667 | 24.0 | 648 | 2.9452 | 0.4 |
| 0.3277 | 25.0 | 675 | 2.8861 | 0.4667 |
| 0.3619 | 26.0 | 702 | 3.3939 | 0.5111 |
| 0.3379 | 27.0 | 729 | 3.5247 | 0.5333 |
| 0.2572 | 28.0 | 756 | 4.2104 | 0.5111 |
| 0.2257 | 29.0 | 783 | 3.4821 | 0.4889 |
| 0.2189 | 30.0 | 810 | 3.8860 | 0.4667 |
| 0.1431 | 31.0 | 837 | 5.2772 | 0.4667 |
| 0.2402 | 32.0 | 864 | 6.2470 | 0.4222 |
| 0.122 | 33.0 | 891 | 5.2693 | 0.4 |
| 0.2017 | 34.0 | 918 | 6.0732 | 0.5111 |
| 0.0844 | 35.0 | 945 | 6.0091 | 0.5556 |
| 0.1316 | 36.0 | 972 | 6.1584 | 0.4889 |
| 0.0377 | 37.0 | 999 | 7.3245 | 0.4889 |
| 0.1128 | 38.0 | 1026 | 6.6950 | 0.4444 |
| 0.0551 | 39.0 | 1053 | 7.0821 | 0.5111 |
| 0.0382 | 40.0 | 1080 | 7.5961 | 0.4889 |
| 0.0547 | 41.0 | 1107 | 6.2914 | 0.5111 |
| 0.0128 | 42.0 | 1134 | 6.4101 | 0.4889 |
| 0.0359 | 43.0 | 1161 | 6.6377 | 0.5111 |
| 0.004 | 44.0 | 1188 | 6.6707 | 0.4889 |
| 0.0224 | 45.0 | 1215 | 7.0078 | 0.4889 |
| 0.0292 | 46.0 | 1242 | 6.9800 | 0.4889 |
| 0.0156 | 47.0 | 1269 | 6.9010 | 0.4889 |
| 0.0096 | 48.0 | 1296 | 6.9583 | 0.5111 |
| 0.0108 | 49.0 | 1323 | 6.9592 | 0.5111 |
| 0.0394 | 50.0 | 1350 | 6.9592 | 0.5111 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_0001_fold3

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0271
- Accuracy: 0.6744

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4187 | 1.0 | 28 | 1.4291 | 0.2558 |
| 1.401 | 2.0 | 56 | 1.4569 | 0.2558 |
| 1.367 | 3.0 | 84 | 1.2989 | 0.2791 |
| 1.3068 | 4.0 | 112 | 1.1706 | 0.5116 |
| 1.282 | 5.0 | 140 | 1.1869 | 0.5581 |
| 1.1177 | 6.0 | 168 | 0.8916 | 0.7442 |
| 0.8904 | 7.0 | 196 | 0.7798 | 0.7209 |
| 0.9449 | 8.0 | 224 | 0.6587 | 0.7674 |
| 0.8708 | 9.0 | 252 | 1.0524 | 0.5814 |
| 0.9352 | 10.0 | 280 | 0.7664 | 0.6744 |
| 0.8718 | 11.0 | 308 | 0.6191 | 0.7907 |
| 0.7977 | 12.0 | 336 | 1.1991 | 0.6512 |
| 0.8081 | 13.0 | 364 | 0.7062 | 0.7674 |
| 0.7399 | 14.0 | 392 | 0.7130 | 0.6744 |
| 0.8202 | 15.0 | 420 | 0.7484 | 0.6977 |
| 0.7069 | 16.0 | 448 | 0.6665 | 0.6977 |
| 0.6169 | 17.0 | 476 | 0.7828 | 0.6279 |
| 0.6766 | 18.0 | 504 | 0.9849 | 0.5814 |
| 0.6876 | 19.0 | 532 | 0.7015 | 0.7442 |
| 0.5123 | 20.0 | 560 | 0.9230 | 0.7442 |
| 0.4885 | 21.0 | 588 | 0.9671 | 0.6279 |
| 0.5212 | 22.0 | 616 | 1.2712 | 0.6744 |
| 0.5047 | 23.0 | 644 | 0.7902 | 0.6512 |
| 0.4047 | 24.0 | 672 | 1.3996 | 0.7209 |
| 0.361 | 25.0 | 700 | 1.1508 | 0.6279 |
| 0.362 | 26.0 | 728 | 1.0709 | 0.6279 |
| 0.3752 | 27.0 | 756 | 0.9894 | 0.6512 |
| 0.2958 | 28.0 | 784 | 1.2219 | 0.6279 |
| 0.3016 | 29.0 | 812 | 0.8154 | 0.6977 |
| 0.2083 | 30.0 | 840 | 1.2432 | 0.6047 |
| 0.2249 | 31.0 | 868 | 1.5401 | 0.6047 |
| 0.1443 | 32.0 | 896 | 1.3193 | 0.6279 |
| 0.1501 | 33.0 | 924 | 1.1707 | 0.6977 |
| 0.1715 | 34.0 | 952 | 1.1677 | 0.7442 |
| 0.2795 | 35.0 | 980 | 1.2992 | 0.6744 |
| 0.1174 | 36.0 | 1008 | 1.6643 | 0.6744 |
| 0.1132 | 37.0 | 1036 | 1.7522 | 0.6279 |
| 0.0738 | 38.0 | 1064 | 1.6182 | 0.6744 |
| 0.0433 | 39.0 | 1092 | 2.1223 | 0.6512 |
| 0.0483 | 40.0 | 1120 | 2.5522 | 0.5814 |
| 0.0333 | 41.0 | 1148 | 1.8374 | 0.6977 |
| 0.0107 | 42.0 | 1176 | 1.9629 | 0.6744 |
| 0.013 | 43.0 | 1204 | 1.6900 | 0.7209 |
| 0.0316 | 44.0 | 1232 | 2.1881 | 0.6512 |
| 0.0272 | 45.0 | 1260 | 1.8428 | 0.6744 |
| 0.0298 | 46.0 | 1288 | 1.7049 | 0.7674 |
| 0.0196 | 47.0 | 1316 | 1.9117 | 0.6744 |
| 0.0084 | 48.0 | 1344 | 2.0336 | 0.6744 |
| 0.0059 | 49.0 | 1372 | 2.0271 | 0.6744 |
| 0.0065 | 50.0 | 1400 | 2.0271 | 0.6744 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_0001_fold4

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8159
- Accuracy: 0.7857

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4247 | 1.0 | 28 | 1.3411 | 0.2381 |
| 1.3595 | 2.0 | 56 | 1.2501 | 0.4286 |
| 1.3116 | 3.0 | 84 | 1.5240 | 0.2381 |
| 1.303 | 4.0 | 112 | 1.0491 | 0.5238 |
| 1.1942 | 5.0 | 140 | 0.8861 | 0.7143 |
| 1.1712 | 6.0 | 168 | 0.9106 | 0.5238 |
| 0.977 | 7.0 | 196 | 1.1447 | 0.6905 |
| 0.9351 | 8.0 | 224 | 0.7191 | 0.7619 |
| 0.8453 | 9.0 | 252 | 1.3331 | 0.5714 |
| 0.8831 | 10.0 | 280 | 0.8305 | 0.6905 |
| 0.8349 | 11.0 | 308 | 0.6872 | 0.7619 |
| 0.845 | 12.0 | 336 | 0.7545 | 0.7619 |
| 0.784 | 13.0 | 364 | 0.7961 | 0.7857 |
| 0.7404 | 14.0 | 392 | 0.6338 | 0.8095 |
| 0.6277 | 15.0 | 420 | 0.7200 | 0.7143 |
| 0.6386 | 16.0 | 448 | 0.7383 | 0.8095 |
| 0.6167 | 17.0 | 476 | 0.5440 | 0.8095 |
| 0.5129 | 18.0 | 504 | 0.7061 | 0.7619 |
| 0.3836 | 19.0 | 532 | 0.7181 | 0.7381 |
| 0.3202 | 20.0 | 560 | 0.4277 | 0.8095 |
| 0.1958 | 21.0 | 588 | 1.1637 | 0.7381 |
| 0.2343 | 22.0 | 616 | 1.0581 | 0.8095 |
| 0.2016 | 23.0 | 644 | 0.8968 | 0.7857 |
| 0.116 | 24.0 | 672 | 1.0426 | 0.7857 |
| 0.1027 | 25.0 | 700 | 0.6841 | 0.8333 |
| 0.1133 | 26.0 | 728 | 0.8260 | 0.8095 |
| 0.1258 | 27.0 | 756 | 1.3215 | 0.7619 |
| 0.0595 | 28.0 | 784 | 1.0509 | 0.8810 |
| 0.0945 | 29.0 | 812 | 1.3868 | 0.7857 |
| 0.0022 | 30.0 | 840 | 1.7553 | 0.8095 |
| 0.0004 | 31.0 | 868 | 1.9423 | 0.7857 |
| 0.0466 | 32.0 | 896 | 2.0945 | 0.8095 |
| 0.0367 | 33.0 | 924 | 1.6928 | 0.8095 |
| 0.1032 | 34.0 | 952 | 1.3572 | 0.8571 |
| 0.0331 | 35.0 | 980 | 2.0437 | 0.8095 |
| 0.0001 | 36.0 | 1008 | 2.0414 | 0.8333 |
| 0.0286 | 37.0 | 1036 | 2.0546 | 0.7619 |
| 0.009 | 38.0 | 1064 | 2.8381 | 0.7857 |
| 0.0573 | 39.0 | 1092 | 2.4470 | 0.7857 |
| 0.0497 | 40.0 | 1120 | 1.8192 | 0.7857 |
| 0.0003 | 41.0 | 1148 | 2.1421 | 0.7143 |
| 0.0003 | 42.0 | 1176 | 2.2125 | 0.7381 |
| 0.0001 | 43.0 | 1204 | 2.1555 | 0.7619 |
| 0.0002 | 44.0 | 1232 | 1.8154 | 0.7381 |
| 0.0197 | 45.0 | 1260 | 1.7188 | 0.7381 |
| 0.0002 | 46.0 | 1288 | 1.6637 | 0.8095 |
| 0.0152 | 47.0 | 1316 | 1.6954 | 0.8095 |
| 0.0001 | 48.0 | 1344 | 1.8153 | 0.7857 |
| 0.0002 | 49.0 | 1372 | 1.8159 | 0.7857 |
| 0.0 | 50.0 | 1400 | 1.8159 | 0.7857 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_0001_fold5

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4047
- Accuracy: 0.7073

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4155 | 1.0 | 28 | 1.3777 | 0.2683 |
| 1.3848 | 2.0 | 56 | 1.2989 | 0.2927 |
| 1.3314 | 3.0 | 84 | 1.2733 | 0.4878 |
| 1.2486 | 4.0 | 112 | 1.0811 | 0.5122 |
| 1.2007 | 5.0 | 140 | 0.9236 | 0.5854 |
| 1.05 | 6.0 | 168 | 1.1380 | 0.5122 |
| 1.0162 | 7.0 | 196 | 0.9574 | 0.5854 |
| 0.9476 | 8.0 | 224 | 1.4400 | 0.4878 |
| 0.903 | 9.0 | 252 | 0.9012 | 0.6341 |
| 0.9351 | 10.0 | 280 | 1.0183 | 0.6829 |
| 0.8113 | 11.0 | 308 | 0.9612 | 0.6585 |
| 0.8131 | 12.0 | 336 | 1.6631 | 0.4878 |
| 0.7921 | 13.0 | 364 | 0.9316 | 0.6829 |
| 0.8114 | 14.0 | 392 | 1.3372 | 0.5854 |
| 0.7382 | 15.0 | 420 | 1.4796 | 0.6341 |
| 0.7119 | 16.0 | 448 | 1.9753 | 0.5366 |
| 0.6933 | 17.0 | 476 | 1.3458 | 0.7073 |
| 0.591 | 18.0 | 504 | 1.3968 | 0.6585 |
| 0.6986 | 19.0 | 532 | 1.4904 | 0.6829 |
| 0.6832 | 20.0 | 560 | 1.7362 | 0.6585 |
| 0.5173 | 21.0 | 588 | 1.5475 | 0.7317 |
| 0.5116 | 22.0 | 616 | 1.9547 | 0.6585 |
| 0.4833 | 23.0 | 644 | 2.1246 | 0.6341 |
| 0.4295 | 24.0 | 672 | 1.9058 | 0.7317 |
| 0.4431 | 25.0 | 700 | 2.4495 | 0.6585 |
| 0.3801 | 26.0 | 728 | 1.6867 | 0.7561 |
| 0.4263 | 27.0 | 756 | 2.1056 | 0.6585 |
| 0.3209 | 28.0 | 784 | 2.6127 | 0.6098 |
| 0.29 | 29.0 | 812 | 2.2833 | 0.6341 |
| 0.2306 | 30.0 | 840 | 2.6477 | 0.6341 |
| 0.2318 | 31.0 | 868 | 2.2205 | 0.6829 |
| 0.1766 | 32.0 | 896 | 2.1057 | 0.8293 |
| 0.1861 | 33.0 | 924 | 2.9102 | 0.6341 |
| 0.2172 | 34.0 | 952 | 2.3319 | 0.7317 |
| 0.1336 | 35.0 | 980 | 2.7931 | 0.7073 |
| 0.128 | 36.0 | 1008 | 3.2544 | 0.6098 |
| 0.1009 | 37.0 | 1036 | 2.3057 | 0.7805 |
| 0.1495 | 38.0 | 1064 | 2.9047 | 0.7317 |
| 0.0845 | 39.0 | 1092 | 3.1290 | 0.7317 |
| 0.064 | 40.0 | 1120 | 2.9682 | 0.7561 |
| 0.0399 | 41.0 | 1148 | 2.9364 | 0.7561 |
| 0.0198 | 42.0 | 1176 | 4.0340 | 0.6585 |
| 0.0179 | 43.0 | 1204 | 3.2313 | 0.7317 |
| 0.0799 | 44.0 | 1232 | 3.4340 | 0.7317 |
| 0.0495 | 45.0 | 1260 | 3.8737 | 0.6829 |
| 0.041 | 46.0 | 1288 | 3.5139 | 0.6829 |
| 0.0058 | 47.0 | 1316 | 3.4146 | 0.7073 |
| 0.0141 | 48.0 | 1344 | 3.4016 | 0.7073 |
| 0.0316 | 49.0 | 1372 | 3.4047 | 0.7073 |
| 0.0269 | 50.0 | 1400 | 3.4047 | 0.7073 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_00001_fold1

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5839
- Accuracy: 0.8222

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5474 | 1.0 | 27 | 0.5910 | 0.8444 |
| 0.0818 | 2.0 | 54 | 0.5720 | 0.7778 |
| 0.0238 | 3.0 | 81 | 0.8576 | 0.7111 |
| 0.0074 | 4.0 | 108 | 0.5321 | 0.8667 |
| 0.0032 | 5.0 | 135 | 0.4605 | 0.8667 |
| 0.0017 | 6.0 | 162 | 0.6849 | 0.7778 |
| 0.0024 | 7.0 | 189 | 0.4973 | 0.8667 |
| 0.0008 | 8.0 | 216 | 0.4640 | 0.8667 |
| 0.0044 | 9.0 | 243 | 0.6817 | 0.8222 |
| 0.0005 | 10.0 | 270 | 0.5671 | 0.8222 |
| 0.0004 | 11.0 | 297 | 0.5195 | 0.8444 |
| 0.0002 | 12.0 | 324 | 0.7506 | 0.8222 |
| 0.0007 | 13.0 | 351 | 0.4960 | 0.8667 |
| 0.0004 | 14.0 | 378 | 0.4879 | 0.8667 |
| 0.0002 | 15.0 | 405 | 0.2878 | 0.8889 |
| 0.0004 | 16.0 | 432 | 0.5723 | 0.7778 |
| 0.0038 | 17.0 | 459 | 0.8796 | 0.8 |
| 0.0011 | 18.0 | 486 | 0.4544 | 0.8444 |
| 0.001 | 19.0 | 513 | 0.2346 | 0.8889 |
| 0.0001 | 20.0 | 540 | 0.6421 | 0.8444 |
| 0.0001 | 21.0 | 567 | 0.5172 | 0.8667 |
| 0.0012 | 22.0 | 594 | 0.4729 | 0.8222 |
| 0.0001 | 23.0 | 621 | 0.4318 | 0.8222 |
| 0.0001 | 24.0 | 648 | 0.4087 | 0.8222 |
| 0.0004 | 25.0 | 675 | 0.4267 | 0.8889 |
| 0.0001 | 26.0 | 702 | 0.4250 | 0.8667 |
| 0.0001 | 27.0 | 729 | 0.3081 | 0.8889 |
| 0.0001 | 28.0 | 756 | 0.4008 | 0.8222 |
| 0.0 | 29.0 | 783 | 0.3766 | 0.8444 |
| 0.0001 | 30.0 | 810 | 0.3622 | 0.9111 |
| 0.0 | 31.0 | 837 | 0.4006 | 0.8222 |
| 0.0001 | 32.0 | 864 | 0.4743 | 0.8444 |
| 0.0001 | 33.0 | 891 | 0.3292 | 0.8889 |
| 0.0001 | 34.0 | 918 | 1.1554 | 0.7556 |
| 0.0002 | 35.0 | 945 | 0.6888 | 0.8 |
| 0.0003 | 36.0 | 972 | 0.4504 | 0.8667 |
| 0.0001 | 37.0 | 999 | 0.4287 | 0.8667 |
| 0.0 | 38.0 | 1026 | 0.4528 | 0.8667 |
| 0.0001 | 39.0 | 1053 | 0.4353 | 0.8667 |
| 0.0 | 40.0 | 1080 | 0.4656 | 0.8444 |
| 0.0044 | 41.0 | 1107 | 0.4571 | 0.8222 |
| 0.0 | 42.0 | 1134 | 0.4813 | 0.8222 |
| 0.0004 | 43.0 | 1161 | 0.5618 | 0.8444 |
| 0.0 | 44.0 | 1188 | 0.5635 | 0.8444 |
| 0.0 | 45.0 | 1215 | 0.5635 | 0.8444 |
| 0.0061 | 46.0 | 1242 | 0.5733 | 0.8444 |
| 0.0 | 47.0 | 1269 | 0.5697 | 0.8444 |
| 0.0001 | 48.0 | 1296 | 0.5838 | 0.8222 |
| 0.0001 | 49.0 | 1323 | 0.5839 | 0.8222 |
| 0.0 | 50.0 | 1350 | 0.5839 | 0.8222 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
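All of the hushem runs fine-tune the same four-class head on top of `microsoft/beit-base-patch16-224`. A sketch of how such a head can be initialised from the labels listed above; the cards do not show the exact setup used, so this is an assumption about the standard approach:

```python
from transformers import BeitForImageClassification

labels = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]

model = BeitForImageClassification.from_pretrained(
    "microsoft/beit-base-patch16-224",
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
    ignore_mismatched_sizes=True,  # the base checkpoint ships a 1000-class head
)
```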
hkivancoral/hushem_5x_beit_base_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_00001_fold2

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8184
- Accuracy: 0.8667

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6865 | 1.0 | 27 | 0.7969 | 0.7556 |
| 0.1615 | 2.0 | 54 | 0.9353 | 0.7778 |
| 0.041 | 3.0 | 81 | 1.0745 | 0.6444 |
| 0.0119 | 4.0 | 108 | 1.0481 | 0.7333 |
| 0.0095 | 5.0 | 135 | 0.6063 | 0.8667 |
| 0.0013 | 6.0 | 162 | 0.6520 | 0.8444 |
| 0.0015 | 7.0 | 189 | 0.7604 | 0.8667 |
| 0.0013 | 8.0 | 216 | 0.7595 | 0.8444 |
| 0.0008 | 9.0 | 243 | 0.8299 | 0.8444 |
| 0.0008 | 10.0 | 270 | 0.6509 | 0.8444 |
| 0.0009 | 11.0 | 297 | 0.7989 | 0.8444 |
| 0.0002 | 12.0 | 324 | 0.8458 | 0.8444 |
| 0.0005 | 13.0 | 351 | 0.6321 | 0.8667 |
| 0.0002 | 14.0 | 378 | 0.6972 | 0.8444 |
| 0.0002 | 15.0 | 405 | 0.7426 | 0.8667 |
| 0.0005 | 16.0 | 432 | 0.9776 | 0.8 |
| 0.0023 | 17.0 | 459 | 1.0180 | 0.8 |
| 0.0003 | 18.0 | 486 | 1.1105 | 0.7778 |
| 0.0006 | 19.0 | 513 | 0.9919 | 0.7556 |
| 0.0002 | 20.0 | 540 | 1.0177 | 0.8 |
| 0.0012 | 21.0 | 567 | 0.9992 | 0.8444 |
| 0.0003 | 22.0 | 594 | 0.9760 | 0.8444 |
| 0.0047 | 23.0 | 621 | 0.9891 | 0.8 |
| 0.0061 | 24.0 | 648 | 0.9730 | 0.8222 |
| 0.0002 | 25.0 | 675 | 0.8247 | 0.8222 |
| 0.0001 | 26.0 | 702 | 0.8270 | 0.8667 |
| 0.0001 | 27.0 | 729 | 0.7978 | 0.8222 |
| 0.0 | 28.0 | 756 | 0.8136 | 0.8444 |
| 0.0001 | 29.0 | 783 | 0.8553 | 0.8444 |
| 0.0001 | 30.0 | 810 | 0.9423 | 0.8444 |
| 0.0001 | 31.0 | 837 | 0.9286 | 0.8222 |
| 0.0001 | 32.0 | 864 | 0.9464 | 0.8222 |
| 0.0002 | 33.0 | 891 | 0.8713 | 0.8444 |
| 0.0001 | 34.0 | 918 | 0.8762 | 0.8444 |
| 0.0001 | 35.0 | 945 | 0.9092 | 0.8667 |
| 0.0 | 36.0 | 972 | 0.9547 | 0.8444 |
| 0.0 | 37.0 | 999 | 0.9283 | 0.8444 |
| 0.0 | 38.0 | 1026 | 0.8639 | 0.8444 |
| 0.0001 | 39.0 | 1053 | 0.8477 | 0.8667 |
| 0.0 | 40.0 | 1080 | 0.8432 | 0.8667 |
| 0.0 | 41.0 | 1107 | 0.8325 | 0.8667 |
| 0.0 | 42.0 | 1134 | 0.7851 | 0.8667 |
| 0.0003 | 43.0 | 1161 | 0.7875 | 0.8667 |
| 0.0 | 44.0 | 1188 | 0.7888 | 0.8667 |
| 0.0001 | 45.0 | 1215 | 0.8006 | 0.8889 |
| 0.0001 | 46.0 | 1242 | 0.8075 | 0.8889 |
| 0.0001 | 47.0 | 1269 | 0.8158 | 0.8889 |
| 0.0 | 48.0 | 1296 | 0.8184 | 0.8667 |
| 0.0002 | 49.0 | 1323 | 0.8184 | 0.8667 |
| 0.0001 | 50.0 | 1350 | 0.8184 | 0.8667 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_00001_fold3

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5241
- Accuracy: 0.9070

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7661 | 1.0 | 28 | 0.5413 | 0.7907 |
| 0.1086 | 2.0 | 56 | 0.1845 | 0.9535 |
| 0.0112 | 3.0 | 84 | 0.3881 | 0.9070 |
| 0.0076 | 4.0 | 112 | 0.3276 | 0.9070 |
| 0.0043 | 5.0 | 140 | 0.4248 | 0.9070 |
| 0.0016 | 6.0 | 168 | 0.2522 | 0.9302 |
| 0.0099 | 7.0 | 196 | 0.2768 | 0.9070 |
| 0.0014 | 8.0 | 224 | 0.2639 | 0.9302 |
| 0.0013 | 9.0 | 252 | 0.3818 | 0.9070 |
| 0.0009 | 10.0 | 280 | 0.1248 | 0.9535 |
| 0.0006 | 11.0 | 308 | 0.2509 | 0.9070 |
| 0.0003 | 12.0 | 336 | 0.2923 | 0.9070 |
| 0.001 | 13.0 | 364 | 0.5107 | 0.8837 |
| 0.0019 | 14.0 | 392 | 0.3339 | 0.9535 |
| 0.0002 | 15.0 | 420 | 0.3891 | 0.9070 |
| 0.0003 | 16.0 | 448 | 0.4248 | 0.9070 |
| 0.0005 | 17.0 | 476 | 0.2832 | 0.9535 |
| 0.0003 | 18.0 | 504 | 0.3491 | 0.9070 |
| 0.0002 | 19.0 | 532 | 0.4104 | 0.9070 |
| 0.0001 | 20.0 | 560 | 0.4255 | 0.9070 |
| 0.0009 | 21.0 | 588 | 0.4651 | 0.9070 |
| 0.0015 | 22.0 | 616 | 0.4792 | 0.9070 |
| 0.0001 | 23.0 | 644 | 0.4509 | 0.9070 |
| 0.0006 | 24.0 | 672 | 0.5680 | 0.9302 |
| 0.0001 | 25.0 | 700 | 0.3224 | 0.9070 |
| 0.0001 | 26.0 | 728 | 0.3096 | 0.9302 |
| 0.0001 | 27.0 | 756 | 0.6066 | 0.9070 |
| 0.0001 | 28.0 | 784 | 0.3940 | 0.9070 |
| 0.0001 | 29.0 | 812 | 0.3550 | 0.9070 |
| 0.0 | 30.0 | 840 | 0.4157 | 0.9070 |
| 0.0001 | 31.0 | 868 | 0.4340 | 0.9070 |
| 0.0166 | 32.0 | 896 | 0.6996 | 0.9070 |
| 0.0001 | 33.0 | 924 | 0.5595 | 0.9070 |
| 0.0 | 34.0 | 952 | 0.3606 | 0.9070 |
| 0.0001 | 35.0 | 980 | 0.4821 | 0.9070 |
| 0.0013 | 36.0 | 1008 | 0.4503 | 0.9070 |
| 0.0001 | 37.0 | 1036 | 0.4301 | 0.9070 |
| 0.0001 | 38.0 | 1064 | 0.4884 | 0.9070 |
| 0.0 | 39.0 | 1092 | 0.4958 | 0.9070 |
| 0.0009 | 40.0 | 1120 | 0.5821 | 0.9070 |
| 0.0001 | 41.0 | 1148 | 0.4696 | 0.9070 |
| 0.0 | 42.0 | 1176 | 0.4577 | 0.9070 |
| 0.0 | 43.0 | 1204 | 0.4998 | 0.9070 |
| 0.0 | 44.0 | 1232 | 0.5154 | 0.9070 |
| 0.0001 | 45.0 | 1260 | 0.5227 | 0.9070 |
| 0.0003 | 46.0 | 1288 | 0.5170 | 0.9070 |
| 0.0001 | 47.0 | 1316 | 0.5187 | 0.9070 |
| 0.0001 | 48.0 | 1344 | 0.5241 | 0.9070 |
| 0.0 | 49.0 | 1372 | 0.5241 | 0.9070 |
| 0.0 | 50.0 | 1400 | 0.5241 | 0.9070 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_00001_fold4

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4242
- Accuracy: 0.9048

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7963 | 1.0 | 28 | 0.5873 | 0.8095 |
| 0.1378 | 2.0 | 56 | 0.2600 | 0.9048 |
| 0.0372 | 3.0 | 84 | 0.1249 | 0.9286 |
| 0.0142 | 4.0 | 112 | 0.1881 | 0.9048 |
| 0.0031 | 5.0 | 140 | 0.2720 | 0.9524 |
| 0.0011 | 6.0 | 168 | 0.2309 | 0.9286 |
| 0.0018 | 7.0 | 196 | 0.3809 | 0.9048 |
| 0.0008 | 8.0 | 224 | 0.3332 | 0.9048 |
| 0.0014 | 9.0 | 252 | 0.3365 | 0.8810 |
| 0.0123 | 10.0 | 280 | 0.2089 | 0.9286 |
| 0.0005 | 11.0 | 308 | 0.1962 | 0.9286 |
| 0.0038 | 12.0 | 336 | 0.2845 | 0.9048 |
| 0.0078 | 13.0 | 364 | 0.2498 | 0.9048 |
| 0.001 | 14.0 | 392 | 0.0353 | 1.0 |
| 0.0002 | 15.0 | 420 | 0.1604 | 0.9286 |
| 0.0003 | 16.0 | 448 | 0.6770 | 0.8810 |
| 0.0002 | 17.0 | 476 | 0.3566 | 0.9048 |
| 0.0001 | 18.0 | 504 | 0.1974 | 0.8810 |
| 0.0004 | 19.0 | 532 | 0.0247 | 1.0 |
| 0.0001 | 20.0 | 560 | 0.0905 | 0.9286 |
| 0.0001 | 21.0 | 588 | 0.1806 | 0.9286 |
| 0.0011 | 22.0 | 616 | 0.2156 | 0.9524 |
| 0.0007 | 23.0 | 644 | 0.4203 | 0.9286 |
| 0.0002 | 24.0 | 672 | 0.2731 | 0.9286 |
| 0.0054 | 25.0 | 700 | 0.2589 | 0.8810 |
| 0.0001 | 26.0 | 728 | 0.2893 | 0.9048 |
| 0.0 | 27.0 | 756 | 0.3737 | 0.8810 |
| 0.0002 | 28.0 | 784 | 0.3310 | 0.9048 |
| 0.0001 | 29.0 | 812 | 0.2394 | 0.9048 |
| 0.0 | 30.0 | 840 | 0.2320 | 0.9048 |
| 0.0001 | 31.0 | 868 | 0.2751 | 0.9048 |
| 0.0012 | 32.0 | 896 | 0.2756 | 0.9048 |
| 0.0 | 33.0 | 924 | 0.1983 | 0.9048 |
| 0.0001 | 34.0 | 952 | 0.1565 | 0.9048 |
| 0.0 | 35.0 | 980 | 0.1912 | 0.9048 |
| 0.0001 | 36.0 | 1008 | 0.2103 | 0.9048 |
| 0.0 | 37.0 | 1036 | 0.1693 | 0.9048 |
| 0.0 | 38.0 | 1064 | 0.1895 | 0.9048 |
| 0.0 | 39.0 | 1092 | 0.2300 | 0.9048 |
| 0.0018 | 40.0 | 1120 | 0.7391 | 0.9048 |
| 0.0 | 41.0 | 1148 | 0.6660 | 0.9048 |
| 0.0 | 42.0 | 1176 | 0.5981 | 0.9048 |
| 0.0001 | 43.0 | 1204 | 0.6379 | 0.9048 |
| 0.0001 | 44.0 | 1232 | 0.5736 | 0.9048 |
| 0.0002 | 45.0 | 1260 | 0.4940 | 0.9048 |
| 0.0001 | 46.0 | 1288 | 0.4348 | 0.9048 |
| 0.0001 | 47.0 | 1316 | 0.4551 | 0.9048 |
| 0.0 | 48.0 | 1344 | 0.4241 | 0.9048 |
| 0.0026 | 49.0 | 1372 | 0.4242 | 0.9048 |
| 0.0 | 50.0 | 1400 | 0.4242 | 0.9048 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_5x_beit_base_rms_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hushem_5x_beit_base_rms_00001_fold5

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1845
- Accuracy: 0.8049

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9367 | 1.0 | 28 | 0.6533 | 0.7073 |
| 0.1926 | 2.0 | 56 | 0.5512 | 0.7805 |
| 0.047 | 3.0 | 84 | 0.6007 | 0.8049 |
| 0.0193 | 4.0 | 112 | 0.2590 | 0.9024 |
| 0.0089 | 5.0 | 140 | 0.4654 | 0.8293 |
| 0.0038 | 6.0 | 168 | 0.5932 | 0.8293 |
| 0.0017 | 7.0 | 196 | 0.6877 | 0.8293 |
| 0.0014 | 8.0 | 224 | 0.7982 | 0.8049 |
| 0.0007 | 9.0 | 252 | 0.6044 | 0.8293 |
| 0.0007 | 10.0 | 280 | 0.6788 | 0.8537 |
| 0.0003 | 11.0 | 308 | 0.6662 | 0.8537 |
| 0.0003 | 12.0 | 336 | 0.6588 | 0.8537 |
| 0.0002 | 13.0 | 364 | 0.6343 | 0.8293 |
| 0.0046 | 14.0 | 392 | 1.0649 | 0.7805 |
| 0.0012 | 15.0 | 420 | 0.7359 | 0.8293 |
| 0.0005 | 16.0 | 448 | 0.7345 | 0.8293 |
| 0.0066 | 17.0 | 476 | 0.7816 | 0.8537 |
| 0.0014 | 18.0 | 504 | 0.6553 | 0.8780 |
| 0.0003 | 19.0 | 532 | 0.5879 | 0.8780 |
| 0.0001 | 20.0 | 560 | 0.6539 | 0.8537 |
| 0.0001 | 21.0 | 588 | 0.5762 | 0.8293 |
| 0.0006 | 22.0 | 616 | 0.3307 | 0.8293 |
| 0.0001 | 23.0 | 644 | 0.6447 | 0.8293 |
| 0.0002 | 24.0 | 672 | 0.7471 | 0.8537 |
| 0.0002 | 25.0 | 700 | 0.6200 | 0.8537 |
| 0.0001 | 26.0 | 728 | 0.9057 | 0.8537 |
| 0.0001 | 27.0 | 756 | 0.8578 | 0.8537 |
| 0.0004 | 28.0 | 784 | 0.7354 | 0.8537 |
| 0.0001 | 29.0 | 812 | 0.8285 | 0.8537 |
| 0.0004 | 30.0 | 840 | 0.7442 | 0.8780 |
| 0.0001 | 31.0 | 868 | 0.9315 | 0.8049 |
| 0.0002 | 32.0 | 896 | 1.0255 | 0.8049 |
| 0.0 | 33.0 | 924 | 1.0401 | 0.7805 |
| 0.0001 | 34.0 | 952 | 1.0520 | 0.8293 |
| 0.0004 | 35.0 | 980 | 0.9869 | 0.8537 |
| 0.0 | 36.0 | 1008 | 0.9764 | 0.8537 |
| 0.0001 | 37.0 | 1036 | 0.9356 | 0.8537 |
| 0.0001 | 38.0 | 1064 | 1.1522 | 0.8049 |
| 0.0 | 39.0 | 1092 | 1.0978 | 0.8049 |
| 0.0005 | 40.0 | 1120 | 1.0647 | 0.8293 |
| 0.0003 | 41.0 | 1148 | 1.2331 | 0.8049 |
| 0.0 | 42.0 | 1176 | 1.3110 | 0.8049 |
| 0.0 | 43.0 | 1204 | 1.2050 | 0.8049 |
| 0.0 | 44.0 | 1232 | 1.1647 | 0.8049 |
| 0.0002 | 45.0 | 1260 | 1.2154 | 0.8049 |
| 0.0001 | 46.0 | 1288 | 1.2000 | 0.8049 |
| 0.0001 | 47.0 | 1316 | 1.1915 | 0.8049 |
| 0.0 | 48.0 | 1344 | 1.1844 | 0.8049 |
| 0.0001 | 49.0 | 1372 | 1.1845 | 0.8049 |
| 0.0 | 50.0 | 1400 | 1.1845 | 0.8049 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
xiaopch/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0520
- Accuracy: 0.9837

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2341 | 1.0 | 190 | 0.1160 | 0.9593 |
| 0.1813 | 2.0 | 380 | 0.0715 | 0.9752 |
| 0.1401 | 3.0 | 570 | 0.0520 | 0.9837 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
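The swin card above is the first in this section to list `gradient_accumulation_steps`; its relationship to `total_train_batch_size` is simple multiplication, sketched here under the assumption of a single device (the card does not state the device count):

```python
per_device_train_batch_size = 32
gradient_accumulation_steps = 4
num_devices = 1  # assumption; not stated in the card

effective_batch = per_device_train_batch_size * gradient_accumulation_steps * num_devices
assert effective_batch == 128  # matches "total_train_batch_size: 128" above
```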
xiaopch/vit-base-patch16-224-finetuned
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-finetuned

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1532
- Accuracy: 0.6747

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8046 | 1.0 | 35 | 1.5308 | 0.6004 |
| 1.1931 | 2.0 | 70 | 1.2080 | 0.6526 |
| 1.0292 | 3.0 | 105 | 1.1532 | 0.6747 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "spodoptera_litura", "aphid", "beet_armyworm", "borer", "chemical_fertilizer", "cnidocampa_flavescens", "corn_borer", "cotton_bollworm", "fhb", "grasshopper", "longhorn_beetle", "oriental_fruit_fly", "pesticides", "plutella_xylostella", "rice_planthopper", "rice_stem_borer", "rolled_leaf_borer" ]
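For a many-class card like this 17-label pest model, top-k probabilities are often more informative than a single prediction. A sketch of a manual forward pass, assuming the checkpoint is published on the Hub under the model id above and that `pest.jpg` is a hypothetical input file:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "xiaopch/vit-base-patch16-224-finetuned"  # model id from this record
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("pest.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the 17 classes, then report the three most likely labels.
probs = logits.softmax(dim=-1)[0]
top = probs.topk(3)
for p, i in zip(top.values, top.indices):
    print(model.config.id2label[i.item()], float(p))
```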
hkivancoral/smids_1x_beit_base_adamax_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# smids_1x_beit_base_adamax_001_fold1

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2333
- Accuracy: 0.8531

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6722 | 1.0 | 76 | 0.7068 | 0.7346 |
| 0.5257 | 2.0 | 152 | 0.7906 | 0.6978 |
| 0.3766 | 3.0 | 228 | 0.4936 | 0.8030 |
| 0.4703 | 4.0 | 304 | 0.5194 | 0.8047 |
| 0.3758 | 5.0 | 380 | 0.4944 | 0.8047 |
| 0.3685 | 6.0 | 456 | 0.4662 | 0.8364 |
| 0.2812 | 7.0 | 532 | 0.5286 | 0.8314 |
| 0.2831 | 8.0 | 608 | 0.4636 | 0.8331 |
| 0.2359 | 9.0 | 684 | 0.5034 | 0.8063 |
| 0.1426 | 10.0 | 760 | 0.5477 | 0.8280 |
| 0.2668 | 11.0 | 836 | 0.6880 | 0.8130 |
| 0.182 | 12.0 | 912 | 0.6113 | 0.8280 |
| 0.1925 | 13.0 | 988 | 0.5781 | 0.8280 |
| 0.1404 | 14.0 | 1064 | 0.8189 | 0.8114 |
| 0.0795 | 15.0 | 1140 | 0.8425 | 0.8230 |
| 0.0585 | 16.0 | 1216 | 0.6551 | 0.8481 |
| 0.0935 | 17.0 | 1292 | 0.7044 | 0.8347 |
| 0.0369 | 18.0 | 1368 | 0.9110 | 0.8414 |
| 0.0816 | 19.0 | 1444 | 0.9853 | 0.8414 |
| 0.063 | 20.0 | 1520 | 0.7577 | 0.8464 |
| 0.0166 | 21.0 | 1596 | 0.8613 | 0.8381 |
| 0.0172 | 22.0 | 1672 | 0.7211 | 0.8548 |
| 0.0101 | 23.0 | 1748 | 0.9887 | 0.8297 |
| 0.059 | 24.0 | 1824 | 1.1066 | 0.8414 |
| 0.0163 | 25.0 | 1900 | 0.8966 | 0.8481 |
| 0.0425 | 26.0 | 1976 | 0.9615 | 0.8364 |
| 0.0118 | 27.0 | 2052 | 1.0527 | 0.8481 |
| 0.0022 | 28.0 | 2128 | 1.0163 | 0.8464 |
| 0.0009 | 29.0 | 2204 | 1.0736 | 0.8514 |
| 0.0005 | 30.0 | 2280 | 1.0490 | 0.8531 |
| 0.0032 | 31.0 | 2356 | 1.1469 | 0.8514 |
| 0.0106 | 32.0 | 2432 | 1.1588 | 0.8497 |
| 0.06 | 33.0 | 2508 | 1.1292 | 0.8514 |
| 0.0041 | 34.0 | 2584 | 1.0765 | 0.8531 |
| 0.0193 | 35.0 | 2660 | 1.2132 | 0.8548 |
| 0.0004 | 36.0 | 2736 | 1.1489 | 0.8481 |
| 0.0134 | 37.0 | 2812 | 1.2292 | 0.8464 |
| 0.0047 | 38.0 | 2888 | 1.1921 | 0.8514 |
| 0.0036 | 39.0 | 2964 | 1.2034 | 0.8464 |
| 0.0001 | 40.0 | 3040 | 1.1597 | 0.8481 |
| 0.0065 | 41.0 | 3116 | 1.1753 | 0.8548 |
| 0.0001 | 42.0 | 3192 | 1.1808 | 0.8548 |
| 0.0 | 43.0 | 3268 | 1.1898 | 0.8564 |
| 0.0001 | 44.0 | 3344 | 1.2021 | 0.8581 |
| 0.0061 | 45.0 | 3420 | 1.2174 | 0.8564 |
| 0.0 | 46.0 | 3496 | 1.2210 | 0.8548 |
| 0.0025 | 47.0 | 3572 | 1.2289 | 0.8531 |
| 0.0025 | 48.0 | 3648 | 1.2311 | 0.8548 |
| 0.0023 | 49.0 | 3724 | 1.2330 | 0.8531 |
| 0.0044 | 50.0 | 3800 | 1.2333 | 0.8531 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# smids_1x_beit_base_adamax_001_fold2

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4865
- Accuracy: 0.7903

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9007 | 1.0 | 75 | 0.8802 | 0.5191 |
| 0.7789 | 2.0 | 150 | 0.8973 | 0.5424 |
| 0.8219 | 3.0 | 225 | 0.7607 | 0.6406 |
| 0.7838 | 4.0 | 300 | 0.7358 | 0.6522 |
| 0.6602 | 5.0 | 375 | 0.6978 | 0.6672 |
| 0.7026 | 6.0 | 450 | 0.6685 | 0.6955 |
| 0.6394 | 7.0 | 525 | 0.7731 | 0.6589 |
| 0.6471 | 8.0 | 600 | 0.6234 | 0.7138 |
| 0.5881 | 9.0 | 675 | 0.6358 | 0.7205 |
| 0.5254 | 10.0 | 750 | 0.5746 | 0.7671 |
| 0.5153 | 11.0 | 825 | 0.5501 | 0.7704 |
| 0.5459 | 12.0 | 900 | 0.5543 | 0.7687 |
| 0.5526 | 13.0 | 975 | 0.5321 | 0.7737 |
| 0.5236 | 14.0 | 1050 | 0.5404 | 0.7937 |
| 0.4317 | 15.0 | 1125 | 0.6220 | 0.7604 |
| 0.4195 | 16.0 | 1200 | 0.5679 | 0.7854 |
| 0.3753 | 17.0 | 1275 | 0.6021 | 0.7687 |
| 0.3821 | 18.0 | 1350 | 0.5958 | 0.7854 |
| 0.3599 | 19.0 | 1425 | 0.6478 | 0.7837 |
| 0.2813 | 20.0 | 1500 | 0.6634 | 0.7671 |
| 0.224 | 21.0 | 1575 | 0.6766 | 0.7820 |
| 0.2635 | 22.0 | 1650 | 0.6781 | 0.7870 |
| 0.1832 | 23.0 | 1725 | 0.8041 | 0.7604 |
| 0.1751 | 24.0 | 1800 | 0.8069 | 0.7671 |
| 0.2421 | 25.0 | 1875 | 0.8820 | 0.7737 |
| 0.2115 | 26.0 | 1950 | 0.8838 | 0.7970 |
| 0.1798 | 27.0 | 2025 | 0.8954 | 0.7787 |
| 0.1341 | 28.0 | 2100 | 1.0505 | 0.7987 |
| 0.0669 | 29.0 | 2175 | 1.2992 | 0.7770 |
| 0.0892 | 30.0 | 2250 | 1.1168 | 0.7987 |
| 0.1159 | 31.0 | 2325 | 1.2066 | 0.7870 |
| 0.1289 | 32.0 | 2400 | 1.5859 | 0.7687 |
| 0.0687 | 33.0 | 2475 | 1.1777 | 0.7887 |
| 0.0226 | 34.0 | 2550 | 1.4423 | 0.7854 |
| 0.04 | 35.0 | 2625 | 1.4594 | 0.7870 |
| 0.0552 | 36.0 | 2700 | 1.3867 | 0.7820 |
| 0.0439 | 37.0 | 2775 | 1.4599 | 0.7720 |
| 0.0308 | 38.0 | 2850 | 1.4968 | 0.7903 |
| 0.0564 | 39.0 | 2925 | 1.5256 | 0.7953 |
| 0.0227 | 40.0 | 3000 | 1.4454 | 0.7953 |
| 0.0214 | 41.0 | 3075 | 1.3100 | 0.8087 |
| 0.0167 | 42.0 | 3150 | 1.4699 | 0.7987 |
| 0.0299 | 43.0 | 3225 | 1.4525 | 0.7903 |
| 0.0171 | 44.0 | 3300 | 1.3889 | 0.8053 |
| 0.011 | 45.0 | 3375 | 1.3819 | 0.7920 |
| 0.014 | 46.0 | 3450 | 1.5122 | 0.7903 |
| 0.0198 | 47.0 | 3525 | 1.4328 | 0.7920 |
| 0.0085 | 48.0 | 3600 | 1.5057 | 0.7920 |
| 0.0028 | 49.0 | 3675 | 1.4856 | 0.7903 |
| 0.0049 | 50.0 | 3750 | 1.4865 | 0.7903 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
andrecastro/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0271
- Accuracy: 0.9967

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0898 | 1.0 | 327 | 0.0707 | 0.9757 |
| 0.0221 | 2.0 | 654 | 0.0278 | 0.9920 |
| 0.06 | 3.0 | 981 | 0.0345 | 0.9913 |
| 0.0094 | 4.0 | 1309 | 0.0300 | 0.9947 |
| 0.0004 | 5.0 | 1636 | 0.0398 | 0.9942 |
| 0.0035 | 6.0 | 1963 | 0.0136 | 0.9975 |
| 0.0246 | 7.0 | 2290 | 0.0339 | 0.9940 |
| 0.0012 | 8.0 | 2618 | 0.0316 | 0.9958 |
| 0.0 | 9.0 | 2945 | 0.0302 | 0.9964 |
| 0.0 | 10.0 | 3272 | 0.0201 | 0.9973 |
| 0.0003 | 11.0 | 3599 | 0.0222 | 0.9955 |
| 0.0 | 12.0 | 3927 | 0.0218 | 0.9962 |
| 0.0001 | 13.0 | 4254 | 0.0293 | 0.9962 |
| 0.0002 | 14.0 | 4581 | 0.0272 | 0.9962 |
| 0.0 | 14.99 | 4905 | 0.0271 | 0.9967 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "anormal_total", "normal_total" ]
hkivancoral/smids_1x_beit_base_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# smids_1x_beit_base_adamax_001_fold3

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6207
- Accuracy: 0.8067

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.961 | 1.0 | 75 | 0.9579 | 0.5083 |
| 0.8519 | 2.0 | 150 | 0.8223 | 0.555 |
| 0.8429 | 3.0 | 225 | 0.8258 | 0.5417 |
| 0.8689 | 4.0 | 300 | 1.1933 | 0.5183 |
| 0.7212 | 5.0 | 375 | 0.6887 | 0.7133 |
| 0.649 | 6.0 | 450 | 0.7128 | 0.6567 |
| 0.6409 | 7.0 | 525 | 0.6763 | 0.71 |
| 0.5869 | 8.0 | 600 | 0.5948 | 0.7383 |
| 0.5565 | 9.0 | 675 | 0.6418 | 0.695 |
| 0.5839 | 10.0 | 750 | 0.6087 | 0.7267 |
| 0.5293 | 11.0 | 825 | 0.5977 | 0.7267 |
| 0.4762 | 12.0 | 900 | 0.5491 | 0.7783 |
| 0.4499 | 13.0 | 975 | 0.5838 | 0.7517 |
| 0.4302 | 14.0 | 1050 | 0.5473 | 0.77 |
| 0.4099 | 15.0 | 1125 | 0.5508 | 0.755 |
| 0.3178 | 16.0 | 1200 | 0.5699 | 0.78 |
| 0.341 | 17.0 | 1275 | 0.6033 | 0.7933 |
| 0.2555 | 18.0 | 1350 | 0.6573 | 0.7767 |
| 0.3366 | 19.0 | 1425 | 0.5611 | 0.7933 |
| 0.1724 | 20.0 | 1500 | 0.7339 | 0.7933 |
| 0.2297 | 21.0 | 1575 | 0.8132 | 0.78 |
| 0.2293 | 22.0 | 1650 | 0.7112 | 0.7833 |
| 0.1656 | 23.0 | 1725 | 0.8681 | 0.7767 |
| 0.1488 | 24.0 | 1800 | 0.9454 | 0.79 |
| 0.1667 | 25.0 | 1875 | 0.9934 | 0.7767 |
| 0.0534 | 26.0 | 1950 | 0.9484 | 0.7767 |
| 0.1635 | 27.0 | 2025 | 1.0833 | 0.77 |
| 0.0554 | 28.0 | 2100 | 1.1552 | 0.8017 |
| 0.0938 | 29.0 | 2175 | 1.0865 | 0.7917 |
| 0.1141 | 30.0 | 2250 | 1.3605 | 0.7883 |
| 0.0561 | 31.0 | 2325 | 1.2003 | 0.8033 |
| 0.064 | 32.0 | 2400 | 1.3257 | 0.7933 |
| 0.0695 | 33.0 | 2475 | 1.6036 | 0.7883 |
| 0.0143 | 34.0 | 2550 | 1.5166 | 0.7717 |
| 0.0099 | 35.0 | 2625 | 1.5177 | 0.7833 |
| 0.046 | 36.0 | 2700 | 1.6809 | 0.7983 |
| 0.0535 | 37.0 | 2775 | 1.6548 | 0.7783 |
| 0.0142 | 38.0 | 2850 | 1.9052 | 0.7867 |
| 0.0043 | 39.0 | 2925 | 1.8855 | 0.785 |
| 0.0169 | 40.0 | 3000 | 1.8422 | 0.7983 |
| 0.0085 | 41.0 | 3075 | 1.6803 | 0.8033 |
| 0.0125 | 42.0 | 3150 | 1.4852 | 0.8033 |
| 0.0037 | 43.0 | 3225 | 1.5490 | 0.7883 |
| 0.0153 | 44.0 | 3300 | 1.3985 | 0.81 |
| 0.0066 | 45.0 | 3375 | 1.5369 | 0.8083 |
| 0.0076 | 46.0 | 3450 | 1.5177 | 0.7983 |
| 0.0089 | 47.0 | 3525 | 1.6039 | 0.7883 |
| 0.0027 | 48.0 | 3600 | 1.6013 | 0.8067 |
| 0.0003 | 49.0 | 3675 | 1.6182 | 0.8067 |
| 0.0026 | 50.0 | 3750 | 1.6207 | 0.8067 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
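A recurring pattern in these smids runs is the validation loss climbing (here from roughly 0.55 to 1.62) while the training loss collapses, with the final checkpoint kept regardless. Not part of the runs above, but a sketch of how `Trainer` could keep the best epoch instead; the patience value is an arbitrary choice:

```python
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="smids_1x_beit_base_adamax_001_fold3",  # assumed name
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # requires a compute_metrics that reports it
    num_train_epochs=50,
)
callbacks = [EarlyStoppingCallback(early_stopping_patience=10)]
# trainer = Trainer(model=model, args=args, callbacks=callbacks, ...)
```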
hkivancoral/smids_1x_beit_base_adamax_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# smids_1x_beit_base_adamax_001_fold4

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7646
- Accuracy: 0.775

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9222 | 1.0 | 75 | 0.8216 | 0.5567 |
| 0.8457 | 2.0 | 150 | 0.8398 | 0.57 |
| 0.8147 | 3.0 | 225 | 0.7493 | 0.6333 |
| 0.7701 | 4.0 | 300 | 0.7606 | 0.6117 |
| 0.8026 | 5.0 | 375 | 0.8189 | 0.565 |
| 0.6963 | 6.0 | 450 | 0.6808 | 0.665 |
| 0.7638 | 7.0 | 525 | 0.6641 | 0.7017 |
| 0.6601 | 8.0 | 600 | 0.6495 | 0.6833 |
| 0.6719 | 9.0 | 675 | 0.7134 | 0.66 |
| 0.5461 | 10.0 | 750 | 0.5791 | 0.7483 |
| 0.547 | 11.0 | 825 | 0.5859 | 0.7633 |
| 0.4912 | 12.0 | 900 | 0.5937 | 0.735 |
| 0.5352 | 13.0 | 975 | 0.5233 | 0.7667 |
| 0.4434 | 14.0 | 1050 | 0.5543 | 0.7617 |
| 0.4927 | 15.0 | 1125 | 0.7581 | 0.6767 |
| 0.4312 | 16.0 | 1200 | 0.5587 | 0.7667 |
| 0.3899 | 17.0 | 1275 | 0.6422 | 0.7633 |
| 0.3786 | 18.0 | 1350 | 0.6068 | 0.7783 |
| 0.4006 | 19.0 | 1425 | 0.6778 | 0.7617 |
| 0.3094 | 20.0 | 1500 | 0.6494 | 0.775 |
| 0.3319 | 21.0 | 1575 | 0.6363 | 0.765 |
| 0.2928 | 22.0 | 1650 | 0.7276 | 0.7817 |
| 0.2846 | 23.0 | 1725 | 0.8156 | 0.7733 |
| 0.1736 | 24.0 | 1800 | 0.7838 | 0.785 |
| 0.2416 | 25.0 | 1875 | 0.8283 | 0.775 |
| 0.1805 | 26.0 | 1950 | 0.8042 | 0.7867 |
| 0.1895 | 27.0 | 2025 | 1.0411 | 0.7933 |
| 0.0832 | 28.0 | 2100 | 1.0766 | 0.7983 |
| 0.099 | 29.0 | 2175 | 1.1178 | 0.7683 |
| 0.0916 | 30.0 | 2250 | 1.3040 | 0.775 |
| 0.128 | 31.0 | 2325 | 1.2237 | 0.7983 |
| 0.0775 | 32.0 | 2400 | 1.1999 | 0.79 |
| 0.0706 | 33.0 | 2475 | 1.4034 | 0.78 |
| 0.0546 | 34.0 | 2550 | 1.4009 | 0.785 |
| 0.0453 | 35.0 | 2625 | 1.2357 | 0.7917 |
| 0.0136 | 36.0 | 2700 | 1.4685 | 0.79 |
| 0.0534 | 37.0 | 2775 | 1.8215 | 0.7717 |
| 0.0751 | 38.0 | 2850 | 1.6150 | 0.7833 |
| 0.0013 | 39.0 | 2925 | 1.7207 | 0.7917 |
| 0.0466 | 40.0 | 3000 | 1.4737 | 0.785 |
| 0.0122 | 41.0 | 3075 | 1.5635 | 0.7783 |
| 0.0071 | 42.0 | 3150 | 1.6935 | 0.7783 |
| 0.0119 | 43.0 | 3225 | 1.6935 | 0.7833 |
| 0.0065 | 44.0 | 3300 | 1.7015 | 0.7883 |
| 0.0254 | 45.0 | 3375 | 1.7329 | 0.7867 |
| 0.0205 | 46.0 | 3450 | 1.6886 | 0.785 |
| 0.0082 | 47.0 | 3525 | 1.7094 | 0.7833 |
| 0.0134 | 48.0 | 3600 | 1.7793 | 0.78 |
| 0.005 | 49.0 | 3675 | 1.7866 | 0.7767 |
| 0.0132 | 50.0 | 3750 | 1.7646 | 0.775 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
jsalasr/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0367
- Accuracy: 0.9879

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1993 | 0.9904 | 77 | 0.0961 | 0.9617 |
| 0.1729 | 1.9936 | 155 | 0.1151 | 0.9486 |
| 0.1509 | 2.9968 | 233 | 0.0603 | 0.9748 |
| 0.1081 | 4.0 | 311 | 0.0367 | 0.9879 |
| 0.1195 | 4.9904 | 388 | 0.0936 | 0.9627 |
| 0.0674 | 5.9936 | 466 | 0.0370 | 0.9849 |
| 0.0629 | 6.9968 | 544 | 0.0400 | 0.9839 |
| 0.0718 | 8.0 | 622 | 0.0496 | 0.9839 |
| 0.0335 | 8.9904 | 699 | 0.0533 | 0.9819 |
| 0.0843 | 9.9035 | 770 | 0.0550 | 0.9809 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
[ "clear", "cloudy" ]
HarshaSingamshetty1/roof_classification_rearrange_labels
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# HarshaSingamshetty1/roof_classification_rearrange_labels

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3721
- Train Accuracy: 0.4404
- Validation Loss: 1.6641
- Validation Accuracy: 0.4000
- Epoch: 9

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.0005, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 3.2127 | 0.1021 | 2.8916 | 0.1340 | 0 |
| 2.7296 | 0.1255 | 2.7126 | 0.1213 | 1 |
| 2.3888 | 0.2468 | 2.3456 | 0.2489 | 2 |
| 2.1480 | 0.2702 | 2.1604 | 0.2830 | 3 |
| 2.0789 | 0.3170 | 2.0942 | 0.3106 | 4 |
| 1.8117 | 0.3851 | 1.8224 | 0.3766 | 5 |
| 1.6477 | 0.3426 | 1.8774 | 0.3596 | 6 |
| 1.5677 | 0.4404 | 1.7042 | 0.4362 | 7 |
| 1.4018 | 0.4660 | 1.4974 | 0.4553 | 8 |
| 1.3721 | 0.4404 | 1.6641 | 0.4000 | 9 |

### Framework versions

- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
[ "certainteed_heather blend_landmark", "gaf_golden harvest_timberline american harvest", "tamko_weathered wood_heritage", "gaf_mission brown_timberline hdz", "atlas_tan_pinnacle pristine", "owens corning_desert rose_truedefinition duration", "iko_cornerstone_dynasty", "owen’s corning_midnight plum_duration", "gaf_cedar falls_timberline american harvest", "owens corning_amber_trudefinition duration", "owens corning_estate gray_supreme", "certainteed_weathered wood_presidential shake", "iko_weathered wood_cambridge", "atlas_cool sand_pinnacle sun", "owens corning_driftwood_trudefinition duration", "certainteed_max def weathered wood_landmark pro", "atlas_coastal granite_pinnacle pristine", "gaf_charcoal_timberline hdz", "atlas_cool driftwood_pinnacle sun", "atlas_cool costal cliffs_pinnacle sun", "atlas_weathered wood_pinnacle pristine", "atlas_oyster_pinnacle pristine", "owens corning_sedona canyon_duration designer", "gaf_weathered wood_timberline hdz", "tamko_thunderstorm grey_titan xt", "gaf_shakewood_timberline hdz", "malarkey_weather wood_highlander nex", "iko_dual black_cambridge", "certainteed_weathered wood_landmark", "atlas_cool coral canyon_pinnacle sun", "atlas_summer storm_pinnacle pristine", "gaf_pewter gray_timberline hdz", "atlas_hickory_pinnacle pristine", "gaf_charcoal_grand sequoia", "gaf_driftwood_timberline hdz" ]
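The serialized optimizer config in the card above round-trips back into a Keras object; only the fields that affect the update rule need to be passed. A sketch for TensorFlow 2.14, as listed in the framework versions:

```python
import tensorflow as tf

# Values taken from the serialized config above; the remaining keys
# (clipnorm, EMA settings, etc.) are all at their inactive defaults.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-4,  # 'learning_rate': 0.0005
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```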
PK-B/roof_classification_rearrange_labels
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # PK-B/roof_classification_rearrange_labels This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7457 - Validation Loss: 0.9674 - Train Accuracy: 0.8106 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 18770, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.0001} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 3.3662 | 3.0784 | 0.3894 | 0 | | 2.8003 | 2.5991 | 0.5830 | 1 | | 2.3450 | 2.2234 | 0.6766 | 2 | | 1.9717 | 1.8939 | 0.7532 | 3 | | 1.6915 | 1.6970 | 0.7468 | 4 | | 1.4260 | 1.3627 | 0.8553 | 5 | | 1.1972 | 1.3024 | 0.8064 | 6 | | 1.0469 | 1.0933 | 0.8532 | 7 | | 0.8685 | 1.0638 | 0.8 | 8 | | 0.7457 | 0.9674 | 0.8106 | 9 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
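The `AdamWeightDecay` optimizer and `PolynomialDecay` schedule logged above are what `transformers.create_optimizer` builds. A sketch of recreating the logged configuration (linear decay from 3e-05 to 0 over 18770 steps, decoupled weight decay of 1e-4, no warmup):

```python
from transformers import create_optimizer

# power=1.0 (the default) makes PolynomialDecay a straight linear decay,
# matching the schedule config recorded in this card.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=18770,
    num_warmup_steps=0,
    weight_decay_rate=1e-4,
)
```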
[ "certainteed_heather blend_landmark", "gaf_golden harvest_timberline american harvest", "tamko_weathered wood_heritage", "gaf_mission brown_timberline hdz", "atlas_tan_pinnacle pristine", "owens corning_desert rose_truedefinition duration", "iko_cornerstone_dynasty", "owen’s corning_midnight plum_duration", "gaf_cedar falls_timberline american harvest", "owens corning_amber_trudefinition duration", "owens corning_estate gray_supreme", "certainteed_weathered wood_presidential shake", "iko_weathered wood_cambridge", "atlas_cool sand_pinnacle sun", "owens corning_driftwood_trudefinition duration", "certainteed_max def weathered wood_landmark pro", "atlas_coastal granite_pinnacle pristine", "gaf_charcoal_timberline hdz", "atlas_cool driftwood_pinnacle sun", "atlas_cool costal cliffs_pinnacle sun", "atlas_weathered wood_pinnacle pristine", "atlas_oyster_pinnacle pristine", "owens corning_sedona canyon_duration designer", "gaf_weathered wood_timberline hdz", "tamko_thunderstorm grey_titan xt", "gaf_shakewood_timberline hdz", "malarkey_weather wood_highlander nex", "iko_dual black_cambridge", "certainteed_weathered wood_landmark", "atlas_cool coral canyon_pinnacle sun", "atlas_summer storm_pinnacle pristine", "gaf_pewter gray_timberline hdz", "atlas_hickory_pinnacle pristine", "gaf_charcoal_grand sequoia", "gaf_driftwood_timberline hdz" ]
hkivancoral/smids_1x_beit_base_adamax_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.1833 - Accuracy: 0.7633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9804 | 1.0 | 75 | 0.8561 | 0.5383 | | 0.8823 | 2.0 | 150 | 0.7905 | 0.5767 | | 0.8002 | 3.0 | 225 | 0.7961 | 0.5633 | | 0.8142 | 4.0 | 300 | 0.8679 | 0.6133 | | 0.6765 | 5.0 | 375 | 0.6964 | 0.6817 | | 0.652 | 6.0 | 450 | 0.6686 | 0.7 | | 0.6785 | 7.0 | 525 | 0.6625 | 0.7067 | | 0.5659 | 8.0 | 600 | 0.6154 | 0.7217 | | 0.6383 | 9.0 | 675 | 0.6262 | 0.7117 | | 0.5991 | 10.0 | 750 | 0.5856 | 0.7633 | | 0.4627 | 11.0 | 825 | 0.5901 | 0.7633 | | 0.5021 | 12.0 | 900 | 0.5968 | 0.7433 | | 0.5421 | 13.0 | 975 | 0.5857 | 0.74 | | 0.3951 | 14.0 | 1050 | 0.5723 | 0.7733 | | 0.4943 | 15.0 | 1125 | 0.6046 | 0.7533 | | 0.4076 | 16.0 | 1200 | 0.6196 | 0.7567 | | 0.379 | 17.0 | 1275 | 0.5906 | 0.7817 | | 0.3759 | 18.0 | 1350 | 0.5998 | 0.775 | | 0.3383 | 19.0 | 1425 | 0.6508 | 0.7567 | | 0.2622 | 20.0 | 1500 | 0.6675 | 0.775 | | 0.316 | 21.0 | 1575 | 0.7118 | 0.785 | | 0.2478 | 22.0 | 1650 | 0.7508 | 0.78 | | 0.2696 | 23.0 | 1725 | 0.7052 | 0.7733 | | 0.1441 | 24.0 | 1800 | 0.8658 | 0.7783 | | 0.1966 | 25.0 | 1875 | 0.9393 | 0.7417 | | 0.1228 | 26.0 | 1950 | 1.0783 | 0.7567 | | 0.2151 | 27.0 | 2025 | 1.0051 | 0.7533 | | 0.1799 | 28.0 | 2100 | 1.0898 | 0.755 | | 0.1053 | 29.0 | 2175 | 1.0567 | 0.7533 | | 0.122 | 30.0 | 2250 | 1.1544 | 0.7583 | | 0.1375 | 31.0 | 2325 | 1.3014 | 0.7617 | | 0.0659 | 32.0 | 2400 | 1.6359 | 0.765 | | 0.0997 | 33.0 | 2475 | 1.4213 | 0.7717 | | 0.0852 | 34.0 | 2550 | 1.6657 | 0.7467 | | 0.0752 | 35.0 | 2625 | 1.5943 | 0.7733 | | 0.0405 | 36.0 | 2700 | 1.5865 | 0.7583 | | 0.0174 | 37.0 | 2775 | 1.8002 | 0.7533 | | 0.0364 | 38.0 | 2850 | 1.6078 | 0.7583 | | 0.0269 | 39.0 | 2925 | 2.0543 | 0.7667 | | 0.0034 | 40.0 | 3000 | 2.1698 | 0.7517 | | 0.0428 | 41.0 | 3075 | 1.8011 | 0.74 | | 0.0355 | 42.0 | 3150 | 2.1588 | 0.7567 | | 0.0068 | 43.0 | 3225 | 2.0789 | 0.7617 | | 0.013 | 44.0 | 3300 | 2.0235 | 0.76 | | 0.0102 | 45.0 | 3375 | 1.9567 | 0.7567 | | 0.0216 | 46.0 | 3450 | 1.9788 | 0.765 | | 0.0016 | 47.0 | 3525 | 2.1056 | 0.765 | | 0.0046 | 48.0 | 3600 | 2.1156 | 0.7633 | | 0.0115 | 49.0 | 3675 | 2.2014 | 0.7617 | | 0.0156 | 50.0 | 3750 | 2.1833 | 0.7633 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
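The hyperparameter block above maps directly onto `TrainingArguments`. A minimal sketch of the equivalent configuration; the output directory is a placeholder, and the optimizer itself is left at the `Trainer` default that produced the Adam settings logged above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_beit_base_adamax_001_fold5",  # placeholder path
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```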
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_0001_fold1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8043 - Accuracy: 0.9032 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3546 | 1.0 | 76 | 0.4865 | 0.7930 | | 0.2399 | 2.0 | 152 | 0.3349 | 0.8731 | | 0.136 | 3.0 | 228 | 0.2999 | 0.8831 | | 0.1619 | 4.0 | 304 | 0.4346 | 0.8698 | | 0.1213 | 5.0 | 380 | 0.4295 | 0.8748 | | 0.0741 | 6.0 | 456 | 0.4439 | 0.8881 | | 0.0995 | 7.0 | 532 | 0.5033 | 0.8815 | | 0.0126 | 8.0 | 608 | 0.4887 | 0.8982 | | 0.0174 | 9.0 | 684 | 0.6241 | 0.8848 | | 0.0036 | 10.0 | 760 | 0.5630 | 0.8898 | | 0.0047 | 11.0 | 836 | 0.6256 | 0.8898 | | 0.025 | 12.0 | 912 | 0.5949 | 0.8982 | | 0.0037 | 13.0 | 988 | 0.6192 | 0.8898 | | 0.0095 | 14.0 | 1064 | 0.6191 | 0.8982 | | 0.0074 | 15.0 | 1140 | 0.6693 | 0.8948 | | 0.0061 | 16.0 | 1216 | 0.6785 | 0.8915 | | 0.0003 | 17.0 | 1292 | 0.6825 | 0.8898 | | 0.0001 | 18.0 | 1368 | 0.7695 | 0.8865 | | 0.0107 | 19.0 | 1444 | 0.6909 | 0.8965 | | 0.0125 | 20.0 | 1520 | 0.7272 | 0.8915 | | 0.0016 | 21.0 | 1596 | 0.7585 | 0.8848 | | 0.0028 | 22.0 | 1672 | 0.7524 | 0.8898 | | 0.0017 | 23.0 | 1748 | 0.8165 | 0.8865 | | 0.0046 | 24.0 | 1824 | 0.7698 | 0.8848 | | 0.004 | 25.0 | 1900 | 0.8060 | 0.8915 | | 0.003 | 26.0 | 1976 | 0.7525 | 0.8998 | | 0.0039 | 27.0 | 2052 | 0.8271 | 0.8848 | | 0.0001 | 28.0 | 2128 | 0.7809 | 0.8965 | | 0.0001 | 29.0 | 2204 | 0.8142 | 0.8948 | | 0.0 | 30.0 | 2280 | 0.7973 | 0.8881 | | 0.0023 | 31.0 | 2356 | 0.7501 | 0.8998 | | 0.0061 | 32.0 | 2432 | 0.7903 | 0.8932 | | 0.0085 | 33.0 | 2508 | 0.7939 | 0.8932 | | 0.0036 | 34.0 | 2584 | 0.7959 | 0.8982 | | 0.0089 | 35.0 | 2660 | 0.7729 | 0.8982 | | 0.0 | 36.0 | 2736 | 0.8000 | 0.8948 | | 0.0038 | 37.0 | 2812 | 0.7757 | 0.8998 | | 0.0028 | 38.0 | 2888 | 0.7902 | 0.8898 | | 0.0024 | 39.0 | 2964 | 0.7785 | 0.9048 | | 0.0001 | 40.0 | 3040 | 0.7668 | 0.9082 | | 0.0052 | 41.0 | 3116 | 0.7725 | 0.9048 | | 0.0 | 42.0 | 3192 | 0.7888 | 0.9032 | | 0.0 | 43.0 | 3268 | 0.7934 | 0.9032 | | 0.0 | 44.0 | 3344 | 0.7962 | 0.9032 | | 0.0053 | 45.0 | 3420 | 0.8046 | 0.9032 | | 0.0 | 46.0 | 3496 | 0.7994 | 0.9032 | | 0.003 | 47.0 | 3572 | 0.8008 | 0.9032 | | 0.0032 | 48.0 | 3648 | 0.8023 | 0.9032 | | 0.0018 | 49.0 | 3724 | 0.8041 | 0.9032 | | 0.0052 | 50.0 | 3800 | 0.8043 | 0.9032 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_0001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7859 - Accuracy: 0.8902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.357 | 1.0 | 75 | 0.2942 | 0.8852 | | 0.196 | 2.0 | 150 | 0.2977 | 0.8769 | | 0.1343 | 3.0 | 225 | 0.3454 | 0.8835 | | 0.1165 | 4.0 | 300 | 0.4770 | 0.8586 | | 0.0357 | 5.0 | 375 | 0.3863 | 0.8819 | | 0.0407 | 6.0 | 450 | 0.5588 | 0.8785 | | 0.0487 | 7.0 | 525 | 0.5410 | 0.8769 | | 0.0422 | 8.0 | 600 | 0.5327 | 0.8835 | | 0.0252 | 9.0 | 675 | 0.5671 | 0.8885 | | 0.0072 | 10.0 | 750 | 0.5229 | 0.8852 | | 0.0013 | 11.0 | 825 | 0.5397 | 0.9018 | | 0.0233 | 12.0 | 900 | 0.6716 | 0.8902 | | 0.0031 | 13.0 | 975 | 0.6232 | 0.8935 | | 0.0106 | 14.0 | 1050 | 0.6722 | 0.8835 | | 0.0052 | 15.0 | 1125 | 0.5873 | 0.9101 | | 0.0117 | 16.0 | 1200 | 0.6014 | 0.8935 | | 0.0056 | 17.0 | 1275 | 0.6190 | 0.8952 | | 0.018 | 18.0 | 1350 | 0.6714 | 0.8902 | | 0.0034 | 19.0 | 1425 | 0.6903 | 0.8918 | | 0.0034 | 20.0 | 1500 | 0.6789 | 0.8902 | | 0.0018 | 21.0 | 1575 | 0.7049 | 0.8852 | | 0.0015 | 22.0 | 1650 | 0.8451 | 0.8802 | | 0.0032 | 23.0 | 1725 | 0.6725 | 0.8885 | | 0.0116 | 24.0 | 1800 | 0.7163 | 0.8952 | | 0.0001 | 25.0 | 1875 | 0.6827 | 0.8918 | | 0.004 | 26.0 | 1950 | 0.7084 | 0.8885 | | 0.012 | 27.0 | 2025 | 0.7239 | 0.8968 | | 0.0099 | 28.0 | 2100 | 0.7371 | 0.8918 | | 0.0044 | 29.0 | 2175 | 0.7635 | 0.8869 | | 0.0039 | 30.0 | 2250 | 0.7043 | 0.8918 | | 0.0035 | 31.0 | 2325 | 0.7276 | 0.8902 | | 0.0 | 32.0 | 2400 | 0.7428 | 0.8935 | | 0.0 | 33.0 | 2475 | 0.7968 | 0.8852 | | 0.014 | 34.0 | 2550 | 0.7553 | 0.8918 | | 0.0048 | 35.0 | 2625 | 0.7230 | 0.8968 | | 0.0029 | 36.0 | 2700 | 0.7674 | 0.8869 | | 0.0 | 37.0 | 2775 | 0.7425 | 0.8918 | | 0.0023 | 38.0 | 2850 | 0.7970 | 0.8902 | | 0.0047 | 39.0 | 2925 | 0.8047 | 0.8869 | | 0.0021 | 40.0 | 3000 | 0.7994 | 0.8885 | | 0.0 | 41.0 | 3075 | 0.7761 | 0.8852 | | 0.0025 | 42.0 | 3150 | 0.7890 | 0.8885 | | 0.0046 | 43.0 | 3225 | 0.7889 | 0.8885 | | 0.0 | 44.0 | 3300 | 0.7915 | 0.8852 | | 0.0047 | 45.0 | 3375 | 0.7967 | 0.8885 | | 0.0 | 46.0 | 3450 | 0.7946 | 0.8869 | | 0.002 | 47.0 | 3525 | 0.7884 | 0.8885 | | 0.0 | 48.0 | 3600 | 0.7873 | 0.8885 | | 0.0 | 49.0 | 3675 | 0.7859 | 0.8902 | | 0.0 | 50.0 | 3750 | 0.7859 | 0.8902 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_0001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6695 - Accuracy: 0.9133 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3952 | 1.0 | 75 | 0.2900 | 0.885 | | 0.2706 | 2.0 | 150 | 0.2858 | 0.895 | | 0.1431 | 3.0 | 225 | 0.3509 | 0.8717 | | 0.0811 | 4.0 | 300 | 0.2863 | 0.9183 | | 0.0342 | 5.0 | 375 | 0.4874 | 0.8783 | | 0.0334 | 6.0 | 450 | 0.4282 | 0.9117 | | 0.0097 | 7.0 | 525 | 0.4216 | 0.92 | | 0.0417 | 8.0 | 600 | 0.4392 | 0.91 | | 0.0057 | 9.0 | 675 | 0.4243 | 0.9183 | | 0.0023 | 10.0 | 750 | 0.5491 | 0.9 | | 0.0342 | 11.0 | 825 | 0.4738 | 0.915 | | 0.015 | 12.0 | 900 | 0.5105 | 0.9267 | | 0.0179 | 13.0 | 975 | 0.6274 | 0.9083 | | 0.0017 | 14.0 | 1050 | 0.5351 | 0.915 | | 0.0012 | 15.0 | 1125 | 0.5446 | 0.905 | | 0.0029 | 16.0 | 1200 | 0.5695 | 0.9067 | | 0.0045 | 17.0 | 1275 | 0.5414 | 0.9133 | | 0.0233 | 18.0 | 1350 | 0.7467 | 0.87 | | 0.0001 | 19.0 | 1425 | 0.5934 | 0.9 | | 0.0112 | 20.0 | 1500 | 0.5736 | 0.9067 | | 0.0001 | 21.0 | 1575 | 0.6327 | 0.9033 | | 0.0084 | 22.0 | 1650 | 0.5946 | 0.915 | | 0.006 | 23.0 | 1725 | 0.5821 | 0.9133 | | 0.0001 | 24.0 | 1800 | 0.6358 | 0.9 | | 0.0 | 25.0 | 1875 | 0.5917 | 0.9117 | | 0.0 | 26.0 | 1950 | 0.5998 | 0.9133 | | 0.0002 | 27.0 | 2025 | 0.5967 | 0.915 | | 0.0001 | 28.0 | 2100 | 0.5752 | 0.9117 | | 0.0001 | 29.0 | 2175 | 0.6692 | 0.9 | | 0.0044 | 30.0 | 2250 | 0.6493 | 0.9033 | | 0.003 | 31.0 | 2325 | 0.6716 | 0.9117 | | 0.0061 | 32.0 | 2400 | 0.7077 | 0.8983 | | 0.0001 | 33.0 | 2475 | 0.6337 | 0.915 | | 0.0003 | 34.0 | 2550 | 0.6698 | 0.9 | | 0.0035 | 35.0 | 2625 | 0.6670 | 0.9033 | | 0.0027 | 36.0 | 2700 | 0.6180 | 0.9067 | | 0.0042 | 37.0 | 2775 | 0.6174 | 0.915 | | 0.0023 | 38.0 | 2850 | 0.6161 | 0.9133 | | 0.0001 | 39.0 | 2925 | 0.6601 | 0.91 | | 0.0029 | 40.0 | 3000 | 0.6359 | 0.91 | | 0.0 | 41.0 | 3075 | 0.6349 | 0.91 | | 0.0022 | 42.0 | 3150 | 0.6576 | 0.9133 | | 0.0028 | 43.0 | 3225 | 0.6662 | 0.9067 | | 0.0 | 44.0 | 3300 | 0.6662 | 0.9083 | | 0.0 | 45.0 | 3375 | 0.6797 | 0.9117 | | 0.0 | 46.0 | 3450 | 0.6797 | 0.91 | | 0.0043 | 47.0 | 3525 | 0.6738 | 0.91 | | 0.0006 | 48.0 | 3600 | 0.6709 | 0.9133 | | 0.0001 | 49.0 | 3675 | 0.6693 | 0.9133 | | 0.0 | 50.0 | 3750 | 0.6695 | 0.9133 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_0001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1137 - Accuracy: 0.87 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3443 | 1.0 | 75 | 0.4137 | 0.8583 | | 0.258 | 2.0 | 150 | 0.4036 | 0.8483 | | 0.1343 | 3.0 | 225 | 0.4810 | 0.8533 | | 0.0768 | 4.0 | 300 | 0.5625 | 0.86 | | 0.0189 | 5.0 | 375 | 0.6619 | 0.8617 | | 0.0435 | 6.0 | 450 | 0.6679 | 0.875 | | 0.0162 | 7.0 | 525 | 0.7878 | 0.86 | | 0.0677 | 8.0 | 600 | 0.7298 | 0.875 | | 0.0423 | 9.0 | 675 | 0.8935 | 0.855 | | 0.0172 | 10.0 | 750 | 0.8762 | 0.8717 | | 0.001 | 11.0 | 825 | 0.8614 | 0.865 | | 0.0092 | 12.0 | 900 | 0.8623 | 0.8717 | | 0.0016 | 13.0 | 975 | 0.8916 | 0.87 | | 0.0049 | 14.0 | 1050 | 0.8926 | 0.88 | | 0.0101 | 15.0 | 1125 | 0.9303 | 0.8683 | | 0.0014 | 16.0 | 1200 | 0.9140 | 0.8783 | | 0.001 | 17.0 | 1275 | 0.9424 | 0.8817 | | 0.0053 | 18.0 | 1350 | 0.8806 | 0.8817 | | 0.0012 | 19.0 | 1425 | 0.9188 | 0.8917 | | 0.0147 | 20.0 | 1500 | 0.9436 | 0.8767 | | 0.0025 | 21.0 | 1575 | 0.9848 | 0.88 | | 0.0092 | 22.0 | 1650 | 0.9945 | 0.8817 | | 0.0279 | 23.0 | 1725 | 1.0063 | 0.875 | | 0.0046 | 24.0 | 1800 | 1.0539 | 0.8767 | | 0.0043 | 25.0 | 1875 | 1.0635 | 0.8717 | | 0.0045 | 26.0 | 1950 | 1.0471 | 0.8733 | | 0.0 | 27.0 | 2025 | 1.0128 | 0.8783 | | 0.0004 | 28.0 | 2100 | 1.0296 | 0.8717 | | 0.0001 | 29.0 | 2175 | 1.0117 | 0.875 | | 0.0001 | 30.0 | 2250 | 1.0423 | 0.87 | | 0.0073 | 31.0 | 2325 | 1.0722 | 0.87 | | 0.0 | 32.0 | 2400 | 1.0662 | 0.8767 | | 0.0 | 33.0 | 2475 | 1.0416 | 0.8717 | | 0.0 | 34.0 | 2550 | 1.0959 | 0.8717 | | 0.0034 | 35.0 | 2625 | 1.1220 | 0.87 | | 0.0 | 36.0 | 2700 | 1.1441 | 0.8733 | | 0.0 | 37.0 | 2775 | 1.1553 | 0.8733 | | 0.0022 | 38.0 | 2850 | 1.1117 | 0.8767 | | 0.0 | 39.0 | 2925 | 1.1002 | 0.8717 | | 0.0 | 40.0 | 3000 | 1.1022 | 0.8683 | | 0.003 | 41.0 | 3075 | 1.1129 | 0.8667 | | 0.008 | 42.0 | 3150 | 1.1397 | 0.8667 | | 0.0 | 43.0 | 3225 | 1.1224 | 0.87 | | 0.0 | 44.0 | 3300 | 1.1186 | 0.8717 | | 0.0 | 45.0 | 3375 | 1.1121 | 0.87 | | 0.0001 | 46.0 | 3450 | 1.1134 | 0.87 | | 0.0 | 47.0 | 3525 | 1.1172 | 0.8683 | | 0.0001 | 48.0 | 3600 | 1.1134 | 0.87 | | 0.0023 | 49.0 | 3675 | 1.1139 | 0.87 | | 0.0022 | 50.0 | 3750 | 1.1137 | 0.87 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
xiaopch/vit-base-patch16-224-finetuned-for-agricultural
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-for-agricultural This model is a fine-tuned version of [xiaopch/vit-base-patch16-224-finetuned](https://huggingface.co/xiaopch/vit-base-patch16-224-finetuned) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9246 - Accuracy: 0.7309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9131 | 1.0 | 35 | 1.0878 | 0.6847 | | 0.8066 | 2.0 | 70 | 0.9933 | 0.7189 | | 0.7259 | 3.0 | 105 | 0.9445 | 0.7249 | | 0.6719 | 4.0 | 140 | 0.9246 | 0.7309 | | 0.6056 | 5.0 | 175 | 0.9258 | 0.7229 | | 0.5576 | 6.0 | 210 | 0.9230 | 0.7309 | | 0.5113 | 7.0 | 245 | 0.9152 | 0.7169 | | 0.488 | 8.0 | 280 | 0.9119 | 0.7209 | | 0.4822 | 9.0 | 315 | 0.9061 | 0.7269 | | 0.4163 | 10.0 | 350 | 0.9039 | 0.7289 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
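The total train batch size of 128 reported above is not a separate setting; it is the per-device batch size multiplied by the gradient accumulation steps (and the device count). A short sketch making that arithmetic explicit, with a single device assumed:

```python
# Effective batch per optimizer update = per-device batch * accumulation steps * devices.
per_device_batch, accumulation_steps, num_devices = 32, 4, 1
total_train_batch_size = per_device_batch * accumulation_steps * num_devices
assert total_train_batch_size == 128  # matches the value reported in this card
```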
[ "spodoptera_litura", "aphid", "beet_armyworm", "borer", "chemical_fertilizer", "cnidocampa_flavescens", "corn_borer", "cotton_bollworm", "fhb", "grasshopper", "longhorn_beetle", "oriental_fruit_fly", "pesticides", "plutella_xylostella", "rice_planthopper", "rice_stem_borer", "rolled_leaf_borer" ]
canadianjosieharrison/swinv2-large-patch4-window12-192-22k-augmented
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-large-patch4-window12-192-22k-augmented This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3067 - Accuracy: 0.8723 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 384 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.89 | 3 | 1.4847 | 0.5816 | | No log | 1.78 | 6 | 0.9256 | 0.6950 | | 1.2457 | 2.96 | 10 | 0.6017 | 0.7589 | | 1.2457 | 3.85 | 13 | 0.3806 | 0.8723 | | 1.2457 | 4.74 | 16 | 0.3866 | 0.8440 | | 0.3656 | 5.93 | 20 | 0.3358 | 0.8794 | | 0.3656 | 6.81 | 23 | 0.2803 | 0.8865 | | 0.3656 | 8.0 | 27 | 0.3079 | 0.8723 | | 0.2205 | 8.89 | 30 | 0.3067 | 0.8723 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.1+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "brick", "metal", "null", "other", "rustication", "siding", "stucco", "wood" ]
hkivancoral/smids_1x_beit_base_adamax_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_0001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8148 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3159 | 1.0 | 75 | 0.2787 | 0.8933 | | 0.2494 | 2.0 | 150 | 0.2824 | 0.8917 | | 0.1709 | 3.0 | 225 | 0.2857 | 0.89 | | 0.0771 | 4.0 | 300 | 0.3708 | 0.8933 | | 0.0554 | 5.0 | 375 | 0.4256 | 0.895 | | 0.0571 | 6.0 | 450 | 0.4870 | 0.8867 | | 0.0043 | 7.0 | 525 | 0.5217 | 0.9017 | | 0.0346 | 8.0 | 600 | 0.5838 | 0.8983 | | 0.0305 | 9.0 | 675 | 0.5589 | 0.89 | | 0.0299 | 10.0 | 750 | 0.6507 | 0.8833 | | 0.0112 | 11.0 | 825 | 0.7257 | 0.885 | | 0.0571 | 12.0 | 900 | 0.6425 | 0.8933 | | 0.0111 | 13.0 | 975 | 0.6434 | 0.885 | | 0.0007 | 14.0 | 1050 | 0.6590 | 0.8917 | | 0.0158 | 15.0 | 1125 | 0.6659 | 0.895 | | 0.0001 | 16.0 | 1200 | 0.6546 | 0.8983 | | 0.0007 | 17.0 | 1275 | 0.6736 | 0.8867 | | 0.0231 | 18.0 | 1350 | 0.7021 | 0.8917 | | 0.0081 | 19.0 | 1425 | 0.7031 | 0.8917 | | 0.0001 | 20.0 | 1500 | 0.7077 | 0.8833 | | 0.0034 | 21.0 | 1575 | 0.6794 | 0.885 | | 0.0184 | 22.0 | 1650 | 0.7927 | 0.865 | | 0.0002 | 23.0 | 1725 | 0.7523 | 0.8783 | | 0.0048 | 24.0 | 1800 | 0.7237 | 0.885 | | 0.0065 | 25.0 | 1875 | 0.7425 | 0.8867 | | 0.0064 | 26.0 | 1950 | 0.7940 | 0.8833 | | 0.0055 | 27.0 | 2025 | 0.7223 | 0.8983 | | 0.0092 | 28.0 | 2100 | 0.7594 | 0.8933 | | 0.0 | 29.0 | 2175 | 0.7361 | 0.89 | | 0.0 | 30.0 | 2250 | 0.7567 | 0.89 | | 0.017 | 31.0 | 2325 | 0.7474 | 0.8883 | | 0.0029 | 32.0 | 2400 | 0.8687 | 0.8767 | | 0.0165 | 33.0 | 2475 | 0.8109 | 0.8883 | | 0.0031 | 34.0 | 2550 | 0.8076 | 0.885 | | 0.0039 | 35.0 | 2625 | 0.8393 | 0.8833 | | 0.0031 | 36.0 | 2700 | 0.8234 | 0.8817 | | 0.0001 | 37.0 | 2775 | 0.8155 | 0.8833 | | 0.0034 | 38.0 | 2850 | 0.8110 | 0.89 | | 0.0036 | 39.0 | 2925 | 0.8344 | 0.8817 | | 0.0002 | 40.0 | 3000 | 0.8172 | 0.8833 | | 0.0025 | 41.0 | 3075 | 0.8298 | 0.8817 | | 0.0021 | 42.0 | 3150 | 0.8481 | 0.8817 | | 0.0001 | 43.0 | 3225 | 0.8405 | 0.8817 | | 0.0035 | 44.0 | 3300 | 0.8375 | 0.8833 | | 0.0006 | 45.0 | 3375 | 0.8281 | 0.885 | | 0.0024 | 46.0 | 3450 | 0.8226 | 0.8833 | | 0.0 | 47.0 | 3525 | 0.8109 | 0.8817 | | 0.0 | 48.0 | 3600 | 0.8113 | 0.88 | | 0.0026 | 49.0 | 3675 | 0.8154 | 0.88 | | 0.0067 | 50.0 | 3750 | 0.8148 | 0.88 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_00001_fold1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6618 - Accuracy: 0.9032 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4246 | 1.0 | 76 | 0.3687 | 0.8598 | | 0.2552 | 2.0 | 152 | 0.2999 | 0.8798 | | 0.1978 | 3.0 | 228 | 0.2886 | 0.8731 | | 0.1972 | 4.0 | 304 | 0.2763 | 0.8865 | | 0.1608 | 5.0 | 380 | 0.2799 | 0.8865 | | 0.1346 | 6.0 | 456 | 0.3048 | 0.8815 | | 0.0943 | 7.0 | 532 | 0.3402 | 0.8898 | | 0.0622 | 8.0 | 608 | 0.3287 | 0.8915 | | 0.0613 | 9.0 | 684 | 0.3634 | 0.8865 | | 0.0585 | 10.0 | 760 | 0.3905 | 0.8881 | | 0.0328 | 11.0 | 836 | 0.3830 | 0.8948 | | 0.0344 | 12.0 | 912 | 0.4094 | 0.8915 | | 0.053 | 13.0 | 988 | 0.4103 | 0.8932 | | 0.0261 | 14.0 | 1064 | 0.4498 | 0.8932 | | 0.0261 | 15.0 | 1140 | 0.4936 | 0.8915 | | 0.0343 | 16.0 | 1216 | 0.4859 | 0.8932 | | 0.0153 | 17.0 | 1292 | 0.5143 | 0.8815 | | 0.0038 | 18.0 | 1368 | 0.5271 | 0.8865 | | 0.0046 | 19.0 | 1444 | 0.5417 | 0.8898 | | 0.0282 | 20.0 | 1520 | 0.5283 | 0.8948 | | 0.0048 | 21.0 | 1596 | 0.5421 | 0.8965 | | 0.0018 | 22.0 | 1672 | 0.5503 | 0.8898 | | 0.0064 | 23.0 | 1748 | 0.5860 | 0.8848 | | 0.0241 | 24.0 | 1824 | 0.5762 | 0.8948 | | 0.0207 | 25.0 | 1900 | 0.5869 | 0.8915 | | 0.0293 | 26.0 | 1976 | 0.5842 | 0.8948 | | 0.0029 | 27.0 | 2052 | 0.6141 | 0.8932 | | 0.0198 | 28.0 | 2128 | 0.6046 | 0.8982 | | 0.0329 | 29.0 | 2204 | 0.6286 | 0.8948 | | 0.0036 | 30.0 | 2280 | 0.6053 | 0.8948 | | 0.0339 | 31.0 | 2356 | 0.6159 | 0.8881 | | 0.0211 | 32.0 | 2432 | 0.6253 | 0.8932 | | 0.0315 | 33.0 | 2508 | 0.6357 | 0.8915 | | 0.0135 | 34.0 | 2584 | 0.6365 | 0.8932 | | 0.0361 | 35.0 | 2660 | 0.6309 | 0.8965 | | 0.0313 | 36.0 | 2736 | 0.6365 | 0.8965 | | 0.0198 | 37.0 | 2812 | 0.6348 | 0.8965 | | 0.0132 | 38.0 | 2888 | 0.6243 | 0.8948 | | 0.0085 | 39.0 | 2964 | 0.6351 | 0.8948 | | 0.001 | 40.0 | 3040 | 0.6372 | 0.8948 | | 0.0149 | 41.0 | 3116 | 0.6607 | 0.8998 | | 0.0056 | 42.0 | 3192 | 0.6570 | 0.9065 | | 0.0011 | 43.0 | 3268 | 0.6635 | 0.8998 | | 0.003 | 44.0 | 3344 | 0.6527 | 0.8982 | | 0.041 | 45.0 | 3420 | 0.6537 | 0.8982 | | 0.0011 | 46.0 | 3496 | 0.6576 | 0.8982 | | 0.0196 | 47.0 | 3572 | 0.6599 | 0.8998 | | 0.0117 | 48.0 | 3648 | 0.6620 | 0.9032 | | 0.0018 | 49.0 | 3724 | 0.6617 | 0.9032 | | 0.0144 | 50.0 | 3800 | 0.6618 | 0.9032 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
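With `lr_scheduler_warmup_ratio: 0.1`, the linear scheduler spends the first 10% of all optimization steps warming up. For this run that is 76 steps per epoch times 50 epochs, so 380 warmup steps. A sketch of the underlying schedule call, with stand-in parameters since only the schedule shape matters here; note the card logs Adam settings even though the run is named adamax, so the optimizer class below is illustrative:

```python
import torch
from transformers import get_linear_schedule_with_warmup

num_training_steps = 76 * 50                      # 3800 steps, per the results table
num_warmup_steps = int(0.1 * num_training_steps)  # warmup_ratio 0.1 -> 380 steps

params = [torch.nn.Parameter(torch.zeros(1))]     # stand-in parameter for illustration
optimizer = torch.optim.Adamax(params, lr=1e-5)   # illustrative optimizer choice
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps)
```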
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_00001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7289 - Accuracy: 0.8852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4217 | 1.0 | 75 | 0.3814 | 0.8419 | | 0.2549 | 2.0 | 150 | 0.3222 | 0.8735 | | 0.2258 | 3.0 | 225 | 0.2946 | 0.8802 | | 0.1818 | 4.0 | 300 | 0.2874 | 0.8935 | | 0.1414 | 5.0 | 375 | 0.3063 | 0.8869 | | 0.1276 | 6.0 | 450 | 0.3088 | 0.8835 | | 0.1278 | 7.0 | 525 | 0.3231 | 0.8885 | | 0.0712 | 8.0 | 600 | 0.3560 | 0.8869 | | 0.0461 | 9.0 | 675 | 0.3613 | 0.8918 | | 0.0475 | 10.0 | 750 | 0.3784 | 0.8952 | | 0.0242 | 11.0 | 825 | 0.4079 | 0.8885 | | 0.0506 | 12.0 | 900 | 0.4429 | 0.8869 | | 0.0272 | 13.0 | 975 | 0.4714 | 0.8869 | | 0.0444 | 14.0 | 1050 | 0.5396 | 0.8802 | | 0.0206 | 15.0 | 1125 | 0.5526 | 0.8735 | | 0.0163 | 16.0 | 1200 | 0.5286 | 0.8852 | | 0.0204 | 17.0 | 1275 | 0.5940 | 0.8819 | | 0.0355 | 18.0 | 1350 | 0.5758 | 0.8769 | | 0.0442 | 19.0 | 1425 | 0.5804 | 0.8785 | | 0.0309 | 20.0 | 1500 | 0.5941 | 0.8819 | | 0.0106 | 21.0 | 1575 | 0.6105 | 0.8802 | | 0.0257 | 22.0 | 1650 | 0.6126 | 0.8835 | | 0.0159 | 23.0 | 1725 | 0.6156 | 0.8852 | | 0.0399 | 24.0 | 1800 | 0.6198 | 0.8785 | | 0.0047 | 25.0 | 1875 | 0.6196 | 0.8819 | | 0.0247 | 26.0 | 1950 | 0.6464 | 0.8835 | | 0.024 | 27.0 | 2025 | 0.6527 | 0.8869 | | 0.0438 | 28.0 | 2100 | 0.7050 | 0.8819 | | 0.0088 | 29.0 | 2175 | 0.6605 | 0.8902 | | 0.0182 | 30.0 | 2250 | 0.6570 | 0.8885 | | 0.0251 | 31.0 | 2325 | 0.6796 | 0.8819 | | 0.017 | 32.0 | 2400 | 0.6922 | 0.8852 | | 0.033 | 33.0 | 2475 | 0.7245 | 0.8719 | | 0.0216 | 34.0 | 2550 | 0.6972 | 0.8785 | | 0.0144 | 35.0 | 2625 | 0.7562 | 0.8819 | | 0.0161 | 36.0 | 2700 | 0.6986 | 0.8835 | | 0.0005 | 37.0 | 2775 | 0.6981 | 0.8819 | | 0.0053 | 38.0 | 2850 | 0.7088 | 0.8869 | | 0.0506 | 39.0 | 2925 | 0.7290 | 0.8869 | | 0.0237 | 40.0 | 3000 | 0.7146 | 0.8885 | | 0.0005 | 41.0 | 3075 | 0.7241 | 0.8802 | | 0.0171 | 42.0 | 3150 | 0.7294 | 0.8819 | | 0.0152 | 43.0 | 3225 | 0.7178 | 0.8869 | | 0.0007 | 44.0 | 3300 | 0.7168 | 0.8819 | | 0.0066 | 45.0 | 3375 | 0.7243 | 0.8819 | | 0.0023 | 46.0 | 3450 | 0.7324 | 0.8835 | | 0.053 | 47.0 | 3525 | 0.7341 | 0.8852 | | 0.0015 | 48.0 | 3600 | 0.7298 | 0.8852 | | 0.0034 | 49.0 | 3675 | 0.7290 | 0.8852 | | 0.0068 | 50.0 | 3750 | 0.7289 | 0.8852 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_00001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5920 - Accuracy: 0.91 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4085 | 1.0 | 75 | 0.3406 | 0.8733 | | 0.3125 | 2.0 | 150 | 0.2766 | 0.905 | | 0.272 | 3.0 | 225 | 0.2526 | 0.9117 | | 0.2066 | 4.0 | 300 | 0.2426 | 0.9167 | | 0.1315 | 5.0 | 375 | 0.2415 | 0.9233 | | 0.1338 | 6.0 | 450 | 0.2667 | 0.9133 | | 0.095 | 7.0 | 525 | 0.2679 | 0.9183 | | 0.1144 | 8.0 | 600 | 0.2699 | 0.9267 | | 0.038 | 9.0 | 675 | 0.2963 | 0.9183 | | 0.0367 | 10.0 | 750 | 0.3153 | 0.925 | | 0.0325 | 11.0 | 825 | 0.3378 | 0.92 | | 0.0172 | 12.0 | 900 | 0.3441 | 0.9183 | | 0.0285 | 13.0 | 975 | 0.3703 | 0.9217 | | 0.0132 | 14.0 | 1050 | 0.3979 | 0.9117 | | 0.0356 | 15.0 | 1125 | 0.3938 | 0.9167 | | 0.0285 | 16.0 | 1200 | 0.4361 | 0.9117 | | 0.0435 | 17.0 | 1275 | 0.4564 | 0.905 | | 0.0412 | 18.0 | 1350 | 0.4606 | 0.905 | | 0.0106 | 19.0 | 1425 | 0.4449 | 0.9133 | | 0.0192 | 20.0 | 1500 | 0.4442 | 0.9167 | | 0.0051 | 21.0 | 1575 | 0.4723 | 0.9117 | | 0.0266 | 22.0 | 1650 | 0.5052 | 0.9117 | | 0.0217 | 23.0 | 1725 | 0.4785 | 0.915 | | 0.0019 | 24.0 | 1800 | 0.5058 | 0.9117 | | 0.0069 | 25.0 | 1875 | 0.5124 | 0.91 | | 0.0008 | 26.0 | 1950 | 0.5249 | 0.9117 | | 0.0081 | 27.0 | 2025 | 0.5029 | 0.91 | | 0.0213 | 28.0 | 2100 | 0.4919 | 0.9167 | | 0.0025 | 29.0 | 2175 | 0.5055 | 0.9167 | | 0.0366 | 30.0 | 2250 | 0.5226 | 0.9117 | | 0.0192 | 31.0 | 2325 | 0.5652 | 0.91 | | 0.0012 | 32.0 | 2400 | 0.5128 | 0.92 | | 0.0191 | 33.0 | 2475 | 0.5580 | 0.9117 | | 0.0168 | 34.0 | 2550 | 0.5615 | 0.905 | | 0.0045 | 35.0 | 2625 | 0.5647 | 0.9133 | | 0.0069 | 36.0 | 2700 | 0.5389 | 0.91 | | 0.021 | 37.0 | 2775 | 0.5519 | 0.9133 | | 0.0264 | 38.0 | 2850 | 0.5472 | 0.9117 | | 0.0403 | 39.0 | 2925 | 0.5693 | 0.91 | | 0.001 | 40.0 | 3000 | 0.5532 | 0.91 | | 0.0004 | 41.0 | 3075 | 0.5673 | 0.9117 | | 0.0344 | 42.0 | 3150 | 0.5624 | 0.9067 | | 0.0221 | 43.0 | 3225 | 0.5673 | 0.91 | | 0.0004 | 44.0 | 3300 | 0.5783 | 0.91 | | 0.0156 | 45.0 | 3375 | 0.5833 | 0.9083 | | 0.021 | 46.0 | 3450 | 0.5741 | 0.9117 | | 0.0145 | 47.0 | 3525 | 0.5806 | 0.91 | | 0.0049 | 48.0 | 3600 | 0.5891 | 0.91 | | 0.0162 | 49.0 | 3675 | 0.5932 | 0.9083 | | 0.0336 | 50.0 | 3750 | 0.5920 | 0.91 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_adamax_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_00001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9690 - Accuracy: 0.8733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3842 | 1.0 | 75 | 0.3836 | 0.8567 | | 0.2698 | 2.0 | 150 | 0.3565 | 0.8717 | | 0.2112 | 3.0 | 225 | 0.3725 | 0.8667 | | 0.1563 | 4.0 | 300 | 0.3983 | 0.8683 | | 0.0925 | 5.0 | 375 | 0.3901 | 0.875 | | 0.1014 | 6.0 | 450 | 0.4180 | 0.8817 | | 0.0818 | 7.0 | 525 | 0.4236 | 0.8733 | | 0.0472 | 8.0 | 600 | 0.4670 | 0.87 | | 0.0417 | 9.0 | 675 | 0.5177 | 0.8767 | | 0.0198 | 10.0 | 750 | 0.5528 | 0.8683 | | 0.0232 | 11.0 | 825 | 0.5777 | 0.875 | | 0.0159 | 12.0 | 900 | 0.6214 | 0.8683 | | 0.0174 | 13.0 | 975 | 0.6477 | 0.87 | | 0.0205 | 14.0 | 1050 | 0.7117 | 0.8633 | | 0.0429 | 15.0 | 1125 | 0.7038 | 0.875 | | 0.0098 | 16.0 | 1200 | 0.7398 | 0.8733 | | 0.0056 | 17.0 | 1275 | 0.7568 | 0.8717 | | 0.016 | 18.0 | 1350 | 0.7774 | 0.8733 | | 0.0366 | 19.0 | 1425 | 0.7871 | 0.8783 | | 0.0462 | 20.0 | 1500 | 0.7545 | 0.8867 | | 0.0036 | 21.0 | 1575 | 0.8298 | 0.8767 | | 0.013 | 22.0 | 1650 | 0.8793 | 0.875 | | 0.0139 | 23.0 | 1725 | 0.8645 | 0.88 | | 0.0044 | 24.0 | 1800 | 0.8813 | 0.8717 | | 0.0148 | 25.0 | 1875 | 0.8534 | 0.8767 | | 0.0146 | 26.0 | 1950 | 0.8817 | 0.8767 | | 0.0054 | 27.0 | 2025 | 0.9081 | 0.87 | | 0.0007 | 28.0 | 2100 | 0.8989 | 0.8767 | | 0.0046 | 29.0 | 2175 | 0.8951 | 0.88 | | 0.0234 | 30.0 | 2250 | 0.9014 | 0.8717 | | 0.0106 | 31.0 | 2325 | 0.9119 | 0.8667 | | 0.0085 | 32.0 | 2400 | 0.9313 | 0.8717 | | 0.0036 | 33.0 | 2475 | 0.9195 | 0.8733 | | 0.001 | 34.0 | 2550 | 0.9166 | 0.8717 | | 0.0098 | 35.0 | 2625 | 0.9378 | 0.87 | | 0.0089 | 36.0 | 2700 | 0.9278 | 0.8717 | | 0.0099 | 37.0 | 2775 | 0.9534 | 0.8717 | | 0.0248 | 38.0 | 2850 | 0.9419 | 0.8783 | | 0.0327 | 39.0 | 2925 | 0.9391 | 0.8733 | | 0.0223 | 40.0 | 3000 | 0.9364 | 0.875 | | 0.0147 | 41.0 | 3075 | 0.9305 | 0.8767 | | 0.0288 | 42.0 | 3150 | 0.9572 | 0.8783 | | 0.0191 | 43.0 | 3225 | 0.9619 | 0.875 | | 0.0008 | 44.0 | 3300 | 0.9576 | 0.875 | | 0.0019 | 45.0 | 3375 | 0.9660 | 0.8733 | | 0.0022 | 46.0 | 3450 | 0.9692 | 0.875 | | 0.0015 | 47.0 | 3525 | 0.9668 | 0.875 | | 0.0054 | 48.0 | 3600 | 0.9744 | 0.8733 | | 0.0016 | 49.0 | 3675 | 0.9694 | 0.8733 | | 0.0003 | 50.0 | 3750 | 0.9690 | 0.8733 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
JorgeGIT/finetuned-Leukemia-cell
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-Leukemia-cell This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1128 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 300 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | 0.3182 | 2.94 | 100 | 0.2301 | 0.9211 | | 0.2223 | 5.88 | 200 | 0.3411 | 0.8910 | | 0.1695 | 8.82 | 300 | 0.1168 | 0.9624 | | 0.0579 | 11.76 | 400 | 0.1632 | 0.9511 | | 0.1184 | 14.71 | 500 | 0.4665 | 0.8346 | | 0.0575 | 17.65 | 600 | 0.1563 | 0.9586 | | 0.1087 | 20.59 | 700 | 0.2023 | 0.9511 | | 0.1164 | 23.53 | 800 | 0.2283 | 0.9398 | | 0.1144 | 26.47 | 900 | 0.1130 | 0.9624 | | 0.1821 | 29.41 | 1000 | 0.1155 | 0.9737 | | 0.0882 | 32.35 | 1100 | 0.0760 | 0.9850 | | 0.1099 | 35.29 | 1200 | 0.0894 | 0.9737 | | 0.053 | 38.24 | 1300 | 0.1248 | 0.9699 | | 0.0489 | 41.18 | 1400 | 0.1081 | 0.9774 | | 0.065 | 44.12 | 1500 | 0.1694 | 0.9549 | | 0.037 | 47.06 | 1600 | 0.1060 | 0.9699 | | 0.0281 | 50.0 | 1700 | 0.0892 | 0.9737 | | 0.0394 | 52.94 | 1800 | 0.1680 | 0.9624 | | 0.0828 | 55.88 | 1900 | 0.1404 | 0.9774 | | 0.0663 | 58.82 | 2000 | 0.1683 | 0.9662 | | 0.0698 | 61.76 | 2100 | 0.1517 | 0.9624 | | 0.0938 | 64.71 | 2200 | 0.1031 | 0.9737 | | 0.0324 | 67.65 | 2300 | 0.1251 | 0.9812 | | 0.0713 | 70.59 | 2400 | 0.1597 | 0.9662 | | 0.059 | 73.53 | 2500 | 0.1455 | 0.9699 | | 0.0404 | 76.47 | 2600 | 0.0924 | 0.9624 | | 0.0526 | 79.41 | 2700 | 0.0853 | 0.9812 | | 0.0439 | 82.35 | 2800 | 0.0815 | 0.9850 | | 0.0485 | 85.29 | 2900 | 0.1192 | 0.9774 | | 0.0498 | 88.24 | 3000 | 0.0958 | 0.9737 | | 0.0181 | 91.18 | 3100 | 0.1351 | 0.9699 | | 0.0226 | 94.12 | 3200 | 0.1458 | 0.9774 | | 0.1115 | 97.06 | 3300 | 0.1453 | 0.9737 | | 0.0349 | 100.0 | 3400 | 0.1257 | 0.9812 | | 0.0246 | 102.94 | 3500 | 0.1405 | 0.9662 | | 0.0084 | 105.88 | 3600 | 0.0666 | 0.9887 | | 0.0174 | 108.82 | 3700 | 0.1419 | 0.9662 | | 0.0432 | 111.76 | 3800 | 0.2027 | 0.9662 | | 0.0164 | 114.71 | 3900 | 0.0671 | 0.9812 | | 0.0223 | 117.65 | 4000 | 0.0722 | 0.9850 | | 0.012 | 120.59 | 4100 | 0.1285 | 0.9699 | | 0.0143 | 123.53 | 4200 | 0.1102 | 0.9812 | | 0.0254 | 126.47 | 4300 | 0.1139 | 0.9812 | | 0.018 | 129.41 | 4400 | 0.1056 | 0.9737 | | 0.0011 | 132.35 | 4500 | 0.1097 | 0.9774 | | 0.08 | 135.29 | 4600 | 0.1425 | 0.9662 | | 0.0292 | 138.24 | 4700 | 0.0871 | 0.9812 | | 0.0248 | 141.18 | 4800 | 0.1082 | 0.9699 | | 0.0064 | 144.12 | 4900 | 0.0644 | 0.9850 | | 0.0115 | 147.06 | 5000 | 0.0912 | 0.9812 | | 0.052 | 150.0 | 5100 | 0.0927 | 0.9850 | | 0.0103 | 152.94 | 5200 | 0.1129 | 0.9774 | | 0.0185 | 155.88 | 5300 | 0.1250 | 0.9699 | | 0.0185 | 158.82 | 5400 | 0.1226 | 0.9737 | | 0.0002 | 161.76 | 5500 | 0.1146 | 0.9812 | | 0.0249 | 164.71 | 5600 | 0.1945 | 0.9737 | | 0.0165 | 167.65 | 5700 | 0.1875 | 0.9586 | | 0.0028 | 170.59 | 5800 | 0.1045 | 0.9774 | | 0.0044 | 173.53 | 5900 | 0.1279 | 0.9774 | | 0.0078 | 176.47 | 6000 | 0.0967 | 0.9774 | | 0.0093 | 179.41 | 6100 | 0.1450 | 0.9812 | | 0.0261 | 182.35 | 6200 | 0.0815 | 0.9850 | | 0.0218 | 185.29 | 6300 | 0.1586 | 0.9699 | | 0.1184 | 188.24 | 6400 | 0.1481 | 0.9812 | | 0.0011 | 191.18 | 6500 | 0.1698 | 0.9737 | | 0.0131 | 194.12 | 6600 | 0.2247 | 0.9662 | | 0.0156 | 197.06 | 6700 | 0.1205 | 0.9812 | | 0.007 | 200.0 | 6800 | 0.1864 | 0.9699 | | 0.015 | 202.94 | 6900 | 0.1684 | 0.9774 | | 0.0032 | 205.88 | 7000 | 0.0835 | 0.9850 | | 0.0017 | 208.82 | 7100 | 0.1174 | 0.9812 | | 0.0397 | 211.76 | 7200 | 0.1926 | 0.9662 | | 0.0015 | 214.71 | 7300 | 0.1646 | 0.9699 | | 0.0046 | 217.65 | 7400 | 0.1520 | 0.9774 | | 0.0193 | 220.59 | 7500 | 0.1436 | 0.9812 | | 0.0474 | 223.53 | 7600 | 0.1747 | 0.9737 | | 0.001 | 226.47 | 7700 | 0.1647 | 0.9812 | | 0.0005 | 229.41 | 7800 | 0.1992 | 0.9699 | | 0.0119 | 232.35 | 7900 | 0.1545 | 0.9699 | | 0.0153 | 235.29 | 8000 | 0.2018 | 0.9662 | | 0.0106 | 238.24 | 8100 | 0.1798 | 0.9774 | | 0.0012 | 241.18 | 8200 | 0.1896 | 0.9774 | | 0.0 | 244.12 | 8300 | 0.1500 | 0.9812 | | 0.0339 | 247.06 | 8400 | 0.1890 | 0.9662 | | 0.0016 | 250.0 | 8500 | 0.1410 | 0.9812 | | 0.0003 | 252.94 | 8600 | 0.1341 | 0.9812 | | 0.001 | 255.88 | 8700 | 0.1209 | 0.9850 | | 0.0071 | 258.82 | 8800 | 0.1191 | 0.9812 | | 0.0 | 261.76 | 8900 | 0.0960 | 0.9887 | | 0.0016 | 264.71 | 9000 | 0.1063 | 0.9850 | | 0.0048 | 267.65 | 9100 | 0.1583 | 0.9737 | | 0.0026 | 270.59 | 9200 | 0.1473 | 0.9774 | | 0.0006 | 273.53 | 9300 | 0.1325 | 0.9812 | | 0.0226 | 276.47 | 9400 | 0.1214 | 0.9812 | | 0.0075 | 279.41 | 9500 | 0.1399 | 0.9812 | | 0.0047 | 282.35 | 9600 | 0.1291 | 0.9850 | | 0.0 | 285.29 | 9700 | 0.1117 | 0.9812 | | 0.0001 | 288.24 | 9800 | 0.1137 | 0.9850 | | 0.0001 | 291.18 | 9900 | 0.1117 | 0.9850 | | 0.0 | 294.12 | 10000 | 0.1061 | 0.9850 | | 0.0 | 297.06 | 10100 | 0.1129 | 0.9850 | | 0.0057 | 300.0 | 10200 | 0.1128 | 0.9850 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
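The seven cell-type labels listed below live in the model config as `id2label`/`label2id`. A sketch of wiring them explicitly when swapping the classification head of the base checkpoint; the label order here is taken from the list that follows and is assumed, not confirmed, to match the trained config:

```python
from transformers import AutoModelForImageClassification

labels = ["lla", "folicular", "linfos normales", "llc", "marginal", "mononucleosis", "trico"]
id2label = {i: label for i, label in enumerate(labels)}
label2id = {label: i for i, label in enumerate(labels)}

model = AutoModelForImageClassification.from_pretrained(
    "facebook/convnext-tiny-224",
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,  # replaces the 1000-class ImageNet head with a 7-class one
)
```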
[ "lla", "folicular", "linfos normales", "llc", "marginal", "mononucleosis", "trico" ]
hkivancoral/smids_1x_beit_base_adamax_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_adamax_00001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6882 - Accuracy: 0.89 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3992 | 1.0 | 75 | 0.3544 | 0.845 | | 0.2938 | 2.0 | 150 | 0.2944 | 0.88 | | 0.2043 | 3.0 | 225 | 0.2889 | 0.8733 | | 0.1457 | 4.0 | 300 | 0.2668 | 0.8917 | | 0.1371 | 5.0 | 375 | 0.2691 | 0.8833 | | 0.1186 | 6.0 | 450 | 0.2876 | 0.8733 | | 0.0675 | 7.0 | 525 | 0.2905 | 0.895 | | 0.0675 | 8.0 | 600 | 0.3070 | 0.8983 | | 0.0951 | 9.0 | 675 | 0.3449 | 0.8917 | | 0.0427 | 10.0 | 750 | 0.3642 | 0.885 | | 0.0217 | 11.0 | 825 | 0.3880 | 0.8817 | | 0.0513 | 12.0 | 900 | 0.3991 | 0.9 | | 0.0247 | 13.0 | 975 | 0.4163 | 0.8983 | | 0.018 | 14.0 | 1050 | 0.4538 | 0.8883 | | 0.0291 | 15.0 | 1125 | 0.4599 | 0.8917 | | 0.0096 | 16.0 | 1200 | 0.5126 | 0.89 | | 0.0106 | 17.0 | 1275 | 0.5125 | 0.8867 | | 0.0447 | 18.0 | 1350 | 0.5410 | 0.8883 | | 0.016 | 19.0 | 1425 | 0.5359 | 0.8883 | | 0.0033 | 20.0 | 1500 | 0.5522 | 0.8867 | | 0.0086 | 21.0 | 1575 | 0.5579 | 0.8883 | | 0.0299 | 22.0 | 1650 | 0.5864 | 0.8833 | | 0.0058 | 23.0 | 1725 | 0.5904 | 0.8867 | | 0.0156 | 24.0 | 1800 | 0.6102 | 0.89 | | 0.0161 | 25.0 | 1875 | 0.6210 | 0.8883 | | 0.0066 | 26.0 | 1950 | 0.6149 | 0.8883 | | 0.0424 | 27.0 | 2025 | 0.6199 | 0.8867 | | 0.011 | 28.0 | 2100 | 0.6388 | 0.8867 | | 0.0021 | 29.0 | 2175 | 0.6358 | 0.8917 | | 0.0014 | 30.0 | 2250 | 0.6319 | 0.8883 | | 0.0203 | 31.0 | 2325 | 0.6459 | 0.89 | | 0.0221 | 32.0 | 2400 | 0.6739 | 0.8883 | | 0.0066 | 33.0 | 2475 | 0.6562 | 0.89 | | 0.0119 | 34.0 | 2550 | 0.6704 | 0.885 | | 0.0088 | 35.0 | 2625 | 0.6526 | 0.89 | | 0.0115 | 36.0 | 2700 | 0.6534 | 0.8867 | | 0.0355 | 37.0 | 2775 | 0.6663 | 0.8883 | | 0.0376 | 38.0 | 2850 | 0.6538 | 0.89 | | 0.0299 | 39.0 | 2925 | 0.6757 | 0.8867 | | 0.0019 | 40.0 | 3000 | 0.6764 | 0.8883 | | 0.0235 | 41.0 | 3075 | 0.6776 | 0.89 | | 0.0081 | 42.0 | 3150 | 0.6798 | 0.8883 | | 0.0053 | 43.0 | 3225 | 0.6758 | 0.8883 | | 0.0234 | 44.0 | 3300 | 0.6788 | 0.8933 | | 0.0053 | 45.0 | 3375 | 0.6853 | 0.8883 | | 0.0121 | 46.0 | 3450 | 0.6875 | 0.8867 | | 0.001 | 47.0 | 3525 | 0.6878 | 0.8883 | | 0.0104 | 48.0 | 3600 | 0.6872 | 0.89 | | 0.0042 | 49.0 | 3675 | 0.6870 | 0.8883 | | 0.0115 | 50.0 | 3750 | 0.6882 | 0.89 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
barghavani/Cheese_xray
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Cheese_xray This model is a fine-tuned version of [barghavani/Cheese_xray](https://huggingface.co/barghavani/Cheese_xray) on the chest-xray-classification dataset. It achieves the following results on the evaluation set: - Loss: 0.2827 - Accuracy: 0.8883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3993 | 0.99 | 63 | 0.4364 | 0.7165 | | 0.3454 | 1.99 | 127 | 0.3947 | 0.7680 | | 0.3327 | 3.0 | 191 | 0.3582 | 0.8591 | | 0.3329 | 4.0 | 255 | 0.3371 | 0.8746 | | 0.2992 | 4.99 | 318 | 0.3449 | 0.8643 | | 0.3289 | 5.99 | 382 | 0.3172 | 0.8832 | | 0.3309 | 7.0 | 446 | 0.2956 | 0.8935 | | 0.2875 | 8.0 | 510 | 0.2911 | 0.8883 | | 0.2764 | 8.99 | 573 | 0.2884 | 0.9124 | | 0.265 | 9.88 | 630 | 0.2827 | 0.8883 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
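Beyond the pipeline, inference can be done manually to inspect per-class probabilities. A sketch using the card's own repo id, assuming the repo ships a preprocessor config as image checkpoints normally do; the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("barghavani/Cheese_xray")
model = AutoModelForImageClassification.from_pretrained("barghavani/Cheese_xray")

image = Image.open("chest_xray.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```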
[ "normal", "pneumonia" ]
canadianjosieharrison/swinv2-large-patch4-window12-192-22k-baseline
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-large-patch4-window12-192-22k-baseline This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3489 - Accuracy: 0.8765 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 18 - eval_batch_size: 18 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 36 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1721 | 1.0 | 20 | 0.8152 | 0.7407 | | 0.5878 | 2.0 | 40 | 0.4285 | 0.8395 | | 0.5201 | 3.0 | 60 | 0.5102 | 0.8148 | | 0.3366 | 4.0 | 80 | 0.3463 | 0.8519 | | 0.2792 | 5.0 | 100 | 0.4444 | 0.8272 | | 0.2807 | 6.0 | 120 | 0.3282 | 0.8765 | | 0.1978 | 7.0 | 140 | 0.3047 | 0.8642 | | 0.2262 | 8.0 | 160 | 0.4534 | 0.8765 | | 0.176 | 9.0 | 180 | 0.3605 | 0.8148 | | 0.17 | 10.0 | 200 | 0.4222 | 0.8642 | | 0.1445 | 11.0 | 220 | 0.3569 | 0.9012 | | 0.128 | 12.0 | 240 | 0.4649 | 0.8642 | | 0.1316 | 13.0 | 260 | 0.3848 | 0.8765 | | 0.1772 | 14.0 | 280 | 0.4242 | 0.8395 | | 0.1087 | 15.0 | 300 | 0.3756 | 0.8889 | | 0.0858 | 16.0 | 320 | 0.4190 | 0.8519 | | 0.1136 | 17.0 | 340 | 0.4902 | 0.8765 | | 0.0425 | 18.0 | 360 | 0.3041 | 0.9012 | | 0.07 | 19.0 | 380 | 0.3456 | 0.8889 | | 0.0595 | 20.0 | 400 | 0.3489 | 0.8765 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.1+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
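Accuracy numbers like those in the table are typically produced by a `compute_metrics` callback handed to the `Trainer`. A minimal sketch using the `evaluate` library; this card does not record the exact callback used, so this is the conventional form rather than a quote of the training script:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # highest-scoring class per example
    return accuracy.compute(predictions=predictions, references=labels)
```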
[ "brick", "metal", "null", "other", "rustication", "siding", "stucco", "wood" ]
hkivancoral/smids_1x_beit_base_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_00001_fold1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7081 - Accuracy: 0.8965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3415 | 1.0 | 76 | 0.3600 | 0.8531 | | 0.1821 | 2.0 | 152 | 0.2813 | 0.8865 | | 0.1106 | 3.0 | 228 | 0.2915 | 0.8965 | | 0.0837 | 4.0 | 304 | 0.4355 | 0.8748 | | 0.0461 | 5.0 | 380 | 0.3524 | 0.8831 | | 0.0314 | 6.0 | 456 | 0.3471 | 0.9065 | | 0.052 | 7.0 | 532 | 0.3906 | 0.9032 | | 0.0094 | 8.0 | 608 | 0.4902 | 0.8998 | | 0.0397 | 9.0 | 684 | 0.5074 | 0.8848 | | 0.0068 | 10.0 | 760 | 0.5396 | 0.8965 | | 0.0009 | 11.0 | 836 | 0.4910 | 0.9032 | | 0.0007 | 12.0 | 912 | 0.5441 | 0.8982 | | 0.0176 | 13.0 | 988 | 0.5729 | 0.8965 | | 0.008 | 14.0 | 1064 | 0.5831 | 0.8965 | | 0.0023 | 15.0 | 1140 | 0.6581 | 0.8982 | | 0.0112 | 16.0 | 1216 | 0.6373 | 0.9048 | | 0.0122 | 17.0 | 1292 | 0.6091 | 0.8982 | | 0.0218 | 18.0 | 1368 | 0.7005 | 0.8965 | | 0.0052 | 19.0 | 1444 | 0.6533 | 0.8998 | | 0.0143 | 20.0 | 1520 | 0.5987 | 0.9048 | | 0.0047 | 21.0 | 1596 | 0.6407 | 0.8982 | | 0.005 | 22.0 | 1672 | 0.7577 | 0.8898 | | 0.0133 | 23.0 | 1748 | 0.7568 | 0.8848 | | 0.0064 | 24.0 | 1824 | 0.6963 | 0.8915 | | 0.0056 | 25.0 | 1900 | 0.6832 | 0.8982 | | 0.0033 | 26.0 | 1976 | 0.6578 | 0.8982 | | 0.0048 | 27.0 | 2052 | 0.6821 | 0.9032 | | 0.0003 | 28.0 | 2128 | 0.6751 | 0.8998 | | 0.0002 | 29.0 | 2204 | 0.6826 | 0.8998 | | 0.0054 | 30.0 | 2280 | 0.7208 | 0.8965 | | 0.0234 | 31.0 | 2356 | 0.7169 | 0.8915 | | 0.0066 | 32.0 | 2432 | 0.7161 | 0.8982 | | 0.0078 | 33.0 | 2508 | 0.6895 | 0.8982 | | 0.004 | 34.0 | 2584 | 0.7616 | 0.8982 | | 0.0117 | 35.0 | 2660 | 0.7211 | 0.9032 | | 0.0 | 36.0 | 2736 | 0.6772 | 0.8982 | | 0.0027 | 37.0 | 2812 | 0.6751 | 0.8998 | | 0.0023 | 38.0 | 2888 | 0.7465 | 0.9082 | | 0.0025 | 39.0 | 2964 | 0.6434 | 0.9132 | | 0.0043 | 40.0 | 3040 | 0.6803 | 0.9032 | | 0.005 | 41.0 | 3116 | 0.6970 | 0.8982 | | 0.0 | 42.0 | 3192 | 0.6953 | 0.8998 | | 0.0002 | 43.0 | 3268 | 0.6864 | 0.8982 | | 0.0001 | 44.0 | 3344 | 0.6955 | 0.9015 | | 0.0058 | 45.0 | 3420 | 0.7259 | 0.8948 | | 0.0 | 46.0 | 3496 | 0.7126 | 0.9032 | | 0.0044 | 47.0 | 3572 | 0.7081 | 0.8965 | | 0.0032 | 48.0 | 3648 | 0.7104 | 0.8965 | | 0.0023 | 49.0 | 3724 | 0.7077 | 0.8965 | | 0.0057 | 50.0 | 3800 | 0.7081 | 0.8965 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
Natalia2314/vit-base-catsVSdogs-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-catsVSdogs-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.0523 - Accuracy: 0.98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0337 | 2.0 | 100 | 0.0523 | 0.98 | | 0.0038 | 4.0 | 200 | 0.0591 | 0.985 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "cat", "dog" ]
Camilosan/Modelo-catsVSdogs
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Modelo-catsVSdogs This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.0129 - Accuracy: 0.995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0333 | 2.0 | 100 | 0.0633 | 0.985 | | 0.0039 | 4.0 | 200 | 0.0129 | 0.995 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "cat", "dog" ]
hkivancoral/smids_1x_beit_base_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_00001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7810 - Accuracy: 0.8885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3067 | 1.0 | 75 | 0.2670 | 0.9018 | | 0.1737 | 2.0 | 150 | 0.2937 | 0.8918 | | 0.1236 | 3.0 | 225 | 0.2592 | 0.8968 | | 0.093 | 4.0 | 300 | 0.2806 | 0.9085 | | 0.0342 | 5.0 | 375 | 0.3377 | 0.9035 | | 0.0514 | 6.0 | 450 | 0.4662 | 0.8769 | | 0.0285 | 7.0 | 525 | 0.4751 | 0.8902 | | 0.0245 | 8.0 | 600 | 0.4931 | 0.8968 | | 0.0336 | 9.0 | 675 | 0.4686 | 0.9035 | | 0.0057 | 10.0 | 750 | 0.6619 | 0.8852 | | 0.0005 | 11.0 | 825 | 0.5601 | 0.9018 | | 0.045 | 12.0 | 900 | 0.6300 | 0.8869 | | 0.0006 | 13.0 | 975 | 0.6005 | 0.8968 | | 0.0104 | 14.0 | 1050 | 0.6903 | 0.8769 | | 0.0013 | 15.0 | 1125 | 0.6574 | 0.8968 | | 0.0022 | 16.0 | 1200 | 0.6330 | 0.8952 | | 0.0138 | 17.0 | 1275 | 0.6340 | 0.9018 | | 0.0109 | 18.0 | 1350 | 0.7199 | 0.8902 | | 0.007 | 19.0 | 1425 | 0.7166 | 0.8968 | | 0.009 | 20.0 | 1500 | 0.8141 | 0.8802 | | 0.0077 | 21.0 | 1575 | 0.8216 | 0.8935 | | 0.0199 | 22.0 | 1650 | 0.8347 | 0.8802 | | 0.0056 | 23.0 | 1725 | 0.7454 | 0.8869 | | 0.0076 | 24.0 | 1800 | 0.6539 | 0.8968 | | 0.0001 | 25.0 | 1875 | 0.7625 | 0.8819 | | 0.0105 | 26.0 | 1950 | 0.7771 | 0.8918 | | 0.0191 | 27.0 | 2025 | 0.7871 | 0.8902 | | 0.0146 | 28.0 | 2100 | 0.7395 | 0.8968 | | 0.0063 | 29.0 | 2175 | 0.7297 | 0.8968 | | 0.0064 | 30.0 | 2250 | 0.6880 | 0.8952 | | 0.0044 | 31.0 | 2325 | 0.7924 | 0.8918 | | 0.0115 | 32.0 | 2400 | 0.7709 | 0.8918 | | 0.0213 | 33.0 | 2475 | 0.6964 | 0.8918 | | 0.0034 | 34.0 | 2550 | 0.7612 | 0.8918 | | 0.0052 | 35.0 | 2625 | 0.7685 | 0.8968 | | 0.0043 | 36.0 | 2700 | 0.8478 | 0.8869 | | 0.0001 | 37.0 | 2775 | 0.7661 | 0.8902 | | 0.0031 | 38.0 | 2850 | 0.7393 | 0.8968 | | 0.0464 | 39.0 | 2925 | 0.7536 | 0.8918 | | 0.0026 | 40.0 | 3000 | 0.7768 | 0.8852 | | 0.0001 | 41.0 | 3075 | 0.8423 | 0.8835 | | 0.0041 | 42.0 | 3150 | 0.7762 | 0.8918 | | 0.0072 | 43.0 | 3225 | 0.7847 | 0.8902 | | 0.0 | 44.0 | 3300 | 0.7699 | 0.8835 | | 0.0049 | 45.0 | 3375 | 0.7675 | 0.8852 | | 0.0001 | 46.0 | 3450 | 0.7767 | 0.8885 | | 0.0029 | 47.0 | 3525 | 0.7697 | 0.8885 | | 0.0 | 48.0 | 3600 | 0.7734 | 0.8885 | | 0.0 | 49.0 | 3675 | 0.7813 | 0.8885 | | 0.0231 | 50.0 | 3750 | 0.7810 | 0.8885 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_rms_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_00001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6179 - Accuracy: 0.92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.378 | 1.0 | 75 | 0.2655 | 0.905 | | 0.1995 | 2.0 | 150 | 0.2472 | 0.9067 | | 0.1269 | 3.0 | 225 | 0.2601 | 0.9133 | | 0.0982 | 4.0 | 300 | 0.2718 | 0.9183 | | 0.0334 | 5.0 | 375 | 0.3064 | 0.9183 | | 0.0325 | 6.0 | 450 | 0.3593 | 0.9017 | | 0.0122 | 7.0 | 525 | 0.4158 | 0.9133 | | 0.0276 | 8.0 | 600 | 0.3999 | 0.915 | | 0.0023 | 9.0 | 675 | 0.4376 | 0.91 | | 0.0029 | 10.0 | 750 | 0.4955 | 0.91 | | 0.0282 | 11.0 | 825 | 0.4886 | 0.9133 | | 0.0074 | 12.0 | 900 | 0.4903 | 0.9083 | | 0.0119 | 13.0 | 975 | 0.4968 | 0.9183 | | 0.0151 | 14.0 | 1050 | 0.4966 | 0.9067 | | 0.0139 | 15.0 | 1125 | 0.4573 | 0.9267 | | 0.0049 | 16.0 | 1200 | 0.4797 | 0.9267 | | 0.0357 | 17.0 | 1275 | 0.4808 | 0.9317 | | 0.0195 | 18.0 | 1350 | 0.5297 | 0.9133 | | 0.0164 | 19.0 | 1425 | 0.5446 | 0.9233 | | 0.0136 | 20.0 | 1500 | 0.5630 | 0.915 | | 0.0002 | 21.0 | 1575 | 0.6196 | 0.9083 | | 0.0053 | 22.0 | 1650 | 0.5529 | 0.915 | | 0.002 | 23.0 | 1725 | 0.5621 | 0.9183 | | 0.0001 | 24.0 | 1800 | 0.5333 | 0.9233 | | 0.0008 | 25.0 | 1875 | 0.5371 | 0.9217 | | 0.0014 | 26.0 | 1950 | 0.5172 | 0.93 | | 0.0001 | 27.0 | 2025 | 0.5437 | 0.9233 | | 0.0001 | 28.0 | 2100 | 0.5344 | 0.9283 | | 0.0001 | 29.0 | 2175 | 0.5536 | 0.9183 | | 0.0075 | 30.0 | 2250 | 0.6086 | 0.9083 | | 0.0046 | 31.0 | 2325 | 0.5570 | 0.9133 | | 0.0077 | 32.0 | 2400 | 0.6038 | 0.915 | | 0.0016 | 33.0 | 2475 | 0.6324 | 0.9133 | | 0.0004 | 34.0 | 2550 | 0.5847 | 0.9217 | | 0.0039 | 35.0 | 2625 | 0.6482 | 0.9183 | | 0.0029 | 36.0 | 2700 | 0.6146 | 0.9267 | | 0.0076 | 37.0 | 2775 | 0.5750 | 0.9217 | | 0.0017 | 38.0 | 2850 | 0.5846 | 0.9233 | | 0.0 | 39.0 | 2925 | 0.5952 | 0.9233 | | 0.0018 | 40.0 | 3000 | 0.6016 | 0.9217 | | 0.0 | 41.0 | 3075 | 0.6081 | 0.9267 | | 0.0026 | 42.0 | 3150 | 0.6036 | 0.9233 | | 0.0001 | 43.0 | 3225 | 0.6419 | 0.915 | | 0.0 | 44.0 | 3300 | 0.6346 | 0.915 | | 0.0 | 45.0 | 3375 | 0.6400 | 0.915 | | 0.0001 | 46.0 | 3450 | 0.6220 | 0.9233 | | 0.0039 | 47.0 | 3525 | 0.6179 | 0.9233 | | 0.0001 | 48.0 | 3600 | 0.6159 | 0.9183 | | 0.0 | 49.0 | 3675 | 0.6170 | 0.92 | | 0.0 | 50.0 | 3750 | 0.6179 | 0.92 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_rms_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_00001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2766 - Accuracy: 0.8683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.295 | 1.0 | 75 | 0.3791 | 0.8483 | | 0.2062 | 2.0 | 150 | 0.3581 | 0.8683 | | 0.101 | 3.0 | 225 | 0.4288 | 0.8783 | | 0.0893 | 4.0 | 300 | 0.4864 | 0.8633 | | 0.005 | 5.0 | 375 | 0.6074 | 0.8617 | | 0.0333 | 6.0 | 450 | 0.7247 | 0.855 | | 0.0079 | 7.0 | 525 | 0.7367 | 0.8667 | | 0.0011 | 8.0 | 600 | 0.7491 | 0.8767 | | 0.0337 | 9.0 | 675 | 0.8841 | 0.8667 | | 0.023 | 10.0 | 750 | 0.9423 | 0.8617 | | 0.0015 | 11.0 | 825 | 0.9063 | 0.87 | | 0.0273 | 12.0 | 900 | 0.8724 | 0.8717 | | 0.0004 | 13.0 | 975 | 0.8534 | 0.875 | | 0.0254 | 14.0 | 1050 | 1.0178 | 0.8667 | | 0.0142 | 15.0 | 1125 | 1.0491 | 0.86 | | 0.0062 | 16.0 | 1200 | 1.0376 | 0.8733 | | 0.0472 | 17.0 | 1275 | 1.0729 | 0.8683 | | 0.0106 | 18.0 | 1350 | 1.0840 | 0.875 | | 0.0563 | 19.0 | 1425 | 1.0588 | 0.8733 | | 0.0079 | 20.0 | 1500 | 1.0867 | 0.87 | | 0.0097 | 21.0 | 1575 | 1.1355 | 0.8567 | | 0.0002 | 22.0 | 1650 | 1.1387 | 0.8633 | | 0.0053 | 23.0 | 1725 | 1.0714 | 0.8633 | | 0.0003 | 24.0 | 1800 | 1.0507 | 0.8717 | | 0.006 | 25.0 | 1875 | 1.0737 | 0.87 | | 0.0012 | 26.0 | 1950 | 1.0580 | 0.8817 | | 0.0001 | 27.0 | 2025 | 1.0351 | 0.8733 | | 0.0 | 28.0 | 2100 | 1.0876 | 0.8633 | | 0.0003 | 29.0 | 2175 | 1.1172 | 0.865 | | 0.0 | 30.0 | 2250 | 1.1601 | 0.8567 | | 0.0175 | 31.0 | 2325 | 1.2685 | 0.8683 | | 0.0003 | 32.0 | 2400 | 1.2370 | 0.8617 | | 0.0 | 33.0 | 2475 | 1.2456 | 0.865 | | 0.0 | 34.0 | 2550 | 1.2360 | 0.865 | | 0.0047 | 35.0 | 2625 | 1.3021 | 0.8567 | | 0.0001 | 36.0 | 2700 | 1.2287 | 0.8583 | | 0.0 | 37.0 | 2775 | 1.2544 | 0.8667 | | 0.0032 | 38.0 | 2850 | 1.2432 | 0.8683 | | 0.0007 | 39.0 | 2925 | 1.3277 | 0.8633 | | 0.0 | 40.0 | 3000 | 1.2887 | 0.86 | | 0.0034 | 41.0 | 3075 | 1.2930 | 0.86 | | 0.0066 | 42.0 | 3150 | 1.2756 | 0.855 | | 0.0 | 43.0 | 3225 | 1.2450 | 0.8583 | | 0.0 | 44.0 | 3300 | 1.2340 | 0.8633 | | 0.0001 | 45.0 | 3375 | 1.2507 | 0.8667 | | 0.0 | 46.0 | 3450 | 1.2915 | 0.8633 | | 0.0 | 47.0 | 3525 | 1.2863 | 0.8683 | | 0.0 | 48.0 | 3600 | 1.2824 | 0.8667 | | 0.0022 | 49.0 | 3675 | 1.2757 | 0.8683 | | 0.0021 | 50.0 | 3750 | 1.2766 | 0.8683 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_rms_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_00001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9075 - Accuracy: 0.895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3429 | 1.0 | 75 | 0.3196 | 0.8817 | | 0.2319 | 2.0 | 150 | 0.2825 | 0.8883 | | 0.1642 | 3.0 | 225 | 0.2956 | 0.8883 | | 0.0613 | 4.0 | 300 | 0.2991 | 0.905 | | 0.0375 | 5.0 | 375 | 0.4173 | 0.89 | | 0.0392 | 6.0 | 450 | 0.4376 | 0.895 | | 0.0266 | 7.0 | 525 | 0.5591 | 0.8933 | | 0.0211 | 8.0 | 600 | 0.6357 | 0.8883 | | 0.0129 | 9.0 | 675 | 0.5589 | 0.8967 | | 0.039 | 10.0 | 750 | 0.6087 | 0.8933 | | 0.0196 | 11.0 | 825 | 0.6853 | 0.8967 | | 0.0875 | 12.0 | 900 | 0.6905 | 0.8833 | | 0.0161 | 13.0 | 975 | 0.7505 | 0.8867 | | 0.0005 | 14.0 | 1050 | 0.7592 | 0.875 | | 0.0258 | 15.0 | 1125 | 0.7859 | 0.8783 | | 0.0008 | 16.0 | 1200 | 0.7624 | 0.8783 | | 0.0078 | 17.0 | 1275 | 0.7129 | 0.8917 | | 0.0151 | 18.0 | 1350 | 0.7730 | 0.885 | | 0.015 | 19.0 | 1425 | 0.7612 | 0.88 | | 0.0036 | 20.0 | 1500 | 0.7765 | 0.89 | | 0.0036 | 21.0 | 1575 | 0.7746 | 0.89 | | 0.0163 | 22.0 | 1650 | 0.7920 | 0.88 | | 0.0002 | 23.0 | 1725 | 0.7971 | 0.8867 | | 0.0013 | 24.0 | 1800 | 0.8091 | 0.8833 | | 0.0084 | 25.0 | 1875 | 0.8422 | 0.8817 | | 0.0077 | 26.0 | 1950 | 0.8718 | 0.89 | | 0.0059 | 27.0 | 2025 | 0.8359 | 0.89 | | 0.0135 | 28.0 | 2100 | 0.8777 | 0.8833 | | 0.0007 | 29.0 | 2175 | 0.8422 | 0.895 | | 0.0059 | 30.0 | 2250 | 0.8920 | 0.8933 | | 0.0039 | 31.0 | 2325 | 0.9311 | 0.875 | | 0.0027 | 32.0 | 2400 | 0.8796 | 0.89 | | 0.0001 | 33.0 | 2475 | 0.9632 | 0.88 | | 0.0031 | 34.0 | 2550 | 0.8453 | 0.89 | | 0.0036 | 35.0 | 2625 | 0.8275 | 0.895 | | 0.003 | 36.0 | 2700 | 0.8573 | 0.8883 | | 0.0273 | 37.0 | 2775 | 0.8009 | 0.8967 | | 0.0042 | 38.0 | 2850 | 0.8716 | 0.8917 | | 0.0032 | 39.0 | 2925 | 0.9439 | 0.88 | | 0.0005 | 40.0 | 3000 | 0.8577 | 0.8917 | | 0.0023 | 41.0 | 3075 | 0.8426 | 0.8867 | | 0.0083 | 42.0 | 3150 | 0.8441 | 0.895 | | 0.0 | 43.0 | 3225 | 0.8722 | 0.8883 | | 0.0036 | 44.0 | 3300 | 0.8679 | 0.8883 | | 0.0009 | 45.0 | 3375 | 0.9113 | 0.8917 | | 0.0131 | 46.0 | 3450 | 0.8965 | 0.89 | | 0.0 | 47.0 | 3525 | 0.8892 | 0.8933 | | 0.0001 | 48.0 | 3600 | 0.9072 | 0.8933 | | 0.0024 | 49.0 | 3675 | 0.9074 | 0.8933 | | 0.0054 | 50.0 | 3750 | 0.9075 | 0.895 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
jcollado/swin-tiny-patch4-window7-224-finetuned-cifar10
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-cifar10 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.0948 - Accuracy: 0.9698 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5048 | 1.0 | 351 | 0.1324 | 0.9592 | | 0.4048 | 2.0 | 703 | 0.1134 | 0.9628 | | 0.3391 | 2.99 | 1053 | 0.0948 | 0.9698 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
laiagdla/cancer-Vit
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cancer-Vit This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1898 - Accuracy: 0.9243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2732 | 0.1 | 100 | 0.3969 | 0.8461 | | 0.2784 | 0.21 | 200 | 0.3714 | 0.8579 | | 0.301 | 0.31 | 300 | 0.3504 | 0.8376 | | 0.2372 | 0.42 | 400 | 0.3391 | 0.8812 | | 0.3136 | 0.52 | 500 | 0.2559 | 0.8967 | | 0.3517 | 0.62 | 600 | 0.4141 | 0.8397 | | 0.3312 | 0.73 | 700 | 0.3043 | 0.8841 | | 0.2515 | 0.83 | 800 | 0.2541 | 0.9062 | | 0.2854 | 0.93 | 900 | 0.2561 | 0.9006 | | 0.2594 | 1.04 | 1000 | 0.2681 | 0.9020 | | 0.177 | 1.14 | 1100 | 0.3406 | 0.8773 | | 0.2717 | 1.25 | 1200 | 0.2266 | 0.9171 | | 0.2197 | 1.35 | 1300 | 0.2080 | 0.9236 | | 0.155 | 1.45 | 1400 | 0.2048 | 0.9236 | | 0.2657 | 1.56 | 1500 | 0.2037 | 0.9256 | | 0.118 | 1.66 | 1600 | 0.2616 | 0.9096 | | 0.1823 | 1.77 | 1700 | 0.2158 | 0.9241 | | 0.2175 | 1.87 | 1800 | 0.2159 | 0.9182 | | 0.143 | 1.97 | 1900 | 0.1898 | 0.9243 | | 0.1051 | 2.08 | 2000 | 0.2308 | 0.9226 | | 0.1963 | 2.18 | 2100 | 0.2354 | 0.9205 | | 0.0524 | 2.28 | 2200 | 0.2298 | 0.9282 | | 0.097 | 2.39 | 2300 | 0.2495 | 0.9241 | | 0.0744 | 2.49 | 2400 | 0.2493 | 0.9194 | | 0.0744 | 2.6 | 2500 | 0.2429 | 0.9323 | | 0.0345 | 2.7 | 2600 | 0.2587 | 0.9252 | | 0.0097 | 2.8 | 2700 | 0.2284 | 0.9265 | | 0.0775 | 2.91 | 2800 | 0.2242 | 0.9321 | | 0.0634 | 3.01 | 2900 | 0.2314 | 0.9286 | | 0.0109 | 3.12 | 3000 | 0.2203 | 0.9338 | | 0.0039 | 3.22 | 3100 | 0.2575 | 0.9358 | | 0.0139 | 3.32 | 3200 | 0.2570 | 0.9356 | | 0.0358 | 3.43 | 3300 | 0.2630 | 0.9335 | | 0.0347 | 3.53 | 3400 | 0.2633 | 0.9358 | | 0.0408 | 3.63 | 3500 | 0.2591 | 0.9335 | | 0.041 | 3.74 | 3600 | 0.2613 | 0.9367 | | 0.004 | 3.84 | 3700 | 0.2587 | 0.9370 | | 0.0389 | 3.95 | 3800 | 0.2535 | 0.9373 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
[ "0", "1" ]
3una/finetuned-FER2013
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-FER2013 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8366 - Accuracy: 0.7081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.8119 | 1.0 | 202 | 1.7993 | 0.3079 | | 1.6155 | 2.0 | 404 | 1.5446 | 0.4302 | | 1.4279 | 3.0 | 606 | 1.3084 | 0.5301 | | 1.3222 | 4.0 | 808 | 1.1817 | 0.5590 | | 1.2532 | 5.0 | 1010 | 1.1026 | 0.5789 | | 1.2019 | 6.0 | 1212 | 1.0432 | 0.5998 | | 1.2037 | 7.0 | 1414 | 1.0030 | 0.6137 | | 1.1757 | 8.0 | 1616 | 0.9873 | 0.6235 | | 1.1359 | 9.0 | 1818 | 0.9377 | 0.6423 | | 1.1282 | 10.0 | 2020 | 0.9231 | 0.6486 | | 1.1019 | 11.0 | 2222 | 0.9011 | 0.6562 | | 1.0494 | 12.0 | 2424 | 0.8968 | 0.6545 | | 0.9951 | 13.0 | 2626 | 0.8876 | 0.6607 | | 1.0121 | 14.0 | 2828 | 0.8720 | 0.6695 | | 1.0571 | 15.0 | 3030 | 0.8776 | 0.6691 | | 1.0049 | 16.0 | 3232 | 0.8627 | 0.6733 | | 0.988 | 17.0 | 3434 | 0.8639 | 0.6719 | | 0.9955 | 18.0 | 3636 | 0.8397 | 0.6806 | | 0.9381 | 19.0 | 3838 | 0.8430 | 0.6820 | | 0.9911 | 20.0 | 4040 | 0.8370 | 0.6837 | | 0.9305 | 21.0 | 4242 | 0.8373 | 0.6837 | | 0.9653 | 22.0 | 4444 | 0.8283 | 0.6883 | | 0.9134 | 23.0 | 4646 | 0.8289 | 0.6879 | | 0.9098 | 24.0 | 4848 | 0.8365 | 0.6837 | | 0.8761 | 25.0 | 5050 | 0.8190 | 0.6869 | | 0.9067 | 26.0 | 5252 | 0.8303 | 0.6876 | | 0.8765 | 27.0 | 5454 | 0.8188 | 0.6942 | | 0.8486 | 28.0 | 5656 | 0.8142 | 0.6959 | | 0.9357 | 29.0 | 5858 | 0.8114 | 0.6984 | | 0.9037 | 30.0 | 6060 | 0.8150 | 0.6917 | | 0.8758 | 31.0 | 6262 | 0.8165 | 0.6931 | | 0.8688 | 32.0 | 6464 | 0.8061 | 0.6994 | | 0.8736 | 33.0 | 6666 | 0.8056 | 0.6994 | | 0.8785 | 34.0 | 6868 | 0.8045 | 0.6991 | | 0.8292 | 35.0 | 7070 | 0.8095 | 0.6987 | | 0.8407 | 36.0 | 7272 | 0.8096 | 0.6956 | | 0.8609 | 37.0 | 7474 | 0.8137 | 0.6984 | | 0.9055 | 38.0 | 7676 | 0.8054 | 0.7018 | | 0.8355 | 39.0 | 7878 | 0.8080 | 0.6980 | | 0.8391 | 40.0 | 8080 | 0.8087 | 0.6966 | | 0.7987 | 41.0 | 8282 | 0.8041 | 0.6998 | | 0.818 | 42.0 | 8484 | 0.8070 | 0.7039 | | 0.7836 | 43.0 | 8686 | 0.8091 | 0.7025 | | 0.8348 | 44.0 | 8888 | 0.8047 | 0.7025 | | 0.8205 | 45.0 | 9090 | 0.8076 | 0.7025 | | 0.8023 | 46.0 | 9292 | 0.8056 | 0.7053 | | 0.8241 | 47.0 | 9494 | 0.8022 | 0.7039 | | 0.763 | 48.0 | 9696 | 0.8079 | 0.6994 | | 0.7422 | 49.0 | 9898 | 0.8062 | 0.7039 | | 0.7762 | 50.0 | 10100 | 0.8090 | 0.6998 | | 0.7786 | 51.0 | 10302 | 0.8122 | 0.6994 | | 0.8027 | 52.0 | 10504 | 0.8129 | 0.7043 | | 0.7966 | 53.0 | 10706 | 0.8094 | 0.7039 | | 0.8103 | 54.0 | 10908 | 0.8107 | 0.7039 | | 0.7827 | 55.0 | 11110 | 0.8126 | 0.7057 | | 0.7949 | 56.0 | 11312 | 0.8104 | 0.7119 | | 0.7511 | 57.0 | 11514 | 0.8122 | 0.7050 | | 0.7727 | 58.0 | 11716 | 0.8123 | 0.7078 | | 0.7723 | 59.0 | 11918 | 0.8194 | 0.7015 | | 0.7796 | 60.0 | 12120 | 0.8193 | 0.7053 | | 0.7768 | 61.0 | 12322 | 0.8159 | 0.7029 | | 0.7604 | 62.0 | 12524 | 0.8081 | 0.7085 | | 0.7784 | 63.0 | 12726 | 0.8169 | 0.7106 | | 0.7235 | 64.0 | 12928 | 0.8131 | 0.7015 | | 0.7384 | 65.0 | 13130 | 0.8149 | 0.7085 | | 0.6638 | 66.0 | 13332 | 0.8192 | 0.7078 | | 0.6998 | 67.0 | 13534 | 0.8243 | 0.7113 | | 0.7249 | 68.0 | 13736 | 0.8200 | 0.7015 | | 0.6809 | 69.0 | 13938 | 0.8140 | 0.7081 | | 0.701 | 70.0 | 14140 | 0.8177 | 0.7095 | | 0.7122 | 71.0 | 14342 | 0.8245 | 0.7053 | | 0.7269 | 72.0 | 14544 | 0.8245 | 0.7050 | | 0.6973 | 73.0 | 14746 | 0.8207 | 0.7095 | | 0.7241 | 74.0 | 14948 | 0.8210 | 0.7057 | | 0.7397 | 75.0 | 15150 | 0.8230 | 0.7060 | | 0.6832 | 76.0 | 15352 | 0.8308 | 0.7057 | | 0.7213 | 77.0 | 15554 | 0.8256 | 0.7025 | | 0.7115 | 78.0 | 15756 | 0.8291 | 0.7057 | | 0.688 | 79.0 | 15958 | 0.8337 | 0.7088 | | 0.6997 | 80.0 | 16160 | 0.8312 | 0.7060 | | 0.6924 | 81.0 | 16362 | 0.8321 | 0.7053 | | 0.7382 | 82.0 | 16564 | 0.8340 | 0.7050 | | 0.7513 | 83.0 | 16766 | 0.8320 | 0.7015 | | 0.656 | 84.0 | 16968 | 0.8389 | 0.7053 | | 0.6503 | 85.0 | 17170 | 0.8321 | 0.7085 | | 0.6661 | 86.0 | 17372 | 0.8355 | 0.7092 | | 0.7026 | 87.0 | 17574 | 0.8339 | 0.7088 | | 0.76 | 88.0 | 17776 | 0.8361 | 0.7092 | | 0.696 | 89.0 | 17978 | 0.8343 | 0.7106 | | 0.6713 | 90.0 | 18180 | 0.8337 | 0.7106 | | 0.6621 | 91.0 | 18382 | 0.8349 | 0.7057 | | 0.7042 | 92.0 | 18584 | 0.8360 | 0.7085 | | 0.7087 | 93.0 | 18786 | 0.8353 | 0.7085 | | 0.64 | 94.0 | 18988 | 0.8371 | 0.7088 | | 0.659 | 95.0 | 19190 | 0.8376 | 0.7071 | | 0.6246 | 96.0 | 19392 | 0.8376 | 0.7088 | | 0.6797 | 97.0 | 19594 | 0.8368 | 0.7092 | | 0.6652 | 98.0 | 19796 | 0.8376 | 0.7092 | | 0.629 | 99.0 | 19998 | 0.8370 | 0.7088 | | 0.6762 | 100.0 | 20200 | 0.8366 | 0.7081 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
[ "angry", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
akashmaggon/vit-base-crack-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-crack-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0225 - Accuracy: 0.9972 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0086 | 1.0 | 203 | 0.0221 | 0.9958 | | 0.0066 | 2.0 | 406 | 0.0216 | 0.9972 | | 0.0064 | 3.0 | 609 | 0.0225 | 0.9972 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
akashmaggon/vit-base-crack-classification-2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-crack-classification-2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0212 - Accuracy: 0.9917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.222 | 1.0 | 203 | 0.2224 | 0.9097 | | 0.0911 | 2.0 | 406 | 0.0806 | 0.9653 | | 0.0163 | 3.0 | 609 | 0.0560 | 0.9681 | | 0.0126 | 4.0 | 812 | 0.0554 | 0.9792 | | 0.0233 | 5.0 | 1015 | 0.0347 | 0.9806 | | 0.0096 | 6.0 | 1218 | 0.0949 | 0.9792 | | 0.0013 | 7.0 | 1421 | 0.0440 | 0.9917 | | 0.0011 | 8.0 | 1624 | 0.0222 | 0.9917 | | 0.0009 | 9.0 | 1827 | 0.0213 | 0.9917 | | 0.0009 | 10.0 | 2030 | 0.0212 | 0.9917 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
akashmaggon/vit-base-crack-classification-5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-crack-classification-5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
akashmaggon/vit-base-crack-classification-129
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-crack-classification-129 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4641 - Accuracy: 0.8889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3061 | 1.0 | 212 | 1.1094 | 0.6759 | | 0.844 | 2.0 | 424 | 0.7624 | 0.7940 | | 0.5972 | 3.0 | 636 | 0.5760 | 0.8472 | | 0.4424 | 4.0 | 848 | 0.4922 | 0.875 | | 0.3815 | 5.0 | 1060 | 0.4641 | 0.8889 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
abhijitgayen/vit-large-0.0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-beans-demo-v5 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.1027 - Accuracy: 0.9792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7861 | 0.28 | 100 | 0.7609 | 0.725 | | 0.6276 | 0.56 | 200 | 0.6835 | 0.7764 | | 0.6089 | 0.83 | 300 | 0.5568 | 0.7931 | | 0.302 | 1.11 | 400 | 0.5887 | 0.8194 | | 0.1723 | 1.39 | 500 | 0.2988 | 0.8903 | | 0.2522 | 1.67 | 600 | 0.2564 | 0.9167 | | 0.1115 | 1.94 | 700 | 0.1680 | 0.9472 | | 0.1445 | 2.22 | 800 | 0.2065 | 0.9403 | | 0.0648 | 2.5 | 900 | 0.1673 | 0.9569 | | 0.0209 | 2.78 | 1000 | 0.1636 | 0.9569 | | 0.0003 | 3.06 | 1100 | 0.1293 | 0.9694 | | 0.0034 | 3.33 | 1200 | 0.0817 | 0.9792 | | 0.0006 | 3.61 | 1300 | 0.0874 | 0.9833 | | 0.0023 | 3.89 | 1400 | 0.1076 | 0.9778 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
abhijitgayen/super-cool-model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k). It achieves the following results on the evaluation set: - Loss: 0.0816 - Accuracy: 0.9819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5092 | 0.28 | 100 | 0.6420 | 0.7681 | | 0.5076 | 0.56 | 200 | 0.4069 | 0.8722 | | 0.3291 | 0.83 | 300 | 0.4342 | 0.8569 | | 0.108 | 1.11 | 400 | 0.2410 | 0.9292 | | 0.0378 | 1.39 | 500 | 0.3107 | 0.9139 | | 0.1488 | 1.67 | 600 | 0.1984 | 0.9389 | | 0.0532 | 1.94 | 700 | 0.1714 | 0.9514 | | 0.0122 | 2.22 | 800 | 0.1334 | 0.9611 | | 0.0529 | 2.5 | 900 | 0.1139 | 0.9653 | | 0.0221 | 2.78 | 1000 | 0.0875 | 0.9736 | | 0.0052 | 3.06 | 1100 | 0.0816 | 0.9819 | | 0.0045 | 3.33 | 1200 | 0.0873 | 0.9792 | | 0.0113 | 3.61 | 1300 | 0.0882 | 0.9833 | | 0.0043 | 3.89 | 1400 | 0.0865 | 0.9806 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
akashmaggon/vit-base-crack-classification-aug
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-crack-classification-aug This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0165 - Accuracy: 0.9907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4964 | 1.0 | 212 | 0.3400 | 0.8796 | | 0.249 | 2.0 | 424 | 0.1651 | 0.9236 | | 0.1216 | 3.0 | 636 | 0.0585 | 0.9676 | | 0.0488 | 4.0 | 848 | 0.0382 | 0.9769 | | 0.0304 | 5.0 | 1060 | 0.0302 | 0.9907 | | 0.0107 | 6.0 | 1272 | 0.0294 | 0.9838 | | 0.0093 | 7.0 | 1484 | 0.0165 | 0.9907 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
Luwayy/disaster_images_model
# 🌍 Disaster Image Classification using Vision Transformer This project uses a fine-tuned Vision Transformer (ViT) model to classify disaster-related images into various categories such as **Water Disaster**, **Fire Disaster**, **Human Damage**, etc. --- ## 🚀 Installation Install the required Python packages: ```bash pip install transformers torch torchvision pillow requests ``` --- ## 🔍 Quick Start Use the pipeline to classify an image directly from a URL: ```python from transformers import pipeline from PIL import Image import requests from io import BytesIO # Load the image classification pipeline pipe = pipeline("image-classification", model="Luwayy/disaster_images_model") # Load an image from a URL url = 'https://www.spml.co.in/Images/blog/wdt&c-152776632.jpg' response = requests.get(url) image = Image.open(BytesIO(response.content)) # Classify the image results = pipe(image) # Print results print(results) ``` **Example Output:** ```json [ {"label": "Water_Disaster", "score": 0.9184}, {"label": "Land_Disaster", "score": 0.0200}, {"label": "Non_Damage", "score": 0.0169}, {"label": "Human_Damage", "score": 0.0164}, {"label": "Fire_Disaster", "score": 0.0143} ] ``` --- ## 🧠 Model Details - **Base Model:** `google/vit-base-patch16-224-in21k` - **Architecture:** Vision Transformer (`ViTForImageClassification`) - **Image Size:** 224x224 - **Classes:** - `Damaged_Infrastructure` - `Fire_Disaster` - `Human_Damage` - `Land_Disaster` - `Non_Damage` - `Water_Disaster` --- ## ⚙️ Training Configuration - **Image Normalization:** Mean `[0.5, 0.5, 0.5]`, Std `[0.5, 0.5, 0.5]` - **Resize Method:** Bilinear to `224x224` - **Augmentations:** Resize, Normalize, Convert to Tensor - **Batch Size:** 16 - **Epochs:** 3 - **Learning Rate:** `3e-5` - **Weight Decay:** `0.01` - **Evaluation Strategy:** Per epoch
[ "damaged_infrastructure", "fire_disaster", "human_damage", "land_disaster", "non_damage", "water_disaster" ]
hkivancoral/smids_1x_beit_base_rms_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_0001_fold1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7464 - Accuracy: 0.6978 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1002 | 1.0 | 76 | 0.9320 | 0.5459 | | 0.9176 | 2.0 | 152 | 0.9156 | 0.4975 | | 0.8828 | 3.0 | 228 | 1.4808 | 0.3239 | | 0.9116 | 4.0 | 304 | 0.9182 | 0.5058 | | 0.9681 | 5.0 | 380 | 0.8261 | 0.5726 | | 0.8914 | 6.0 | 456 | 0.8412 | 0.5442 | | 0.8118 | 7.0 | 532 | 0.8070 | 0.5843 | | 0.7886 | 8.0 | 608 | 0.7873 | 0.6144 | | 0.8228 | 9.0 | 684 | 0.8018 | 0.5593 | | 0.7855 | 10.0 | 760 | 0.8650 | 0.5659 | | 0.7506 | 11.0 | 836 | 0.8105 | 0.5726 | | 0.8105 | 12.0 | 912 | 0.7718 | 0.5760 | | 0.7542 | 13.0 | 988 | 0.7814 | 0.6027 | | 0.8063 | 14.0 | 1064 | 0.7598 | 0.6244 | | 0.6853 | 15.0 | 1140 | 0.9554 | 0.5526 | | 0.6995 | 16.0 | 1216 | 0.7869 | 0.6277 | | 0.7413 | 17.0 | 1292 | 0.7345 | 0.6561 | | 0.6942 | 18.0 | 1368 | 0.7274 | 0.6511 | | 0.7698 | 19.0 | 1444 | 0.7431 | 0.6711 | | 0.7328 | 20.0 | 1520 | 0.7361 | 0.6327 | | 0.7002 | 21.0 | 1596 | 0.7435 | 0.6427 | | 0.6967 | 22.0 | 1672 | 0.8269 | 0.6010 | | 0.651 | 23.0 | 1748 | 0.7688 | 0.6528 | | 0.6937 | 24.0 | 1824 | 0.7386 | 0.6578 | | 0.5694 | 25.0 | 1900 | 0.7657 | 0.6277 | | 0.6705 | 26.0 | 1976 | 0.7210 | 0.6811 | | 0.5989 | 27.0 | 2052 | 0.7453 | 0.6561 | | 0.6274 | 28.0 | 2128 | 0.7780 | 0.6578 | | 0.5748 | 29.0 | 2204 | 0.7338 | 0.6845 | | 0.6764 | 30.0 | 2280 | 0.7373 | 0.6394 | | 0.6934 | 31.0 | 2356 | 0.7055 | 0.6845 | | 0.6007 | 32.0 | 2432 | 0.7394 | 0.6511 | | 0.5933 | 33.0 | 2508 | 0.7124 | 0.6795 | | 0.5894 | 34.0 | 2584 | 0.7760 | 0.6711 | | 0.6837 | 35.0 | 2660 | 0.7002 | 0.6628 | | 0.5776 | 36.0 | 2736 | 0.7352 | 0.6694 | | 0.6485 | 37.0 | 2812 | 0.7046 | 0.6878 | | 0.5352 | 38.0 | 2888 | 0.7058 | 0.6861 | | 0.577 | 39.0 | 2964 | 0.6974 | 0.7028 | | 0.5712 | 40.0 | 3040 | 0.7122 | 0.6811 | | 0.5117 | 41.0 | 3116 | 0.7026 | 0.6845 | | 0.4908 | 42.0 | 3192 | 0.7187 | 0.7045 | | 0.4784 | 43.0 | 3268 | 0.7103 | 0.7028 | | 0.4739 | 44.0 | 3344 | 0.7027 | 0.7162 | | 0.5942 | 45.0 | 3420 | 0.7242 | 0.6962 | | 0.4258 | 46.0 | 3496 | 0.7593 | 0.6912 | | 0.4726 | 47.0 | 3572 | 0.7433 | 0.6895 | | 0.4422 | 48.0 | 3648 | 0.7412 | 0.6928 | | 0.4049 | 49.0 | 3724 | 0.7425 | 0.6995 | | 0.5059 | 50.0 | 3800 | 0.7464 | 0.6978 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
Raihan004/Hierarchical_Agent_Action
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hierarchical_Agent_Action This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the agent_action_class dataset. It achieves the following results on the evaluation set: - Loss: 0.5942 - Accuracy: 0.8403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.4407 | 0.81 | 100 | 2.2716 | 0.6058 | | 1.7756 | 1.61 | 200 | 1.6162 | 0.7065 | | 1.3948 | 2.42 | 300 | 1.2200 | 0.7698 | | 1.131 | 3.23 | 400 | 1.0012 | 0.7856 | | 0.9239 | 4.03 | 500 | 0.9055 | 0.7827 | | 0.8699 | 4.84 | 600 | 0.8103 | 0.7827 | | 0.6707 | 5.65 | 700 | 0.7610 | 0.7842 | | 0.6206 | 6.45 | 800 | 0.7312 | 0.7885 | | 0.5795 | 7.26 | 900 | 0.6989 | 0.8101 | | 0.4914 | 8.06 | 1000 | 0.7066 | 0.7813 | | 0.5087 | 8.87 | 1100 | 0.6398 | 0.8187 | | 0.4373 | 9.68 | 1200 | 0.6293 | 0.8043 | | 0.4365 | 10.48 | 1300 | 0.6726 | 0.7971 | | 0.4517 | 11.29 | 1400 | 0.6047 | 0.8245 | | 0.4114 | 12.1 | 1500 | 0.6088 | 0.8230 | | 0.426 | 12.9 | 1600 | 0.6165 | 0.8201 | | 0.3456 | 13.71 | 1700 | 0.6133 | 0.8259 | | 0.332 | 14.52 | 1800 | 0.6736 | 0.8201 | | 0.3646 | 15.32 | 1900 | 0.6406 | 0.8173 | | 0.3287 | 16.13 | 2000 | 0.6978 | 0.7971 | | 0.2793 | 16.94 | 2100 | 0.6433 | 0.8173 | | 0.2924 | 17.74 | 2200 | 0.6474 | 0.8144 | | 0.2605 | 18.55 | 2300 | 0.6279 | 0.8288 | | 0.2016 | 19.35 | 2400 | 0.6361 | 0.8216 | | 0.2524 | 20.16 | 2500 | 0.6394 | 0.8259 | | 0.2017 | 20.97 | 2600 | 0.6683 | 0.8158 | | 0.2082 | 21.77 | 2700 | 0.6389 | 0.8345 | | 0.2751 | 22.58 | 2800 | 0.6141 | 0.8374 | | 0.207 | 23.39 | 2900 | 0.6052 | 0.8259 | | 0.1791 | 24.19 | 3000 | 0.6332 | 0.8230 | | 0.1719 | 25.0 | 3100 | 0.5942 | 0.8403 | | 0.1685 | 25.81 | 3200 | 0.6121 | 0.8360 | | 0.1557 | 26.61 | 3300 | 0.6237 | 0.8345 | | 0.1694 | 27.42 | 3400 | 0.6372 | 0.8317 | | 0.1927 | 28.23 | 3500 | 0.6378 | 0.8273 | | 0.1375 | 29.03 | 3600 | 0.6258 | 0.8331 | | 0.1653 | 29.84 | 3700 | 0.6262 | 0.8331 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
[ "কুকুর_কম্পিউটার_ব্যবহার_করা", "কুকুর_খাওয়া", "ছেলে_খেলা_করা", "ছেলে_ঘুমানো", "ছেলে_পান_করা", "ছেলে_পড়া", "ছেলে_রান্না_করা", "ছেলে_লেখা", "ছেলে_হাঁটা", "বিড়াল_কম্পিউটার_ব্যবহার_করা", "বিড়াল_খাওয়া", "বিড়াল_খেলা_করা", "কুকুর_খেলা_করা", "বিড়াল_ঘুমানো", "বিড়াল_পান_করা", "বিড়াল_পড়া", "বিড়াল_হাঁটা", "মেয়ে_কথা_বলা", "মেয়ে_কম্পিউটার_ব্যবহার_করা", "মেয়ে_খাওয়া", "মেয়ে_খেলা_করা", "মেয়ে_ঘুমানো", "মেয়ে_পান_করা", "কুকুর_ঘুমানো", "মেয়ে_পড়া", "মেয়ে_রান্না_করা", "মেয়ে_লেখা", "মেয়ে_হাঁটা", "কুকুর_পান_করা", "কুকুর_পড়া", "কুকুর_হাঁটা", "ছেলে_কথা_বলা", "ছেলে_কম্পিউটার_ব্যবহার_করা", "ছেলে_খাওয়া" ]
hkivancoral/smids_1x_beit_base_rms_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_0001_fold2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9358 - Accuracy: 0.7404 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0437 | 1.0 | 75 | 0.9679 | 0.5042 | | 0.9234 | 2.0 | 150 | 0.8669 | 0.5208 | | 1.0795 | 3.0 | 225 | 0.7926 | 0.5874 | | 0.9543 | 4.0 | 300 | 0.8244 | 0.5507 | | 0.8239 | 5.0 | 375 | 0.7959 | 0.5857 | | 0.7924 | 6.0 | 450 | 0.7928 | 0.5890 | | 0.8468 | 7.0 | 525 | 0.7806 | 0.6256 | | 0.8608 | 8.0 | 600 | 0.9027 | 0.5408 | | 0.7878 | 9.0 | 675 | 0.7544 | 0.6373 | | 0.9079 | 10.0 | 750 | 0.7732 | 0.6190 | | 0.7705 | 11.0 | 825 | 0.7349 | 0.6290 | | 0.7586 | 12.0 | 900 | 0.7322 | 0.6306 | | 0.7794 | 13.0 | 975 | 0.7224 | 0.6323 | | 0.7123 | 14.0 | 1050 | 0.7252 | 0.6572 | | 0.744 | 15.0 | 1125 | 0.7450 | 0.5990 | | 0.7086 | 16.0 | 1200 | 0.6962 | 0.6639 | | 0.7295 | 17.0 | 1275 | 0.7508 | 0.6489 | | 0.7289 | 18.0 | 1350 | 0.6978 | 0.6722 | | 0.6947 | 19.0 | 1425 | 0.7112 | 0.6739 | | 0.6923 | 20.0 | 1500 | 0.7131 | 0.6805 | | 0.7545 | 21.0 | 1575 | 0.7480 | 0.6223 | | 0.68 | 22.0 | 1650 | 0.6683 | 0.6839 | | 0.7107 | 23.0 | 1725 | 0.6889 | 0.6772 | | 0.6933 | 24.0 | 1800 | 0.6566 | 0.6822 | | 0.6429 | 25.0 | 1875 | 0.6381 | 0.7005 | | 0.6742 | 26.0 | 1950 | 0.6536 | 0.6822 | | 0.6753 | 27.0 | 2025 | 0.6462 | 0.6889 | | 0.6228 | 28.0 | 2100 | 0.6368 | 0.7022 | | 0.6193 | 29.0 | 2175 | 0.6115 | 0.7171 | | 0.5568 | 30.0 | 2250 | 0.6625 | 0.7188 | | 0.584 | 31.0 | 2325 | 0.6680 | 0.6922 | | 0.581 | 32.0 | 2400 | 0.5723 | 0.7654 | | 0.5698 | 33.0 | 2475 | 0.6173 | 0.7205 | | 0.5032 | 34.0 | 2550 | 0.6176 | 0.7338 | | 0.5019 | 35.0 | 2625 | 0.6137 | 0.7438 | | 0.4921 | 36.0 | 2700 | 0.5855 | 0.7571 | | 0.453 | 37.0 | 2775 | 0.6724 | 0.7271 | | 0.4913 | 38.0 | 2850 | 0.6043 | 0.7720 | | 0.3871 | 39.0 | 2925 | 0.6124 | 0.7704 | | 0.4014 | 40.0 | 3000 | 0.6591 | 0.7521 | | 0.4698 | 41.0 | 3075 | 0.6575 | 0.7604 | | 0.375 | 42.0 | 3150 | 0.6735 | 0.7471 | | 0.317 | 43.0 | 3225 | 0.7867 | 0.7504 | | 0.2968 | 44.0 | 3300 | 0.7423 | 0.7521 | | 0.2919 | 45.0 | 3375 | 0.8253 | 0.7504 | | 0.2598 | 46.0 | 3450 | 0.8629 | 0.7421 | | 0.1951 | 47.0 | 3525 | 0.8586 | 0.7704 | | 0.1905 | 48.0 | 3600 | 0.9010 | 0.7438 | | 0.1278 | 49.0 | 3675 | 0.9354 | 0.7454 | | 0.2294 | 50.0 | 3750 | 0.9358 | 0.7404 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_rms_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_0001_fold3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7846 - Accuracy: 0.7133 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1199 | 1.0 | 75 | 1.1044 | 0.325 | | 1.1759 | 2.0 | 150 | 1.1239 | 0.47 | | 1.1465 | 3.0 | 225 | 0.9168 | 0.5 | | 0.8955 | 4.0 | 300 | 0.8917 | 0.5017 | | 0.8948 | 5.0 | 375 | 0.8301 | 0.5533 | | 0.9774 | 6.0 | 450 | 0.8272 | 0.5467 | | 0.8001 | 7.0 | 525 | 0.8058 | 0.5567 | | 0.7633 | 8.0 | 600 | 0.8140 | 0.545 | | 0.7814 | 9.0 | 675 | 0.7815 | 0.5733 | | 0.8175 | 10.0 | 750 | 0.7839 | 0.5633 | | 0.7605 | 11.0 | 825 | 0.7664 | 0.615 | | 0.762 | 12.0 | 900 | 0.7781 | 0.59 | | 0.6797 | 13.0 | 975 | 0.7875 | 0.575 | | 0.7699 | 14.0 | 1050 | 0.7772 | 0.6117 | | 0.6167 | 15.0 | 1125 | 0.8129 | 0.585 | | 0.7106 | 16.0 | 1200 | 0.7392 | 0.6633 | | 0.7174 | 17.0 | 1275 | 0.7176 | 0.6717 | | 0.704 | 18.0 | 1350 | 0.7772 | 0.63 | | 0.6617 | 19.0 | 1425 | 0.7359 | 0.65 | | 0.6722 | 20.0 | 1500 | 0.7009 | 0.6783 | | 0.676 | 21.0 | 1575 | 0.6946 | 0.6667 | | 0.6441 | 22.0 | 1650 | 0.7089 | 0.6917 | | 0.6565 | 23.0 | 1725 | 0.7160 | 0.665 | | 0.6009 | 24.0 | 1800 | 0.6902 | 0.6783 | | 0.6592 | 25.0 | 1875 | 0.7159 | 0.665 | | 0.6628 | 26.0 | 1950 | 0.7741 | 0.6233 | | 0.6044 | 27.0 | 2025 | 0.7147 | 0.66 | | 0.585 | 28.0 | 2100 | 0.6827 | 0.69 | | 0.5831 | 29.0 | 2175 | 0.6975 | 0.6833 | | 0.6301 | 30.0 | 2250 | 0.6815 | 0.6633 | | 0.6457 | 31.0 | 2325 | 0.6813 | 0.6817 | | 0.6492 | 32.0 | 2400 | 0.6894 | 0.6783 | | 0.5418 | 33.0 | 2475 | 0.7461 | 0.6783 | | 0.5925 | 34.0 | 2550 | 0.6773 | 0.6933 | | 0.5913 | 35.0 | 2625 | 0.6656 | 0.7083 | | 0.5761 | 36.0 | 2700 | 0.6491 | 0.7133 | | 0.528 | 37.0 | 2775 | 0.6784 | 0.7 | | 0.5718 | 38.0 | 2850 | 0.7007 | 0.6783 | | 0.5083 | 39.0 | 2925 | 0.6815 | 0.7 | | 0.5069 | 40.0 | 3000 | 0.6638 | 0.71 | | 0.4838 | 41.0 | 3075 | 0.6813 | 0.7167 | | 0.5071 | 42.0 | 3150 | 0.6709 | 0.7183 | | 0.5091 | 43.0 | 3225 | 0.6746 | 0.7167 | | 0.4355 | 44.0 | 3300 | 0.7138 | 0.71 | | 0.4287 | 45.0 | 3375 | 0.7080 | 0.7133 | | 0.3954 | 46.0 | 3450 | 0.7468 | 0.7 | | 0.3389 | 47.0 | 3525 | 0.7428 | 0.7183 | | 0.3613 | 48.0 | 3600 | 0.7469 | 0.725 | | 0.388 | 49.0 | 3675 | 0.7685 | 0.7167 | | 0.2972 | 50.0 | 3750 | 0.7846 | 0.7133 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
dima806/animal_151_types_image_detection
Returns the animal type for a given image with about 99% accuracy. See https://www.kaggle.com/code/dima806/animal-151-types-detection-vit for more details. ``` Classification report: precision recall f1-score support acinonyx-jubatus 1.0000 1.0000 1.0000 12 aethia-cristatella 1.0000 0.9167 0.9565 12 agalychnis-callidryas 1.0000 1.0000 1.0000 12 agkistrodon-contortrix 1.0000 1.0000 1.0000 12 ailuropoda-melanoleuca 1.0000 1.0000 1.0000 12 ailurus-fulgens 1.0000 1.0000 1.0000 12 alces-alces 1.0000 1.0000 1.0000 12 anas-platyrhynchos 1.0000 1.0000 1.0000 12 ankylosaurus-magniventris 0.9167 0.9167 0.9167 12 apis-mellifera 1.0000 1.0000 1.0000 12 aptenodytes-forsteri 1.0000 1.0000 1.0000 12 aquila-chrysaetos 1.0000 1.0000 1.0000 12 ara-macao 1.0000 1.0000 1.0000 12 architeuthis-dux 0.9231 1.0000 0.9600 12 ardea-herodias 1.0000 1.0000 1.0000 12 balaenoptera-musculus 1.0000 1.0000 1.0000 12 betta-splendens 1.0000 1.0000 1.0000 12 bison-bison 1.0000 1.0000 1.0000 12 bos-gaurus 1.0000 1.0000 1.0000 12 bos-taurus 1.0000 1.0000 1.0000 12 bradypus-variegatus 1.0000 1.0000 1.0000 12 branta-canadensis 1.0000 1.0000 1.0000 12 canis-lupus 1.0000 1.0000 1.0000 12 canis-lupus-familiaris 1.0000 1.0000 1.0000 12 carcharodon-carcharias 1.0000 1.0000 1.0000 12 cardinalis-cardinalis 1.0000 1.0000 1.0000 12 cathartes-aura 1.0000 1.0000 1.0000 12 centrochelys-sulcata 1.0000 1.0000 1.0000 12 centruroides-vittatus 1.0000 1.0000 1.0000 12 ceratitis-capitata 1.0000 0.9167 0.9565 12 ceratotherium-simum 1.0000 1.0000 1.0000 12 chelonia-mydas 1.0000 1.0000 1.0000 12 chrysemys-picta 1.0000 1.0000 1.0000 12 circus-hudsonius 1.0000 1.0000 1.0000 12 codium-fragile 1.0000 1.0000 1.0000 12 coelacanthiformes 0.9231 1.0000 0.9600 12 colaptes-auratus 1.0000 1.0000 1.0000 12 connochaetes-gnou 1.0000 1.0000 1.0000 12 correlophus-ciliatus 1.0000 1.0000 1.0000 12 crocodylus-niloticus 1.0000 1.0000 1.0000 12 crotalus-atrox 1.0000 1.0000 1.0000 12 crotophaga-sulcirostris 1.0000 1.0000 1.0000 12 cryptoprocta-ferox 1.0000 1.0000 1.0000 12 cyanocitta-cristata 1.0000 1.0000 1.0000 12 danaus-plexippus 1.0000 1.0000 1.0000 12 dasypus-novemcinctus 1.0000 0.9167 0.9565 12 delphinapterus-leucas 1.0000 1.0000 1.0000 12 dendrobatidae 1.0000 1.0000 1.0000 12 dermochelys-coriacea 0.9231 1.0000 0.9600 12 desmodus-rotundus 1.0000 0.9167 0.9565 12 diplodocus 1.0000 1.0000 1.0000 12 dugong-dugon 1.0000 1.0000 1.0000 12 eidolon-helvum 1.0000 1.0000 1.0000 12 enhydra-lutris 1.0000 1.0000 1.0000 12 enteroctopus-dofleini 1.0000 1.0000 1.0000 12 equus-caballus 0.9231 1.0000 0.9600 12 equus-quagga 1.0000 1.0000 1.0000 12 eudocimus-albus 1.0000 1.0000 1.0000 12 eunectes-murinus 1.0000 1.0000 1.0000 12 falco-peregrinus 1.0000 1.0000 1.0000 12 felis-catus 1.0000 1.0000 1.0000 12 formicidae 1.0000 1.0000 1.0000 12 gallus-gallus-domesticus 1.0000 1.0000 1.0000 12 gavialis-gangeticus 1.0000 1.0000 1.0000 12 geococcyx-californianus 1.0000 1.0000 1.0000 12 giraffa-camelopardalis 1.0000 1.0000 1.0000 12 gorilla-gorilla 1.0000 1.0000 1.0000 12 haliaeetus-leucocephalus 1.0000 1.0000 1.0000 12 hapalochlaena-maculosa 1.0000 1.0000 1.0000 12 heloderma-suspectum 1.0000 1.0000 1.0000 12 heterocera 0.9231 1.0000 0.9600 12 hippopotamus-amphibius 1.0000 1.0000 1.0000 12 homo-sapiens 0.9231 1.0000 0.9600 12 hydrurga-leptonyx 0.9231 1.0000 0.9600 12 icterus-galbula 1.0000 1.0000 1.0000 12 icterus-gularis 1.0000 1.0000 1.0000 12 icterus-spurius 1.0000 1.0000 1.0000 12 iguana-iguana 1.0000 1.0000 1.0000 12 iguanodon-bernissartensis 1.0000 1.0000 1.0000 12 inia-geoffrensis 1.0000 1.0000 1.0000 12 lampropeltis-triangulum 1.0000 1.0000 1.0000 12 lemur-catta 1.0000 1.0000 1.0000 12 lepus-americanus 1.0000 1.0000 1.0000 12 loxodonta-africana 1.0000 1.0000 1.0000 12 macropus-giganteus 1.0000 1.0000 1.0000 12 malayopython-reticulatus 1.0000 1.0000 1.0000 12 mammuthus-primigeniu 1.0000 1.0000 1.0000 12 martes-americana 1.0000 1.0000 1.0000 12 megaptera-novaeangliae 1.0000 1.0000 1.0000 12 melanerpes-carolinus 1.0000 1.0000 1.0000 12 mellisuga-helenae 1.0000 1.0000 1.0000 12 mergus-serrator 1.0000 1.0000 1.0000 12 mimus-polyglottos 1.0000 1.0000 1.0000 12 monodon-monoceros 0.9231 1.0000 0.9600 12 musca-domestica 1.0000 1.0000 1.0000 12 odobenus-rosmarus 1.0000 1.0000 1.0000 12 okapia-johnstoni 1.0000 1.0000 1.0000 12 ophiophagus-hannah 1.0000 1.0000 1.0000 12 orcinus-orca 1.0000 1.0000 1.0000 12 ornithorhynchus-anatinus 1.0000 1.0000 1.0000 12 ovis-aries 1.0000 1.0000 1.0000 12 ovis-canadensis 1.0000 1.0000 1.0000 12 panthera-leo 1.0000 0.9167 0.9565 12 panthera-onca 0.8571 1.0000 0.9231 12 panthera-pardus 1.0000 0.8333 0.9091 12 panthera-tigris 1.0000 1.0000 1.0000 12 pantherophis-alleghaniensis 1.0000 1.0000 1.0000 12 pantherophis-guttatus 1.0000 1.0000 1.0000 12 papilio-glaucus 1.0000 0.9167 0.9565 12 passerina-ciris 1.0000 1.0000 1.0000 12 pavo-cristatus 1.0000 1.0000 1.0000 12 periplaneta-americana 1.0000 1.0000 1.0000 12 phascolarctos-cinereus 1.0000 1.0000 1.0000 12 phoebetria-fusca 1.0000 1.0000 1.0000 12 phoenicopterus-ruber 1.0000 1.0000 1.0000 12 phyllobates-terribilis 1.0000 1.0000 1.0000 12 physalia-physalis 1.0000 1.0000 1.0000 12 physeter-macrocephalus 0.9231 1.0000 0.9600 12 poecile-atricapillus 1.0000 1.0000 1.0000 12 pongo-abelii 1.0000 1.0000 1.0000 12 procyon-lotor 1.0000 1.0000 1.0000 12 pteranodon-longiceps 1.0000 1.0000 1.0000 12 pterois-mombasae 1.0000 0.8333 0.9091 12 pterois-volitans 0.8571 1.0000 0.9231 12 puma-concolor 1.0000 0.9167 0.9565 12 rattus-rattus 1.0000 1.0000 1.0000 12 rusa-unicolor 1.0000 1.0000 1.0000 12 salmo-salar 1.0000 1.0000 1.0000 12 sciurus-carolinensis 1.0000 1.0000 1.0000 12 smilodon-populator 1.0000 1.0000 1.0000 12 spheniscus-demersus 1.0000 1.0000 1.0000 12 sphyrna-mokarran 1.0000 1.0000 1.0000 12 spinosaurus-aegyptiacus 1.0000 1.0000 1.0000 12 stegosaurus-stenops 1.0000 1.0000 1.0000 12 struthio-camelus 1.0000 1.0000 1.0000 12 tapirus 1.0000 1.0000 1.0000 12 tarsius-pumilus 1.0000 1.0000 1.0000 12 taurotragus-oryx 1.0000 1.0000 1.0000 12 telmatobufo-bullocki 1.0000 1.0000 1.0000 12 thryothorus-ludovicianus 1.0000 1.0000 1.0000 12 triceratops-horridus 1.0000 0.9167 0.9565 12 trilobita 1.0000 0.9167 0.9565 12 turdus-migratorius 1.0000 1.0000 1.0000 12 tursiops-truncatus 1.0000 1.0000 1.0000 12 tyrannosaurus-rex 1.0000 1.0000 1.0000 12 tyrannus-tyrannus 1.0000 1.0000 1.0000 12 ursus-arctos-horribilis 1.0000 1.0000 1.0000 12 ursus-maritimus 1.0000 1.0000 1.0000 12 varanus-komodoensis 1.0000 1.0000 1.0000 12 vulpes-vulpes 1.0000 1.0000 1.0000 12 vultur-gryphus 1.0000 1.0000 1.0000 12 accuracy 0.9923 1812 macro avg 0.9930 0.9923 0.9922 1812 weighted avg 0.9930 0.9923 0.9922 1812 ```
[ "acinonyx-jubatus", "aethia-cristatella", "agalychnis-callidryas", "agkistrodon-contortrix", "ailuropoda-melanoleuca", "ailurus-fulgens", "alces-alces", "anas-platyrhynchos", "ankylosaurus-magniventris", "apis-mellifera", "aptenodytes-forsteri", "aquila-chrysaetos", "ara-macao", "architeuthis-dux", "ardea-herodias", "balaenoptera-musculus", "betta-splendens", "bison-bison", "bos-gaurus", "bos-taurus", "bradypus-variegatus", "branta-canadensis", "canis-lupus", "canis-lupus-familiaris", "carcharodon-carcharias", "cardinalis-cardinalis", "cathartes-aura", "centrochelys-sulcata", "centruroides-vittatus", "ceratitis-capitata", "ceratotherium-simum", "chelonia-mydas", "chrysemys-picta", "circus-hudsonius", "codium-fragile", "coelacanthiformes", "colaptes-auratus", "connochaetes-gnou", "correlophus-ciliatus", "crocodylus-niloticus", "crotalus-atrox", "crotophaga-sulcirostris", "cryptoprocta-ferox", "cyanocitta-cristata", "danaus-plexippus", "dasypus-novemcinctus", "delphinapterus-leucas", "dendrobatidae", "dermochelys-coriacea", "desmodus-rotundus", "diplodocus", "dugong-dugon", "eidolon-helvum", "enhydra-lutris", "enteroctopus-dofleini", "equus-caballus", "equus-quagga", "eudocimus-albus", "eunectes-murinus", "falco-peregrinus", "felis-catus", "formicidae", "gallus-gallus-domesticus", "gavialis-gangeticus", "geococcyx-californianus", "giraffa-camelopardalis", "gorilla-gorilla", "haliaeetus-leucocephalus", "hapalochlaena-maculosa", "heloderma-suspectum", "heterocera", "hippopotamus-amphibius", "homo-sapiens", "hydrurga-leptonyx", "icterus-galbula", "icterus-gularis", "icterus-spurius", "iguana-iguana", "iguanodon-bernissartensis", "inia-geoffrensis", "lampropeltis-triangulum", "lemur-catta", "lepus-americanus", "loxodonta-africana", "macropus-giganteus", "malayopython-reticulatus", "mammuthus-primigeniu", "martes-americana", "megaptera-novaeangliae", "melanerpes-carolinus", "mellisuga-helenae", "mergus-serrator", "mimus-polyglottos", "monodon-monoceros", "musca-domestica", "odobenus-rosmarus", "okapia-johnstoni", "ophiophagus-hannah", "orcinus-orca", "ornithorhynchus-anatinus", "ovis-aries", "ovis-canadensis", "panthera-leo", "panthera-onca", "panthera-pardus", "panthera-tigris", "pantherophis-alleghaniensis", "pantherophis-guttatus", "papilio-glaucus", "passerina-ciris", "pavo-cristatus", "periplaneta-americana", "phascolarctos-cinereus", "phoebetria-fusca", "phoenicopterus-ruber", "phyllobates-terribilis", "physalia-physalis", "physeter-macrocephalus", "poecile-atricapillus", "pongo-abelii", "procyon-lotor", "pteranodon-longiceps", "pterois-mombasae", "pterois-volitans", "puma-concolor", "rattus-rattus", "rusa-unicolor", "salmo-salar", "sciurus-carolinensis", "smilodon-populator", "spheniscus-demersus", "sphyrna-mokarran", "spinosaurus-aegyptiacus", "stegosaurus-stenops", "struthio-camelus", "tapirus", "tarsius-pumilus", "taurotragus-oryx", "telmatobufo-bullocki", "thryothorus-ludovicianus", "triceratops-horridus", "trilobita", "turdus-migratorius", "tursiops-truncatus", "tyrannosaurus-rex", "tyrannus-tyrannus", "ursus-arctos-horribilis", "ursus-maritimus", "varanus-komodoensis", "vulpes-vulpes", "vultur-gryphus" ]
p1atdev/pvc-quality-swinv2-base
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pvc-quality-swinv2-base This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-base-patch4-window12-192-22k) on the [pvc figure images dataset](https://huggingface.co/datasets/p1atdev/pvc). It achieves the following results on the evaluation set: - Loss: 1.2396 - Accuracy: 0.5317 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7254 | 0.98 | 39 | 1.4826 | 0.4109 | | 1.3316 | 1.99 | 79 | 1.2177 | 0.5136 | | 1.0864 | 2.99 | 119 | 1.3006 | 0.4653 | | 0.8572 | 4.0 | 159 | 1.2090 | 0.5015 | | 0.7466 | 4.98 | 198 | 1.2150 | 0.5378 | | 0.5986 | 5.99 | 238 | 1.4600 | 0.4955 | | 0.4784 | 6.99 | 278 | 1.4131 | 0.5196 | | 0.3525 | 8.0 | 318 | 1.5256 | 0.4985 | | 0.3472 | 8.98 | 357 | 1.3883 | 0.5166 | | 0.3281 | 9.81 | 390 | 1.5012 | 0.4955 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.0.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
[ "best quality", "high quality", "medium quality", "low quality", "worst quality", "parts", "other" ]
SuperMaker/vit-base-patch16-224-in21k-leukemia
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-in21k-leukemia

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Leukemia Dataset hosted on Kaggle: https://www.kaggle.com/datasets/andrewmvd/leukemia-classification. It achieves the following results on the evaluation set:
- Train Loss: 0.3256
- Train Accuracy: 0.8795
- Validation Loss: 0.6907
- Validation Accuracy: 0.6848
- Epoch: 13

## Model description

Google Vision Transformer (ViT), fine-tuned on the white blood cancer (leukemia) dataset.

## Intended uses & limitations

This model was fine-tuned as a part of my project `LeukemiaAI`, a fully integrated pipeline to detect Leukemia.

**Github Repo**: https://github.com/MohammedSaLah-Eldeen/LeukemiaAI

### Training hyperparameters

- training_precision: mixed_float16
- optimizer: { 'inner_optimizer': { 'module': 'keras.optimizers.experimental', 'class_name': 'SGD', 'config': { 'name': 'SGD', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': 1, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': { 'module': 'keras.optimizers.schedules', 'class_name': 'CosineDecay', 'config': { 'initial_learning_rate': 0.001, 'decay_steps': 896, 'alpha': 0.0, 'name': None, 'warmup_target': None, 'warmup_steps': 0 }, 'registered_name': None }, 'momentum': 0.9, 'nesterov': False }, 'registered_name': None }, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000 }

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5007     | 0.7629         | 0.7206          | 0.6643              | 0     |
| 0.3958     | 0.8418         | 0.7137          | 0.6686              | 1     |
| 0.3578     | 0.8632         | 0.6998          | 0.6789              | 2     |
| 0.3377     | 0.8713         | 0.6899          | 0.6843              | 3     |
| 0.3274     | 0.8778         | 0.6869          | 0.6832              | 4     |
| 0.3261     | 0.8792         | 0.6880          | 0.6859              | 5     |
| 0.3257     | 0.8797         | 0.6906          | 0.6848              | 6     |
| 0.3255     | 0.8796         | 0.6896          | 0.6859              | 7     |
| 0.3256     | 0.8794         | 0.6901          | 0.6848              | 8     |
| 0.3258     | 0.8795         | 0.6867          | 0.6864              | 9     |
| 0.3258     | 0.8793         | 0.6896          | 0.6859              | 10    |
| 0.3256     | 0.8796         | 0.6871          | 0.6864              | 11    |
| 0.3255     | 0.8795         | 0.6897          | 0.6853              | 12    |
| 0.3256     | 0.8795         | 0.6907          | 0.6848              | 13    |

### Framework versions

- Transformers 4.35.0
- TensorFlow 2.13.0
- Datasets 2.1.0
- Tokenizers 0.14.1
[ "hem", "all" ]
dima806/vegetable_15_types_image_detection
Returns the vegetable type based on an image. See https://www.kaggle.com/code/dima806/vegetable-image-detection-vit for more details.

```
Classification report:
precision recall f1-score support
Bean 1.0000 1.0000 1.0000 280
Bitter_Gourd 1.0000 1.0000 1.0000 280
Bottle_Gourd 1.0000 1.0000 1.0000 280
Brinjal 1.0000 1.0000 1.0000 280
Broccoli 1.0000 1.0000 1.0000 280
Cabbage 1.0000 0.9964 0.9982 280
Capsicum 1.0000 1.0000 1.0000 280
Carrot 1.0000 1.0000 1.0000 280
Cauliflower 0.9964 1.0000 0.9982 280
Cucumber 1.0000 1.0000 1.0000 280
Papaya 1.0000 1.0000 1.0000 280
Potato 1.0000 1.0000 1.0000 280
Pumpkin 1.0000 1.0000 1.0000 280
Radish 1.0000 1.0000 1.0000 280
Tomato 1.0000 1.0000 1.0000 280
accuracy 0.9998 4200
macro avg 0.9998 0.9998 0.9998 4200
weighted avg 0.9998 0.9998 0.9998 4200
```
[ "bean", "bitter_gourd", "bottle_gourd", "brinjal", "broccoli", "cabbage", "capsicum", "carrot", "cauliflower", "cucumber", "papaya", "potato", "pumpkin", "radish", "tomato" ]
kiranlagad/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.1.0+cu118 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
hkivancoral/smids_1x_beit_base_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_0001_fold4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6670 - Accuracy: 0.7333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1121 | 1.0 | 75 | 1.0797 | 0.495 | | 1.1167 | 2.0 | 150 | 1.0990 | 0.3383 | | 1.1124 | 3.0 | 225 | 1.0945 | 0.3583 | | 1.0914 | 4.0 | 300 | 1.0750 | 0.35 | | 1.0647 | 5.0 | 375 | 0.8667 | 0.5733 | | 0.9583 | 6.0 | 450 | 0.8905 | 0.51 | | 0.8629 | 7.0 | 525 | 0.7806 | 0.5767 | | 0.8438 | 8.0 | 600 | 0.7603 | 0.5833 | | 0.812 | 9.0 | 675 | 0.7613 | 0.595 | | 0.7427 | 10.0 | 750 | 0.8115 | 0.5917 | | 0.8147 | 11.0 | 825 | 0.7428 | 0.63 | | 0.7859 | 12.0 | 900 | 0.7365 | 0.635 | | 0.8142 | 13.0 | 975 | 0.7468 | 0.6033 | | 0.7961 | 14.0 | 1050 | 0.7567 | 0.5983 | | 0.6725 | 15.0 | 1125 | 0.7876 | 0.6067 | | 0.7608 | 16.0 | 1200 | 0.7339 | 0.635 | | 0.7146 | 17.0 | 1275 | 0.7178 | 0.645 | | 0.6646 | 18.0 | 1350 | 0.7089 | 0.67 | | 0.7767 | 19.0 | 1425 | 0.7436 | 0.6433 | | 0.7149 | 20.0 | 1500 | 0.7664 | 0.655 | | 0.7622 | 21.0 | 1575 | 0.7227 | 0.6617 | | 0.6643 | 22.0 | 1650 | 0.7547 | 0.64 | | 0.7546 | 23.0 | 1725 | 0.7439 | 0.6483 | | 0.727 | 24.0 | 1800 | 0.7101 | 0.6633 | | 0.7334 | 25.0 | 1875 | 0.7022 | 0.6583 | | 0.6824 | 26.0 | 1950 | 0.7040 | 0.6767 | | 0.7383 | 27.0 | 2025 | 0.6953 | 0.6733 | | 0.6459 | 28.0 | 2100 | 0.6860 | 0.6883 | | 0.7094 | 29.0 | 2175 | 0.6882 | 0.695 | | 0.7817 | 30.0 | 2250 | 0.6855 | 0.6883 | | 0.6417 | 31.0 | 2325 | 0.6762 | 0.705 | | 0.7236 | 32.0 | 2400 | 0.6870 | 0.6917 | | 0.6676 | 33.0 | 2475 | 0.7290 | 0.685 | | 0.5839 | 34.0 | 2550 | 0.6648 | 0.7117 | | 0.6323 | 35.0 | 2625 | 0.6543 | 0.7017 | | 0.6129 | 36.0 | 2700 | 0.6910 | 0.6883 | | 0.5785 | 37.0 | 2775 | 0.6666 | 0.7217 | | 0.6055 | 38.0 | 2850 | 0.6452 | 0.7233 | | 0.5778 | 39.0 | 2925 | 0.6586 | 0.7217 | | 0.5892 | 40.0 | 3000 | 0.6725 | 0.7233 | | 0.6346 | 41.0 | 3075 | 0.6632 | 0.715 | | 0.5806 | 42.0 | 3150 | 0.6697 | 0.7217 | | 0.6328 | 43.0 | 3225 | 0.6659 | 0.7117 | | 0.5711 | 44.0 | 3300 | 0.6651 | 0.71 | | 0.5685 | 45.0 | 3375 | 0.6727 | 0.7283 | | 0.4903 | 46.0 | 3450 | 0.6607 | 0.7383 | | 0.5197 | 47.0 | 3525 | 0.6770 | 0.7283 | | 0.5572 | 48.0 | 3600 | 0.6616 | 0.7183 | | 0.5197 | 49.0 | 3675 | 0.6636 | 0.73 | | 0.489 | 50.0 | 3750 | 0.6670 | 0.7333 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_beit_base_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_beit_base_rms_0001_fold5 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6915 - Accuracy: 0.72 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2962 | 1.0 | 75 | 0.9009 | 0.4967 | | 0.8616 | 2.0 | 150 | 0.8829 | 0.5333 | | 0.8905 | 3.0 | 225 | 0.8472 | 0.5367 | | 0.8302 | 4.0 | 300 | 0.9953 | 0.5067 | | 0.8678 | 5.0 | 375 | 0.8690 | 0.525 | | 0.8529 | 6.0 | 450 | 0.8769 | 0.5283 | | 0.8841 | 7.0 | 525 | 0.8786 | 0.53 | | 0.8327 | 8.0 | 600 | 0.8584 | 0.5367 | | 0.8106 | 9.0 | 675 | 0.8478 | 0.5817 | | 0.8163 | 10.0 | 750 | 0.8420 | 0.54 | | 0.8203 | 11.0 | 825 | 0.8233 | 0.615 | | 0.849 | 12.0 | 900 | 0.8207 | 0.56 | | 0.7448 | 13.0 | 975 | 0.9969 | 0.48 | | 0.8104 | 14.0 | 1050 | 0.8107 | 0.5717 | | 0.8455 | 15.0 | 1125 | 0.8387 | 0.56 | | 0.7497 | 16.0 | 1200 | 0.7795 | 0.5983 | | 0.7595 | 17.0 | 1275 | 0.7579 | 0.63 | | 0.7118 | 18.0 | 1350 | 0.7723 | 0.63 | | 0.7898 | 19.0 | 1425 | 0.7567 | 0.635 | | 0.7627 | 20.0 | 1500 | 0.7797 | 0.6367 | | 0.8345 | 21.0 | 1575 | 0.7467 | 0.6217 | | 0.745 | 22.0 | 1650 | 0.7264 | 0.655 | | 0.7402 | 23.0 | 1725 | 0.7241 | 0.6633 | | 0.6239 | 24.0 | 1800 | 0.7183 | 0.665 | | 0.6855 | 25.0 | 1875 | 0.7858 | 0.6333 | | 0.7229 | 26.0 | 1950 | 0.7404 | 0.6333 | | 0.7229 | 27.0 | 2025 | 0.7258 | 0.68 | | 0.7197 | 28.0 | 2100 | 0.6990 | 0.6917 | | 0.7057 | 29.0 | 2175 | 0.7035 | 0.68 | | 0.7315 | 30.0 | 2250 | 0.7188 | 0.6683 | | 0.6562 | 31.0 | 2325 | 0.7484 | 0.6283 | | 0.6918 | 32.0 | 2400 | 0.6817 | 0.6917 | | 0.6871 | 33.0 | 2475 | 0.7362 | 0.6717 | | 0.6724 | 34.0 | 2550 | 0.6752 | 0.7 | | 0.6677 | 35.0 | 2625 | 0.6742 | 0.6933 | | 0.6138 | 36.0 | 2700 | 0.6850 | 0.6867 | | 0.582 | 37.0 | 2775 | 0.6804 | 0.6817 | | 0.6731 | 38.0 | 2850 | 0.6827 | 0.6917 | | 0.5577 | 39.0 | 2925 | 0.7025 | 0.6833 | | 0.5702 | 40.0 | 3000 | 0.6473 | 0.7117 | | 0.578 | 41.0 | 3075 | 0.6455 | 0.72 | | 0.6074 | 42.0 | 3150 | 0.6478 | 0.715 | | 0.6019 | 43.0 | 3225 | 0.6442 | 0.7 | | 0.5836 | 44.0 | 3300 | 0.6632 | 0.6983 | | 0.5466 | 45.0 | 3375 | 0.6605 | 0.68 | | 0.4891 | 46.0 | 3450 | 0.6690 | 0.715 | | 0.5107 | 47.0 | 3525 | 0.6729 | 0.7167 | | 0.3981 | 48.0 | 3600 | 0.7111 | 0.7017 | | 0.434 | 49.0 | 3675 | 0.6850 | 0.7183 | | 0.3741 | 50.0 | 3750 | 0.6915 | 0.72 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
dima806/vessel_ship_types_image_detection
Returns the vessel/ship type based on an image with about 99% accuracy. See https://www.kaggle.com/code/dima806/vessel-ship-type-detection for more details.

```
Classification report:
precision recall f1-score support
Cargo 0.9927 0.9623 0.9772 424
Carrier 0.9976 1.0000 0.9988 424
Cruise 1.0000 1.0000 1.0000 424
Military 0.9976 0.9976 0.9976 424
Tankers 0.9679 0.9953 0.9814 424
accuracy 0.9910 2120
macro avg 0.9912 0.9910 0.9910 2120
weighted avg 0.9912 0.9910 0.9910 2120
```
[ "cargo", "carrier", "cruise", "military", "tankers" ]
krich97/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4435 - Accuracy: 0.8111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5077 | 0.98 | 41 | 0.6378 | 0.6796 | | 0.5111 | 1.99 | 83 | 0.7097 | 0.6577 | | 0.5395 | 2.99 | 125 | 0.5374 | 0.7470 | | 0.5498 | 4.0 | 167 | 0.5524 | 0.7420 | | 0.4754 | 4.98 | 208 | 0.5324 | 0.7639 | | 0.4662 | 5.99 | 250 | 0.4962 | 0.7639 | | 0.4677 | 6.99 | 292 | 0.5070 | 0.7774 | | 0.4525 | 8.0 | 334 | 0.5144 | 0.7673 | | 0.4635 | 8.98 | 375 | 0.4978 | 0.7757 | | 0.4309 | 9.99 | 417 | 0.5388 | 0.7774 | | 0.4292 | 10.99 | 459 | 0.4937 | 0.7825 | | 0.4182 | 12.0 | 501 | 0.5234 | 0.7808 | | 0.4242 | 12.98 | 542 | 0.4539 | 0.7960 | | 0.4053 | 13.99 | 584 | 0.5089 | 0.7858 | | 0.4135 | 14.99 | 626 | 0.4655 | 0.8044 | | 0.3888 | 16.0 | 668 | 0.4398 | 0.8212 | | 0.3701 | 16.98 | 709 | 0.4258 | 0.8145 | | 0.3641 | 17.99 | 751 | 0.4339 | 0.8196 | | 0.3547 | 18.99 | 793 | 0.4556 | 0.7993 | | 0.3623 | 19.64 | 820 | 0.4435 | 0.8111 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "mc", "other" ]
Vero1nika3q/vit-base-patch16-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9273 - Accuracy: 0.7992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0627 | 1.0 | 532 | 0.9273 | 0.7992 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.13.3
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheese_plate", "cheesecake", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
rochtar/brain_tumors_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # brain_tumors_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the brain-tumor-collection dataset. It achieves the following results on the evaluation set: - Loss: 0.4077 - Accuracy: 0.8975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.961 | 1.0 | 25 | 0.7429 | 0.6825 | | 0.5196 | 2.0 | 50 | 0.4773 | 0.8725 | | 0.4218 | 3.0 | 75 | 0.4077 | 0.8975 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "glioma tumor", "meningioma tumor", "pituitary tumor", "no tumor" ]
Svetcher/vit-base-patch16-224-in21k-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.3774 - Accuracy: 0.7611 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.4111 | 1.0 | 710 | 2.3774 | 0.7611 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheese_plate", "cheesecake", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
Jacques7103/Food-Recognition
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # food-recognition This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2610 - Accuracy: 0.9324 ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5974 | 0.21 | 100 | 0.6096 | 0.8292 | | 0.5911 | 0.43 | 200 | 0.5204 | 0.8476 | | 0.7085 | 0.64 | 300 | 0.4329 | 0.8708 | | 0.5302 | 0.85 | 400 | 0.4843 | 0.8428 | | 0.2436 | 1.07 | 500 | 0.3767 | 0.886 | | 0.2355 | 1.28 | 600 | 0.3344 | 0.8956 | | 0.1497 | 1.49 | 700 | 0.3447 | 0.8932 | | 0.2213 | 1.71 | 800 | 0.3082 | 0.9072 | | 0.2197 | 1.92 | 900 | 0.3169 | 0.902 | | 0.0719 | 2.13 | 1000 | 0.2977 | 0.9136 | | 0.0526 | 2.35 | 1100 | 0.3455 | 0.9084 | | 0.0926 | 2.56 | 1200 | 0.3140 | 0.9208 | | 0.0427 | 2.77 | 1300 | 0.3307 | 0.9128 | | 0.0716 | 2.99 | 1400 | 0.3007 | 0.9204 | | 0.0151 | 3.2 | 1500 | 0.2791 | 0.9292 | | 0.032 | 3.41 | 1600 | 0.2737 | 0.9296 | | 0.0611 | 3.62 | 1700 | 0.2620 | 0.9336 | | 0.0175 | 3.84 | 1800 | 0.2610 | 0.9324 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.1+cpu - Datasets 2.15.0 - Tokenizers 0.15.0
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito" ]
TechRoC123/carmodel
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # carmodel This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0675 - F1: 0.9931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1732 | 0.31 | 500 | 0.6651 | 0.8403 | | 0.3989 | 0.62 | 1000 | 0.2942 | 0.9167 | | 0.2136 | 0.93 | 1500 | 0.1782 | 0.9542 | | 0.0549 | 1.23 | 2000 | 0.2001 | 0.9639 | | 0.0287 | 1.54 | 2500 | 0.1304 | 0.9819 | | 0.0091 | 1.85 | 3000 | 0.1112 | 0.9819 | | 0.0039 | 2.16 | 3500 | 0.0667 | 0.9917 | | 0.0023 | 2.47 | 4000 | 0.0708 | 0.9903 | | 0.0002 | 2.78 | 4500 | 0.0635 | 0.9931 | | 0.0002 | 3.09 | 5000 | 0.0619 | 0.9931 | | 0.0002 | 3.4 | 5500 | 0.0730 | 0.9917 | | 0.0 | 3.7 | 6000 | 0.0684 | 0.9917 | | 0.0009 | 4.01 | 6500 | 0.0696 | 0.9917 | | 0.0 | 4.32 | 7000 | 0.0693 | 0.9917 | | 0.0 | 4.63 | 7500 | 0.0686 | 0.9931 | | 0.0004 | 4.94 | 8000 | 0.0675 | 0.9931 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "none", "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
DownwardSpiral33/hands_palms_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # DownwardSpiral33/hands_palms_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4367 - Validation Loss: 0.7459 - Train Accuracy: 0.5806 - Epoch: 38 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 17400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6873 | 0.6761 | 0.6129 | 0 | | 0.6720 | 0.6625 | 0.6452 | 1 | | 0.6638 | 0.6577 | 0.6452 | 2 | | 0.6634 | 0.6547 | 0.6774 | 3 | | 0.6547 | 0.6507 | 0.6774 | 4 | | 0.6556 | 0.6423 | 0.6774 | 5 | | 0.6433 | 0.6346 | 0.6774 | 6 | | 0.6394 | 0.6293 | 0.7097 | 7 | | 0.6344 | 0.6239 | 0.7419 | 8 | | 0.6205 | 0.6206 | 0.7742 | 9 | | 0.6047 | 0.6115 | 0.7097 | 10 | | 0.6163 | 0.5970 | 0.7419 | 11 | | 0.6022 | 0.6069 | 0.7097 | 12 | | 0.5958 | 0.6009 | 0.7419 | 13 | | 0.5789 | 0.5971 | 0.6774 | 14 | | 0.5758 | 0.5962 | 0.6774 | 15 | | 0.5662 | 0.5976 | 0.6774 | 16 | | 0.5579 | 0.5926 | 0.6774 | 17 | | 0.5577 | 0.5811 | 0.6452 | 18 | | 0.5474 | 0.5880 | 0.6452 | 19 | | 0.5249 | 0.5921 | 0.6774 | 20 | | 0.5412 | 0.6075 | 0.6774 | 21 | | 0.5154 | 0.6266 | 0.7097 | 22 | | 0.5199 | 0.6063 | 0.6129 | 23 | | 0.5150 | 0.6054 | 0.5806 | 24 | | 0.5199 | 0.6107 | 0.6774 | 25 | | 0.4823 | 0.5959 | 0.6129 | 26 | | 0.4800 | 0.6581 | 0.6452 | 27 | | 0.4732 | 0.6620 | 0.6129 | 28 | | 0.4766 | 0.6284 | 0.6129 | 29 | | 0.4889 | 0.6978 | 0.5806 | 30 | | 0.4530 | 0.6636 | 0.5806 | 31 | | 0.4320 | 0.6348 | 0.6129 | 32 | | 0.4704 | 0.6326 | 0.6774 | 33 | | 0.4487 | 0.6937 | 0.6774 | 34 | | 0.4382 | 0.6423 | 0.5806 | 35 | | 0.4035 | 0.6926 | 0.5806 | 36 | | 0.4330 | 0.7225 | 0.5484 | 37 | | 0.4367 | 0.7459 | 0.5806 | 38 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "badhand", "goodhand" ]
akashmaggon/vit-base-crack-classification-aug-last
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-crack-classification-aug-last This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0124 - F1: 0.9943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4012 | 1.0 | 212 | 0.3809 | 0.8400 | | 0.1153 | 2.0 | 424 | 0.1429 | 0.9465 | | 0.0467 | 3.0 | 636 | 0.0742 | 0.9628 | | 0.0097 | 4.0 | 848 | 0.0194 | 0.9907 | | 0.0062 | 5.0 | 1060 | 0.0163 | 0.9943 | | 0.0039 | 6.0 | 1272 | 0.0124 | 0.9943 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
Miotvinnik00/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.8575 - Accuracy: 0.918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1974 | 0.99 | 62 | 1.1935 | 0.901 | | 0.8604 | 2.0 | 125 | 0.9183 | 0.914 | | 0.7686 | 2.98 | 186 | 0.8575 | 0.918 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
beingamit99/car_damage_detection
# 🚗 Car Damage Prediction Model 🛠️

Predict car damage with confidence using this **ViT/BEiT**-based model! It is trained to classify car damage into six distinct classes:
- **"0"**: *Crack*
- **"1"**: *Scratch*
- **"2"**: *Tire Flat*
- **"3"**: *Dent*
- **"4"**: *Glass Shatter*
- **"5"**: *Lamp Broken*

## Key Features 🔍
- Accurate classification into six car damage categories.
- Seamless integration into various applications.
- Streamlined image processing with a transformer-based architecture.

## Applications 🌐
This powerful car damage prediction model can be seamlessly integrated into various applications, such as:
- **Auto Insurance Claim Processing:** Streamline the assessment of car damage for faster claim processing.
- **Vehicle Inspection Services:** Enhance efficiency in vehicle inspection services by automating damage detection.
- **Used Car Marketplaces:** Provide detailed insights into the condition of used cars through automated damage analysis.

Feel free to explore and integrate this model into your applications for accurate car damage predictions! 🌟

## How to Use This Model 🤖

### First Approach
```python
import numpy as np
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load the model and image processor
processor = AutoImageProcessor.from_pretrained("beingamit99/car_damage_detection")
model = AutoModelForImageClassification.from_pretrained("beingamit99/car_damage_detection")

# Load and process the image (IMAGE is the path to your image file)
image = Image.open(IMAGE)
inputs = processor(images=image, return_tensors="pt")

# Make predictions
outputs = model(**inputs)
logits = outputs.logits.detach().cpu().numpy()

# Softmax turns the raw logits into class probabilities
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
predicted_class_id = int(np.argmax(probs))
predicted_proba = float(np.max(probs))
label_map = model.config.id2label
predicted_class_name = label_map[predicted_class_id]

# Print the results
print(f"Predicted class: {predicted_class_name} (probability: {predicted_proba:.4f})")
```

### Second Approach
```python
from transformers import pipeline

# Create a classification pipeline
pipe = pipeline("image-classification", model="beingamit99/car_damage_detection")
pipe(IMAGE)
```
[ "crack", "scratch", "tire flat", "dent", "glass shatter", "lamp broken" ]
platzi/platzi-vit-model-aleckeith
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-aleckeith This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0621 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1238 | 3.85 | 500 | 0.0621 | 0.9774 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.13.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
hkivancoral/smids_1x_deit_small_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_deit_small_rms_00001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7203 - Accuracy: 0.8848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4024 | 1.0 | 76 | 0.3457 | 0.8598 | | 0.2939 | 2.0 | 152 | 0.3056 | 0.8765 | | 0.1494 | 3.0 | 228 | 0.3010 | 0.8815 | | 0.1219 | 4.0 | 304 | 0.3026 | 0.8848 | | 0.0709 | 5.0 | 380 | 0.3230 | 0.8881 | | 0.0265 | 6.0 | 456 | 0.3473 | 0.8915 | | 0.0053 | 7.0 | 532 | 0.4250 | 0.8815 | | 0.0086 | 8.0 | 608 | 0.4355 | 0.8848 | | 0.0119 | 9.0 | 684 | 0.4635 | 0.8865 | | 0.0011 | 10.0 | 760 | 0.4824 | 0.8932 | | 0.0255 | 11.0 | 836 | 0.5139 | 0.8831 | | 0.0006 | 12.0 | 912 | 0.5793 | 0.8815 | | 0.0183 | 13.0 | 988 | 0.5403 | 0.8848 | | 0.0037 | 14.0 | 1064 | 0.5951 | 0.8848 | | 0.024 | 15.0 | 1140 | 0.5951 | 0.8815 | | 0.0002 | 16.0 | 1216 | 0.6061 | 0.8798 | | 0.0001 | 17.0 | 1292 | 0.5992 | 0.8948 | | 0.0157 | 18.0 | 1368 | 0.6206 | 0.8848 | | 0.0002 | 19.0 | 1444 | 0.6514 | 0.8881 | | 0.0058 | 20.0 | 1520 | 0.6656 | 0.8798 | | 0.0096 | 21.0 | 1596 | 0.6589 | 0.8915 | | 0.0045 | 22.0 | 1672 | 0.6509 | 0.8848 | | 0.0001 | 23.0 | 1748 | 0.6180 | 0.8881 | | 0.0001 | 24.0 | 1824 | 0.6676 | 0.8765 | | 0.0077 | 25.0 | 1900 | 0.6271 | 0.8831 | | 0.0032 | 26.0 | 1976 | 0.7135 | 0.8848 | | 0.0043 | 27.0 | 2052 | 0.7062 | 0.8765 | | 0.0034 | 28.0 | 2128 | 0.7064 | 0.8781 | | 0.0062 | 29.0 | 2204 | 0.6764 | 0.8781 | | 0.0001 | 30.0 | 2280 | 0.6847 | 0.8831 | | 0.006 | 31.0 | 2356 | 0.6868 | 0.8865 | | 0.009 | 32.0 | 2432 | 0.7122 | 0.8881 | | 0.0 | 33.0 | 2508 | 0.7011 | 0.8865 | | 0.0 | 34.0 | 2584 | 0.7102 | 0.8881 | | 0.0121 | 35.0 | 2660 | 0.7023 | 0.8881 | | 0.0034 | 36.0 | 2736 | 0.7188 | 0.8765 | | 0.0064 | 37.0 | 2812 | 0.7029 | 0.8848 | | 0.0001 | 38.0 | 2888 | 0.7098 | 0.8798 | | 0.0031 | 39.0 | 2964 | 0.7171 | 0.8815 | | 0.0 | 40.0 | 3040 | 0.7137 | 0.8815 | | 0.0029 | 41.0 | 3116 | 0.7143 | 0.8815 | | 0.0 | 42.0 | 3192 | 0.7224 | 0.8815 | | 0.0048 | 43.0 | 3268 | 0.7157 | 0.8831 | | 0.0 | 44.0 | 3344 | 0.7190 | 0.8848 | | 0.0 | 45.0 | 3420 | 0.7200 | 0.8848 | | 0.0 | 46.0 | 3496 | 0.7204 | 0.8848 | | 0.0 | 47.0 | 3572 | 0.7209 | 0.8848 | | 0.0024 | 48.0 | 3648 | 0.7205 | 0.8848 | | 0.0 | 49.0 | 3724 | 0.7204 | 0.8848 | | 0.0 | 50.0 | 3800 | 0.7203 | 0.8848 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_deit_small_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_deit_small_rms_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8494 - Accuracy: 0.8702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.391 | 1.0 | 75 | 0.3306 | 0.8569 | | 0.2024 | 2.0 | 150 | 0.3078 | 0.8719 | | 0.1659 | 3.0 | 225 | 0.3046 | 0.8636 | | 0.1089 | 4.0 | 300 | 0.3233 | 0.8702 | | 0.0832 | 5.0 | 375 | 0.4345 | 0.8552 | | 0.0315 | 6.0 | 450 | 0.4227 | 0.8686 | | 0.0247 | 7.0 | 525 | 0.5432 | 0.8652 | | 0.0031 | 8.0 | 600 | 0.5857 | 0.8769 | | 0.0058 | 9.0 | 675 | 0.5689 | 0.8619 | | 0.0354 | 10.0 | 750 | 0.6368 | 0.8619 | | 0.0193 | 11.0 | 825 | 0.5921 | 0.8752 | | 0.0019 | 12.0 | 900 | 0.6514 | 0.8785 | | 0.0447 | 13.0 | 975 | 0.6838 | 0.8686 | | 0.0527 | 14.0 | 1050 | 0.6693 | 0.8735 | | 0.0047 | 15.0 | 1125 | 0.6444 | 0.8735 | | 0.0064 | 16.0 | 1200 | 0.7052 | 0.8719 | | 0.0002 | 17.0 | 1275 | 0.7289 | 0.8636 | | 0.0092 | 18.0 | 1350 | 0.7405 | 0.8669 | | 0.0001 | 19.0 | 1425 | 0.7743 | 0.8619 | | 0.0038 | 20.0 | 1500 | 0.7512 | 0.8686 | | 0.0001 | 21.0 | 1575 | 0.8249 | 0.8602 | | 0.0001 | 22.0 | 1650 | 0.7832 | 0.8686 | | 0.0001 | 23.0 | 1725 | 0.8312 | 0.8636 | | 0.0 | 24.0 | 1800 | 0.7877 | 0.8669 | | 0.0 | 25.0 | 1875 | 0.7958 | 0.8719 | | 0.0001 | 26.0 | 1950 | 0.7718 | 0.8752 | | 0.0055 | 27.0 | 2025 | 0.7918 | 0.8686 | | 0.0032 | 28.0 | 2100 | 0.8022 | 0.8735 | | 0.0023 | 29.0 | 2175 | 0.8185 | 0.8735 | | 0.0031 | 30.0 | 2250 | 0.8365 | 0.8735 | | 0.0028 | 31.0 | 2325 | 0.7946 | 0.8686 | | 0.0 | 32.0 | 2400 | 0.8222 | 0.8752 | | 0.0 | 33.0 | 2475 | 0.7981 | 0.8719 | | 0.0 | 34.0 | 2550 | 0.8313 | 0.8752 | | 0.0084 | 35.0 | 2625 | 0.8895 | 0.8702 | | 0.0 | 36.0 | 2700 | 0.8170 | 0.8686 | | 0.0 | 37.0 | 2775 | 0.8344 | 0.8752 | | 0.0 | 38.0 | 2850 | 0.8561 | 0.8735 | | 0.0022 | 39.0 | 2925 | 0.8329 | 0.8702 | | 0.0 | 40.0 | 3000 | 0.8473 | 0.8719 | | 0.0026 | 41.0 | 3075 | 0.8354 | 0.8686 | | 0.0 | 42.0 | 3150 | 0.8451 | 0.8735 | | 0.0025 | 43.0 | 3225 | 0.8430 | 0.8735 | | 0.0025 | 44.0 | 3300 | 0.8484 | 0.8719 | | 0.0 | 45.0 | 3375 | 0.8461 | 0.8702 | | 0.0 | 46.0 | 3450 | 0.8473 | 0.8735 | | 0.0023 | 47.0 | 3525 | 0.8487 | 0.8719 | | 0.0 | 48.0 | 3600 | 0.8492 | 0.8702 | | 0.0022 | 49.0 | 3675 | 0.8491 | 0.8686 | | 0.0022 | 50.0 | 3750 | 0.8494 | 0.8702 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
hkivancoral/smids_1x_deit_small_rms_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_1x_deit_small_rms_00001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7182 - Accuracy: 0.905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3259 | 1.0 | 75 | 0.3001 | 0.89 | | 0.2426 | 2.0 | 150 | 0.3217 | 0.8717 | | 0.1676 | 3.0 | 225 | 0.2596 | 0.9083 | | 0.1287 | 4.0 | 300 | 0.2827 | 0.895 | | 0.0316 | 5.0 | 375 | 0.3452 | 0.885 | | 0.0237 | 6.0 | 450 | 0.3793 | 0.9017 | | 0.0244 | 7.0 | 525 | 0.4128 | 0.8967 | | 0.0233 | 8.0 | 600 | 0.4590 | 0.8883 | | 0.0286 | 9.0 | 675 | 0.4790 | 0.8983 | | 0.0295 | 10.0 | 750 | 0.4835 | 0.8917 | | 0.0562 | 11.0 | 825 | 0.4705 | 0.9067 | | 0.0087 | 12.0 | 900 | 0.5035 | 0.9033 | | 0.0083 | 13.0 | 975 | 0.5418 | 0.9017 | | 0.0001 | 14.0 | 1050 | 0.5563 | 0.9 | | 0.0012 | 15.0 | 1125 | 0.5874 | 0.8983 | | 0.0001 | 16.0 | 1200 | 0.5698 | 0.8967 | | 0.0001 | 17.0 | 1275 | 0.5930 | 0.9033 | | 0.0062 | 18.0 | 1350 | 0.5972 | 0.9017 | | 0.0048 | 19.0 | 1425 | 0.5918 | 0.9033 | | 0.0089 | 20.0 | 1500 | 0.6518 | 0.9017 | | 0.0001 | 21.0 | 1575 | 0.7835 | 0.885 | | 0.0001 | 22.0 | 1650 | 0.6700 | 0.9 | | 0.0031 | 23.0 | 1725 | 0.6679 | 0.8983 | | 0.0 | 24.0 | 1800 | 0.6364 | 0.9033 | | 0.0001 | 25.0 | 1875 | 0.6464 | 0.8983 | | 0.003 | 26.0 | 1950 | 0.6535 | 0.8967 | | 0.0 | 27.0 | 2025 | 0.6525 | 0.8983 | | 0.0 | 28.0 | 2100 | 0.6526 | 0.8983 | | 0.0 | 29.0 | 2175 | 0.6663 | 0.895 | | 0.0 | 30.0 | 2250 | 0.6645 | 0.8983 | | 0.0 | 31.0 | 2325 | 0.6717 | 0.9 | | 0.0 | 32.0 | 2400 | 0.6659 | 0.8983 | | 0.0 | 33.0 | 2475 | 0.6774 | 0.9017 | | 0.0051 | 34.0 | 2550 | 0.6726 | 0.905 | | 0.0059 | 35.0 | 2625 | 0.7209 | 0.8933 | | 0.0031 | 36.0 | 2700 | 0.6818 | 0.9067 | | 0.0022 | 37.0 | 2775 | 0.6938 | 0.8967 | | 0.0 | 38.0 | 2850 | 0.6968 | 0.8967 | | 0.0 | 39.0 | 2925 | 0.7122 | 0.8983 | | 0.0 | 40.0 | 3000 | 0.7008 | 0.8983 | | 0.0 | 41.0 | 3075 | 0.7070 | 0.8983 | | 0.0026 | 42.0 | 3150 | 0.7002 | 0.9 | | 0.0025 | 43.0 | 3225 | 0.7107 | 0.9 | | 0.0 | 44.0 | 3300 | 0.7106 | 0.9033 | | 0.0025 | 45.0 | 3375 | 0.7116 | 0.905 | | 0.0025 | 46.0 | 3450 | 0.7142 | 0.905 | | 0.0047 | 47.0 | 3525 | 0.7163 | 0.9033 | | 0.0 | 48.0 | 3600 | 0.7169 | 0.9033 | | 0.0 | 49.0 | 3675 | 0.7178 | 0.9033 | | 0.0045 | 50.0 | 3750 | 0.7182 | 0.905 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]