Columns: `model_id` (string, length 7–105) · `model_card` (string, length 1–130k) · `model_labels` (list, length 2–80k)
Madhukar7559/vit-fire-detection
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fire-detection This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0103 - Precision: 0.9987 - Recall: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0797 | 1.0 | 190 | 0.0811 | 0.9789 | 0.9775 | | 0.0536 | 2.0 | 380 | 0.0205 | 0.9947 | 0.9947 | | 0.0374 | 3.0 | 570 | 0.0283 | 0.9922 | 0.9921 | | 0.0209 | 4.0 | 760 | 0.0046 | 1.0 | 1.0 | | 0.0104 | 5.0 | 950 | 0.0128 | 0.9960 | 0.9960 | | 0.0159 | 6.0 | 1140 | 0.0152 | 0.9947 | 0.9947 | | 0.0119 | 7.0 | 1330 | 0.0084 | 0.9974 | 0.9974 | | 0.0044 | 8.0 | 1520 | 0.0111 | 0.9987 | 0.9987 | | 0.0077 | 9.0 | 1710 | 0.0094 | 0.9987 | 0.9987 | | 0.0106 | 10.0 | 1900 | 0.0103 | 0.9987 | 0.9987 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Tokenizers 0.15.0
[ "fire", "normal", "smoke" ]
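The hyperparameters in the card above (lr_scheduler_type `linear`, 100 warmup steps, 10 epochs × 190 steps = 1900 total steps, peak learning_rate 0.0002) determine the whole learning-rate trajectory. A minimal sketch of that warmup-then-linear-decay schedule, assuming the usual HF `linear` scheduler semantics (not code from the card itself):

```python
def linear_warmup_lr(step, base_lr=2e-4, warmup_steps=100, total_steps=1900):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

So the peak rate 2e-4 is reached at step 100 (mid-epoch 1) and the rate is back to 0 by the final step 1900.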
arieg/bw_spec_cls_4_01_noise_200_confirm
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/bw_spec_cls_4_01_noise_200_confirm This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0143 - Train Sparse Categorical Accuracy: 1.0 - Validation Loss: 0.0140 - Validation Sparse Categorical Accuracy: 1.0 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 14400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.6064 | 0.9569 | 0.2224 | 1.0 | 0 | | 0.1543 | 1.0 | 0.1168 | 1.0 | 1 | | 0.0979 | 1.0 | 0.0858 | 1.0 | 2 | | 0.0769 | 1.0 | 0.0709 | 1.0 | 3 | | 0.0647 | 1.0 | 0.0603 | 1.0 | 4 | | 0.0558 | 1.0 | 0.0528 | 1.0 | 5 | | 0.0490 | 1.0 | 0.0465 | 1.0 | 6 | | 0.0434 | 1.0 | 0.0414 | 1.0 | 7 | | 0.0387 | 1.0 | 0.0369 | 1.0 | 8 | | 0.0347 | 1.0 | 0.0332 | 1.0 | 9 | | 0.0312 | 1.0 | 0.0300 | 1.0 | 10 | | 0.0282 | 1.0 | 0.0272 | 1.0 | 11 | | 0.0256 | 1.0 | 0.0248 | 1.0 | 12 | | 0.0234 | 1.0 | 0.0226 | 1.0 | 13 | | 0.0214 | 1.0 | 0.0207 | 1.0 | 14 | | 0.0196 | 1.0 | 0.0190 | 1.0 | 15 | | 0.0181 | 1.0 | 0.0176 | 1.0 | 16 | | 0.0167 | 1.0 | 0.0162 | 1.0 | 17 | | 0.0155 | 1.0 | 0.0150 | 1.0 | 18 | | 0.0143 | 1.0 | 0.0140 | 1.0 | 19 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "141", "190", "193", "194" ]
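The Keras cards in this dataset all serialize their schedule as a `PolynomialDecay` config (here: initial_learning_rate 3e-05, decay_steps 14400, end_learning_rate 0.0, power 1.0). With power 1.0 and a zero end rate this is plain linear decay; a sketch of the formula, assuming standard `keras.optimizers.schedules.PolynomialDecay` semantics:

```python
def polynomial_decay_lr(step, initial_lr=3e-5, decay_steps=14400, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay; with power=1.0 and end_lr=0.0 it is linear decay."""
    step = min(step, decay_steps)  # rate is held at end_lr past decay_steps (cycle=False)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
```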
hkivancoral/hushem_conflu_deneme_f1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_conflu_deneme_f1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 4.1726 - Accuracy: 0.4222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4791 | 0.3333 | | 2.0372 | 2.0 | 12 | 1.3991 | 0.2444 | | 2.0372 | 3.0 | 18 | 1.9327 | 0.2444 | | 1.2524 | 4.0 | 24 | 1.4584 | 0.3556 | | 1.1547 | 5.0 | 30 | 1.3317 | 0.3556 | | 1.1547 | 6.0 | 36 | 1.9319 | 0.3333 | | 0.8748 | 7.0 | 42 | 1.3603 | 0.4222 | | 0.8748 | 8.0 | 48 | 1.0979 | 0.5333 | | 0.8902 | 9.0 | 54 | 1.9103 | 0.4222 | | 0.6653 | 10.0 | 60 | 2.0004 | 0.3778 | | 0.6653 | 11.0 | 66 | 2.0962 | 0.4 | | 0.5253 | 12.0 | 72 | 1.2246 | 0.5111 | | 0.5253 | 13.0 | 78 | 1.6731 | 0.4889 | | 0.5223 | 14.0 | 84 | 2.1516 | 0.4 | | 0.2968 | 15.0 | 90 | 2.5065 | 0.4 | | 0.2968 | 16.0 | 96 | 2.0657 | 0.4444 | | 0.4394 | 17.0 | 102 | 1.5876 | 0.4667 | | 0.4394 | 18.0 | 108 | 2.1433 | 0.4 | | 0.2725 | 19.0 | 114 | 1.4220 | 0.5556 | | 0.1718 | 20.0 | 120 | 1.7558 | 0.4667 | | 0.1718 | 21.0 | 126 | 2.3734 | 0.4667 | | 0.0642 | 22.0 | 132 | 2.9683 | 0.4667 | | 0.0642 | 23.0 | 138 | 2.9217 | 0.4889 | | 0.0435 | 24.0 | 144 | 3.4732 | 0.4667 | | 0.0409 | 25.0 | 150 | 3.8797 | 0.4667 | | 0.0409 | 26.0 | 156 | 4.3387 | 0.4444 | | 0.0418 | 27.0 | 162 | 3.9839 | 0.4444 | | 0.0418 | 28.0 | 168 | 4.5122 | 0.4444 | | 0.0035 | 29.0 | 174 | 4.2517 | 0.4444 | | 0.0006 | 30.0 | 180 | 3.9958 | 0.4444 | | 0.0006 | 31.0 | 186 | 3.9647 | 0.4444 | | 0.0004 | 32.0 | 192 | 3.9928 | 0.4444 | | 0.0004 | 33.0 | 198 | 4.0376 | 0.4222 | | 0.0003 | 34.0 | 204 | 4.0736 | 0.4222 | | 0.0002 | 35.0 | 210 | 4.1046 | 0.4222 | | 0.0002 | 36.0 | 216 | 4.1284 | 0.4222 | | 0.0002 | 37.0 | 222 | 4.1466 | 0.4222 | | 0.0002 | 38.0 | 228 | 4.1585 | 0.4222 | | 0.0002 | 39.0 | 234 | 4.1664 | 0.4222 | | 0.0002 | 40.0 | 240 | 4.1704 | 0.4222 | | 0.0002 | 41.0 | 246 | 4.1721 | 0.4222 | | 0.0002 | 42.0 | 252 | 4.1726 | 0.4222 | | 0.0002 | 43.0 | 258 | 4.1726 | 0.4222 | | 0.0002 | 44.0 | 264 | 4.1726 | 0.4222 | | 0.0002 | 45.0 | 270 | 4.1726 | 0.4222 | | 0.0002 | 46.0 | 276 | 4.1726 | 0.4222 | | 0.0002 | 47.0 | 282 | 4.1726 | 0.4222 | | 0.0002 | 48.0 | 288 | 4.1726 | 0.4222 | | 0.0002 | 49.0 | 294 | 4.1726 | 0.4222 | | 0.0002 | 50.0 | 300 | 4.1726 | 0.4222 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
dwiedarioo/vit-base-patch16-224-in21k-datascience2
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dwiedarioo/vit-base-patch16-224-in21k-datascience2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0109 - Train Accuracy: 0.9997 - Train Top-3-accuracy: 1.0 - Validation Loss: 0.0242 - Validation Accuracy: 0.9948 - Validation Top-3-accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2880, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 0.3365 | 0.9206 | 0.9902 | 0.1057 | 0.9809 | 1.0 | 0 | | 0.0657 | 0.9891 | 0.9999 | 0.0509 | 0.9902 | 1.0 | 1 | | 0.0252 | 0.9980 | 1.0 | 0.0314 | 0.9945 | 1.0 | 2 | | 0.0146 | 0.9992 | 1.0 | 0.0260 | 0.9948 | 1.0 | 3 | | 0.0109 | 0.9997 | 1.0 | 0.0242 | 0.9948 | 1.0 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "meningioma_tumor", ".ipynb_checkpoints", "glioma_tumor", "pituitary_tumor", "normal" ]
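The card above trains with `mixed_float16` wrapped in a dynamic loss-scale optimizer (`dynamic: True`, `initial_scale: 32768.0`, `dynamic_growth_steps: 2000`). A sketch of how dynamic loss scaling typically behaves, assuming the standard scheme (gradients computed on `loss * scale` and later unscaled; the scale doubles after a run of finite-gradient steps and halves on overflow):

```python
def unscale(scaled_grad, scale=32768.0):
    """Recover the true gradient from one computed against loss * scale."""
    return scaled_grad / scale

def update_scale(scale, good_steps, overflow, growth_interval=2000):
    """Return (new_scale, new_good_steps) after one optimizer step."""
    if overflow:                       # non-finite gradients: skip step, shrink scale
        return scale / 2.0, 0
    good_steps += 1
    if good_steps >= growth_interval:  # long stable run: try a larger scale
        return scale * 2.0, 0
    return scale, good_steps
```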
hkivancoral/hushem_conflu_deneme_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_conflu_deneme_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.8961 - Accuracy: 0.5111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4190 | 0.2444 | | 1.9213 | 2.0 | 12 | 1.3227 | 0.3111 | | 1.9213 | 3.0 | 18 | 2.3526 | 0.2444 | | 1.2734 | 4.0 | 24 | 1.7104 | 0.3778 | | 1.0407 | 5.0 | 30 | 1.6039 | 0.3556 | | 1.0407 | 6.0 | 36 | 1.2459 | 0.4667 | | 0.733 | 7.0 | 42 | 1.3344 | 0.4667 | | 0.733 | 8.0 | 48 | 1.5744 | 0.5556 | | 0.448 | 9.0 | 54 | 1.2479 | 0.5556 | | 0.3254 | 10.0 | 60 | 2.2545 | 0.5333 | | 0.3254 | 11.0 | 66 | 1.7472 | 0.5333 | | 0.2088 | 12.0 | 72 | 2.0350 | 0.5778 | | 0.2088 | 13.0 | 78 | 3.0002 | 0.4889 | | 0.1216 | 14.0 | 84 | 2.1774 | 0.5556 | | 0.0746 | 15.0 | 90 | 2.5953 | 0.5333 | | 0.0746 | 16.0 | 96 | 2.8934 | 0.5111 | | 0.0176 | 17.0 | 102 | 2.8961 | 0.5111 | | 0.0176 | 18.0 | 108 | 2.8961 | 0.5111 | | 0.0201 | 19.0 | 114 | 2.8961 | 0.5111 | | 0.0136 | 20.0 | 120 | 2.8961 | 0.5111 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_conflu_deneme_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_conflu_deneme_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.9900 - Accuracy: 0.5333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5124 | 0.2444 | | 2.1014 | 2.0 | 12 | 1.4172 | 0.2667 | | 2.1014 | 3.0 | 18 | 1.3682 | 0.2667 | | 1.3494 | 4.0 | 24 | 1.5568 | 0.3333 | | 1.1794 | 5.0 | 30 | 1.1703 | 0.3778 | | 1.1794 | 6.0 | 36 | 1.1853 | 0.5333 | | 0.9962 | 7.0 | 42 | 0.9960 | 0.5778 | | 0.9962 | 8.0 | 48 | 0.9911 | 0.5778 | | 0.7941 | 9.0 | 54 | 1.7710 | 0.4444 | | 0.6504 | 10.0 | 60 | 1.0188 | 0.5111 | | 0.6504 | 11.0 | 66 | 1.3899 | 0.4889 | | 0.3424 | 12.0 | 72 | 1.3633 | 0.5333 | | 0.3424 | 13.0 | 78 | 1.6911 | 0.4667 | | 0.1576 | 14.0 | 84 | 1.8405 | 0.5556 | | 0.0563 | 15.0 | 90 | 1.8925 | 0.5333 | | 0.0563 | 16.0 | 96 | 2.0167 | 0.5333 | | 0.0162 | 17.0 | 102 | 1.9900 | 0.5333 | | 0.0162 | 18.0 | 108 | 1.9900 | 0.5333 | | 0.009 | 19.0 | 114 | 1.9900 | 0.5333 | | 0.0088 | 20.0 | 120 | 1.9900 | 0.5333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_conflu_deneme_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_conflu_deneme_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9617 - Accuracy: 0.6279 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6871 | 0.2558 | | 1.9835 | 2.0 | 12 | 1.3632 | 0.2326 | | 1.9835 | 3.0 | 18 | 1.4109 | 0.3256 | | 1.294 | 4.0 | 24 | 1.3794 | 0.4186 | | 1.2341 | 5.0 | 30 | 1.2119 | 0.4651 | | 1.2341 | 6.0 | 36 | 1.4964 | 0.4419 | | 1.0897 | 7.0 | 42 | 1.2398 | 0.4651 | | 1.0897 | 8.0 | 48 | 1.0532 | 0.5349 | | 0.9835 | 9.0 | 54 | 1.1022 | 0.5116 | | 0.9034 | 10.0 | 60 | 0.9784 | 0.6279 | | 0.9034 | 11.0 | 66 | 1.5952 | 0.5116 | | 0.8061 | 12.0 | 72 | 0.9828 | 0.5581 | | 0.8061 | 13.0 | 78 | 0.9199 | 0.7209 | | 0.765 | 14.0 | 84 | 1.0672 | 0.5581 | | 0.6513 | 15.0 | 90 | 1.0129 | 0.6744 | | 0.6513 | 16.0 | 96 | 0.9247 | 0.6977 | | 0.4919 | 17.0 | 102 | 0.9617 | 0.6279 | | 0.4919 | 18.0 | 108 | 0.9617 | 0.6279 | | 0.4742 | 19.0 | 114 | 0.9617 | 0.6279 | | 0.4695 | 20.0 | 120 | 0.9617 | 0.6279 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_conflu_deneme_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_conflu_deneme_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8165 - Accuracy: 0.7381 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7088 | 0.2381 | | 1.9076 | 2.0 | 12 | 1.4617 | 0.2381 | | 1.9076 | 3.0 | 18 | 1.4512 | 0.2619 | | 1.4689 | 4.0 | 24 | 1.3283 | 0.2381 | | 1.3599 | 5.0 | 30 | 1.0112 | 0.6667 | | 1.3599 | 6.0 | 36 | 1.1598 | 0.3810 | | 1.2233 | 7.0 | 42 | 1.4323 | 0.4524 | | 1.2233 | 8.0 | 48 | 0.9658 | 0.6667 | | 1.0502 | 9.0 | 54 | 0.9166 | 0.6429 | | 0.8636 | 10.0 | 60 | 0.8181 | 0.6190 | | 0.8636 | 11.0 | 66 | 1.2729 | 0.5238 | | 0.8856 | 12.0 | 72 | 0.7434 | 0.7381 | | 0.8856 | 13.0 | 78 | 0.6840 | 0.7143 | | 0.6672 | 14.0 | 84 | 0.9596 | 0.5238 | | 0.5861 | 15.0 | 90 | 0.7243 | 0.7381 | | 0.5861 | 16.0 | 96 | 0.8378 | 0.7143 | | 0.4357 | 17.0 | 102 | 0.8165 | 0.7381 | | 0.4357 | 18.0 | 108 | 0.8165 | 0.7381 | | 0.4614 | 19.0 | 114 | 0.8165 | 0.7381 | | 0.431 | 20.0 | 120 | 0.8165 | 0.7381 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_conflu_deneme_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_conflu_deneme_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.9630 - Accuracy: 0.6341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4708 | 0.2439 | | 1.7951 | 2.0 | 12 | 1.3099 | 0.2439 | | 1.7951 | 3.0 | 18 | 1.1130 | 0.4146 | | 1.2772 | 4.0 | 24 | 1.0471 | 0.7073 | | 1.1124 | 5.0 | 30 | 1.2680 | 0.5366 | | 1.1124 | 6.0 | 36 | 1.0908 | 0.5122 | | 0.9481 | 7.0 | 42 | 1.5674 | 0.3902 | | 0.9481 | 8.0 | 48 | 0.8947 | 0.6098 | | 0.9653 | 9.0 | 54 | 1.1885 | 0.6098 | | 0.639 | 10.0 | 60 | 0.9898 | 0.6585 | | 0.639 | 11.0 | 66 | 1.7943 | 0.4634 | | 0.5108 | 12.0 | 72 | 1.7088 | 0.5366 | | 0.5108 | 13.0 | 78 | 1.6432 | 0.5610 | | 0.1679 | 14.0 | 84 | 1.5598 | 0.5854 | | 0.1286 | 15.0 | 90 | 2.1600 | 0.5854 | | 0.1286 | 16.0 | 96 | 1.9849 | 0.5854 | | 0.0501 | 17.0 | 102 | 1.9630 | 0.6341 | | 0.0501 | 18.0 | 108 | 1.9630 | 0.6341 | | 0.0271 | 19.0 | 114 | 1.9630 | 0.6341 | | 0.0437 | 20.0 | 120 | 1.9630 | 0.6341 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
arieg/bw_spec_cls_4_01_s_200
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/bw_spec_cls_4_01_s_200 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0046 - Train Sparse Categorical Accuracy: 1.0 - Validation Loss: 0.0045 - Validation Sparse Categorical Accuracy: 1.0 - Epoch: 39 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 28800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.7335 | 0.9306 | 0.3009 | 1.0 | 0 | | 0.1862 | 1.0 | 0.1287 | 1.0 | 1 | | 0.1060 | 1.0 | 0.0894 | 1.0 | 2 | | 0.0803 | 1.0 | 0.0719 | 1.0 | 3 | | 0.0664 | 1.0 | 0.0611 | 1.0 | 4 | | 0.0570 | 1.0 | 0.0530 | 1.0 | 5 | | 0.0498 | 1.0 | 0.0468 | 1.0 | 6 | | 0.0440 | 1.0 | 0.0415 | 1.0 | 7 | | 0.0392 | 1.0 | 0.0372 | 1.0 | 8 | | 0.0352 | 1.0 | 0.0334 | 1.0 | 9 | | 0.0317 | 1.0 | 0.0302 | 1.0 | 10 | | 0.0287 | 1.0 | 0.0274 | 1.0 | 11 | | 0.0261 | 1.0 | 0.0250 | 1.0 | 12 | | 0.0238 | 1.0 | 0.0228 | 1.0 | 13 | | 0.0218 | 1.0 | 0.0209 | 1.0 | 14 | | 0.0200 | 1.0 | 0.0193 | 1.0 | 15 | | 0.0184 | 1.0 | 0.0178 | 1.0 | 16 | | 0.0170 | 1.0 | 0.0164 | 1.0 | 17 | | 0.0157 | 1.0 | 0.0152 | 1.0 | 18 | | 0.0146 | 1.0 | 0.0141 | 1.0 | 19 | | 0.0136 | 1.0 | 0.0132 | 1.0 | 20 | | 0.0126 | 1.0 | 0.0123 | 1.0 | 21 | | 0.0118 | 1.0 | 0.0115 | 1.0 | 22 | | 0.0111 | 1.0 | 0.0108 | 1.0 | 23 | | 0.0104 | 1.0 | 0.0101 | 1.0 | 24 | | 0.0097 | 1.0 | 0.0095 | 1.0 | 25 | | 0.0091 | 1.0 | 0.0089 | 1.0 | 26 | | 0.0086 | 1.0 | 0.0084 | 1.0 | 27 | | 0.0081 | 1.0 | 0.0079 | 1.0 | 28 | | 0.0077 | 1.0 | 0.0075 | 1.0 | 29 | | 0.0072 | 1.0 | 0.0071 | 1.0 | 30 | | 0.0069 | 1.0 | 0.0067 | 1.0 | 31 | | 0.0065 | 1.0 | 0.0064 | 1.0 | 32 | | 0.0062 | 1.0 | 0.0060 | 1.0 | 33 | | 0.0058 | 1.0 | 0.0057 | 1.0 | 34 | | 0.0056 | 1.0 | 0.0055 | 1.0 | 35 | | 0.0053 | 1.0 | 0.0052 | 1.0 | 36 | | 0.0050 | 1.0 | 0.0049 | 1.0 | 37 | | 0.0048 | 1.0 | 0.0047 | 1.0 | 38 | | 0.0046 | 1.0 | 0.0045 | 1.0 | 39 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "141", "190", "193", "194" ]
thomastess/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Framework versions - Transformers 4.35.0 - Pytorch 1.10.2 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
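The card above combines `train_batch_size: 16` with `gradient_accumulation_steps: 4` to reach its `total_train_batch_size: 64`: gradients from four micro-batches are averaged before a single optimizer step. A minimal scalar sketch of that pattern (illustrative only, not the Trainer's actual implementation):

```python
def accumulated_update(param, micro_batch_grads, lr=5e-5):
    """One optimizer step after accumulating gradients over several micro-batches."""
    grad = sum(micro_batch_grads) / len(micro_batch_grads)  # average across micro-batches
    return param - lr * grad

# Effective batch size, as reported in the card:
effective_batch = 16 * 4  # train_batch_size * gradient_accumulation_steps = 64
```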
deathperminutV2/hojas
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hojas This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0340 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1438 | 3.85 | 500 | 0.0340 | 0.9850 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.13.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
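The single logged row in the card above (step 500 at epoch 3.85) lets us back out the training-set size: roughly 130 optimizer steps per epoch, which at `train_batch_size: 8` implies on the order of 1,040 training images, consistent with the beans train split of roughly a thousand images. A quick sanity-check of that arithmetic:

```python
steps_logged, epoch_logged = 500, 3.85          # from the card's results table
steps_per_epoch = round(steps_logged / epoch_logged)
train_batch_size = 8                             # from the card's hyperparameters
approx_train_examples = steps_per_epoch * train_batch_size
```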
Noobjing/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Noobjing/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2571 - Validation Loss: 1.1757 - Train Accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 3.6012 | 2.6090 | 1.0 | 0 | | 2.1348 | 1.8255 | 1.0 | 1 | | 1.6677 | 1.5386 | 1.0 | 2 | | 1.4364 | 1.3427 | 1.0 | 3 | | 1.2571 | 1.1757 | 1.0 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Nititorn/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Nititorn/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8401 - Validation Loss: 1.6982 - Train Accuracy: 0.805 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.8401 | 1.6982 | 0.805 | 0 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Artemiy27/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.0136
- Accuracy: 0.9938

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0694 | 1.0 | 56 | 0.0158 | 0.995 |
| 0.0495 | 1.99 | 112 | 0.0207 | 0.9925 |
| 0.0402 | 2.99 | 168 | 0.0136 | 0.9938 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
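With `train_batch_size: 32` and `gradient_accumulation_steps: 4`, the total train batch size of 128 comes from averaging gradients over four micro-batches before each optimizer step. For equal-sized micro-batches this reproduces the full-batch gradient exactly; a toy check with a scalar squared-error model (all names here are illustrative, not part of the training code):

```python
import random

def grad(w, xs, ys):
    # d/dw of mean((w*x - y)^2) over the batch
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
xs = [random.random() for _ in range(128)]
ys = [random.random() for _ in range(128)]
w = 0.5

full_grad = grad(w, xs, ys)               # one batch of 128

accum_grad = 0.0
for i in range(4):                        # 4 micro-batches of 32
    mb_x = xs[i * 32:(i + 1) * 32]
    mb_y = ys[i * 32:(i + 1) * 32]
    accum_grad += grad(w, mb_x, mb_y) / 4 # average the micro-batch gradients
```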
[ "cat", "dog" ]
dima806/celebs_face_image_detection
Returns the celebrity name for a given facial image with about 93% accuracy. See https://www.kaggle.com/code/dima806/celebs-face-image-detection-vit for more details.

Classification report:

| Class | Precision | Recall | F1-score | Support |
|:---|:---:|:---:|:---:|:---:|
| Adriana Lima | 0.9462 | 0.9362 | 0.9412 | 94 |
| Alex Lawther | 0.9490 | 0.9789 | 0.9637 | 95 |
| Alexandra Daddario | 0.9485 | 0.9684 | 0.9583 | 95 |
| Alvaro Morte | 0.9794 | 1.0000 | 0.9896 | 95 |
| Alycia Dabnem Carey | 0.9620 | 0.8000 | 0.8736 | 95 |
| Amanda Crew | 0.9286 | 0.9579 | 0.9430 | 95 |
| Amber Heard | 0.8652 | 0.8105 | 0.8370 | 95 |
| Andy Samberg | 0.9785 | 0.9681 | 0.9733 | 94 |
| Anne Hathaway | 0.9109 | 0.9684 | 0.9388 | 95 |
| Anthony Mackie | 1.0000 | 1.0000 | 1.0000 | 95 |
| Avril Lavigne | 0.9135 | 1.0000 | 0.9548 | 95 |
| Barack Obama | 1.0000 | 1.0000 | 1.0000 | 95 |
| Barbara Palvin | 0.9175 | 0.9368 | 0.9271 | 95 |
| Ben Affleck | 0.9474 | 0.9474 | 0.9474 | 95 |
| Bill Gates | 1.0000 | 1.0000 | 1.0000 | 95 |
| Bobby Morley | 0.9400 | 0.9895 | 0.9641 | 95 |
| Brenton Thwaites | 0.9474 | 0.9574 | 0.9524 | 94 |
| Brian J. Smith | 0.8559 | 1.0000 | 0.9223 | 95 |
| Brie Larson | 0.8558 | 0.9368 | 0.8945 | 95 |
| Camila Mendes | 0.9495 | 0.9895 | 0.9691 | 95 |
| Chris Evans | 0.9247 | 0.9053 | 0.9149 | 95 |
| Chris Hemsworth | 0.9565 | 0.9263 | 0.9412 | 95 |
| Chris Pratt | 0.9691 | 0.9895 | 0.9792 | 95 |
| Christian Bale | 0.9783 | 0.9574 | 0.9677 | 94 |
| Cristiano Ronaldo | 1.0000 | 1.0000 | 1.0000 | 94 |
| Danielle Panabaker | 0.9859 | 0.7368 | 0.8434 | 95 |
| Dominic Purcell | 0.9792 | 0.9895 | 0.9843 | 95 |
| Dwayne Johnson | 0.9895 | 1.0000 | 0.9947 | 94 |
| Eliza Taylor | 0.9750 | 0.8211 | 0.8914 | 95 |
| Elizabeth Lail | 0.9670 | 0.9263 | 0.9462 | 95 |
| Elizabeth Olsen | 0.8411 | 0.9474 | 0.8911 | 95 |
| Ellen Page | 0.8687 | 0.9053 | 0.8866 | 95 |
| Elon Musk | 0.9583 | 0.9684 | 0.9634 | 95 |
| Emilia Clarke | 0.9206 | 0.6105 | 0.7342 | 95 |
| Emma Stone | 0.9500 | 0.8000 | 0.8686 | 95 |
| Emma Watson | 0.9615 | 0.5263 | 0.6803 | 95 |
| Gal Gadot | 0.9296 | 0.6947 | 0.7952 | 95 |
| Grant Gustin | 0.9468 | 0.9368 | 0.9418 | 95 |
| Gwyneth Paltrow | 0.8796 | 1.0000 | 0.9360 | 95 |
| Henry Cavil | 0.9487 | 0.7789 | 0.8555 | 95 |
| Hugh Jackman | 0.9570 | 0.9368 | 0.9468 | 95 |
| Inbar Lavi | 0.9570 | 0.9368 | 0.9468 | 95 |
| Irina Shayk | 0.9592 | 0.9895 | 0.9741 | 95 |
| Jake Mcdorman | 1.0000 | 0.9789 | 0.9894 | 95 |
| Jason Momoa | 0.9894 | 0.9789 | 0.9841 | 95 |
| Jeff Bezos | 0.9896 | 1.0000 | 0.9948 | 95 |
| Jennifer Lawrence | 0.8876 | 0.8404 | 0.8634 | 94 |
| Jeremy Renner | 0.9691 | 0.9895 | 0.9792 | 95 |
| Jessica Barden | 0.8624 | 1.0000 | 0.9261 | 94 |
| Jimmy Fallon | 0.9792 | 0.9895 | 0.9843 | 95 |
| Johnny Depp | 0.9140 | 0.8947 | 0.9043 | 95 |
| Josh Radnor | 0.9792 | 0.9895 | 0.9843 | 95 |
| Katharine Mcphee | 0.9333 | 0.8842 | 0.9081 | 95 |
| Katherine Langford | 0.7851 | 1.0000 | 0.8796 | 95 |
| Keanu Reeves | 0.9785 | 0.9579 | 0.9681 | 95 |
| Kiernen Shipka | 0.6078 | 0.9789 | 0.7500 | 95 |
| Krysten Ritter | 0.9118 | 0.9894 | 0.9490 | 94 |
| Leonardo Dicaprio | 0.9588 | 0.9789 | 0.9688 | 95 |
| Lili Reinhart | 0.8144 | 0.8404 | 0.8272 | 94 |
| Lindsey Morgan | 0.8571 | 0.9474 | 0.9000 | 95 |
| Lionel Messi | 0.9890 | 0.9474 | 0.9677 | 95 |
| Logan Lerman | 0.9583 | 0.9684 | 0.9634 | 95 |
| Madelaine Petsch | 0.9072 | 0.9362 | 0.9215 | 94 |
| Maisie Williams | 0.8713 | 0.9362 | 0.9026 | 94 |
| Margot Robbie | 0.7634 | 0.7474 | 0.7553 | 95 |
| Maria Pedraza | 0.9310 | 0.8617 | 0.8950 | 94 |
| Marie Avgeropoulos | 0.9118 | 0.9789 | 0.9442 | 95 |
| Mark Ruffalo | 1.0000 | 0.8632 | 0.9266 | 95 |
| Mark Zuckerberg | 0.9896 | 1.0000 | 0.9948 | 95 |
| Megan Fox | 1.0000 | 0.9362 | 0.9670 | 94 |
| Melissa Fumero | 0.9400 | 0.9895 | 0.9641 | 95 |
| Miley Cyrus | 1.0000 | 0.7053 | 0.8272 | 95 |
| Millie Bobby Brown | 0.9192 | 0.9579 | 0.9381 | 95 |
| Morena Baccarin | 0.9789 | 0.9789 | 0.9789 | 95 |
| Morgan Freeman | 1.0000 | 1.0000 | 1.0000 | 94 |
| Nadia Hilker | 0.9892 | 0.9787 | 0.9840 | 94 |
| Natalie Dormer | 0.7417 | 0.9368 | 0.8279 | 95 |
| Natalie Portman | 0.8804 | 0.8526 | 0.8663 | 95 |
| Neil Patrick Harris | 1.0000 | 0.9789 | 0.9894 | 95 |
| Pedro Alonso | 0.9579 | 0.9579 | 0.9579 | 95 |
| Penn Badgley | 0.9583 | 0.9787 | 0.9684 | 94 |
| Rami Malek | 0.9792 | 0.9895 | 0.9843 | 95 |
| Rebecca Ferguson | 0.8304 | 0.9789 | 0.8986 | 95 |
| Richard Harmon | 0.9381 | 0.9579 | 0.9479 | 95 |
| Rihanna | 0.9485 | 0.9787 | 0.9634 | 94 |
| Robert De Niro | 0.8687 | 0.9053 | 0.8866 | 95 |
| Robert Downey Jr | 0.9765 | 0.8830 | 0.9274 | 94 |
| Sarah Wayne Callies | 0.8476 | 0.9368 | 0.8900 | 95 |
| Scarlett Johansson | 0.9302 | 0.4211 | 0.5797 | 95 |
| Selena Gomez | 0.9359 | 0.7684 | 0.8439 | 95 |
| Shakira Isabel Mebarak | 0.9368 | 0.9368 | 0.9368 | 95 |
| Sophie Turner | 0.8969 | 0.9158 | 0.9062 | 95 |
| Stephen Amell | 0.9500 | 1.0000 | 0.9744 | 95 |
| Taylor Swift | 0.9300 | 0.9789 | 0.9538 | 95 |
| Tom Cruise | 0.9688 | 0.9789 | 0.9738 | 95 |
| Tom Ellis | 0.9208 | 0.9894 | 0.9538 | 94 |
| Tom Hardy | 0.9765 | 0.8737 | 0.9222 | 95 |
| Tom Hiddleston | 0.9451 | 0.9053 | 0.9247 | 95 |
| Tom Holland | 0.9300 | 0.9789 | 0.9538 | 95 |
| Tuppence Middleton | 0.8304 | 0.9789 | 0.8986 | 95 |
| Ursula Corbero | 0.9278 | 0.9474 | 0.9375 | 95 |
| Wentworth Miller | 0.9694 | 1.0000 | 0.9845 | 95 |
| Zac Efron | 0.9192 | 0.9579 | 0.9381 | 95 |
| Zendaya | 0.8468 | 0.9895 | 0.9126 | 95 |
| Zoe Saldana | 1.0000 | 1.0000 | 1.0000 | 94 |
| accuracy | | | 0.9277 | 9954 |
| macro avg | 0.9324 | 0.9277 | 0.9260 | 9954 |
| weighted avg | 0.9324 | 0.9277 | 0.9259 | 9954 |
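The `macro avg` and `weighted avg` rows at the bottom of the report differ only in whether per-class scores are weighted by support. A small sketch with made-up per-class numbers (these are illustrative, not taken from the report above):

```python
# Per-class F1 plus the two averages shown at the bottom of such a report:
# macro = unweighted mean over classes; weighted = support-weighted mean.
def f1(p, r):
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# (precision, recall, support) for a toy 3-class problem
per_class = [(0.90, 0.80, 50), (0.60, 0.70, 30), (0.40, 0.50, 20)]

f1s = [f1(p, r) for p, r, _ in per_class]
supports = [s for _, _, s in per_class]

macro_f1 = sum(f1s) / len(f1s)
weighted_f1 = sum(f * s for f, s in zip(f1s, supports)) / sum(supports)
# Here the largest class also scores best, so weighted_f1 > macro_f1.
```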
[ "adriana lima", "alex lawther", "alexandra daddario", "alvaro morte", "alycia dabnem carey", "amanda crew", "amber heard", "andy samberg", "anne hathaway", "anthony mackie", "avril lavigne", "barack obama", "barbara palvin", "ben affleck", "bill gates", "bobby morley", "brenton thwaites", "brian j. smith", "brie larson", "camila mendes", "chris evans", "chris hemsworth", "chris pratt", "christian bale", "cristiano ronaldo", "danielle panabaker", "dominic purcell", "dwayne johnson", "eliza taylor", "elizabeth lail", "elizabeth olsen", "ellen page", "elon musk", "emilia clarke", "emma stone", "emma watson", "gal gadot", "grant gustin", "gwyneth paltrow", "henry cavil", "hugh jackman", "inbar lavi", "irina shayk", "jake mcdorman", "jason momoa", "jeff bezos", "jennifer lawrence", "jeremy renner", "jessica barden", "jimmy fallon", "johnny depp", "josh radnor", "katharine mcphee", "katherine langford", "keanu reeves", "kiernen shipka", "krysten ritter", "leonardo dicaprio", "lili reinhart", "lindsey morgan", "lionel messi", "logan lerman", "madelaine petsch", "maisie williams", "margot robbie", "maria pedraza", "marie avgeropoulos", "mark ruffalo", "mark zuckerberg", "megan fox", "melissa fumero", "miley cyrus", "millie bobby brown", "morena baccarin", "morgan freeman", "nadia hilker", "natalie dormer", "natalie portman", "neil patrick harris", "pedro alonso", "penn badgley", "rami malek", "rebecca ferguson", "richard harmon", "rihanna", "robert de niro", "robert downey jr", "sarah wayne callies", "scarlett johansson", "selena gomez", "shakira isabel mebarak", "sophie turner", "stephen amell", "taylor swift", "tom cruise", "tom ellis", "tom hardy", "tom hiddleston", "tom holland", "tuppence middleton", "ursula corbero", "wentworth miller", "zac efron", "zendaya", "zoe saldana" ]
jerryteps/swin-tiny-patch4-window7-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224

This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 0.8630
- Accuracy: 0.6846

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3586 | 1.0 | 252 | 1.2051 | 0.5403 |
| 1.2281 | 2.0 | 505 | 1.0535 | 0.6108 |
| 1.148 | 3.0 | 757 | 0.9985 | 0.6194 |
| 1.087 | 4.0 | 1010 | 0.9658 | 0.6361 |
| 1.1121 | 5.0 | 1262 | 0.9203 | 0.6539 |
| 1.0127 | 6.0 | 1515 | 0.9245 | 0.6567 |
| 0.9858 | 7.0 | 1767 | 0.8846 | 0.6757 |
| 0.9948 | 8.0 | 2020 | 0.8793 | 0.6748 |
| 0.9398 | 9.0 | 2272 | 0.8671 | 0.6765 |
| 0.9904 | 9.98 | 2520 | 0.8630 | 0.6846 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised" ]
dwiedarioo/vit-base-patch16-224-in21k-datascience4
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# dwiedarioo/vit-base-patch16-224-in21k-datascience4

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:

- Train Loss: 0.0225
- Train Accuracy: 0.9974
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.0312
- Validation Accuracy: 0.9945
- Validation Top-3-accuracy: 1.0
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2880, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16

### Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.3153 | 0.9166 | 0.9916 | 0.1097 | 0.9757 | 1.0 | 0 |
| 0.0583 | 0.9898 | 1.0 | 0.0558 | 0.9877 | 1.0 | 1 |
| 0.0225 | 0.9974 | 1.0 | 0.0312 | 0.9945 | 1.0 | 2 |

### Framework versions

- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
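Top-3 accuracy above counts a prediction as correct whenever the true class is among the three highest-scoring classes, which is why it saturates at 1.0 before plain accuracy does. A minimal sketch (function name and toy scores are ours, for illustration only):

```python
def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k highest scores."""
    hits = 0
    for row, label in zip(scores, labels):
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

scores = [[0.10, 0.50, 0.20, 0.20],   # true 1 -> already a top-1 hit
          [0.40, 0.30, 0.20, 0.10],   # true 2 -> only a top-3 hit
          [0.15, 0.10, 0.05, 0.70]]   # true 2 (lowest score) -> miss at k=3
labels = [1, 2, 2]
```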
[ "glioma_tumor", "normal", "pituitary_tumor", "meningioma_tumor" ]
JLB-JLB/seizure_vit_jlb_231112_fft_raw_combo
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# seizure_vit_jlb_231112_fft_raw_combo

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the JLB-JLB/seizure_detection_224x224_raw_frequency dataset.
It achieves the following results on the evaluation set:

- Loss: 0.4822
- Roc Auc: 0.7667

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4777 | 0.17 | 500 | 0.5237 | 0.7455 |
| 0.4469 | 0.34 | 1000 | 0.5114 | 0.7542 |
| 0.4122 | 0.52 | 1500 | 0.5084 | 0.7567 |
| 0.3904 | 0.69 | 2000 | 0.5043 | 0.7611 |
| 0.3619 | 0.86 | 2500 | 0.5283 | 0.7609 |
| 0.3528 | 1.03 | 3000 | 0.5352 | 0.7517 |
| 0.3445 | 1.2 | 3500 | 0.5338 | 0.7572 |
| 0.3221 | 1.37 | 4000 | 0.5388 | 0.7509 |
| 0.3109 | 1.55 | 4500 | 0.5641 | 0.7458 |
| 0.3203 | 1.72 | 5000 | 0.5404 | 0.7574 |
| 0.294 | 1.89 | 5500 | 0.5421 | 0.7564 |
| 0.2964 | 2.06 | 6000 | 0.5582 | 0.7493 |
| 0.292 | 2.23 | 6500 | 0.5513 | 0.7561 |
| 0.2838 | 2.4 | 7000 | 0.5557 | 0.7598 |
| 0.2736 | 2.58 | 7500 | 0.5514 | 0.7606 |
| 0.2922 | 2.75 | 8000 | 0.5503 | 0.7538 |
| 0.2699 | 2.92 | 8500 | 0.5535 | 0.7578 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
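The card reports ROC AUC rather than accuracy, which is common for imbalanced seizure/background data. One way to read ROC AUC: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal O(n²) sketch of that equivalence (ties counted as half; the function name is ours):

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic: fraction of (positive,
    negative) pairs where the positive outscores the negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```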
[ "bckg", "seiz" ]
EstherSan/car_identified_model_7
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# car_identified_model_7

This model is a fine-tuned version of [apple/mobilevitv2-1.0-imagenet1k-256](https://huggingface.co/apple/mobilevitv2-1.0-imagenet1k-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5755
- F1: 0.3629
- Roc Auc: 0.6990
- Accuracy: 0.0714

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6919 | 0.73 | 1 | 0.6887 | 0.1786 | 0.5738 | 0.0 |
| 0.6919 | 1.45 | 2 | 0.6856 | 0.1818 | 0.5761 | 0.0 |
| 0.6919 | 2.91 | 4 | 0.6802 | 0.2116 | 0.6066 | 0.0 |
| 0.6919 | 3.64 | 5 | 0.6800 | 0.1861 | 0.5826 | 0.0 |
| 0.6919 | 4.36 | 6 | 0.6858 | 0.1905 | 0.5973 | 0.0 |
| 0.6919 | 5.82 | 8 | 0.6938 | 0.1549 | 0.5342 | 0.0 |
| 0.6919 | 6.55 | 9 | 0.6917 | 0.1805 | 0.5802 | 0.0 |
| 0.6919 | 8.0 | 11 | 0.6735 | 0.1905 | 0.5932 | 0.0 |
| 0.6919 | 8.73 | 12 | 0.6727 | 0.1952 | 0.6007 | 0.0 |
| 0.6919 | 9.45 | 13 | 0.6698 | 0.2061 | 0.6172 | 0.0 |
| 0.6919 | 10.91 | 15 | 0.6672 | 0.2008 | 0.6092 | 0.0 |
| 0.6919 | 11.64 | 16 | 0.6645 | 0.2092 | 0.6196 | 0.0 |
| 0.6919 | 12.36 | 17 | 0.6646 | 0.2049 | 0.6144 | 0.0 |
| 0.6919 | 13.82 | 19 | 0.6623 | 0.2081 | 0.6167 | 0.0 |
| 0.6919 | 14.55 | 20 | 0.6607 | 0.2078 | 0.6149 | 0.0 |
| 0.6919 | 16.0 | 22 | 0.6585 | 0.2203 | 0.6320 | 0.0 |
| 0.6919 | 16.73 | 23 | 0.6562 | 0.2156 | 0.6219 | 0.0 |
| 0.6919 | 17.45 | 24 | 0.6555 | 0.2182 | 0.6263 | 0.0 |
| 0.6919 | 18.91 | 26 | 0.6522 | 0.2185 | 0.6232 | 0.0 |
| 0.6919 | 19.64 | 27 | 0.6512 | 0.2228 | 0.6273 | 0.0 |
| 0.6919 | 20.36 | 28 | 0.6501 | 0.2356 | 0.6410 | 0.0 |
| 0.6919 | 21.82 | 30 | 0.6477 | 0.2280 | 0.6284 | 0.0 |
| 0.6919 | 22.55 | 31 | 0.6476 | 0.2326 | 0.6343 | 0.0 |
| 0.6919 | 24.0 | 33 | 0.6469 | 0.2408 | 0.6434 | 0.0 |
| 0.6919 | 24.73 | 34 | 0.6432 | 0.2409 | 0.6369 | 0.0 |
| 0.6919 | 25.45 | 35 | 0.6432 | 0.2431 | 0.6408 | 0.0 |
| 0.6919 | 26.91 | 37 | 0.6402 | 0.2486 | 0.6449 | 0.0 |
| 0.6919 | 27.64 | 38 | 0.6386 | 0.2686 | 0.6664 | 0.0 |
| 0.6919 | 28.36 | 39 | 0.6376 | 0.2762 | 0.6796 | 0.0 |
| 0.6919 | 29.82 | 41 | 0.6347 | 0.2692 | 0.6721 | 0.0 |
| 0.6919 | 30.55 | 42 | 0.6339 | 0.2655 | 0.6643 | 0.0 |
| 0.6919 | 32.0 | 44 | 0.6310 | 0.2674 | 0.6630 | 0.0 |
| 0.6919 | 32.73 | 45 | 0.6307 | 0.2789 | 0.6731 | 0.0 |
| 0.6919 | 33.45 | 46 | 0.6291 | 0.2714 | 0.6656 | 0.0 |
| 0.6919 | 34.91 | 48 | 0.6271 | 0.2761 | 0.6659 | 0.0 |
| 0.6919 | 35.64 | 49 | 0.6271 | 0.2687 | 0.6612 | 0.0 |
| 0.6919 | 36.36 | 50 | 0.6277 | 0.2606 | 0.6509 | 0.0 |
| 0.6919 | 37.82 | 52 | 0.6257 | 0.2741 | 0.6620 | 0.0 |
| 0.6919 | 38.55 | 53 | 0.6244 | 0.2892 | 0.6793 | 0.0 |
| 0.6919 | 40.0 | 55 | 0.6203 | 0.2968 | 0.6806 | 0.0 |
| 0.6919 | 40.73 | 56 | 0.6198 | 0.2902 | 0.6770 | 0.0 |
| 0.6919 | 41.45 | 57 | 0.6184 | 0.3023 | 0.6866 | 0.0 |
| 0.6919 | 42.91 | 59 | 0.6163 | 0.2977 | 0.6812 | 0.0 |
| 0.6919 | 43.64 | 60 | 0.6147 | 0.3322 | 0.7112 | 0.0 |
| 0.6919 | 44.36 | 61 | 0.6154 | 0.3197 | 0.6954 | 0.0 |
| 0.6919 | 45.82 | 63 | 0.6129 | 0.3016 | 0.6832 | 0.0 |
| 0.6919 | 46.55 | 64 | 0.6112 | 0.3020 | 0.6804 | 0.0 |
| 0.6919 | 48.0 | 66 | 0.6095 | 0.2961 | 0.6773 | 0.0 |
| 0.6919 | 48.73 | 67 | 0.6091 | 0.3133 | 0.6923 | 0.0 |
| 0.6919 | 49.45 | 68 | 0.6090 | 0.3265 | 0.7019 | 0.0 |
| 0.6919 | 50.91 | 70 | 0.6077 | 0.3093 | 0.6840 | 0.0 |
| 0.6919 | 51.64 | 71 | 0.6065 | 0.3239 | 0.6941 | 0.0 |
| 0.6919 | 52.36 | 72 | 0.6058 | 0.3237 | 0.6907 | 0.0 |
| 0.6919 | 53.82 | 74 | 0.6028 | 0.3285 | 0.6928 | 0.0 |
| 0.6919 | 54.55 | 75 | 0.6038 | 0.3285 | 0.6928 | 0.0238 |
| 0.6919 | 56.0 | 77 | 0.6056 | 0.3197 | 0.6825 | 0.0 |
| 0.6919 | 56.73 | 78 | 0.6074 | 0.3249 | 0.6913 | 0.0 |
| 0.6919 | 57.45 | 79 | 0.6030 | 0.3158 | 0.6775 | 0.0238 |
| 0.6919 | 58.91 | 81 | 0.6001 | 0.3359 | 0.6925 | 0.0238 |
| 0.6919 | 59.64 | 82 | 0.5993 | 0.3409 | 0.6980 | 0.0238 |
| 0.6919 | 60.36 | 83 | 0.6017 | 0.3259 | 0.6884 | 0.0238 |
| 0.6919 | 61.82 | 85 | 0.6009 | 0.3146 | 0.6770 | 0.0238 |
| 0.6919 | 62.55 | 86 | 0.6018 | 0.3197 | 0.6825 | 0.0238 |
| 0.6919 | 64.0 | 88 | 0.5975 | 0.3130 | 0.6731 | 0.0238 |
| 0.6919 | 64.73 | 89 | 0.5978 | 0.3271 | 0.6889 | 0.0238 |
| 0.6919 | 65.45 | 90 | 0.5967 | 0.3424 | 0.6951 | 0.0238 |
| 0.6919 | 66.91 | 92 | 0.5973 | 0.3125 | 0.6698 | 0.0238 |
| 0.6919 | 67.64 | 93 | 0.5956 | 0.3372 | 0.6931 | 0.0238 |
| 0.6919 | 68.36 | 94 | 0.5922 | 0.3373 | 0.6897 | 0.0238 |
| 0.6919 | 69.82 | 96 | 0.5949 | 0.3320 | 0.6843 | 0.0476 |
| 0.6919 | 70.55 | 97 | 0.5959 | 0.3413 | 0.6913 | 0.0476 |
| 0.6919 | 72.0 | 99 | 0.5944 | 0.3420 | 0.7019 | 0.0238 |
| 0.6919 | 72.73 | 100 | 0.5955 | 0.3333 | 0.6881 | 0.0476 |
| 0.6919 | 73.45 | 101 | 0.5933 | 0.3346 | 0.6887 | 0.0238 |
| 0.6919 | 74.91 | 103 | 0.5894 | 0.3543 | 0.7032 | 0.0238 |
| 0.6919 | 75.64 | 104 | 0.5903 | 0.3424 | 0.6951 | 0.0238 |
| 0.6919 | 76.36 | 105 | 0.5890 | 0.3411 | 0.6946 | 0.0476 |
| 0.6919 | 77.82 | 107 | 0.5922 | 0.3346 | 0.6887 | 0.0476 |
| 0.6919 | 78.55 | 108 | 0.5923 | 0.3243 | 0.6812 | 0.0476 |
| 0.6919 | 80.0 | 110 | 0.5908 | 0.3468 | 0.6933 | 0.0476 |
| 0.6919 | 80.73 | 111 | 0.5922 | 0.328 | 0.6793 | 0.0476 |
| 0.6919 | 81.45 | 112 | 0.5892 | 0.3440 | 0.6923 | 0.0238 |
| 0.6919 | 82.91 | 114 | 0.5880 | 0.3506 | 0.6982 | 0.0238 |
| 0.6919 | 83.64 | 115 | 0.5869 | 0.3454 | 0.6928 | 0.0476 |
| 0.6919 | 84.36 | 116 | 0.5841 | 0.3465 | 0.6967 | 0.0238 |
| 0.6919 | 85.82 | 118 | 0.5841 | 0.3568 | 0.6969 | 0.0714 |
| 0.6919 | 86.55 | 119 | 0.5843 | 0.3496 | 0.6944 | 0.0476 |
| 0.6919 | 88.0 | 121 | 0.5860 | 0.3598 | 0.6980 | 0.0476 |
| 0.6919 | 88.73 | 122 | 0.5837 | 0.3457 | 0.6894 | 0.0476 |
| 0.6919 | 89.45 | 123 | 0.5826 | 0.3636 | 0.7029 | 0.0714 |
| 0.6919 | 90.91 | 125 | 0.5822 | 0.3651 | 0.7034 | 0.0714 |
| 0.6919 | 91.64 | 126 | 0.5814 | 0.3607 | 0.7019 | 0.0714 |
| 0.6919 | 92.36 | 127 | 0.5814 | 0.3629 | 0.7063 | 0.0476 |
| 0.6919 | 93.82 | 129 | 0.5818 | 0.3713 | 0.7055 | 0.0714 |
| 0.6919 | 94.55 | 130 | 0.5802 | 0.3766 | 0.7109 | 0.0714 |
| 0.6919 | 96.0 | 132 | 0.5803 | 0.3675 | 0.7006 | 0.0714 |
| 0.6919 | 96.73 | 133 | 0.5825 | 0.3519 | 0.6881 | 0.0714 |
| 0.6919 | 97.45 | 134 | 0.5790 | 0.3629 | 0.6990 | 0.0714 |
| 0.6919 | 98.91 | 136 | 0.5795 | 0.3766 | 0.7109 | 0.0714 |
| 0.6919 | 99.64 | 137 | 0.5784 | 0.3697 | 0.7050 | 0.0714 |
| 0.6919 | 100.36 | 138 | 0.5819 | 0.3583 | 0.6975 | 0.0714 |
| 0.6919 | 101.82 | 140 | 0.5834 | 0.3525 | 0.6954 | 0.0476 |
| 0.6919 | 102.55 | 141 | 0.5825 | 0.3689 | 0.7083 | 0.0238 |
| 0.6919 | 104.0 | 143 | 0.5839 | 0.3460 | 0.6861 | 0.0714 |
| 0.6919 | 104.73 | 144 | 0.5838 | 0.3333 | 0.6814 | 0.0476 |
| 0.6919 | 105.45 | 145 | 0.5801 | 0.3387 | 0.6869 | 0.0238 |
| 0.6919 | 106.91 | 147 | 0.5811 | 0.3515 | 0.6915 | 0.0476 |
| 0.6919 | 107.64 | 148 | 0.5793 | 0.3374 | 0.6830 | 0.0476 |
| 0.6919 | 108.36 | 149 | 0.5766 | 0.3448 | 0.6822 | 0.0714 |
| 0.6919 | 109.82 | 151 | 0.5760 | 0.3445 | 0.6856 | 0.0714 |
| 0.6919 | 110.55 | 152 | 0.5757 | 0.3559 | 0.6931 | 0.0714 |
| 0.6919 | 112.0 | 154 | 0.5760 | 0.3475 | 0.6866 | 0.0714 |
| 0.6919 | 112.73 | 155 | 0.5743 | 0.3629 | 0.6990 | 0.0714 |
| 0.6919 | 113.45 | 156 | 0.5732 | 0.3636 | 0.7029 | 0.0714 |
| 0.6919 | 114.91 | 158 | 0.5736 | 0.3786 | 0.7153 | 0.0476 |
| 0.6919 | 115.64 | 159 | 0.5764 | 0.3667 | 0.7039 | 0.0238 |
| 0.6919 | 116.36 | 160 | 0.5765 | 0.3613 | 0.6985 | 0.0476 |
| 0.6919 | 117.82 | 162 | 0.5749 | 0.3574 | 0.6936 | 0.0714 |
| 0.6919 | 118.55 | 163 | 0.5754 | 0.3592 | 0.7013 | 0.0476 |
| 0.6919 | 120.0 | 165 | 0.5757 | 0.3665 | 0.7112 | 0.0476 |
| 0.6919 | 120.73 | 166 | 0.5771 | 0.3729 | 0.7060 | 0.0714 |
| 0.6919 | 121.45 | 167 | 0.5746 | 0.3629 | 0.6990 | 0.0714 |
| 0.6919 | 122.91 | 169 | 0.5758 | 0.3644 | 0.6995 | 0.0714 |
| 0.6919 | 123.64 | 170 | 0.5745 | 0.3559 | 0.6931 | 0.0714 |
| 0.6919 | 124.36 | 171 | 0.5758 | 0.3544 | 0.6925 | 0.0714 |
| 0.6919 | 125.82 | 173 | 0.5759 | 0.3598 | 0.6980 | 0.0714 |
| 0.6919 | 126.55 | 174 | 0.5772 | 0.3568 | 0.6969 | 0.0714 |
| 0.6919 | 128.0 | 176 | 0.5747 | 0.3583 | 0.6975 | 0.0714 |
| 0.6919 | 128.73 | 177 | 0.5738 | 0.3644 | 0.6995 | 0.0714 |
| 0.6919 | 129.45 | 178 | 0.5751 | 0.3644 | 0.6995 | 0.0714 |
| 0.6919 | 130.91 | 180 | 0.5741 | 0.3713 | 0.7055 | 0.0952 |
| 0.6919 | 131.64 | 181 | 0.5748 | 0.3713 | 0.7055 | 0.0952 |
| 0.6919 | 132.36 | 182 | 0.5767 | 0.3660 | 0.7001 | 0.0714 |
| 0.6919 | 133.82 | 184 | 0.5732 | 0.3660 | 0.7001 | 0.0952 |
| 0.6919 | 134.55 | 185 | 0.5742 | 0.3772 | 0.7037 | 0.0952 |
| 0.6919 | 136.0 | 187 | 0.5690 | 0.3755 | 0.7032 | 0.0952 |
| 0.6919 | 136.73 | 188 | 0.5699 | 0.3805 | 0.7047 | 0.0714 |
| 0.6919 | 137.45 | 189 | 0.5743 | 0.3707 | 0.7016 | 0.0714 |
| 0.6919 | 138.91 | 191 | 0.5740 | 0.3529 | 0.6920 | 0.0952 |
| 0.6919 | 139.64 | 192 | 0.5740 | 0.3660 | 0.7001 | 0.0714 |
| 0.6919 | 140.36 | 193 | 0.5734 | 0.3644 | 0.6995 | 0.0714 |
| 0.6919 | 141.82 | 195 | 0.5740 | 0.3675 | 0.7006 | 0.0714 |
| 0.6919 | 142.55 | 196 | 0.5721 | 0.3707 | 0.7016 | 0.0714 |
| 0.6919 | 144.0 | 198 | 0.5725 | 0.3767 | 0.6998 | 0.0714 |
| 0.6919 | 144.73 | 199 | 0.5734 | 0.3729 | 0.7060 | 0.0952 |
| 0.6919 | 145.45 | 200 | 0.5755 | 0.3629 | 0.6990 | 0.0714 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
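The combination above — moderate F1 with near-zero Accuracy — is what you see when "Accuracy" means exact-match (subset) accuracy over all labels of a multi-label problem, which we assume is the case here: a single wrong label per sample zeroes subset accuracy while micro-F1 stays reasonable. A toy illustration (the data is invented for the demonstration):

```python
# Exact-match (subset) accuracy vs. micro-averaged F1 on toy multi-label
# predictions: one missed label per sample zeroes subset accuracy while
# micro-F1 stays at 2/3.
y_true = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0]]   # each row misses one positive

subset_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

pairs = [(t, p) for T, P in zip(y_true, y_pred) for t, p in zip(T, P)]
tp = sum(1 for t, p in pairs if t == 1 and p == 1)
fp = sum(1 for t, p in pairs if t == 0 and p == 1)
fn = sum(1 for t, p in pairs if t == 1 and p == 0)
micro_f1 = 2 * tp / (2 * tp + fp + fn)
```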
[ "black", "white" ]
aditnnda/felidae_klasifikasi
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# aditnnda/felidae_klasifikasi

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [Felidae dataset](https://huggingface.co/datasets/aditnnda/Felidae).
It achieves the following results on the evaluation set:

- Train Loss: 0.5782
- Train Accuracy: 0.8361
- Validation Loss: 0.5283
- Validation Accuracy: 0.8361
- Epoch: 19

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.5945 | 0.5574 | 1.5482 | 0.5574 | 0 |
| 1.5213 | 0.7541 | 1.4625 | 0.7541 | 1 |
| 1.4429 | 0.7049 | 1.3574 | 0.7049 | 2 |
| 1.3399 | 0.7869 | 1.2390 | 0.7869 | 3 |
| 1.2264 | 0.6721 | 1.1328 | 0.6721 | 4 |
| 1.1660 | 0.7869 | 1.0287 | 0.7869 | 5 |
| 1.0825 | 0.7377 | 0.9690 | 0.7377 | 6 |
| 1.0005 | 0.8197 | 0.8654 | 0.8197 | 7 |
| 0.9121 | 0.7869 | 0.8303 | 0.7869 | 8 |
| 0.8530 | 0.8525 | 0.7590 | 0.8525 | 9 |
| 0.8602 | 0.8361 | 0.7169 | 0.8361 | 10 |
| 0.8420 | 0.8197 | 0.6993 | 0.8197 | 11 |
| 0.7772 | 0.8689 | 0.6347 | 0.8689 | 12 |
| 0.7447 | 0.8689 | 0.6023 | 0.8689 | 13 |
| 0.7253 | 0.8197 | 0.6458 | 0.8197 | 14 |
| 0.6994 | 0.8361 | 0.6045 | 0.8361 | 15 |
| 0.6761 | 0.8361 | 0.6030 | 0.8361 | 16 |
| 0.5814 | 0.8197 | 0.5523 | 0.8197 | 17 |
| 0.5939 | 0.8689 | 0.5456 | 0.8689 | 18 |
| 0.5782 | 0.8361 | 0.5283 | 0.8361 | 19 |

### Framework versions

- Transformers 4.35.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "cheetah", "leopard", "lion", "puma", "tiger" ]
Akshay0706/Cinnamon-Plant-20-Epochs-Model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Rice-Plant-Disease-Detection-Model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2929
- Accuracy: 0.8958
- F1: 0.8965

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5517 | 1.0 | 18 | 0.5222 | 0.875 | 0.8754 |
| 0.2996 | 2.0 | 36 | 0.3833 | 0.8542 | 0.8564 |
| 0.1529 | 3.0 | 54 | 0.3152 | 0.875 | 0.8763 |
| 0.0843 | 4.0 | 72 | 0.2929 | 0.8958 | 0.8965 |
| 0.0549 | 5.0 | 90 | 0.2756 | 0.875 | 0.8754 |
| 0.0402 | 6.0 | 108 | 0.2765 | 0.875 | 0.8754 |
| 0.0327 | 7.0 | 126 | 0.2875 | 0.875 | 0.8754 |
| 0.0277 | 8.0 | 144 | 0.2938 | 0.875 | 0.8754 |
| 0.0244 | 9.0 | 162 | 0.2992 | 0.875 | 0.8754 |
| 0.0222 | 10.0 | 180 | 0.2996 | 0.8958 | 0.8960 |
| 0.0203 | 11.0 | 198 | 0.3052 | 0.8958 | 0.8960 |
| 0.019 | 12.0 | 216 | 0.3087 | 0.8958 | 0.8960 |
| 0.018 | 13.0 | 234 | 0.3143 | 0.8958 | 0.8960 |
| 0.0171 | 14.0 | 252 | 0.3206 | 0.8958 | 0.8960 |
| 0.0164 | 15.0 | 270 | 0.3227 | 0.8958 | 0.8960 |
| 0.0158 | 16.0 | 288 | 0.3250 | 0.8958 | 0.8960 |
| 0.0155 | 17.0 | 306 | 0.3257 | 0.8958 | 0.8960 |
| 0.0152 | 18.0 | 324 | 0.3264 | 0.8958 | 0.8960 |
| 0.015 | 19.0 | 342 | 0.3276 | 0.8958 | 0.8960 |
| 0.0149 | 20.0 | 360 | 0.3275 | 0.8958 | 0.8960 |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.1.0+cpu
- Datasets 2.14.5
- Tokenizers 0.14.0
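The headline metrics above match the epoch-4 row rather than the row with the lowest validation loss (epoch 5), which would be consistent with selecting the checkpoint by best F1 — though the card does not say which rule was used. A minimal sketch of both selection rules, using three rows taken from the table above:

```python
# (epoch, val_loss, f1) -- rows copied from the results table above
rows = [
    (3, 0.3152, 0.8763),
    (4, 0.2929, 0.8965),
    (5, 0.2756, 0.8754),
]

best_by_f1 = max(rows, key=lambda r: r[2])    # highest F1
best_by_loss = min(rows, key=lambda r: r[1])  # lowest validation loss
```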
[ "0", "1" ]
hkivancoral/hushem_1x_deit_tiny_adamax_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_adamax_001_fold1

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 1.8804
- Accuracy: 0.5778

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.3266 | 0.3778 |
| 1.1956 | 2.0 | 12 | 1.1674 | 0.4667 |
| 1.1956 | 3.0 | 18 | 1.1849 | 0.4889 |
| 0.4784 | 4.0 | 24 | 1.2723 | 0.4667 |
| 0.1535 | 5.0 | 30 | 1.2811 | 0.4889 |
| 0.1535 | 6.0 | 36 | 1.5643 | 0.4667 |
| 0.0259 | 7.0 | 42 | 1.3477 | 0.5556 |
| 0.0259 | 8.0 | 48 | 1.7927 | 0.4889 |
| 0.0051 | 9.0 | 54 | 1.7277 | 0.5556 |
| 0.0016 | 10.0 | 60 | 1.5795 | 0.6222 |
| 0.0016 | 11.0 | 66 | 1.6103 | 0.6 |
| 0.0008 | 12.0 | 72 | 1.7043 | 0.5778 |
| 0.0008 | 13.0 | 78 | 1.7832 | 0.5778 |
| 0.0005 | 14.0 | 84 | 1.8224 | 0.5778 |
| 0.0004 | 15.0 | 90 | 1.8294 | 0.5778 |
| 0.0004 | 16.0 | 96 | 1.8185 | 0.5778 |
| 0.0004 | 17.0 | 102 | 1.8150 | 0.5778 |
| 0.0004 | 18.0 | 108 | 1.8206 | 0.5778 |
| 0.0004 | 19.0 | 114 | 1.8349 | 0.5778 |
| 0.0003 | 20.0 | 120 | 1.8491 | 0.5778 |
| 0.0003 | 21.0 | 126 | 1.8590 | 0.5778 |
| 0.0003 | 22.0 | 132 | 1.8667 | 0.5556 |
| 0.0003 | 23.0 | 138 | 1.8640 | 0.5556 |
| 0.0003 | 24.0 | 144 | 1.8624 | 0.5556 |
| 0.0003 | 25.0 | 150 | 1.8632 | 0.5778 |
| 0.0003 | 26.0 | 156 | 1.8651 | 0.5556 |
| 0.0003 | 27.0 | 162 | 1.8642 | 0.5778 |
| 0.0003 | 28.0 | 168 | 1.8659 | 0.5778 |
| 0.0003 | 29.0 | 174 | 1.8666 | 0.5778 |
| 0.0003 | 30.0 | 180 | 1.8680 | 0.5778 |
| 0.0003 | 31.0 | 186 | 1.8684 | 0.5778 |
| 0.0002 | 32.0 | 192 | 1.8677 | 0.5778 |
| 0.0002 | 33.0 | 198 | 1.8709 | 0.5778 |
| 0.0002 | 34.0 | 204 | 1.8723 | 0.5778 |
| 0.0002 | 35.0 | 210 | 1.8730 | 0.5778 |
| 0.0002 | 36.0 | 216 | 1.8757 | 0.5778 |
| 0.0002 | 37.0 | 222 | 1.8766 | 0.5778 |
| 0.0002 | 38.0 | 228 | 1.8780 | 0.5778 |
| 0.0002 | 39.0 | 234 | 1.8793 | 0.5778 |
| 0.0002 | 40.0 | 240 | 1.8801 | 0.5778 |
| 0.0002 | 41.0 | 246 | 1.8804 | 0.5778 |
| 0.0002 | 42.0 | 252 | 1.8804 | 0.5778 |
| 0.0002 | 43.0 | 258 | 1.8804 | 0.5778 |
| 0.0002 | 44.0 | 264 | 1.8804 | 0.5778 |
| 0.0002 | 45.0 | 270 | 1.8804 | 0.5778 |
| 0.0002 | 46.0 | 276 | 1.8804 | 0.5778 |
| 0.0002 | 47.0 | 282 | 1.8804 | 0.5778 |
| 0.0002 | 48.0 | 288 | 1.8804 | 0.5778 |
| 0.0002 | 49.0 | 294 | 1.8804 | 0.5778 |
| 0.0002 | 50.0 | 300 | 1.8804 | 0.5778 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
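With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` over 300 total steps (50 epochs × 6 steps per epoch), the learning rate ramps up linearly for the first ~30 steps and then decays linearly to zero. A sketch of that shape — the function name is ours, not the `transformers` API:

```python
def linear_warmup_then_decay(step, peak_lr=0.0001, total_steps=300,
                             warmup_ratio=0.1):
    """HF-style `linear` schedule: LR rises linearly from 0 to peak_lr
    over the warmup steps, then decays linearly to 0 at total_steps."""
    warmup_steps = int(total_steps * warmup_ratio)   # 30 steps here
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```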
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
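The four-class label list above would typically surface in the model config as `id2label`/`label2id` mappings; a sketch of how those mappings are derived (the actual checkpoint config may differ):

```python
# Labels as listed for this checkpoint; index order defines the class ids.
labels = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]

# Forward and reverse mappings as they would appear in a config.json.
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}
```

For example, a logit argmax of 2 would decode to `03_pyriform` under this mapping.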
hkivancoral/hushem_1x_deit_tiny_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6766 - Accuracy: 0.6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3194 | 0.2889 | | 1.3705 | 2.0 | 12 | 1.2766 | 0.3778 | | 1.3705 | 3.0 | 18 | 1.3268 | 0.5333 | | 0.7361 | 4.0 | 24 | 1.2927 | 0.5556 | | 0.3404 | 5.0 | 30 | 1.3610 | 0.5556 | | 0.3404 | 6.0 | 36 | 1.1429 | 0.5778 | | 0.1188 | 7.0 | 42 | 1.5833 | 0.5333 | | 0.1188 | 8.0 | 48 | 1.2765 | 0.6667 | | 0.0229 | 9.0 | 54 | 1.4099 | 0.6222 | | 0.0046 | 10.0 | 60 | 1.4395 | 0.6 | | 0.0046 | 11.0 | 66 | 1.6161 | 0.5556 | | 0.0013 | 12.0 | 72 | 1.5774 | 0.5778 | | 0.0013 | 13.0 | 78 | 1.5201 | 0.6 | | 0.0007 | 14.0 | 84 | 1.5608 | 0.6 | | 0.0005 | 15.0 | 90 | 1.6187 | 0.5778 | | 0.0005 | 16.0 | 96 | 1.6424 | 0.5778 | | 0.0004 | 17.0 | 102 | 1.6470 | 0.5778 | | 0.0004 | 18.0 | 108 | 1.6480 | 0.6 | | 0.0003 | 19.0 | 114 | 1.6471 | 0.6 | | 0.0003 | 20.0 | 120 | 1.6450 | 0.6 | | 0.0003 | 21.0 | 126 | 1.6532 | 0.6 | | 0.0003 | 22.0 | 132 | 1.6559 | 0.6 | | 0.0003 | 23.0 | 138 | 1.6612 
| 0.6 | | 0.0003 | 24.0 | 144 | 1.6668 | 0.6 | | 0.0002 | 25.0 | 150 | 1.6718 | 0.6 | | 0.0002 | 26.0 | 156 | 1.6748 | 0.6 | | 0.0002 | 27.0 | 162 | 1.6728 | 0.6 | | 0.0002 | 28.0 | 168 | 1.6726 | 0.6 | | 0.0002 | 29.0 | 174 | 1.6718 | 0.6 | | 0.0002 | 30.0 | 180 | 1.6716 | 0.6 | | 0.0002 | 31.0 | 186 | 1.6738 | 0.6 | | 0.0002 | 32.0 | 192 | 1.6734 | 0.6 | | 0.0002 | 33.0 | 198 | 1.6748 | 0.6 | | 0.0002 | 34.0 | 204 | 1.6753 | 0.6 | | 0.0002 | 35.0 | 210 | 1.6740 | 0.6 | | 0.0002 | 36.0 | 216 | 1.6735 | 0.6 | | 0.0002 | 37.0 | 222 | 1.6732 | 0.6 | | 0.0002 | 38.0 | 228 | 1.6740 | 0.6 | | 0.0002 | 39.0 | 234 | 1.6751 | 0.6 | | 0.0002 | 40.0 | 240 | 1.6758 | 0.6 | | 0.0002 | 41.0 | 246 | 1.6766 | 0.6 | | 0.0002 | 42.0 | 252 | 1.6766 | 0.6 | | 0.0002 | 43.0 | 258 | 1.6766 | 0.6 | | 0.0002 | 44.0 | 264 | 1.6766 | 0.6 | | 0.0002 | 45.0 | 270 | 1.6766 | 0.6 | | 0.0002 | 46.0 | 276 | 1.6766 | 0.6 | | 0.0002 | 47.0 | 282 | 1.6766 | 0.6 | | 0.0002 | 48.0 | 288 | 1.6766 | 0.6 | | 0.0002 | 49.0 | 294 | 1.6766 | 0.6 | | 0.0002 | 50.0 | 300 | 1.6766 | 0.6 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5538 - Accuracy: 0.8372 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3426 | 0.4419 | | 1.3195 | 2.0 | 12 | 1.0931 | 0.5116 | | 1.3195 | 3.0 | 18 | 0.8535 | 0.6512 | | 0.6419 | 4.0 | 24 | 0.9249 | 0.6279 | | 0.325 | 5.0 | 30 | 0.7057 | 0.7674 | | 0.325 | 6.0 | 36 | 0.5831 | 0.7674 | | 0.0848 | 7.0 | 42 | 0.6810 | 0.7907 | | 0.0848 | 8.0 | 48 | 0.5917 | 0.7674 | | 0.0193 | 9.0 | 54 | 0.6267 | 0.8140 | | 0.0077 | 10.0 | 60 | 0.4330 | 0.8372 | | 0.0077 | 11.0 | 66 | 0.5195 | 0.8372 | | 0.0032 | 12.0 | 72 | 0.6710 | 0.7907 | | 0.0032 | 13.0 | 78 | 0.6980 | 0.8372 | | 0.0012 | 14.0 | 84 | 0.5701 | 0.8372 | | 0.0006 | 15.0 | 90 | 0.5278 | 0.8605 | | 0.0006 | 16.0 | 96 | 0.5226 | 0.8372 | | 0.0005 | 17.0 | 102 | 0.5245 | 0.8605 | | 0.0005 | 18.0 | 108 | 0.5277 | 0.8605 | | 0.0004 | 19.0 | 114 | 0.5338 | 0.8372 | | 0.0003 | 20.0 | 120 | 0.5401 | 0.8372 | | 0.0003 | 21.0 | 126 | 0.5445 | 0.8372 | | 0.0003 | 22.0 | 132 | 0.5461 | 0.8372 | | 
0.0003 | 23.0 | 138 | 0.5481 | 0.8372 | | 0.0003 | 24.0 | 144 | 0.5486 | 0.8372 | | 0.0003 | 25.0 | 150 | 0.5495 | 0.8372 | | 0.0003 | 26.0 | 156 | 0.5492 | 0.8372 | | 0.0002 | 27.0 | 162 | 0.5497 | 0.8372 | | 0.0002 | 28.0 | 168 | 0.5490 | 0.8372 | | 0.0002 | 29.0 | 174 | 0.5497 | 0.8372 | | 0.0002 | 30.0 | 180 | 0.5498 | 0.8372 | | 0.0002 | 31.0 | 186 | 0.5499 | 0.8372 | | 0.0002 | 32.0 | 192 | 0.5503 | 0.8372 | | 0.0002 | 33.0 | 198 | 0.5508 | 0.8372 | | 0.0002 | 34.0 | 204 | 0.5520 | 0.8372 | | 0.0002 | 35.0 | 210 | 0.5527 | 0.8372 | | 0.0002 | 36.0 | 216 | 0.5529 | 0.8372 | | 0.0002 | 37.0 | 222 | 0.5532 | 0.8372 | | 0.0002 | 38.0 | 228 | 0.5534 | 0.8372 | | 0.0002 | 39.0 | 234 | 0.5536 | 0.8372 | | 0.0002 | 40.0 | 240 | 0.5537 | 0.8372 | | 0.0002 | 41.0 | 246 | 0.5538 | 0.8372 | | 0.0002 | 42.0 | 252 | 0.5538 | 0.8372 | | 0.0002 | 43.0 | 258 | 0.5538 | 0.8372 | | 0.0002 | 44.0 | 264 | 0.5538 | 0.8372 | | 0.0002 | 45.0 | 270 | 0.5538 | 0.8372 | | 0.0002 | 46.0 | 276 | 0.5538 | 0.8372 | | 0.0002 | 47.0 | 282 | 0.5538 | 0.8372 | | 0.0002 | 48.0 | 288 | 0.5538 | 0.8372 | | 0.0002 | 49.0 | 294 | 0.5538 | 0.8372 | | 0.0002 | 50.0 | 300 | 0.5538 | 0.8372 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7761 - Accuracy: 0.8095 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3751 | 0.2619 | | 1.4552 | 2.0 | 12 | 1.1251 | 0.4048 | | 1.4552 | 3.0 | 18 | 0.8714 | 0.7143 | | 0.8827 | 4.0 | 24 | 0.7894 | 0.6190 | | 0.3505 | 5.0 | 30 | 0.5971 | 0.6905 | | 0.3505 | 6.0 | 36 | 0.7618 | 0.7143 | | 0.1054 | 7.0 | 42 | 0.5229 | 0.7619 | | 0.1054 | 8.0 | 48 | 0.6150 | 0.7857 | | 0.0181 | 9.0 | 54 | 0.6620 | 0.7619 | | 0.0039 | 10.0 | 60 | 0.7502 | 0.7619 | | 0.0039 | 11.0 | 66 | 0.7572 | 0.7143 | | 0.0013 | 12.0 | 72 | 0.7148 | 0.8095 | | 0.0013 | 13.0 | 78 | 0.7881 | 0.8095 | | 0.0007 | 14.0 | 84 | 0.8192 | 0.7857 | | 0.0005 | 15.0 | 90 | 0.7913 | 0.8095 | | 0.0005 | 16.0 | 96 | 0.7465 | 0.8095 | | 0.0004 | 17.0 | 102 | 0.7194 | 0.8095 | | 0.0004 | 18.0 | 108 | 0.7125 | 0.8095 | | 0.0003 | 19.0 | 114 | 0.7205 | 0.8095 | | 0.0003 | 20.0 | 120 | 0.7348 | 0.8095 | | 0.0003 | 21.0 | 126 | 0.7482 | 0.8095 | | 0.0003 | 22.0 | 132 | 0.7579 | 0.8095 | | 
0.0003 | 23.0 | 138 | 0.7664 | 0.8095 | | 0.0003 | 24.0 | 144 | 0.7720 | 0.8095 | | 0.0003 | 25.0 | 150 | 0.7718 | 0.8095 | | 0.0003 | 26.0 | 156 | 0.7710 | 0.8095 | | 0.0003 | 27.0 | 162 | 0.7669 | 0.8095 | | 0.0003 | 28.0 | 168 | 0.7689 | 0.8095 | | 0.0003 | 29.0 | 174 | 0.7693 | 0.8095 | | 0.0002 | 30.0 | 180 | 0.7708 | 0.8095 | | 0.0002 | 31.0 | 186 | 0.7724 | 0.8095 | | 0.0002 | 32.0 | 192 | 0.7744 | 0.8095 | | 0.0002 | 33.0 | 198 | 0.7750 | 0.8095 | | 0.0002 | 34.0 | 204 | 0.7743 | 0.8095 | | 0.0002 | 35.0 | 210 | 0.7745 | 0.8095 | | 0.0002 | 36.0 | 216 | 0.7743 | 0.8095 | | 0.0002 | 37.0 | 222 | 0.7745 | 0.8095 | | 0.0002 | 38.0 | 228 | 0.7747 | 0.8095 | | 0.0002 | 39.0 | 234 | 0.7753 | 0.8095 | | 0.0002 | 40.0 | 240 | 0.7758 | 0.8095 | | 0.0002 | 41.0 | 246 | 0.7760 | 0.8095 | | 0.0002 | 42.0 | 252 | 0.7761 | 0.8095 | | 0.0002 | 43.0 | 258 | 0.7761 | 0.8095 | | 0.0002 | 44.0 | 264 | 0.7761 | 0.8095 | | 0.0002 | 45.0 | 270 | 0.7761 | 0.8095 | | 0.0002 | 46.0 | 276 | 0.7761 | 0.8095 | | 0.0002 | 47.0 | 282 | 0.7761 | 0.8095 | | 0.0002 | 48.0 | 288 | 0.7761 | 0.8095 | | 0.0002 | 49.0 | 294 | 0.7761 | 0.8095 | | 0.0002 | 50.0 | 300 | 0.7761 | 0.8095 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9886 - Accuracy: 0.7805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.2270 | 0.3415 | | 1.4194 | 2.0 | 12 | 1.0630 | 0.5122 | | 1.4194 | 3.0 | 18 | 0.7493 | 0.7073 | | 0.7944 | 4.0 | 24 | 0.7294 | 0.7561 | | 0.3715 | 5.0 | 30 | 0.6953 | 0.6585 | | 0.3715 | 6.0 | 36 | 0.5928 | 0.8293 | | 0.1471 | 7.0 | 42 | 0.5485 | 0.8049 | | 0.1471 | 8.0 | 48 | 0.8515 | 0.6829 | | 0.0288 | 9.0 | 54 | 0.5381 | 0.8293 | | 0.0065 | 10.0 | 60 | 0.8647 | 0.7317 | | 0.0065 | 11.0 | 66 | 0.7563 | 0.7805 | | 0.0018 | 12.0 | 72 | 0.7678 | 0.8049 | | 0.0018 | 13.0 | 78 | 0.8017 | 0.8049 | | 0.0008 | 14.0 | 84 | 0.8475 | 0.7805 | | 0.0005 | 15.0 | 90 | 0.8926 | 0.7805 | | 0.0005 | 16.0 | 96 | 0.9216 | 0.7805 | | 0.0004 | 17.0 | 102 | 0.9424 | 0.7805 | | 0.0004 | 18.0 | 108 | 0.9465 | 0.7805 | | 0.0003 | 19.0 | 114 | 0.9461 | 0.7805 | | 0.0003 | 20.0 | 120 | 0.9448 | 0.7805 | | 0.0003 | 21.0 | 126 | 0.9474 | 0.7805 | | 0.0003 | 22.0 | 132 | 0.9525 | 0.7805 | | 
0.0003 | 23.0 | 138 | 0.9551 | 0.7805 | | 0.0003 | 24.0 | 144 | 0.9581 | 0.7805 | | 0.0002 | 25.0 | 150 | 0.9626 | 0.7805 | | 0.0002 | 26.0 | 156 | 0.9650 | 0.7805 | | 0.0002 | 27.0 | 162 | 0.9711 | 0.7805 | | 0.0002 | 28.0 | 168 | 0.9713 | 0.7805 | | 0.0002 | 29.0 | 174 | 0.9730 | 0.7805 | | 0.0002 | 30.0 | 180 | 0.9754 | 0.7805 | | 0.0002 | 31.0 | 186 | 0.9786 | 0.7805 | | 0.0002 | 32.0 | 192 | 0.9820 | 0.7805 | | 0.0002 | 33.0 | 198 | 0.9835 | 0.7805 | | 0.0002 | 34.0 | 204 | 0.9850 | 0.7805 | | 0.0002 | 35.0 | 210 | 0.9850 | 0.7805 | | 0.0002 | 36.0 | 216 | 0.9860 | 0.7805 | | 0.0002 | 37.0 | 222 | 0.9866 | 0.7805 | | 0.0002 | 38.0 | 228 | 0.9873 | 0.7805 | | 0.0002 | 39.0 | 234 | 0.9879 | 0.7805 | | 0.0002 | 40.0 | 240 | 0.9883 | 0.7805 | | 0.0002 | 41.0 | 246 | 0.9886 | 0.7805 | | 0.0002 | 42.0 | 252 | 0.9886 | 0.7805 | | 0.0002 | 43.0 | 258 | 0.9886 | 0.7805 | | 0.0002 | 44.0 | 264 | 0.9886 | 0.7805 | | 0.0002 | 45.0 | 270 | 0.9886 | 0.7805 | | 0.0002 | 46.0 | 276 | 0.9886 | 0.7805 | | 0.0002 | 47.0 | 282 | 0.9886 | 0.7805 | | 0.0002 | 48.0 | 288 | 0.9886 | 0.7805 | | 0.0002 | 49.0 | 294 | 0.9886 | 0.7805 | | 0.0002 | 50.0 | 300 | 0.9886 | 0.7805 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
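The reported accuracies across these folds are consistent with small cross-validation eval sets. A quick, purely illustrative check of which (correct, total) pairs could round to a given 4-decimal accuracy (the cards themselves do not state the eval-set sizes):

```python
def plausible_eval_sizes(reported_acc, max_n=60):
    """Return (correct, total) pairs whose ratio rounds to the reported accuracy.

    Hypothetical helper for sanity-checking fold sizes; max_n bounds the search.
    """
    return [(k, n) for n in range(1, max_n + 1) for k in range(n + 1)
            if abs(k / n - reported_acc) < 5e-5]
```

For instance, fold 5's 0.7805 is consistent with 32/41 correct, and fold 1's 0.5778 with 26/45.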
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_00001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1341 - Accuracy: 0.4222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4260 | 0.2 | | 1.446 | 2.0 | 12 | 1.3794 | 0.2889 | | 1.446 | 3.0 | 18 | 1.3570 | 0.3556 | | 1.184 | 4.0 | 24 | 1.3382 | 0.3111 | | 1.0671 | 5.0 | 30 | 1.3283 | 0.3111 | | 1.0671 | 6.0 | 36 | 1.3144 | 0.2889 | | 0.9249 | 7.0 | 42 | 1.2898 | 0.3333 | | 0.9249 | 8.0 | 48 | 1.2748 | 0.3556 | | 0.8443 | 9.0 | 54 | 1.2692 | 0.3333 | | 0.7477 | 10.0 | 60 | 1.2518 | 0.3778 | | 0.7477 | 11.0 | 66 | 1.2338 | 0.4 | | 0.662 | 12.0 | 72 | 1.2193 | 0.3778 | | 0.662 | 13.0 | 78 | 1.2195 | 0.4 | | 0.622 | 14.0 | 84 | 1.2039 | 0.3778 | | 0.5154 | 15.0 | 90 | 1.1949 | 0.4 | | 0.5154 | 16.0 | 96 | 1.1879 | 0.4 | | 0.4537 | 17.0 | 102 | 1.1810 | 0.4 | | 0.4537 | 18.0 | 108 | 1.1670 | 0.4 | | 0.3859 | 19.0 | 114 | 1.1628 | 0.4 | | 0.3586 | 20.0 | 120 | 1.1721 | 0.4 | | 0.3586 | 21.0 | 126 | 1.1698 | 0.4222 | | 0.3151 | 22.0 | 132 | 1.1603 | 0.4 | | 0.3151 | 23.0 | 138 | 1.1584 | 0.4222 
| | 0.2881 | 24.0 | 144 | 1.1519 | 0.4222 | | 0.2498 | 25.0 | 150 | 1.1515 | 0.4222 | | 0.2498 | 26.0 | 156 | 1.1445 | 0.4222 | | 0.232 | 27.0 | 162 | 1.1430 | 0.4222 | | 0.232 | 28.0 | 168 | 1.1452 | 0.4222 | | 0.2183 | 29.0 | 174 | 1.1406 | 0.4222 | | 0.1798 | 30.0 | 180 | 1.1348 | 0.4222 | | 0.1798 | 31.0 | 186 | 1.1304 | 0.4222 | | 0.1811 | 32.0 | 192 | 1.1281 | 0.4222 | | 0.1811 | 33.0 | 198 | 1.1317 | 0.4222 | | 0.1748 | 34.0 | 204 | 1.1302 | 0.4222 | | 0.1492 | 35.0 | 210 | 1.1303 | 0.4222 | | 0.1492 | 36.0 | 216 | 1.1319 | 0.4222 | | 0.1477 | 37.0 | 222 | 1.1328 | 0.4222 | | 0.1477 | 38.0 | 228 | 1.1366 | 0.4222 | | 0.1357 | 39.0 | 234 | 1.1362 | 0.4222 | | 0.1379 | 40.0 | 240 | 1.1351 | 0.4222 | | 0.1379 | 41.0 | 246 | 1.1344 | 0.4222 | | 0.1325 | 42.0 | 252 | 1.1341 | 0.4222 | | 0.1325 | 43.0 | 258 | 1.1341 | 0.4222 | | 0.1377 | 44.0 | 264 | 1.1341 | 0.4222 | | 0.1332 | 45.0 | 270 | 1.1341 | 0.4222 | | 0.1332 | 46.0 | 276 | 1.1341 | 0.4222 | | 0.1323 | 47.0 | 282 | 1.1341 | 0.4222 | | 0.1323 | 48.0 | 288 | 1.1341 | 0.4222 | | 0.1276 | 49.0 | 294 | 1.1341 | 0.4222 | | 0.1376 | 50.0 | 300 | 1.1341 | 0.4222 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_00001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3630 - Accuracy: 0.5333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3933 | 0.2889 | | 1.4502 | 2.0 | 12 | 1.3758 | 0.2889 | | 1.4502 | 3.0 | 18 | 1.3846 | 0.1556 | | 1.1864 | 4.0 | 24 | 1.3867 | 0.2 | | 1.0417 | 5.0 | 30 | 1.4200 | 0.2222 | | 1.0417 | 6.0 | 36 | 1.4398 | 0.2667 | | 0.8998 | 7.0 | 42 | 1.4309 | 0.2667 | | 0.8998 | 8.0 | 48 | 1.4422 | 0.2889 | | 0.802 | 9.0 | 54 | 1.4525 | 0.3111 | | 0.7173 | 10.0 | 60 | 1.4451 | 0.3333 | | 0.7173 | 11.0 | 66 | 1.4170 | 0.3556 | | 0.6327 | 12.0 | 72 | 1.4262 | 0.3778 | | 0.6327 | 13.0 | 78 | 1.4500 | 0.3778 | | 0.5705 | 14.0 | 84 | 1.4362 | 0.3778 | | 0.4928 | 15.0 | 90 | 1.4119 | 0.3778 | | 0.4928 | 16.0 | 96 | 1.4031 | 0.4 | | 0.4272 | 17.0 | 102 | 1.4009 | 0.4 | | 0.4272 | 18.0 | 108 | 1.4134 | 0.4 | | 0.3882 | 19.0 | 114 | 1.4007 | 0.4 | | 0.3396 | 20.0 | 120 | 1.3936 | 0.4 | | 0.3396 | 21.0 | 126 | 1.3916 | 0.4222 | | 0.2975 | 22.0 | 132 | 1.3801 | 0.4222 | | 0.2975 | 23.0 | 138 
| 1.3854 | 0.4222 | | 0.2664 | 24.0 | 144 | 1.3827 | 0.4444 | | 0.2292 | 25.0 | 150 | 1.3826 | 0.4444 | | 0.2292 | 26.0 | 156 | 1.3717 | 0.4667 | | 0.2136 | 27.0 | 162 | 1.3670 | 0.4667 | | 0.2136 | 28.0 | 168 | 1.3720 | 0.4667 | | 0.1873 | 29.0 | 174 | 1.3622 | 0.4667 | | 0.1666 | 30.0 | 180 | 1.3494 | 0.5111 | | 0.1666 | 31.0 | 186 | 1.3586 | 0.4889 | | 0.1595 | 32.0 | 192 | 1.3677 | 0.5111 | | 0.1595 | 33.0 | 198 | 1.3760 | 0.5111 | | 0.1486 | 34.0 | 204 | 1.3711 | 0.5111 | | 0.1401 | 35.0 | 210 | 1.3652 | 0.5111 | | 0.1401 | 36.0 | 216 | 1.3610 | 0.5333 | | 0.1317 | 37.0 | 222 | 1.3597 | 0.5333 | | 0.1317 | 38.0 | 228 | 1.3618 | 0.5333 | | 0.1202 | 39.0 | 234 | 1.3633 | 0.5333 | | 0.122 | 40.0 | 240 | 1.3628 | 0.5333 | | 0.122 | 41.0 | 246 | 1.3631 | 0.5333 | | 0.1214 | 42.0 | 252 | 1.3630 | 0.5333 | | 0.1214 | 43.0 | 258 | 1.3630 | 0.5333 | | 0.1203 | 44.0 | 264 | 1.3630 | 0.5333 | | 0.1185 | 45.0 | 270 | 1.3630 | 0.5333 | | 0.1185 | 46.0 | 276 | 1.3630 | 0.5333 | | 0.1174 | 47.0 | 282 | 1.3630 | 0.5333 | | 0.1174 | 48.0 | 288 | 1.3630 | 0.5333 | | 0.1152 | 49.0 | 294 | 1.3630 | 0.5333 | | 0.1204 | 50.0 | 300 | 1.3630 | 0.5333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_00001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8253 - Accuracy: 0.5581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4425 | 0.2791 | | 1.416 | 2.0 | 12 | 1.3728 | 0.3023 | | 1.416 | 3.0 | 18 | 1.3124 | 0.3488 | | 1.2388 | 4.0 | 24 | 1.2509 | 0.3721 | | 1.1051 | 5.0 | 30 | 1.1962 | 0.3488 | | 1.1051 | 6.0 | 36 | 1.1517 | 0.3721 | | 0.9682 | 7.0 | 42 | 1.1212 | 0.3721 | | 0.9682 | 8.0 | 48 | 1.0990 | 0.4186 | | 0.8769 | 9.0 | 54 | 1.0709 | 0.4884 | | 0.7643 | 10.0 | 60 | 1.0587 | 0.5116 | | 0.7643 | 11.0 | 66 | 1.0451 | 0.4884 | | 0.6717 | 12.0 | 72 | 1.0399 | 0.5581 | | 0.6717 | 13.0 | 78 | 1.0224 | 0.5349 | | 0.5988 | 14.0 | 84 | 1.0021 | 0.4884 | | 0.5291 | 15.0 | 90 | 0.9852 | 0.4884 | | 0.5291 | 16.0 | 96 | 0.9774 | 0.5116 | | 0.4581 | 17.0 | 102 | 0.9701 | 0.5116 | | 0.4581 | 18.0 | 108 | 0.9598 | 0.5116 | | 0.3895 | 19.0 | 114 | 0.9410 | 0.5814 | | 0.3415 | 20.0 | 120 | 0.9223 | 0.5581 | | 0.3415 | 21.0 | 126 | 0.9172 | 0.5349 | | 0.3044 | 22.0 | 132 | 0.9106 | 0.5349 | | 
0.3044 | 23.0 | 138 | 0.9037 | 0.5581 | | 0.2632 | 24.0 | 144 | 0.8935 | 0.5581 | | 0.2425 | 25.0 | 150 | 0.8847 | 0.5814 | | 0.2425 | 26.0 | 156 | 0.8721 | 0.5581 | | 0.2102 | 27.0 | 162 | 0.8625 | 0.5581 | | 0.2102 | 28.0 | 168 | 0.8546 | 0.5581 | | 0.189 | 29.0 | 174 | 0.8540 | 0.5814 | | 0.1637 | 30.0 | 180 | 0.8496 | 0.6047 | | 0.1637 | 31.0 | 186 | 0.8464 | 0.6047 | | 0.1512 | 32.0 | 192 | 0.8420 | 0.5581 | | 0.1512 | 33.0 | 198 | 0.8380 | 0.5581 | | 0.1374 | 34.0 | 204 | 0.8346 | 0.5581 | | 0.1287 | 35.0 | 210 | 0.8327 | 0.5581 | | 0.1287 | 36.0 | 216 | 0.8290 | 0.5581 | | 0.124 | 37.0 | 222 | 0.8276 | 0.5581 | | 0.124 | 38.0 | 228 | 0.8271 | 0.5581 | | 0.1186 | 39.0 | 234 | 0.8265 | 0.5581 | | 0.1159 | 40.0 | 240 | 0.8255 | 0.5581 | | 0.1159 | 41.0 | 246 | 0.8253 | 0.5581 | | 0.1139 | 42.0 | 252 | 0.8253 | 0.5581 | | 0.1139 | 43.0 | 258 | 0.8253 | 0.5581 | | 0.1142 | 44.0 | 264 | 0.8253 | 0.5581 | | 0.1107 | 45.0 | 270 | 0.8253 | 0.5581 | | 0.1107 | 46.0 | 276 | 0.8253 | 0.5581 | | 0.1118 | 47.0 | 282 | 0.8253 | 0.5581 | | 0.1118 | 48.0 | 288 | 0.8253 | 0.5581 | | 0.1159 | 49.0 | 294 | 0.8253 | 0.5581 | | 0.1095 | 50.0 | 300 | 0.8253 | 0.5581 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_00001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8218 - Accuracy: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3850 | 0.3333 | | 1.4335 | 2.0 | 12 | 1.3341 | 0.3571 | | 1.4335 | 3.0 | 18 | 1.2836 | 0.4286 | | 1.2369 | 4.0 | 24 | 1.2256 | 0.5238 | | 1.1106 | 5.0 | 30 | 1.1743 | 0.4762 | | 1.1106 | 6.0 | 36 | 1.1379 | 0.5238 | | 0.9897 | 7.0 | 42 | 1.1120 | 0.5952 | | 0.9897 | 8.0 | 48 | 1.0871 | 0.6190 | | 0.869 | 9.0 | 54 | 1.0617 | 0.5952 | | 0.7919 | 10.0 | 60 | 1.0389 | 0.5952 | | 0.7919 | 11.0 | 66 | 1.0206 | 0.5714 | | 0.7005 | 12.0 | 72 | 1.0005 | 0.5714 | | 0.7005 | 13.0 | 78 | 0.9876 | 0.5714 | | 0.6273 | 14.0 | 84 | 0.9709 | 0.5952 | | 0.5477 | 15.0 | 90 | 0.9546 | 0.5952 | | 0.5477 | 16.0 | 96 | 0.9438 | 0.5714 | | 0.4708 | 17.0 | 102 | 0.9277 | 0.5952 | | 0.4708 | 18.0 | 108 | 0.9166 | 0.6190 | | 0.4523 | 19.0 | 114 | 0.9086 | 0.6190 | | 0.3797 | 20.0 | 120 | 0.9051 | 0.5952 | | 0.3797 | 21.0 | 126 | 0.8956 | 0.6190 | | 0.3458 | 22.0 | 132 | 0.8852 | 0.6190 | | 
0.3458 | 23.0 | 138 | 0.8841 | 0.6190 | | 0.3057 | 24.0 | 144 | 0.8804 | 0.5952 | | 0.2867 | 25.0 | 150 | 0.8683 | 0.6429 | | 0.2867 | 26.0 | 156 | 0.8580 | 0.6667 | | 0.2509 | 27.0 | 162 | 0.8515 | 0.6667 | | 0.2509 | 28.0 | 168 | 0.8546 | 0.6429 | | 0.2322 | 29.0 | 174 | 0.8500 | 0.6667 | | 0.2064 | 30.0 | 180 | 0.8396 | 0.6667 | | 0.2064 | 31.0 | 186 | 0.8363 | 0.6667 | | 0.1928 | 32.0 | 192 | 0.8371 | 0.6667 | | 0.1928 | 33.0 | 198 | 0.8332 | 0.6667 | | 0.1767 | 34.0 | 204 | 0.8261 | 0.6667 | | 0.1746 | 35.0 | 210 | 0.8249 | 0.6667 | | 0.1746 | 36.0 | 216 | 0.8258 | 0.6667 | | 0.1557 | 37.0 | 222 | 0.8248 | 0.6667 | | 0.1557 | 38.0 | 228 | 0.8243 | 0.6667 | | 0.1581 | 39.0 | 234 | 0.8225 | 0.6667 | | 0.1477 | 40.0 | 240 | 0.8219 | 0.6667 | | 0.1477 | 41.0 | 246 | 0.8217 | 0.6667 | | 0.149 | 42.0 | 252 | 0.8218 | 0.6667 | | 0.149 | 43.0 | 258 | 0.8218 | 0.6667 | | 0.1403 | 44.0 | 264 | 0.8218 | 0.6667 | | 0.146 | 45.0 | 270 | 0.8218 | 0.6667 | | 0.146 | 46.0 | 276 | 0.8218 | 0.6667 | | 0.1461 | 47.0 | 282 | 0.8218 | 0.6667 | | 0.1461 | 48.0 | 288 | 0.8218 | 0.6667 | | 0.1422 | 49.0 | 294 | 0.8218 | 0.6667 | | 0.1494 | 50.0 | 300 | 0.8218 | 0.6667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_00001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9982 - Accuracy: 0.5122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3816 | 0.2927 | | 1.4331 | 2.0 | 12 | 1.3595 | 0.2195 | | 1.4331 | 3.0 | 18 | 1.3006 | 0.2927 | | 1.2071 | 4.0 | 24 | 1.2477 | 0.3415 | | 1.0931 | 5.0 | 30 | 1.2218 | 0.3659 | | 1.0931 | 6.0 | 36 | 1.1904 | 0.3415 | | 0.9583 | 7.0 | 42 | 1.2070 | 0.3659 | | 0.9583 | 8.0 | 48 | 1.1804 | 0.3415 | | 0.875 | 9.0 | 54 | 1.1663 | 0.3415 | | 0.7821 | 10.0 | 60 | 1.1729 | 0.3659 | | 0.7821 | 11.0 | 66 | 1.1600 | 0.3659 | | 0.7082 | 12.0 | 72 | 1.1535 | 0.3659 | | 0.7082 | 13.0 | 78 | 1.1283 | 0.3902 | | 0.5865 | 14.0 | 84 | 1.1050 | 0.4146 | | 0.5549 | 15.0 | 90 | 1.0989 | 0.4146 | | 0.5549 | 16.0 | 96 | 1.0902 | 0.4146 | | 0.4748 | 17.0 | 102 | 1.0889 | 0.4146 | | 0.4748 | 18.0 | 108 | 1.0670 | 0.4146 | | 0.4005 | 19.0 | 114 | 1.0529 | 0.4146 | | 0.3717 | 20.0 | 120 | 1.0514 | 0.4146 | | 0.3717 | 21.0 | 126 | 1.0589 | 0.4146 | | 0.3189 | 22.0 | 132 | 1.0546 | 0.4146 | | 
0.3189 | 23.0 | 138 | 1.0253 | 0.4390 | | 0.2768 | 24.0 | 144 | 1.0205 | 0.4390 | | 0.2632 | 25.0 | 150 | 1.0386 | 0.4146 | | 0.2632 | 26.0 | 156 | 1.0297 | 0.4390 | | 0.2284 | 27.0 | 162 | 1.0322 | 0.4634 | | 0.2284 | 28.0 | 168 | 1.0102 | 0.4634 | | 0.196 | 29.0 | 174 | 1.0015 | 0.4878 | | 0.1861 | 30.0 | 180 | 1.0070 | 0.4634 | | 0.1861 | 31.0 | 186 | 1.0149 | 0.4878 | | 0.1711 | 32.0 | 192 | 1.0173 | 0.4878 | | 0.1711 | 33.0 | 198 | 1.0083 | 0.4878 | | 0.1508 | 34.0 | 204 | 1.0068 | 0.5122 | | 0.1433 | 35.0 | 210 | 0.9998 | 0.5122 | | 0.1433 | 36.0 | 216 | 0.9984 | 0.5122 | | 0.1371 | 37.0 | 222 | 0.9985 | 0.5122 | | 0.1371 | 38.0 | 228 | 0.9983 | 0.5122 | | 0.1311 | 39.0 | 234 | 0.9983 | 0.5122 | | 0.1245 | 40.0 | 240 | 0.9977 | 0.5122 | | 0.1245 | 41.0 | 246 | 0.9980 | 0.5122 | | 0.1273 | 42.0 | 252 | 0.9982 | 0.5122 | | 0.1273 | 43.0 | 258 | 0.9982 | 0.5122 | | 0.1185 | 44.0 | 264 | 0.9982 | 0.5122 | | 0.1259 | 45.0 | 270 | 0.9982 | 0.5122 | | 0.1259 | 46.0 | 276 | 0.9982 | 0.5122 | | 0.1239 | 47.0 | 282 | 0.9982 | 0.5122 | | 0.1239 | 48.0 | 288 | 0.9982 | 0.5122 | | 0.1264 | 49.0 | 294 | 0.9982 | 0.5122 | | 0.1234 | 50.0 | 300 | 0.9982 | 0.5122 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3946 - Accuracy: 0.2667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6081 | 0.2889 | | 1.6517 | 2.0 | 12 | 1.5532 | 0.3333 | | 1.6517 | 3.0 | 18 | 1.5183 | 0.3111 | | 1.5073 | 4.0 | 24 | 1.4941 | 0.2 | | 1.4569 | 5.0 | 30 | 1.4762 | 0.1333 | | 1.4569 | 6.0 | 36 | 1.4655 | 0.1333 | | 1.377 | 7.0 | 42 | 1.4570 | 0.1333 | | 1.377 | 8.0 | 48 | 1.4508 | 0.1333 | | 1.3495 | 9.0 | 54 | 1.4443 | 0.1333 | | 1.3234 | 10.0 | 60 | 1.4390 | 0.1333 | | 1.3234 | 11.0 | 66 | 1.4339 | 0.1778 | | 1.2813 | 12.0 | 72 | 1.4301 | 0.1778 | | 1.2813 | 13.0 | 78 | 1.4257 | 0.2 | | 1.3124 | 14.0 | 84 | 1.4223 | 0.2 | | 1.2528 | 15.0 | 90 | 1.4195 | 0.2 | | 1.2528 | 16.0 | 96 | 1.4170 | 0.2222 | | 1.2252 | 17.0 | 102 | 1.4152 | 0.2 | | 1.2252 | 18.0 | 108 | 1.4125 | 0.2222 | | 1.2441 | 19.0 | 114 | 1.4108 | 0.2 | | 1.1872 | 20.0 | 120 | 1.4088 | 0.2 | | 1.1872 | 21.0 | 126 | 1.4068 | 0.2 | | 1.1818 | 22.0 | 132 | 1.4052 | 0.2222 | | 1.1818 | 23.0 | 138 | 1.4041 | 
0.2 | | 1.1835 | 24.0 | 144 | 1.4032 | 0.2222 | | 1.1551 | 25.0 | 150 | 1.4021 | 0.2222 | | 1.1551 | 26.0 | 156 | 1.4013 | 0.2222 | | 1.1564 | 27.0 | 162 | 1.4008 | 0.2 | | 1.1564 | 28.0 | 168 | 1.3999 | 0.2222 | | 1.1662 | 29.0 | 174 | 1.3989 | 0.2222 | | 1.116 | 30.0 | 180 | 1.3985 | 0.2222 | | 1.116 | 31.0 | 186 | 1.3976 | 0.2444 | | 1.153 | 32.0 | 192 | 1.3972 | 0.2444 | | 1.153 | 33.0 | 198 | 1.3964 | 0.2444 | | 1.1437 | 34.0 | 204 | 1.3958 | 0.2444 | | 1.1259 | 35.0 | 210 | 1.3954 | 0.2444 | | 1.1259 | 36.0 | 216 | 1.3954 | 0.2667 | | 1.1125 | 37.0 | 222 | 1.3951 | 0.2667 | | 1.1125 | 38.0 | 228 | 1.3951 | 0.2667 | | 1.0816 | 39.0 | 234 | 1.3948 | 0.2667 | | 1.1207 | 40.0 | 240 | 1.3948 | 0.2667 | | 1.1207 | 41.0 | 246 | 1.3947 | 0.2667 | | 1.1291 | 42.0 | 252 | 1.3946 | 0.2667 | | 1.1291 | 43.0 | 258 | 1.3946 | 0.2667 | | 1.1338 | 44.0 | 264 | 1.3946 | 0.2667 | | 1.1093 | 45.0 | 270 | 1.3946 | 0.2667 | | 1.1093 | 46.0 | 276 | 1.3946 | 0.2667 | | 1.1123 | 47.0 | 282 | 1.3946 | 0.2667 | | 1.1123 | 48.0 | 288 | 1.3946 | 0.2667 | | 1.096 | 49.0 | 294 | 1.3946 | 0.2667 | | 1.1328 | 50.0 | 300 | 1.3946 | 0.2667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
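The card above trains with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` over 300 steps (50 epochs × 6 steps) at a base rate of 0.001. As a minimal sketch of how that schedule behaves — `lr_at_step` is a hypothetical helper, written to mirror the standard linear-warmup-then-linear-decay shape used by the HF Trainer, not taken from these training runs:

```python
def lr_at_step(step, total_steps, base_lr, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to zero (a sketch)."""
    warmup = int(total_steps * warmup_ratio)
    if step < warmup:
        # ramp up from 0 to base_lr over the warmup steps
        return base_lr * step / warmup
    # decay linearly from base_lr down to 0 at total_steps
    return base_lr * (total_steps - step) / (total_steps - warmup)

# For the runs above: 300 total steps, warmup over the first 30.
print(lr_at_step(15, 300, 0.001))   # halfway through warmup -> 0.0005
print(lr_at_step(165, 300, 0.001))  # halfway through decay  -> 0.0005
print(lr_at_step(300, 300, 0.001))  # end of training        -> 0.0
```

The same shape applies to the 0.0001 and 1e-05 runs later in this file, scaled by their base learning rate.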
hkivancoral/hushem_1x_deit_tiny_sgd_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4913 - Accuracy: 0.1778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6461 | 0.2222 | | 1.647 | 2.0 | 12 | 1.5827 | 0.2 | | 1.647 | 3.0 | 18 | 1.5400 | 0.2 | | 1.5111 | 4.0 | 24 | 1.5101 | 0.2 | | 1.4472 | 5.0 | 30 | 1.4855 | 0.1778 | | 1.4472 | 6.0 | 36 | 1.4711 | 0.1778 | | 1.3765 | 7.0 | 42 | 1.4618 | 0.2 | | 1.3765 | 8.0 | 48 | 1.4555 | 0.2 | | 1.3363 | 9.0 | 54 | 1.4523 | 0.2222 | | 1.3131 | 10.0 | 60 | 1.4505 | 0.2 | | 1.3131 | 11.0 | 66 | 1.4495 | 0.2 | | 1.2743 | 12.0 | 72 | 1.4504 | 0.2 | | 1.2743 | 13.0 | 78 | 1.4505 | 0.2 | | 1.2923 | 14.0 | 84 | 1.4516 | 0.2 | | 1.2475 | 15.0 | 90 | 1.4529 | 0.2 | | 1.2475 | 16.0 | 96 | 1.4558 | 0.2 | | 1.2052 | 17.0 | 102 | 1.4591 | 0.1778 | | 1.2052 | 18.0 | 108 | 1.4603 | 0.1778 | | 1.2375 | 19.0 | 114 | 1.4628 | 0.1778 | | 1.1665 | 20.0 | 120 | 1.4654 | 0.1778 | | 1.1665 | 21.0 | 126 | 1.4668 | 0.1778 | | 1.1508 | 22.0 | 132 | 1.4681 | 0.1778 | | 1.1508 | 23.0 | 138 | 1.4710 | 0.1778 | | 
1.1615 | 24.0 | 144 | 1.4735 | 0.1778 | | 1.1372 | 25.0 | 150 | 1.4742 | 0.1778 | | 1.1372 | 26.0 | 156 | 1.4775 | 0.1778 | | 1.1389 | 27.0 | 162 | 1.4787 | 0.1778 | | 1.1389 | 28.0 | 168 | 1.4813 | 0.1778 | | 1.1191 | 29.0 | 174 | 1.4821 | 0.1778 | | 1.106 | 30.0 | 180 | 1.4844 | 0.1778 | | 1.106 | 31.0 | 186 | 1.4853 | 0.1778 | | 1.1156 | 32.0 | 192 | 1.4867 | 0.1778 | | 1.1156 | 33.0 | 198 | 1.4872 | 0.1778 | | 1.127 | 34.0 | 204 | 1.4879 | 0.1778 | | 1.1055 | 35.0 | 210 | 1.4887 | 0.1778 | | 1.1055 | 36.0 | 216 | 1.4895 | 0.1778 | | 1.089 | 37.0 | 222 | 1.4902 | 0.1778 | | 1.089 | 38.0 | 228 | 1.4907 | 0.1778 | | 1.0605 | 39.0 | 234 | 1.4911 | 0.1778 | | 1.0925 | 40.0 | 240 | 1.4913 | 0.1778 | | 1.0925 | 41.0 | 246 | 1.4913 | 0.1778 | | 1.1025 | 42.0 | 252 | 1.4913 | 0.1778 | | 1.1025 | 43.0 | 258 | 1.4913 | 0.1778 | | 1.1085 | 44.0 | 264 | 1.4913 | 0.1778 | | 1.0909 | 45.0 | 270 | 1.4913 | 0.1778 | | 1.0909 | 46.0 | 276 | 1.4913 | 0.1778 | | 1.0889 | 47.0 | 282 | 1.4913 | 0.1778 | | 1.0889 | 48.0 | 288 | 1.4913 | 0.1778 | | 1.0611 | 49.0 | 294 | 1.4913 | 0.1778 | | 1.1045 | 50.0 | 300 | 1.4913 | 0.1778 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2767 - Accuracy: 0.3488 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6231 | 0.2791 | | 1.6502 | 2.0 | 12 | 1.5615 | 0.2791 | | 1.6502 | 3.0 | 18 | 1.5208 | 0.2558 | | 1.5138 | 4.0 | 24 | 1.4935 | 0.2093 | | 1.441 | 5.0 | 30 | 1.4720 | 0.2093 | | 1.441 | 6.0 | 36 | 1.4541 | 0.2326 | | 1.3942 | 7.0 | 42 | 1.4402 | 0.3023 | | 1.3942 | 8.0 | 48 | 1.4271 | 0.3023 | | 1.3895 | 9.0 | 54 | 1.4159 | 0.2791 | | 1.3382 | 10.0 | 60 | 1.4069 | 0.2791 | | 1.3382 | 11.0 | 66 | 1.3983 | 0.2558 | | 1.3326 | 12.0 | 72 | 1.3893 | 0.2558 | | 1.3326 | 13.0 | 78 | 1.3800 | 0.2558 | | 1.3102 | 14.0 | 84 | 1.3707 | 0.2558 | | 1.3163 | 15.0 | 90 | 1.3619 | 0.2791 | | 1.3163 | 16.0 | 96 | 1.3528 | 0.2791 | | 1.295 | 17.0 | 102 | 1.3463 | 0.2791 | | 1.295 | 18.0 | 108 | 1.3391 | 0.2791 | | 1.2552 | 19.0 | 114 | 1.3325 | 0.3023 | | 1.2682 | 20.0 | 120 | 1.3269 | 0.3023 | | 1.2682 | 21.0 | 126 | 1.3221 | 0.3256 | | 1.2578 | 22.0 | 132 | 1.3173 | 0.3488 | | 1.2578 | 
23.0 | 138 | 1.3126 | 0.3488 | | 1.2124 | 24.0 | 144 | 1.3087 | 0.3488 | | 1.2284 | 25.0 | 150 | 1.3049 | 0.3488 | | 1.2284 | 26.0 | 156 | 1.3017 | 0.3488 | | 1.2178 | 27.0 | 162 | 1.2982 | 0.3488 | | 1.2178 | 28.0 | 168 | 1.2955 | 0.3488 | | 1.2019 | 29.0 | 174 | 1.2931 | 0.3488 | | 1.2029 | 30.0 | 180 | 1.2906 | 0.3488 | | 1.2029 | 31.0 | 186 | 1.2886 | 0.3488 | | 1.1935 | 32.0 | 192 | 1.2863 | 0.3488 | | 1.1935 | 33.0 | 198 | 1.2843 | 0.3488 | | 1.164 | 34.0 | 204 | 1.2826 | 0.3488 | | 1.1999 | 35.0 | 210 | 1.2814 | 0.3488 | | 1.1999 | 36.0 | 216 | 1.2801 | 0.3488 | | 1.1813 | 37.0 | 222 | 1.2790 | 0.3488 | | 1.1813 | 38.0 | 228 | 1.2781 | 0.3488 | | 1.1753 | 39.0 | 234 | 1.2775 | 0.3488 | | 1.1877 | 40.0 | 240 | 1.2770 | 0.3488 | | 1.1877 | 41.0 | 246 | 1.2768 | 0.3488 | | 1.1774 | 42.0 | 252 | 1.2767 | 0.3488 | | 1.1774 | 43.0 | 258 | 1.2767 | 0.3488 | | 1.1704 | 44.0 | 264 | 1.2767 | 0.3488 | | 1.1843 | 45.0 | 270 | 1.2767 | 0.3488 | | 1.1843 | 46.0 | 276 | 1.2767 | 0.3488 | | 1.1726 | 47.0 | 282 | 1.2767 | 0.3488 | | 1.1726 | 48.0 | 288 | 1.2767 | 0.3488 | | 1.1541 | 49.0 | 294 | 1.2767 | 0.3488 | | 1.1928 | 50.0 | 300 | 1.2767 | 0.3488 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2335 - Accuracy: 0.4524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5918 | 0.2857 | | 1.6404 | 2.0 | 12 | 1.5188 | 0.2857 | | 1.6404 | 3.0 | 18 | 1.4665 | 0.2857 | | 1.5241 | 4.0 | 24 | 1.4299 | 0.3333 | | 1.4755 | 5.0 | 30 | 1.4106 | 0.3571 | | 1.4755 | 6.0 | 36 | 1.3938 | 0.3095 | | 1.4186 | 7.0 | 42 | 1.3803 | 0.2857 | | 1.4186 | 8.0 | 48 | 1.3677 | 0.3810 | | 1.3819 | 9.0 | 54 | 1.3558 | 0.3810 | | 1.3541 | 10.0 | 60 | 1.3456 | 0.3810 | | 1.3541 | 11.0 | 66 | 1.3370 | 0.3810 | | 1.3363 | 12.0 | 72 | 1.3284 | 0.3810 | | 1.3363 | 13.0 | 78 | 1.3193 | 0.3571 | | 1.3168 | 14.0 | 84 | 1.3103 | 0.4048 | | 1.2875 | 15.0 | 90 | 1.3032 | 0.4048 | | 1.2875 | 16.0 | 96 | 1.2966 | 0.4048 | | 1.2638 | 17.0 | 102 | 1.2902 | 0.4048 | | 1.2638 | 18.0 | 108 | 1.2846 | 0.4048 | | 1.2758 | 19.0 | 114 | 1.2805 | 0.4048 | | 1.2611 | 20.0 | 120 | 1.2763 | 0.4048 | | 1.2611 | 21.0 | 126 | 1.2724 | 0.4048 | | 1.2411 | 22.0 | 132 | 1.2693 | 0.4048 | | 1.2411 
| 23.0 | 138 | 1.2666 | 0.4048 | | 1.2357 | 24.0 | 144 | 1.2628 | 0.4048 | | 1.231 | 25.0 | 150 | 1.2590 | 0.4048 | | 1.231 | 26.0 | 156 | 1.2555 | 0.4048 | | 1.2026 | 27.0 | 162 | 1.2531 | 0.4048 | | 1.2026 | 28.0 | 168 | 1.2508 | 0.4048 | | 1.2253 | 29.0 | 174 | 1.2482 | 0.4048 | | 1.1949 | 30.0 | 180 | 1.2457 | 0.4048 | | 1.1949 | 31.0 | 186 | 1.2436 | 0.4286 | | 1.2025 | 32.0 | 192 | 1.2420 | 0.4286 | | 1.2025 | 33.0 | 198 | 1.2406 | 0.4524 | | 1.1709 | 34.0 | 204 | 1.2390 | 0.4524 | | 1.1908 | 35.0 | 210 | 1.2376 | 0.4524 | | 1.1908 | 36.0 | 216 | 1.2365 | 0.4524 | | 1.1663 | 37.0 | 222 | 1.2358 | 0.4524 | | 1.1663 | 38.0 | 228 | 1.2349 | 0.4524 | | 1.1875 | 39.0 | 234 | 1.2342 | 0.4524 | | 1.1799 | 40.0 | 240 | 1.2338 | 0.4524 | | 1.1799 | 41.0 | 246 | 1.2336 | 0.4524 | | 1.1658 | 42.0 | 252 | 1.2335 | 0.4524 | | 1.1658 | 43.0 | 258 | 1.2335 | 0.4524 | | 1.1875 | 44.0 | 264 | 1.2335 | 0.4524 | | 1.1627 | 45.0 | 270 | 1.2335 | 0.4524 | | 1.1627 | 46.0 | 276 | 1.2335 | 0.4524 | | 1.1689 | 47.0 | 282 | 1.2335 | 0.4524 | | 1.1689 | 48.0 | 288 | 1.2335 | 0.4524 | | 1.1911 | 49.0 | 294 | 1.2335 | 0.4524 | | 1.1557 | 50.0 | 300 | 1.2335 | 0.4524 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2764 - Accuracy: 0.3659 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6481 | 0.2439 | | 1.6453 | 2.0 | 12 | 1.5595 | 0.2439 | | 1.6453 | 3.0 | 18 | 1.4979 | 0.2683 | | 1.5144 | 4.0 | 24 | 1.4546 | 0.2683 | | 1.4538 | 5.0 | 30 | 1.4262 | 0.2927 | | 1.4538 | 6.0 | 36 | 1.4074 | 0.2683 | | 1.3994 | 7.0 | 42 | 1.3954 | 0.2683 | | 1.3994 | 8.0 | 48 | 1.3847 | 0.2683 | | 1.3731 | 9.0 | 54 | 1.3749 | 0.2683 | | 1.3564 | 10.0 | 60 | 1.3671 | 0.2927 | | 1.3564 | 11.0 | 66 | 1.3612 | 0.3415 | | 1.3402 | 12.0 | 72 | 1.3541 | 0.3659 | | 1.3402 | 13.0 | 78 | 1.3472 | 0.3171 | | 1.2912 | 14.0 | 84 | 1.3416 | 0.3171 | | 1.304 | 15.0 | 90 | 1.3360 | 0.2927 | | 1.304 | 16.0 | 96 | 1.3318 | 0.3171 | | 1.267 | 17.0 | 102 | 1.3278 | 0.3171 | | 1.267 | 18.0 | 108 | 1.3225 | 0.3171 | | 1.2687 | 19.0 | 114 | 1.3187 | 0.3415 | | 1.2447 | 20.0 | 120 | 1.3147 | 0.3415 | | 1.2447 | 21.0 | 126 | 1.3131 | 0.3171 | | 1.2262 | 22.0 | 132 | 1.3086 | 0.3171 | | 1.2262 | 
23.0 | 138 | 1.3054 | 0.3171 | | 1.2132 | 24.0 | 144 | 1.3031 | 0.3171 | | 1.2231 | 25.0 | 150 | 1.3007 | 0.3171 | | 1.2231 | 26.0 | 156 | 1.2974 | 0.3171 | | 1.1895 | 27.0 | 162 | 1.2937 | 0.3171 | | 1.1895 | 28.0 | 168 | 1.2903 | 0.3415 | | 1.2062 | 29.0 | 174 | 1.2886 | 0.3415 | | 1.1907 | 30.0 | 180 | 1.2864 | 0.3415 | | 1.1907 | 31.0 | 186 | 1.2852 | 0.3415 | | 1.1836 | 32.0 | 192 | 1.2832 | 0.3415 | | 1.1836 | 33.0 | 198 | 1.2819 | 0.3415 | | 1.1632 | 34.0 | 204 | 1.2802 | 0.3415 | | 1.1553 | 35.0 | 210 | 1.2792 | 0.3659 | | 1.1553 | 36.0 | 216 | 1.2784 | 0.3659 | | 1.1703 | 37.0 | 222 | 1.2777 | 0.3659 | | 1.1703 | 38.0 | 228 | 1.2771 | 0.3659 | | 1.1625 | 39.0 | 234 | 1.2768 | 0.3659 | | 1.1523 | 40.0 | 240 | 1.2765 | 0.3659 | | 1.1523 | 41.0 | 246 | 1.2764 | 0.3659 | | 1.1617 | 42.0 | 252 | 1.2764 | 0.3659 | | 1.1617 | 43.0 | 258 | 1.2764 | 0.3659 | | 1.1427 | 44.0 | 264 | 1.2764 | 0.3659 | | 1.1631 | 45.0 | 270 | 1.2764 | 0.3659 | | 1.1631 | 46.0 | 276 | 1.2764 | 0.3659 | | 1.162 | 47.0 | 282 | 1.2764 | 0.3659 | | 1.162 | 48.0 | 288 | 1.2764 | 0.3659 | | 1.1542 | 49.0 | 294 | 1.2764 | 0.3659 | | 1.1633 | 50.0 | 300 | 1.2764 | 0.3659 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_0001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5492 - Accuracy: 0.3333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6732 | 0.2667 | | 1.7181 | 2.0 | 12 | 1.6650 | 0.2667 | | 1.7181 | 3.0 | 18 | 1.6576 | 0.2667 | | 1.6899 | 4.0 | 24 | 1.6505 | 0.2667 | | 1.7151 | 5.0 | 30 | 1.6435 | 0.2889 | | 1.7151 | 6.0 | 36 | 1.6372 | 0.2889 | | 1.6507 | 7.0 | 42 | 1.6312 | 0.2889 | | 1.6507 | 8.0 | 48 | 1.6255 | 0.2889 | | 1.626 | 9.0 | 54 | 1.6199 | 0.2889 | | 1.6566 | 10.0 | 60 | 1.6145 | 0.2889 | | 1.6566 | 11.0 | 66 | 1.6095 | 0.2889 | | 1.6122 | 12.0 | 72 | 1.6048 | 0.2889 | | 1.6122 | 13.0 | 78 | 1.6001 | 0.2889 | | 1.7016 | 14.0 | 84 | 1.5960 | 0.2889 | | 1.6075 | 15.0 | 90 | 1.5922 | 0.2889 | | 1.6075 | 16.0 | 96 | 1.5886 | 0.2889 | | 1.5839 | 17.0 | 102 | 1.5852 | 0.2889 | | 1.5839 | 18.0 | 108 | 1.5821 | 0.3111 | | 1.589 | 19.0 | 114 | 1.5790 | 0.3111 | | 1.5539 | 20.0 | 120 | 1.5762 | 0.3111 | | 1.5539 | 21.0 | 126 | 1.5736 | 0.3111 | | 1.5431 | 22.0 | 132 | 1.5710 | 0.3111 | | 1.5431 
| 23.0 | 138 | 1.5686 | 0.3111 | | 1.58 | 24.0 | 144 | 1.5665 | 0.3111 | | 1.5398 | 25.0 | 150 | 1.5646 | 0.3333 | | 1.5398 | 26.0 | 156 | 1.5627 | 0.3333 | | 1.5415 | 27.0 | 162 | 1.5611 | 0.3333 | | 1.5415 | 28.0 | 168 | 1.5596 | 0.3333 | | 1.5548 | 29.0 | 174 | 1.5581 | 0.3333 | | 1.5423 | 30.0 | 180 | 1.5567 | 0.3333 | | 1.5423 | 31.0 | 186 | 1.5553 | 0.3333 | | 1.5803 | 32.0 | 192 | 1.5542 | 0.3333 | | 1.5803 | 33.0 | 198 | 1.5532 | 0.3333 | | 1.4986 | 34.0 | 204 | 1.5522 | 0.3333 | | 1.5635 | 35.0 | 210 | 1.5514 | 0.3333 | | 1.5635 | 36.0 | 216 | 1.5508 | 0.3333 | | 1.5318 | 37.0 | 222 | 1.5503 | 0.3333 | | 1.5318 | 38.0 | 228 | 1.5499 | 0.3333 | | 1.4575 | 39.0 | 234 | 1.5495 | 0.3333 | | 1.527 | 40.0 | 240 | 1.5493 | 0.3333 | | 1.527 | 41.0 | 246 | 1.5492 | 0.3333 | | 1.5482 | 42.0 | 252 | 1.5492 | 0.3333 | | 1.5482 | 43.0 | 258 | 1.5492 | 0.3333 | | 1.5545 | 44.0 | 264 | 1.5492 | 0.3333 | | 1.5122 | 45.0 | 270 | 1.5492 | 0.3333 | | 1.5122 | 46.0 | 276 | 1.5492 | 0.3333 | | 1.5284 | 47.0 | 282 | 1.5492 | 0.3333 | | 1.5284 | 48.0 | 288 | 1.5492 | 0.3333 | | 1.5117 | 49.0 | 294 | 1.5492 | 0.3333 | | 1.5484 | 50.0 | 300 | 1.5492 | 0.3333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_0001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5785 - Accuracy: 0.2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7159 | 0.2222 | | 1.7117 | 2.0 | 12 | 1.7073 | 0.2222 | | 1.7117 | 3.0 | 18 | 1.6994 | 0.2222 | | 1.6883 | 4.0 | 24 | 1.6919 | 0.2222 | | 1.6982 | 5.0 | 30 | 1.6843 | 0.2222 | | 1.6982 | 6.0 | 36 | 1.6776 | 0.2222 | | 1.6419 | 7.0 | 42 | 1.6712 | 0.2222 | | 1.6419 | 8.0 | 48 | 1.6649 | 0.2222 | | 1.6108 | 9.0 | 54 | 1.6588 | 0.2222 | | 1.6345 | 10.0 | 60 | 1.6530 | 0.2222 | | 1.6345 | 11.0 | 66 | 1.6473 | 0.2222 | | 1.6108 | 12.0 | 72 | 1.6421 | 0.2222 | | 1.6108 | 13.0 | 78 | 1.6369 | 0.2 | | 1.6666 | 14.0 | 84 | 1.6323 | 0.2 | | 1.6138 | 15.0 | 90 | 1.6282 | 0.2 | | 1.6138 | 16.0 | 96 | 1.6241 | 0.2 | | 1.5738 | 17.0 | 102 | 1.6203 | 0.2 | | 1.5738 | 18.0 | 108 | 1.6167 | 0.2 | | 1.5952 | 19.0 | 114 | 1.6131 | 0.2 | | 1.555 | 20.0 | 120 | 1.6099 | 0.2 | | 1.555 | 21.0 | 126 | 1.6070 | 0.2 | | 1.5267 | 22.0 | 132 | 1.6040 | 0.2 | | 1.5267 | 23.0 | 138 | 1.6012 | 0.2 | | 
1.5686 | 24.0 | 144 | 1.5988 | 0.2 | | 1.5444 | 25.0 | 150 | 1.5966 | 0.2 | | 1.5444 | 26.0 | 156 | 1.5944 | 0.2 | | 1.544 | 27.0 | 162 | 1.5926 | 0.2 | | 1.544 | 28.0 | 168 | 1.5907 | 0.2 | | 1.5375 | 29.0 | 174 | 1.5889 | 0.2 | | 1.5441 | 30.0 | 180 | 1.5873 | 0.2 | | 1.5441 | 31.0 | 186 | 1.5857 | 0.2 | | 1.5614 | 32.0 | 192 | 1.5845 | 0.2 | | 1.5614 | 33.0 | 198 | 1.5832 | 0.2 | | 1.5093 | 34.0 | 204 | 1.5821 | 0.2 | | 1.5478 | 35.0 | 210 | 1.5812 | 0.2 | | 1.5478 | 36.0 | 216 | 1.5804 | 0.2 | | 1.5301 | 37.0 | 222 | 1.5798 | 0.2 | | 1.5301 | 38.0 | 228 | 1.5793 | 0.2 | | 1.4582 | 39.0 | 234 | 1.5789 | 0.2 | | 1.5151 | 40.0 | 240 | 1.5786 | 0.2 | | 1.5151 | 41.0 | 246 | 1.5785 | 0.2 | | 1.5298 | 42.0 | 252 | 1.5785 | 0.2 | | 1.5298 | 43.0 | 258 | 1.5785 | 0.2 | | 1.548 | 44.0 | 264 | 1.5785 | 0.2 | | 1.5172 | 45.0 | 270 | 1.5785 | 0.2 | | 1.5172 | 46.0 | 276 | 1.5785 | 0.2 | | 1.528 | 47.0 | 282 | 1.5785 | 0.2 | | 1.528 | 48.0 | 288 | 1.5785 | 0.2 | | 1.4968 | 49.0 | 294 | 1.5785 | 0.2 | | 1.5413 | 50.0 | 300 | 1.5785 | 0.2 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_0001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5555 - Accuracy: 0.2791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6995 | 0.2791 | | 1.7242 | 2.0 | 12 | 1.6902 | 0.2791 | | 1.7242 | 3.0 | 18 | 1.6819 | 0.2791 | | 1.6909 | 4.0 | 24 | 1.6741 | 0.2791 | | 1.6461 | 5.0 | 30 | 1.6664 | 0.2791 | | 1.6461 | 6.0 | 36 | 1.6587 | 0.2791 | | 1.6466 | 7.0 | 42 | 1.6518 | 0.2791 | | 1.6466 | 8.0 | 48 | 1.6448 | 0.2791 | | 1.6495 | 9.0 | 54 | 1.6384 | 0.2791 | | 1.6495 | 10.0 | 60 | 1.6323 | 0.2791 | | 1.6495 | 11.0 | 66 | 1.6267 | 0.2791 | | 1.6244 | 12.0 | 72 | 1.6213 | 0.2791 | | 1.6244 | 13.0 | 78 | 1.6166 | 0.2791 | | 1.593 | 14.0 | 84 | 1.6117 | 0.2791 | | 1.6183 | 15.0 | 90 | 1.6071 | 0.2791 | | 1.6183 | 16.0 | 96 | 1.6026 | 0.2791 | | 1.6105 | 17.0 | 102 | 1.5985 | 0.2558 | | 1.6105 | 18.0 | 108 | 1.5946 | 0.2558 | | 1.5599 | 19.0 | 114 | 1.5912 | 0.2558 | | 1.5756 | 20.0 | 120 | 1.5878 | 0.2558 | | 1.5756 | 21.0 | 126 | 1.5845 | 0.2558 | | 1.5692 | 22.0 | 132 | 1.5817 | 0.2558 | | 
1.5692 | 23.0 | 138 | 1.5789 | 0.2558 | | 1.544 | 24.0 | 144 | 1.5763 | 0.2558 | | 1.548 | 25.0 | 150 | 1.5738 | 0.2558 | | 1.548 | 26.0 | 156 | 1.5716 | 0.2791 | | 1.549 | 27.0 | 162 | 1.5695 | 0.2791 | | 1.549 | 28.0 | 168 | 1.5675 | 0.2791 | | 1.5593 | 29.0 | 174 | 1.5658 | 0.2791 | | 1.528 | 30.0 | 180 | 1.5641 | 0.2791 | | 1.528 | 31.0 | 186 | 1.5627 | 0.2791 | | 1.5394 | 32.0 | 192 | 1.5615 | 0.2791 | | 1.5394 | 33.0 | 198 | 1.5603 | 0.2791 | | 1.4822 | 34.0 | 204 | 1.5592 | 0.2791 | | 1.5618 | 35.0 | 210 | 1.5583 | 0.2791 | | 1.5618 | 36.0 | 216 | 1.5575 | 0.2791 | | 1.5279 | 37.0 | 222 | 1.5568 | 0.2791 | | 1.5279 | 38.0 | 228 | 1.5563 | 0.2791 | | 1.5233 | 39.0 | 234 | 1.5559 | 0.2791 | | 1.5255 | 40.0 | 240 | 1.5556 | 0.2791 | | 1.5255 | 41.0 | 246 | 1.5555 | 0.2791 | | 1.5147 | 42.0 | 252 | 1.5555 | 0.2791 | | 1.5147 | 43.0 | 258 | 1.5555 | 0.2791 | | 1.5048 | 44.0 | 264 | 1.5555 | 0.2791 | | 1.5464 | 45.0 | 270 | 1.5555 | 0.2791 | | 1.5464 | 46.0 | 276 | 1.5555 | 0.2791 | | 1.5243 | 47.0 | 282 | 1.5555 | 0.2791 | | 1.5243 | 48.0 | 288 | 1.5555 | 0.2791 | | 1.5049 | 49.0 | 294 | 1.5555 | 0.2791 | | 1.5545 | 50.0 | 300 | 1.5555 | 0.2791 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_0001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5092 - Accuracy: 0.2857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6866 | 0.2857 | | 1.7029 | 2.0 | 12 | 1.6755 | 0.2857 | | 1.7029 | 3.0 | 18 | 1.6648 | 0.2857 | | 1.6819 | 4.0 | 24 | 1.6543 | 0.2857 | | 1.7084 | 5.0 | 30 | 1.6452 | 0.2857 | | 1.7084 | 6.0 | 36 | 1.6365 | 0.2857 | | 1.661 | 7.0 | 42 | 1.6277 | 0.2857 | | 1.661 | 8.0 | 48 | 1.6195 | 0.2857 | | 1.6506 | 9.0 | 54 | 1.6113 | 0.2857 | | 1.6321 | 10.0 | 60 | 1.6035 | 0.2857 | | 1.6321 | 11.0 | 66 | 1.5969 | 0.2857 | | 1.605 | 12.0 | 72 | 1.5900 | 0.2857 | | 1.605 | 13.0 | 78 | 1.5837 | 0.2857 | | 1.6205 | 14.0 | 84 | 1.5775 | 0.2857 | | 1.6128 | 15.0 | 90 | 1.5717 | 0.2857 | | 1.6128 | 16.0 | 96 | 1.5663 | 0.2857 | | 1.5818 | 17.0 | 102 | 1.5613 | 0.2857 | | 1.5818 | 18.0 | 108 | 1.5566 | 0.2857 | | 1.6012 | 19.0 | 114 | 1.5522 | 0.2857 | | 1.6068 | 20.0 | 120 | 1.5482 | 0.2857 | | 1.6068 | 21.0 | 126 | 1.5443 | 0.2857 | | 1.5674 | 22.0 | 132 | 1.5409 | 0.2857 | | 1.5674 | 
23.0 | 138 | 1.5376 | 0.2857 | | 1.565 | 24.0 | 144 | 1.5344 | 0.2857 | | 1.5842 | 25.0 | 150 | 1.5314 | 0.2857 | | 1.5842 | 26.0 | 156 | 1.5286 | 0.2857 | | 1.5593 | 27.0 | 162 | 1.5260 | 0.2857 | | 1.5593 | 28.0 | 168 | 1.5236 | 0.2857 | | 1.5824 | 29.0 | 174 | 1.5216 | 0.2857 | | 1.537 | 30.0 | 180 | 1.5196 | 0.2857 | | 1.537 | 31.0 | 186 | 1.5181 | 0.2857 | | 1.5437 | 32.0 | 192 | 1.5165 | 0.2857 | | 1.5437 | 33.0 | 198 | 1.5150 | 0.2857 | | 1.5369 | 34.0 | 204 | 1.5137 | 0.2857 | | 1.5371 | 35.0 | 210 | 1.5125 | 0.2857 | | 1.5371 | 36.0 | 216 | 1.5116 | 0.2857 | | 1.5229 | 37.0 | 222 | 1.5109 | 0.2857 | | 1.5229 | 38.0 | 228 | 1.5102 | 0.2857 | | 1.5623 | 39.0 | 234 | 1.5097 | 0.2857 | | 1.5343 | 40.0 | 240 | 1.5094 | 0.2857 | | 1.5343 | 41.0 | 246 | 1.5093 | 0.2857 | | 1.5211 | 42.0 | 252 | 1.5092 | 0.2857 | | 1.5211 | 43.0 | 258 | 1.5092 | 0.2857 | | 1.5618 | 44.0 | 264 | 1.5092 | 0.2857 | | 1.5309 | 45.0 | 270 | 1.5092 | 0.2857 | | 1.5309 | 46.0 | 276 | 1.5092 | 0.2857 | | 1.5362 | 47.0 | 282 | 1.5092 | 0.2857 | | 1.5362 | 48.0 | 288 | 1.5092 | 0.2857 | | 1.5728 | 49.0 | 294 | 1.5092 | 0.2857 | | 1.5244 | 50.0 | 300 | 1.5092 | 0.2857 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5523 - Accuracy: 0.2439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7547 | 0.2439 | | 1.7078 | 2.0 | 12 | 1.7422 | 0.2439 | | 1.7078 | 3.0 | 18 | 1.7303 | 0.2439 | | 1.6827 | 4.0 | 24 | 1.7187 | 0.2439 | | 1.6676 | 5.0 | 30 | 1.7076 | 0.2439 | | 1.6676 | 6.0 | 36 | 1.6970 | 0.2439 | | 1.6669 | 7.0 | 42 | 1.6882 | 0.2439 | | 1.6669 | 8.0 | 48 | 1.6793 | 0.2439 | | 1.5935 | 9.0 | 54 | 1.6701 | 0.2439 | | 1.6316 | 10.0 | 60 | 1.6617 | 0.2439 | | 1.6316 | 11.0 | 66 | 1.6538 | 0.2439 | | 1.6324 | 12.0 | 72 | 1.6460 | 0.2439 | | 1.6324 | 13.0 | 78 | 1.6387 | 0.2439 | | 1.5842 | 14.0 | 84 | 1.6318 | 0.2439 | | 1.5897 | 15.0 | 90 | 1.6256 | 0.2439 | | 1.5897 | 16.0 | 96 | 1.6199 | 0.2439 | | 1.5943 | 17.0 | 102 | 1.6144 | 0.2439 | | 1.5943 | 18.0 | 108 | 1.6092 | 0.2195 | | 1.5586 | 19.0 | 114 | 1.6040 | 0.2195 | | 1.5924 | 20.0 | 120 | 1.5990 | 0.2195 | | 1.5924 | 21.0 | 126 | 1.5945 | 0.2195 | | 1.5676 | 22.0 | 132 | 1.5902 | 0.2195 | | 
1.5676 | 23.0 | 138 | 1.5862 | 0.2195 | | 1.5352 | 24.0 | 144 | 1.5823 | 0.2195 | | 1.5842 | 25.0 | 150 | 1.5786 | 0.2195 | | 1.5842 | 26.0 | 156 | 1.5752 | 0.2195 | | 1.5461 | 27.0 | 162 | 1.5723 | 0.2195 | | 1.5461 | 28.0 | 168 | 1.5695 | 0.2195 | | 1.551 | 29.0 | 174 | 1.5671 | 0.2439 | | 1.5549 | 30.0 | 180 | 1.5649 | 0.2439 | | 1.5549 | 31.0 | 186 | 1.5628 | 0.2439 | | 1.5532 | 32.0 | 192 | 1.5610 | 0.2439 | | 1.5532 | 33.0 | 198 | 1.5594 | 0.2439 | | 1.5006 | 34.0 | 204 | 1.5578 | 0.2439 | | 1.5134 | 35.0 | 210 | 1.5565 | 0.2439 | | 1.5134 | 36.0 | 216 | 1.5553 | 0.2439 | | 1.5386 | 37.0 | 222 | 1.5543 | 0.2439 | | 1.5386 | 38.0 | 228 | 1.5536 | 0.2439 | | 1.5372 | 39.0 | 234 | 1.5530 | 0.2439 | | 1.528 | 40.0 | 240 | 1.5526 | 0.2439 | | 1.528 | 41.0 | 246 | 1.5524 | 0.2439 | | 1.5555 | 42.0 | 252 | 1.5523 | 0.2439 | | 1.5555 | 43.0 | 258 | 1.5523 | 0.2439 | | 1.509 | 44.0 | 264 | 1.5523 | 0.2439 | | 1.5379 | 45.0 | 270 | 1.5523 | 0.2439 | | 1.5379 | 46.0 | 276 | 1.5523 | 0.2439 | | 1.5588 | 47.0 | 282 | 1.5523 | 0.2439 | | 1.5588 | 48.0 | 288 | 1.5523 | 0.2439 | | 1.509 | 49.0 | 294 | 1.5523 | 0.2439 | | 1.5414 | 50.0 | 300 | 1.5523 | 0.2439 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_00001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6938 - Accuracy: 0.2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6986 | 0.2 | | 1.6333 | 2.0 | 12 | 1.6983 | 0.2 | | 1.6333 | 3.0 | 18 | 1.6981 | 0.2 | | 1.6088 | 4.0 | 24 | 1.6979 | 0.2 | | 1.6296 | 5.0 | 30 | 1.6976 | 0.2 | | 1.6296 | 6.0 | 36 | 1.6974 | 0.2 | | 1.6252 | 7.0 | 42 | 1.6972 | 0.2 | | 1.6252 | 8.0 | 48 | 1.6970 | 0.2 | | 1.6833 | 9.0 | 54 | 1.6968 | 0.2 | | 1.5983 | 10.0 | 60 | 1.6965 | 0.2 | | 1.5983 | 11.0 | 66 | 1.6964 | 0.2 | | 1.61 | 12.0 | 72 | 1.6962 | 0.2 | | 1.61 | 13.0 | 78 | 1.6960 | 0.2 | | 1.6125 | 14.0 | 84 | 1.6958 | 0.2 | | 1.6595 | 15.0 | 90 | 1.6957 | 0.2 | | 1.6595 | 16.0 | 96 | 1.6956 | 0.2 | | 1.6372 | 17.0 | 102 | 1.6954 | 0.2 | | 1.6372 | 18.0 | 108 | 1.6953 | 0.2 | | 1.6292 | 19.0 | 114 | 1.6951 | 0.2 | | 1.6414 | 20.0 | 120 | 1.6950 | 0.2 | | 1.6414 | 21.0 | 126 | 1.6949 | 0.2 | | 1.6168 | 22.0 | 132 | 1.6948 | 0.2 | | 1.6168 | 23.0 | 138 | 1.6947 | 0.2 | | 1.6445 | 24.0 | 144 | 1.6946 | 0.2 | | 
1.6172 | 25.0 | 150 | 1.6945 | 0.2 | | 1.6172 | 26.0 | 156 | 1.6944 | 0.2 | | 1.5925 | 27.0 | 162 | 1.6944 | 0.2 | | 1.5925 | 28.0 | 168 | 1.6943 | 0.2 | | 1.6351 | 29.0 | 174 | 1.6942 | 0.2 | | 1.6161 | 30.0 | 180 | 1.6941 | 0.2 | | 1.6161 | 31.0 | 186 | 1.6941 | 0.2 | | 1.6095 | 32.0 | 192 | 1.6940 | 0.2 | | 1.6095 | 33.0 | 198 | 1.6940 | 0.2 | | 1.6215 | 34.0 | 204 | 1.6939 | 0.2 | | 1.6213 | 35.0 | 210 | 1.6939 | 0.2 | | 1.6213 | 36.0 | 216 | 1.6939 | 0.2 | | 1.6372 | 37.0 | 222 | 1.6938 | 0.2 | | 1.6372 | 38.0 | 228 | 1.6938 | 0.2 | | 1.6199 | 39.0 | 234 | 1.6938 | 0.2 | | 1.6087 | 40.0 | 240 | 1.6938 | 0.2 | | 1.6087 | 41.0 | 246 | 1.6938 | 0.2 | | 1.6309 | 42.0 | 252 | 1.6938 | 0.2 | | 1.6309 | 43.0 | 258 | 1.6938 | 0.2 | | 1.6203 | 44.0 | 264 | 1.6938 | 0.2 | | 1.6564 | 45.0 | 270 | 1.6938 | 0.2 | | 1.6564 | 46.0 | 276 | 1.6938 | 0.2 | | 1.6178 | 47.0 | 282 | 1.6938 | 0.2 | | 1.6178 | 48.0 | 288 | 1.6938 | 0.2 | | 1.6557 | 49.0 | 294 | 1.6938 | 0.2 | | 1.6181 | 50.0 | 300 | 1.6938 | 0.2 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
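The cards in this group all train with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` over 300 total steps. A minimal stdlib sketch of how such a linear-warmup-then-linear-decay schedule behaves; the function name and exact ramp semantics are an assumption modeled on the common warmup-with-linear-decay pattern, not code from these training runs:

```python
def linear_warmup_decay(step, total_steps=300, warmup_ratio=0.1, base_lr=1e-05):
    """Hypothetical sketch: LR for a linear scheduler with warmup.

    With total_steps=300 and warmup_ratio=0.1, warmup lasts 30 steps.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp linearly from 0 up to base_lr during warmup.
        scale = step / max(1, warmup_steps)
    else:
        # Decay linearly from base_lr back down to 0.
        scale = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return base_lr * scale

peak_lr = linear_warmup_decay(30)   # peak is hit right as warmup ends
final_lr = linear_warmup_decay(300)  # decays to zero at the last step
```

Under this sketch the peak learning rate equals `base_lr` at step 30 and halves by the midpoint of the decay phase.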
hkivancoral/hushem_1x_deit_tiny_sgd_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_00001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7073 - Accuracy: 0.2222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7237 | 0.2222 | | 1.719 | 2.0 | 12 | 1.7228 | 0.2222 | | 1.719 | 3.0 | 18 | 1.7220 | 0.2222 | | 1.7128 | 4.0 | 24 | 1.7212 | 0.2222 | | 1.7405 | 5.0 | 30 | 1.7204 | 0.2222 | | 1.7405 | 6.0 | 36 | 1.7197 | 0.2222 | | 1.6943 | 7.0 | 42 | 1.7190 | 0.2222 | | 1.6943 | 8.0 | 48 | 1.7183 | 0.2222 | | 1.6759 | 9.0 | 54 | 1.7176 | 0.2222 | | 1.7158 | 10.0 | 60 | 1.7169 | 0.2222 | | 1.7158 | 11.0 | 66 | 1.7162 | 0.2222 | | 1.7024 | 12.0 | 72 | 1.7156 | 0.2222 | | 1.7024 | 13.0 | 78 | 1.7150 | 0.2222 | | 1.7744 | 14.0 | 84 | 1.7144 | 0.2222 | | 1.7251 | 15.0 | 90 | 1.7139 | 0.2222 | | 1.7251 | 16.0 | 96 | 1.7134 | 0.2222 | | 1.6942 | 17.0 | 102 | 1.7129 | 0.2222 | | 1.6942 | 18.0 | 108 | 1.7124 | 0.2222 | | 1.7154 | 19.0 | 114 | 1.7120 | 0.2222 | | 1.6829 | 20.0 | 120 | 1.7115 | 0.2222 | | 1.6829 | 21.0 | 126 | 1.7111 | 0.2222 | | 1.6559 | 22.0 | 132 | 1.7107 | 0.2222 | | 1.6559 
| 23.0 | 138 | 1.7104 | 0.2222 | | 1.7194 | 24.0 | 144 | 1.7100 | 0.2222 | | 1.6925 | 25.0 | 150 | 1.7097 | 0.2222 | | 1.6925 | 26.0 | 156 | 1.7094 | 0.2222 | | 1.6919 | 27.0 | 162 | 1.7091 | 0.2222 | | 1.6919 | 28.0 | 168 | 1.7089 | 0.2222 | | 1.6948 | 29.0 | 174 | 1.7086 | 0.2222 | | 1.7059 | 30.0 | 180 | 1.7084 | 0.2222 | | 1.7059 | 31.0 | 186 | 1.7082 | 0.2222 | | 1.7337 | 32.0 | 192 | 1.7080 | 0.2222 | | 1.7337 | 33.0 | 198 | 1.7079 | 0.2222 | | 1.6587 | 34.0 | 204 | 1.7077 | 0.2222 | | 1.7172 | 35.0 | 210 | 1.7076 | 0.2222 | | 1.7172 | 36.0 | 216 | 1.7075 | 0.2222 | | 1.7051 | 37.0 | 222 | 1.7075 | 0.2222 | | 1.7051 | 38.0 | 228 | 1.7074 | 0.2222 | | 1.6141 | 39.0 | 234 | 1.7074 | 0.2222 | | 1.6784 | 40.0 | 240 | 1.7073 | 0.2222 | | 1.6784 | 41.0 | 246 | 1.7073 | 0.2222 | | 1.6991 | 42.0 | 252 | 1.7073 | 0.2222 | | 1.6991 | 43.0 | 258 | 1.7073 | 0.2222 | | 1.7247 | 44.0 | 264 | 1.7073 | 0.2222 | | 1.6773 | 45.0 | 270 | 1.7073 | 0.2222 | | 1.6773 | 46.0 | 276 | 1.7073 | 0.2222 | | 1.6939 | 47.0 | 282 | 1.7073 | 0.2222 | | 1.6939 | 48.0 | 288 | 1.7073 | 0.2222 | | 1.6622 | 49.0 | 294 | 1.7073 | 0.2222 | | 1.7192 | 50.0 | 300 | 1.7073 | 0.2222 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_00001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6900 - Accuracy: 0.2791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7081 | 0.2791 | | 1.7325 | 2.0 | 12 | 1.7072 | 0.2791 | | 1.7325 | 3.0 | 18 | 1.7063 | 0.2791 | | 1.7152 | 4.0 | 24 | 1.7055 | 0.2791 | | 1.6813 | 5.0 | 30 | 1.7046 | 0.2791 | | 1.6813 | 6.0 | 36 | 1.7038 | 0.2791 | | 1.6984 | 7.0 | 42 | 1.7030 | 0.2791 | | 1.6984 | 8.0 | 48 | 1.7022 | 0.2791 | | 1.7131 | 9.0 | 54 | 1.7014 | 0.2791 | | 1.7337 | 10.0 | 60 | 1.7007 | 0.2791 | | 1.7337 | 11.0 | 66 | 1.7000 | 0.2791 | | 1.7143 | 12.0 | 72 | 1.6993 | 0.2791 | | 1.7143 | 13.0 | 78 | 1.6987 | 0.2791 | | 1.6884 | 14.0 | 84 | 1.6981 | 0.2791 | | 1.7252 | 15.0 | 90 | 1.6975 | 0.2791 | | 1.7252 | 16.0 | 96 | 1.6969 | 0.2791 | | 1.7269 | 17.0 | 102 | 1.6963 | 0.2791 | | 1.7269 | 18.0 | 108 | 1.6958 | 0.2791 | | 1.6858 | 19.0 | 114 | 1.6953 | 0.2791 | | 1.7013 | 20.0 | 120 | 1.6948 | 0.2791 | | 1.7013 | 21.0 | 126 | 1.6943 | 0.2791 | | 1.7051 | 22.0 | 132 | 1.6939 | 0.2791 | | 
1.7051 | 23.0 | 138 | 1.6935 | 0.2791 | | 1.6834 | 24.0 | 144 | 1.6931 | 0.2791 | | 1.6977 | 25.0 | 150 | 1.6927 | 0.2791 | | 1.6977 | 26.0 | 156 | 1.6924 | 0.2791 | | 1.7016 | 27.0 | 162 | 1.6920 | 0.2791 | | 1.7016 | 28.0 | 168 | 1.6917 | 0.2791 | | 1.7242 | 29.0 | 174 | 1.6915 | 0.2791 | | 1.6808 | 30.0 | 180 | 1.6912 | 0.2791 | | 1.6808 | 31.0 | 186 | 1.6910 | 0.2791 | | 1.7032 | 32.0 | 192 | 1.6908 | 0.2791 | | 1.7032 | 33.0 | 198 | 1.6906 | 0.2791 | | 1.6261 | 34.0 | 204 | 1.6905 | 0.2791 | | 1.7412 | 35.0 | 210 | 1.6903 | 0.2791 | | 1.7412 | 36.0 | 216 | 1.6902 | 0.2791 | | 1.6899 | 37.0 | 222 | 1.6901 | 0.2791 | | 1.6899 | 38.0 | 228 | 1.6901 | 0.2791 | | 1.6944 | 39.0 | 234 | 1.6900 | 0.2791 | | 1.6965 | 40.0 | 240 | 1.6900 | 0.2791 | | 1.6965 | 41.0 | 246 | 1.6900 | 0.2791 | | 1.6787 | 42.0 | 252 | 1.6900 | 0.2791 | | 1.6787 | 43.0 | 258 | 1.6900 | 0.2791 | | 1.6617 | 44.0 | 264 | 1.6900 | 0.2791 | | 1.7215 | 45.0 | 270 | 1.6900 | 0.2791 | | 1.7215 | 46.0 | 276 | 1.6900 | 0.2791 | | 1.6881 | 47.0 | 282 | 1.6900 | 0.2791 | | 1.6881 | 48.0 | 288 | 1.6900 | 0.2791 | | 1.6823 | 49.0 | 294 | 1.6900 | 0.2791 | | 1.7275 | 50.0 | 300 | 1.6900 | 0.2791 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_00001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6751 - Accuracy: 0.2857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6974 | 0.2857 | | 1.71 | 2.0 | 12 | 1.6962 | 0.2857 | | 1.71 | 3.0 | 18 | 1.6951 | 0.2857 | | 1.7036 | 4.0 | 24 | 1.6940 | 0.2857 | | 1.7465 | 5.0 | 30 | 1.6930 | 0.2857 | | 1.7465 | 6.0 | 36 | 1.6921 | 0.2857 | | 1.709 | 7.0 | 42 | 1.6911 | 0.2857 | | 1.709 | 8.0 | 48 | 1.6901 | 0.2857 | | 1.712 | 9.0 | 54 | 1.6892 | 0.2857 | | 1.7048 | 10.0 | 60 | 1.6882 | 0.2857 | | 1.7048 | 11.0 | 66 | 1.6874 | 0.2857 | | 1.6828 | 12.0 | 72 | 1.6866 | 0.2857 | | 1.6828 | 13.0 | 78 | 1.6858 | 0.2857 | | 1.7139 | 14.0 | 84 | 1.6850 | 0.2857 | | 1.719 | 15.0 | 90 | 1.6842 | 0.2857 | | 1.719 | 16.0 | 96 | 1.6835 | 0.2857 | | 1.6904 | 17.0 | 102 | 1.6828 | 0.2857 | | 1.6904 | 18.0 | 108 | 1.6821 | 0.2857 | | 1.7154 | 19.0 | 114 | 1.6815 | 0.2857 | | 1.7326 | 20.0 | 120 | 1.6809 | 0.2857 | | 1.7326 | 21.0 | 126 | 1.6804 | 0.2857 | | 1.6942 | 22.0 | 132 | 1.6799 | 0.2857 | | 1.6942 | 23.0 
| 138 | 1.6794 | 0.2857 | | 1.6945 | 24.0 | 144 | 1.6789 | 0.2857 | | 1.728 | 25.0 | 150 | 1.6784 | 0.2857 | | 1.728 | 26.0 | 156 | 1.6780 | 0.2857 | | 1.7026 | 27.0 | 162 | 1.6776 | 0.2857 | | 1.7026 | 28.0 | 168 | 1.6772 | 0.2857 | | 1.7403 | 29.0 | 174 | 1.6769 | 0.2857 | | 1.6716 | 30.0 | 180 | 1.6766 | 0.2857 | | 1.6716 | 31.0 | 186 | 1.6764 | 0.2857 | | 1.6806 | 32.0 | 192 | 1.6761 | 0.2857 | | 1.6806 | 33.0 | 198 | 1.6759 | 0.2857 | | 1.6988 | 34.0 | 204 | 1.6757 | 0.2857 | | 1.6893 | 35.0 | 210 | 1.6755 | 0.2857 | | 1.6893 | 36.0 | 216 | 1.6754 | 0.2857 | | 1.6718 | 37.0 | 222 | 1.6753 | 0.2857 | | 1.6718 | 38.0 | 228 | 1.6752 | 0.2857 | | 1.7279 | 39.0 | 234 | 1.6751 | 0.2857 | | 1.6803 | 40.0 | 240 | 1.6751 | 0.2857 | | 1.6803 | 41.0 | 246 | 1.6751 | 0.2857 | | 1.6785 | 42.0 | 252 | 1.6751 | 0.2857 | | 1.6785 | 43.0 | 258 | 1.6751 | 0.2857 | | 1.7169 | 44.0 | 264 | 1.6751 | 0.2857 | | 1.6924 | 45.0 | 270 | 1.6751 | 0.2857 | | 1.6924 | 46.0 | 276 | 1.6751 | 0.2857 | | 1.6961 | 47.0 | 282 | 1.6751 | 0.2857 | | 1.6961 | 48.0 | 288 | 1.6751 | 0.2857 | | 1.7415 | 49.0 | 294 | 1.6751 | 0.2857 | | 1.681 | 50.0 | 300 | 1.6751 | 0.2857 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_00001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7419 - Accuracy: 0.2439 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7664 | 0.2439 | | 1.7149 | 2.0 | 12 | 1.7652 | 0.2439 | | 1.7149 | 3.0 | 18 | 1.7640 | 0.2439 | | 1.7055 | 4.0 | 24 | 1.7627 | 0.2439 | | 1.7032 | 5.0 | 30 | 1.7616 | 0.2439 | | 1.7032 | 6.0 | 36 | 1.7604 | 0.2439 | | 1.7195 | 7.0 | 42 | 1.7594 | 0.2439 | | 1.7195 | 8.0 | 48 | 1.7584 | 0.2439 | | 1.6458 | 9.0 | 54 | 1.7574 | 0.2439 | | 1.7017 | 10.0 | 60 | 1.7564 | 0.2439 | | 1.7017 | 11.0 | 66 | 1.7554 | 0.2439 | | 1.7123 | 12.0 | 72 | 1.7545 | 0.2439 | | 1.7123 | 13.0 | 78 | 1.7536 | 0.2439 | | 1.6713 | 14.0 | 84 | 1.7528 | 0.2439 | | 1.6849 | 15.0 | 90 | 1.7520 | 0.2439 | | 1.6849 | 16.0 | 96 | 1.7512 | 0.2439 | | 1.7051 | 17.0 | 102 | 1.7505 | 0.2439 | | 1.7051 | 18.0 | 108 | 1.7498 | 0.2439 | | 1.6541 | 19.0 | 114 | 1.7491 | 0.2439 | | 1.7161 | 20.0 | 120 | 1.7484 | 0.2439 | | 1.7161 | 21.0 | 126 | 1.7478 | 0.2439 | | 1.6901 | 22.0 | 132 | 1.7472 | 0.2439 | | 
1.6901 | 23.0 | 138 | 1.7466 | 0.2439 | | 1.6528 | 24.0 | 144 | 1.7461 | 0.2439 | | 1.7234 | 25.0 | 150 | 1.7456 | 0.2439 | | 1.7234 | 26.0 | 156 | 1.7451 | 0.2439 | | 1.6839 | 27.0 | 162 | 1.7447 | 0.2439 | | 1.6839 | 28.0 | 168 | 1.7443 | 0.2439 | | 1.6859 | 29.0 | 174 | 1.7439 | 0.2439 | | 1.6955 | 30.0 | 180 | 1.7436 | 0.2439 | | 1.6955 | 31.0 | 186 | 1.7433 | 0.2439 | | 1.7014 | 32.0 | 192 | 1.7430 | 0.2439 | | 1.7014 | 33.0 | 198 | 1.7428 | 0.2439 | | 1.6319 | 34.0 | 204 | 1.7426 | 0.2439 | | 1.6586 | 35.0 | 210 | 1.7424 | 0.2439 | | 1.6586 | 36.0 | 216 | 1.7422 | 0.2439 | | 1.6897 | 37.0 | 222 | 1.7421 | 0.2439 | | 1.6897 | 38.0 | 228 | 1.7420 | 0.2439 | | 1.6863 | 39.0 | 234 | 1.7420 | 0.2439 | | 1.6801 | 40.0 | 240 | 1.7419 | 0.2439 | | 1.6801 | 41.0 | 246 | 1.7419 | 0.2439 | | 1.7183 | 42.0 | 252 | 1.7419 | 0.2439 | | 1.7183 | 43.0 | 258 | 1.7419 | 0.2439 | | 1.6529 | 44.0 | 264 | 1.7419 | 0.2439 | | 1.6913 | 45.0 | 270 | 1.7419 | 0.2439 | | 1.6913 | 46.0 | 276 | 1.7419 | 0.2439 | | 1.7139 | 47.0 | 282 | 1.7419 | 0.2439 | | 1.7139 | 48.0 | 288 | 1.7419 | 0.2439 | | 1.6464 | 49.0 | 294 | 1.7419 | 0.2439 | | 1.6966 | 50.0 | 300 | 1.7419 | 0.2439 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
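Across the five `sgd_00001` folds above, the per-epoch accuracy never moves during training, and the final accuracies average out to roughly chance level for a four-class problem, consistent with a 1e-05 learning rate being too small to change the predictions. A small sketch using the final accuracies copied from the five cards:

```python
# Final fold accuracies reported by the five sgd_00001 cards above.
fold_accuracies = [0.2, 0.2222, 0.2791, 0.2857, 0.2439]

mean_acc = sum(fold_accuracies) / len(fold_accuracies)
chance = 1 / 4  # four morphology classes

print(round(mean_acc, 4))  # 0.2462, i.e. roughly chance level
```

The gap between folds (0.20 to 0.29) reflects class imbalance in each held-out fold rather than any learning.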
hkivancoral/hushem_1x_deit_tiny_rms_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5885 - Accuracy: 0.2889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 4.1632 | 0.2444 | | 4.2585 | 2.0 | 12 | 2.5063 | 0.2444 | | 4.2585 | 3.0 | 18 | 1.7281 | 0.2444 | | 1.7534 | 4.0 | 24 | 1.3946 | 0.2444 | | 1.5909 | 5.0 | 30 | 1.5054 | 0.2444 | | 1.5909 | 6.0 | 36 | 1.6818 | 0.2444 | | 1.5201 | 7.0 | 42 | 1.5863 | 0.3556 | | 1.5201 | 8.0 | 48 | 1.5570 | 0.2667 | | 1.4849 | 9.0 | 54 | 1.5043 | 0.4667 | | 1.4118 | 10.0 | 60 | 1.4204 | 0.2444 | | 1.4118 | 11.0 | 66 | 1.4708 | 0.2667 | | 1.4258 | 12.0 | 72 | 1.4115 | 0.2444 | | 1.4258 | 13.0 | 78 | 1.5806 | 0.2667 | | 1.444 | 14.0 | 84 | 1.3600 | 0.3111 | | 1.4369 | 15.0 | 90 | 1.4011 | 0.2667 | | 1.4369 | 16.0 | 96 | 1.2994 | 0.4889 | | 1.4072 | 17.0 | 102 | 1.3804 | 0.4222 | | 1.4072 | 18.0 | 108 | 2.3179 | 0.2444 | | 1.3585 | 19.0 | 114 | 1.4391 | 0.3111 | | 1.3358 | 20.0 | 120 | 2.0579 | 0.2667 | | 1.3358 | 21.0 | 126 | 1.3519 | 0.3333 | | 1.432 | 22.0 | 132 | 1.4609 | 0.2889 | | 1.432 | 
23.0 | 138 | 2.1987 | 0.2444 | | 1.3028 | 24.0 | 144 | 1.5480 | 0.2444 | | 1.282 | 25.0 | 150 | 1.3898 | 0.2889 | | 1.282 | 26.0 | 156 | 1.2611 | 0.2444 | | 1.2714 | 27.0 | 162 | 1.7016 | 0.2444 | | 1.2714 | 28.0 | 168 | 1.3743 | 0.2889 | | 1.2632 | 29.0 | 174 | 1.4836 | 0.3778 | | 1.176 | 30.0 | 180 | 1.3073 | 0.4 | | 1.176 | 31.0 | 186 | 1.4096 | 0.2667 | | 1.1646 | 32.0 | 192 | 1.4023 | 0.4222 | | 1.1646 | 33.0 | 198 | 1.4449 | 0.4 | | 1.1055 | 34.0 | 204 | 1.6514 | 0.2889 | | 1.1692 | 35.0 | 210 | 1.4679 | 0.3111 | | 1.1692 | 36.0 | 216 | 1.6234 | 0.2667 | | 1.1228 | 37.0 | 222 | 1.6770 | 0.3333 | | 1.1228 | 38.0 | 228 | 1.5646 | 0.2667 | | 1.0125 | 39.0 | 234 | 1.5851 | 0.2889 | | 1.0301 | 40.0 | 240 | 1.5653 | 0.2889 | | 1.0301 | 41.0 | 246 | 1.5924 | 0.2667 | | 1.0049 | 42.0 | 252 | 1.5885 | 0.2889 | | 1.0049 | 43.0 | 258 | 1.5885 | 0.2889 | | 1.0088 | 44.0 | 264 | 1.5885 | 0.2889 | | 0.9822 | 45.0 | 270 | 1.5885 | 0.2889 | | 0.9822 | 46.0 | 276 | 1.5885 | 0.2889 | | 0.9822 | 47.0 | 282 | 1.5885 | 0.2889 | | 0.9822 | 48.0 | 288 | 1.5885 | 0.2889 | | 0.9898 | 49.0 | 294 | 1.5885 | 0.2889 | | 0.9935 | 50.0 | 300 | 1.5885 | 0.2889 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2949 - Accuracy: 0.3556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 6.6312 | 0.2667 | | 4.1013 | 2.0 | 12 | 2.1471 | 0.2444 | | 4.1013 | 3.0 | 18 | 1.7992 | 0.2444 | | 1.7936 | 4.0 | 24 | 1.5377 | 0.2667 | | 1.5908 | 5.0 | 30 | 1.6029 | 0.2444 | | 1.5908 | 6.0 | 36 | 1.5728 | 0.2444 | | 1.533 | 7.0 | 42 | 1.6272 | 0.2444 | | 1.533 | 8.0 | 48 | 1.5192 | 0.2667 | | 1.4887 | 9.0 | 54 | 1.4382 | 0.2444 | | 1.4288 | 10.0 | 60 | 1.4387 | 0.2444 | | 1.4288 | 11.0 | 66 | 1.4770 | 0.2667 | | 1.422 | 12.0 | 72 | 1.3624 | 0.2444 | | 1.422 | 13.0 | 78 | 1.4332 | 0.2667 | | 1.4231 | 14.0 | 84 | 1.4892 | 0.2444 | | 1.385 | 15.0 | 90 | 1.3102 | 0.4222 | | 1.385 | 16.0 | 96 | 1.3352 | 0.3333 | | 1.4799 | 17.0 | 102 | 1.6140 | 0.3111 | | 1.4799 | 18.0 | 108 | 1.4774 | 0.2444 | | 1.4126 | 19.0 | 114 | 1.3130 | 0.3333 | | 1.3511 | 20.0 | 120 | 1.2400 | 0.4222 | | 1.3511 | 21.0 | 126 | 1.5468 | 0.2667 | | 1.412 | 22.0 | 132 | 1.4525 | 0.2667 | | 1.412 | 23.0 
| 138 | 1.2484 | 0.3778 | | 1.3184 | 24.0 | 144 | 1.5741 | 0.2444 | | 1.3429 | 25.0 | 150 | 1.3487 | 0.4444 | | 1.3429 | 26.0 | 156 | 1.3203 | 0.3111 | | 1.2824 | 27.0 | 162 | 1.2257 | 0.4222 | | 1.2824 | 28.0 | 168 | 1.3520 | 0.2222 | | 1.2504 | 29.0 | 174 | 1.1717 | 0.4667 | | 1.235 | 30.0 | 180 | 1.2327 | 0.3778 | | 1.235 | 31.0 | 186 | 1.3371 | 0.4 | | 1.2286 | 32.0 | 192 | 1.3224 | 0.2889 | | 1.2286 | 33.0 | 198 | 1.2295 | 0.3778 | | 1.168 | 34.0 | 204 | 1.2716 | 0.3111 | | 1.2345 | 35.0 | 210 | 1.2743 | 0.3111 | | 1.2345 | 36.0 | 216 | 1.3964 | 0.3778 | | 1.2057 | 37.0 | 222 | 1.3905 | 0.3556 | | 1.2057 | 38.0 | 228 | 1.2908 | 0.3778 | | 1.1197 | 39.0 | 234 | 1.2888 | 0.3556 | | 1.1518 | 40.0 | 240 | 1.2704 | 0.4 | | 1.1518 | 41.0 | 246 | 1.3067 | 0.3556 | | 1.1311 | 42.0 | 252 | 1.2949 | 0.3556 | | 1.1311 | 43.0 | 258 | 1.2949 | 0.3556 | | 1.109 | 44.0 | 264 | 1.2949 | 0.3556 | | 1.1464 | 45.0 | 270 | 1.2949 | 0.3556 | | 1.1464 | 46.0 | 276 | 1.2949 | 0.3556 | | 1.0982 | 47.0 | 282 | 1.2949 | 0.3556 | | 1.0982 | 48.0 | 288 | 1.2949 | 0.3556 | | 1.1635 | 49.0 | 294 | 1.2949 | 0.3556 | | 1.1115 | 50.0 | 300 | 1.2949 | 0.3556 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1536 - Accuracy: 0.4186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 3.8148 | 0.2558 | | 4.0682 | 2.0 | 12 | 1.5106 | 0.2558 | | 4.0682 | 3.0 | 18 | 2.1015 | 0.2558 | | 1.8697 | 4.0 | 24 | 2.1521 | 0.2558 | | 1.6204 | 5.0 | 30 | 2.0540 | 0.2558 | | 1.6204 | 6.0 | 36 | 1.4487 | 0.2558 | | 1.5557 | 7.0 | 42 | 1.5322 | 0.2326 | | 1.5557 | 8.0 | 48 | 1.6480 | 0.2558 | | 1.5276 | 9.0 | 54 | 1.5085 | 0.2558 | | 1.4446 | 10.0 | 60 | 1.3921 | 0.2558 | | 1.4446 | 11.0 | 66 | 1.5703 | 0.2558 | | 1.4728 | 12.0 | 72 | 1.3608 | 0.2791 | | 1.4728 | 13.0 | 78 | 1.4250 | 0.3488 | | 1.3652 | 14.0 | 84 | 1.4495 | 0.2558 | | 1.3593 | 15.0 | 90 | 1.4182 | 0.3023 | | 1.3593 | 16.0 | 96 | 1.5418 | 0.3023 | | 1.2943 | 17.0 | 102 | 1.4454 | 0.3256 | | 1.2943 | 18.0 | 108 | 1.5941 | 0.3721 | | 1.2915 | 19.0 | 114 | 1.4889 | 0.2558 | | 1.2591 | 20.0 | 120 | 1.3804 | 0.3488 | | 1.2591 | 21.0 | 126 | 1.8125 | 0.2558 | | 1.2263 | 22.0 | 132 | 1.4098 | 0.3023 | | 1.2263 
| 23.0 | 138 | 1.4818 | 0.2558 | | 1.1885 | 24.0 | 144 | 1.4257 | 0.3721 | | 1.1814 | 25.0 | 150 | 1.4317 | 0.3023 | | 1.1814 | 26.0 | 156 | 1.3854 | 0.3488 | | 1.1163 | 27.0 | 162 | 1.9054 | 0.3256 | | 1.1163 | 28.0 | 168 | 1.3109 | 0.3488 | | 1.0609 | 29.0 | 174 | 1.3896 | 0.3488 | | 1.1038 | 30.0 | 180 | 1.3466 | 0.3256 | | 1.1038 | 31.0 | 186 | 1.3101 | 0.3256 | | 1.0099 | 32.0 | 192 | 1.2865 | 0.3721 | | 1.0099 | 33.0 | 198 | 1.2846 | 0.3721 | | 1.0297 | 34.0 | 204 | 1.2587 | 0.4186 | | 0.964 | 35.0 | 210 | 1.2832 | 0.3953 | | 0.964 | 36.0 | 216 | 1.1929 | 0.3721 | | 0.9335 | 37.0 | 222 | 1.2162 | 0.3953 | | 0.9335 | 38.0 | 228 | 1.1906 | 0.4419 | | 0.8668 | 39.0 | 234 | 1.1859 | 0.4186 | | 0.8296 | 40.0 | 240 | 1.1516 | 0.4884 | | 0.8296 | 41.0 | 246 | 1.1577 | 0.4651 | | 0.8332 | 42.0 | 252 | 1.1536 | 0.4186 | | 0.8332 | 43.0 | 258 | 1.1536 | 0.4186 | | 0.8289 | 44.0 | 264 | 1.1536 | 0.4186 | | 0.8217 | 45.0 | 270 | 1.1536 | 0.4186 | | 0.8217 | 46.0 | 276 | 1.1536 | 0.4186 | | 0.8205 | 47.0 | 282 | 1.1536 | 0.4186 | | 0.8205 | 48.0 | 288 | 1.1536 | 0.4186 | | 0.8548 | 49.0 | 294 | 1.1536 | 0.4186 | | 0.8042 | 50.0 | 300 | 1.1536 | 0.4186 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0712 - Accuracy: 0.4762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 5.0165 | 0.2381 | | 4.2481 | 2.0 | 12 | 3.3074 | 0.2381 | | 4.2481 | 3.0 | 18 | 1.5288 | 0.2619 | | 2.0024 | 4.0 | 24 | 1.5375 | 0.2381 | | 1.6731 | 5.0 | 30 | 1.4069 | 0.2619 | | 1.6731 | 6.0 | 36 | 1.8969 | 0.2381 | | 1.5329 | 7.0 | 42 | 1.4811 | 0.2381 | | 1.5329 | 8.0 | 48 | 1.4117 | 0.2619 | | 1.475 | 9.0 | 54 | 1.4704 | 0.2619 | | 1.4639 | 10.0 | 60 | 1.4459 | 0.2381 | | 1.4639 | 11.0 | 66 | 1.3572 | 0.4524 | | 1.4524 | 12.0 | 72 | 1.2630 | 0.4524 | | 1.4524 | 13.0 | 78 | 1.2843 | 0.4524 | | 1.4025 | 14.0 | 84 | 1.3420 | 0.2857 | | 1.3666 | 15.0 | 90 | 1.4060 | 0.2381 | | 1.3666 | 16.0 | 96 | 1.2621 | 0.3810 | | 1.3178 | 17.0 | 102 | 1.2969 | 0.2857 | | 1.3178 | 18.0 | 108 | 1.2881 | 0.3333 | | 1.3667 | 19.0 | 114 | 1.3980 | 0.2857 | | 1.3043 | 20.0 | 120 | 1.5195 | 0.2857 | | 1.3043 | 21.0 | 126 | 1.1841 | 0.4048 | | 1.2859 | 22.0 | 132 | 1.0567 | 0.5238 | | 1.2859 
| 23.0 | 138 | 1.2258 | 0.2619 | | 1.2496 | 24.0 | 144 | 1.2372 | 0.2857 | | 1.252 | 25.0 | 150 | 1.4386 | 0.3333 | | 1.252 | 26.0 | 156 | 1.1416 | 0.3810 | | 1.2296 | 27.0 | 162 | 1.0872 | 0.4286 | | 1.2296 | 28.0 | 168 | 1.4121 | 0.2857 | | 1.1581 | 29.0 | 174 | 1.0555 | 0.5476 | | 1.2027 | 30.0 | 180 | 1.1296 | 0.4762 | | 1.2027 | 31.0 | 186 | 1.2095 | 0.4048 | | 1.1595 | 32.0 | 192 | 1.0821 | 0.4762 | | 1.1595 | 33.0 | 198 | 1.1681 | 0.3810 | | 1.1909 | 34.0 | 204 | 1.1147 | 0.4762 | | 1.1121 | 35.0 | 210 | 1.0734 | 0.4048 | | 1.1121 | 36.0 | 216 | 1.0002 | 0.5238 | | 1.1218 | 37.0 | 222 | 1.1912 | 0.3095 | | 1.1218 | 38.0 | 228 | 1.0883 | 0.4524 | | 1.1024 | 39.0 | 234 | 1.1229 | 0.4286 | | 1.0678 | 40.0 | 240 | 1.0903 | 0.4762 | | 1.0678 | 41.0 | 246 | 1.0717 | 0.4762 | | 1.058 | 42.0 | 252 | 1.0712 | 0.4762 | | 1.058 | 43.0 | 258 | 1.0712 | 0.4762 | | 1.0512 | 44.0 | 264 | 1.0712 | 0.4762 | | 1.0743 | 45.0 | 270 | 1.0712 | 0.4762 | | 1.0743 | 46.0 | 276 | 1.0712 | 0.4762 | | 1.0691 | 47.0 | 282 | 1.0712 | 0.4762 | | 1.0691 | 48.0 | 288 | 1.0712 | 0.4762 | | 1.052 | 49.0 | 294 | 1.0712 | 0.4762 | | 1.066 | 50.0 | 300 | 1.0712 | 0.4762 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1358 - Accuracy: 0.6098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 4.7231 | 0.2683 | | 4.2141 | 2.0 | 12 | 1.8531 | 0.2683 | | 4.2141 | 3.0 | 18 | 1.6449 | 0.2439 | | 1.9845 | 4.0 | 24 | 1.4265 | 0.2439 | | 1.5807 | 5.0 | 30 | 2.0165 | 0.2439 | | 1.5807 | 6.0 | 36 | 1.5975 | 0.2683 | | 1.5979 | 7.0 | 42 | 1.4305 | 0.3171 | | 1.5979 | 8.0 | 48 | 1.4587 | 0.2683 | | 1.4992 | 9.0 | 54 | 1.2917 | 0.3171 | | 1.4954 | 10.0 | 60 | 1.2462 | 0.4390 | | 1.4954 | 11.0 | 66 | 1.2479 | 0.2683 | | 1.415 | 12.0 | 72 | 1.1246 | 0.5122 | | 1.415 | 13.0 | 78 | 1.1689 | 0.4878 | | 1.374 | 14.0 | 84 | 1.3767 | 0.2927 | | 1.3675 | 15.0 | 90 | 1.1692 | 0.4146 | | 1.3675 | 16.0 | 96 | 1.6528 | 0.2927 | | 1.319 | 17.0 | 102 | 1.3151 | 0.3659 | | 1.319 | 18.0 | 108 | 1.1475 | 0.4146 | | 1.3335 | 19.0 | 114 | 1.1506 | 0.3415 | | 1.2819 | 20.0 | 120 | 1.2300 | 0.3902 | | 1.2819 | 21.0 | 126 | 1.1641 | 0.4146 | | 1.2507 | 22.0 | 132 | 1.4148 | 0.3659 | | 1.2507 | 
23.0 | 138 | 1.3061 | 0.3415 | | 1.2134 | 24.0 | 144 | 1.2367 | 0.3415 | | 1.2611 | 25.0 | 150 | 1.2383 | 0.4878 | | 1.2611 | 26.0 | 156 | 1.0375 | 0.4878 | | 1.2053 | 27.0 | 162 | 1.1983 | 0.4878 | | 1.2053 | 28.0 | 168 | 1.1898 | 0.4146 | | 1.1593 | 29.0 | 174 | 1.1479 | 0.4878 | | 1.2426 | 30.0 | 180 | 1.1382 | 0.5610 | | 1.2426 | 31.0 | 186 | 1.0558 | 0.5610 | | 1.1866 | 32.0 | 192 | 1.1895 | 0.4390 | | 1.1866 | 33.0 | 198 | 1.2172 | 0.4146 | | 1.1453 | 34.0 | 204 | 1.3773 | 0.4146 | | 1.1026 | 35.0 | 210 | 1.1168 | 0.5122 | | 1.1026 | 36.0 | 216 | 1.1184 | 0.5610 | | 1.131 | 37.0 | 222 | 1.1344 | 0.5366 | | 1.131 | 38.0 | 228 | 1.0932 | 0.5122 | | 1.1098 | 39.0 | 234 | 1.1070 | 0.6098 | | 1.0797 | 40.0 | 240 | 1.1237 | 0.5854 | | 1.0797 | 41.0 | 246 | 1.1366 | 0.6098 | | 1.0648 | 42.0 | 252 | 1.1358 | 0.6098 | | 1.0648 | 43.0 | 258 | 1.1358 | 0.6098 | | 1.0281 | 44.0 | 264 | 1.1358 | 0.6098 | | 1.0542 | 45.0 | 270 | 1.1358 | 0.6098 | | 1.0542 | 46.0 | 276 | 1.1358 | 0.6098 | | 1.0409 | 47.0 | 282 | 1.1358 | 0.6098 | | 1.0409 | 48.0 | 288 | 1.1358 | 0.6098 | | 1.0504 | 49.0 | 294 | 1.1358 | 0.6098 | | 1.0111 | 50.0 | 300 | 1.1358 | 0.6098 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_0001_fold1

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4166
- Accuracy: 0.5556

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 2.1314 | 0.2444 |
| 2.0481 | 2.0 | 12 | 1.5573 | 0.2444 |
| 2.0481 | 3.0 | 18 | 1.4598 | 0.2444 |
| 1.5099 | 4.0 | 24 | 1.4194 | 0.2444 |
| 1.4253 | 5.0 | 30 | 1.3528 | 0.2667 |
| 1.4253 | 6.0 | 36 | 1.6348 | 0.2444 |
| 1.3319 | 7.0 | 42 | 1.3901 | 0.4444 |
| 1.3319 | 8.0 | 48 | 1.3151 | 0.2889 |
| 1.2142 | 9.0 | 54 | 1.3395 | 0.3333 |
| 1.1416 | 10.0 | 60 | 1.4176 | 0.3556 |
| 1.1416 | 11.0 | 66 | 1.9072 | 0.2667 |
| 0.9889 | 12.0 | 72 | 1.7446 | 0.3111 |
| 0.9889 | 13.0 | 78 | 1.4748 | 0.3778 |
| 0.8552 | 14.0 | 84 | 1.7450 | 0.3778 |
| 0.6798 | 15.0 | 90 | 1.6042 | 0.4889 |
| 0.6798 | 16.0 | 96 | 1.5863 | 0.4222 |
| 0.563 | 17.0 | 102 | 1.9311 | 0.4 |
| 0.563 | 18.0 | 108 | 1.9509 | 0.4444 |
| 0.3845 | 19.0 | 114 | 2.1256 | 0.4667 |
| 0.2041 | 20.0 | 120 | 2.4131 | 0.4889 |
| 0.2041 | 21.0 | 126 | 2.1029 | 0.4667 |
| 0.1874 | 22.0 | 132 | 2.0412 | 0.5778 |
| 0.1874 | 23.0 | 138 | 2.4952 | 0.4889 |
| 0.0735 | 24.0 | 144 | 2.8992 | 0.4667 |
| 0.0229 | 25.0 | 150 | 2.7495 | 0.5556 |
| 0.0229 | 26.0 | 156 | 3.2879 | 0.4667 |
| 0.0293 | 27.0 | 162 | 3.1526 | 0.5111 |
| 0.0293 | 28.0 | 168 | 3.0123 | 0.5333 |
| 0.0023 | 29.0 | 174 | 3.0812 | 0.5556 |
| 0.0008 | 30.0 | 180 | 3.1384 | 0.5556 |
| 0.0008 | 31.0 | 186 | 3.2017 | 0.5556 |
| 0.0005 | 32.0 | 192 | 3.2443 | 0.5556 |
| 0.0005 | 33.0 | 198 | 3.2806 | 0.5556 |
| 0.0005 | 34.0 | 204 | 3.3167 | 0.5556 |
| 0.0004 | 35.0 | 210 | 3.3393 | 0.5556 |
| 0.0004 | 36.0 | 216 | 3.3662 | 0.5556 |
| 0.0004 | 37.0 | 222 | 3.3843 | 0.5556 |
| 0.0004 | 38.0 | 228 | 3.3970 | 0.5556 |
| 0.0003 | 39.0 | 234 | 3.4053 | 0.5556 |
| 0.0003 | 40.0 | 240 | 3.4123 | 0.5556 |
| 0.0003 | 41.0 | 246 | 3.4159 | 0.5556 |
| 0.0003 | 42.0 | 252 | 3.4166 | 0.5556 |
| 0.0003 | 43.0 | 258 | 3.4166 | 0.5556 |
| 0.0003 | 44.0 | 264 | 3.4166 | 0.5556 |
| 0.0003 | 45.0 | 270 | 3.4166 | 0.5556 |
| 0.0003 | 46.0 | 276 | 3.4166 | 0.5556 |
| 0.0003 | 47.0 | 282 | 3.4166 | 0.5556 |
| 0.0003 | 48.0 | 288 | 3.4166 | 0.5556 |
| 0.0003 | 49.0 | 294 | 3.4166 | 0.5556 |
| 0.0003 | 50.0 | 300 | 3.4166 | 0.5556 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
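All of these cards share the same four-way HuSHeM label set shown in the `model_labels` field above. As a minimal sketch (pure Python, no model download; the logits below are made-up numbers for illustration, not real model output), this is how a classifier head's raw logits map back to those labels via softmax and argmax:

```python
import math

# Label order shared by the model cards above (model_labels field).
LABELS = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]

def logits_to_prediction(logits):
    """Softmax the raw classifier logits and return (label, probability)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Made-up logits for a single image -- NOT real model output.
label, prob = logits_to_prediction([0.2, 2.9, -1.1, 0.5])
```

With those hypothetical logits the second entry dominates, so the prediction comes back as `02_tapered` with roughly 85% confidence.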
hkivancoral/hushem_1x_deit_tiny_rms_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_0001_fold2

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1133
- Accuracy: 0.5556

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.9323 | 0.2444 |
| 2.0865 | 2.0 | 12 | 1.4427 | 0.2444 |
| 2.0865 | 3.0 | 18 | 1.4293 | 0.2444 |
| 1.4431 | 4.0 | 24 | 1.3952 | 0.4667 |
| 1.4003 | 5.0 | 30 | 1.2967 | 0.4 |
| 1.4003 | 6.0 | 36 | 1.4719 | 0.2444 |
| 1.3496 | 7.0 | 42 | 1.3224 | 0.3556 |
| 1.3496 | 8.0 | 48 | 1.4673 | 0.3778 |
| 1.2064 | 9.0 | 54 | 1.4551 | 0.2667 |
| 1.1859 | 10.0 | 60 | 1.3687 | 0.3111 |
| 1.1859 | 11.0 | 66 | 1.2313 | 0.4444 |
| 1.0817 | 12.0 | 72 | 1.1514 | 0.4444 |
| 1.0817 | 13.0 | 78 | 1.1701 | 0.4444 |
| 1.0144 | 14.0 | 84 | 1.2204 | 0.4222 |
| 0.8578 | 15.0 | 90 | 1.1603 | 0.4889 |
| 0.8578 | 16.0 | 96 | 1.0987 | 0.5111 |
| 0.8063 | 17.0 | 102 | 0.9277 | 0.5111 |
| 0.8063 | 18.0 | 108 | 1.2038 | 0.5333 |
| 0.601 | 19.0 | 114 | 0.9886 | 0.6 |
| 0.465 | 20.0 | 120 | 1.5667 | 0.5111 |
| 0.465 | 21.0 | 126 | 1.8238 | 0.4889 |
| 0.2956 | 22.0 | 132 | 1.6043 | 0.4222 |
| 0.2956 | 23.0 | 138 | 1.2746 | 0.4889 |
| 0.3513 | 24.0 | 144 | 1.6389 | 0.5556 |
| 0.2137 | 25.0 | 150 | 1.6350 | 0.4889 |
| 0.2137 | 26.0 | 156 | 1.5926 | 0.4667 |
| 0.191 | 27.0 | 162 | 1.8516 | 0.4889 |
| 0.191 | 28.0 | 168 | 2.3628 | 0.4889 |
| 0.0581 | 29.0 | 174 | 2.3998 | 0.4889 |
| 0.0517 | 30.0 | 180 | 2.3913 | 0.5333 |
| 0.0517 | 31.0 | 186 | 2.7108 | 0.5556 |
| 0.005 | 32.0 | 192 | 2.8104 | 0.5556 |
| 0.005 | 33.0 | 198 | 2.8829 | 0.5556 |
| 0.0008 | 34.0 | 204 | 2.9326 | 0.5333 |
| 0.0006 | 35.0 | 210 | 2.9793 | 0.5556 |
| 0.0006 | 36.0 | 216 | 3.0150 | 0.5556 |
| 0.0005 | 37.0 | 222 | 3.0520 | 0.5556 |
| 0.0005 | 38.0 | 228 | 3.0772 | 0.5556 |
| 0.0004 | 39.0 | 234 | 3.0948 | 0.5556 |
| 0.0004 | 40.0 | 240 | 3.1038 | 0.5556 |
| 0.0004 | 41.0 | 246 | 3.1116 | 0.5556 |
| 0.0004 | 42.0 | 252 | 3.1133 | 0.5556 |
| 0.0004 | 43.0 | 258 | 3.1133 | 0.5556 |
| 0.0004 | 44.0 | 264 | 3.1133 | 0.5556 |
| 0.0004 | 45.0 | 270 | 3.1133 | 0.5556 |
| 0.0004 | 46.0 | 276 | 3.1133 | 0.5556 |
| 0.0004 | 47.0 | 282 | 3.1133 | 0.5556 |
| 0.0004 | 48.0 | 288 | 3.1133 | 0.5556 |
| 0.0004 | 49.0 | 294 | 3.1133 | 0.5556 |
| 0.0004 | 50.0 | 300 | 3.1133 | 0.5556 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_0001_fold3

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2114
- Accuracy: 0.5814

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.4590 | 0.2558 |
| 2.1915 | 2.0 | 12 | 1.4820 | 0.2558 |
| 2.1915 | 3.0 | 18 | 1.4635 | 0.3488 |
| 1.4733 | 4.0 | 24 | 1.6507 | 0.2558 |
| 1.4003 | 5.0 | 30 | 1.5038 | 0.2558 |
| 1.4003 | 6.0 | 36 | 1.5372 | 0.2093 |
| 1.28 | 7.0 | 42 | 1.4420 | 0.3023 |
| 1.28 | 8.0 | 48 | 1.3681 | 0.3488 |
| 1.2064 | 9.0 | 54 | 1.4133 | 0.3023 |
| 1.1588 | 10.0 | 60 | 1.2991 | 0.4419 |
| 1.1588 | 11.0 | 66 | 1.2547 | 0.4651 |
| 1.133 | 12.0 | 72 | 1.2924 | 0.4884 |
| 1.133 | 13.0 | 78 | 1.2566 | 0.4884 |
| 1.0357 | 14.0 | 84 | 1.1915 | 0.5349 |
| 0.8616 | 15.0 | 90 | 1.2058 | 0.5116 |
| 0.8616 | 16.0 | 96 | 1.1399 | 0.5349 |
| 0.6595 | 17.0 | 102 | 1.1462 | 0.5581 |
| 0.6595 | 18.0 | 108 | 1.2856 | 0.5116 |
| 0.501 | 19.0 | 114 | 1.1528 | 0.6047 |
| 0.3761 | 20.0 | 120 | 1.2487 | 0.6047 |
| 0.3761 | 21.0 | 126 | 1.9335 | 0.5581 |
| 0.1818 | 22.0 | 132 | 2.0855 | 0.5349 |
| 0.1818 | 23.0 | 138 | 2.8198 | 0.5349 |
| 0.0677 | 24.0 | 144 | 1.5837 | 0.6279 |
| 0.0703 | 25.0 | 150 | 2.1739 | 0.5116 |
| 0.0703 | 26.0 | 156 | 2.0640 | 0.5581 |
| 0.0053 | 27.0 | 162 | 2.0886 | 0.5814 |
| 0.0053 | 28.0 | 168 | 2.1352 | 0.5814 |
| 0.0006 | 29.0 | 174 | 2.1434 | 0.5814 |
| 0.0004 | 30.0 | 180 | 2.1524 | 0.5814 |
| 0.0004 | 31.0 | 186 | 2.1594 | 0.5814 |
| 0.0003 | 32.0 | 192 | 2.1659 | 0.5814 |
| 0.0003 | 33.0 | 198 | 2.1759 | 0.5814 |
| 0.0003 | 34.0 | 204 | 2.1825 | 0.5814 |
| 0.0003 | 35.0 | 210 | 2.1918 | 0.5814 |
| 0.0003 | 36.0 | 216 | 2.1964 | 0.5814 |
| 0.0002 | 37.0 | 222 | 2.2014 | 0.5814 |
| 0.0002 | 38.0 | 228 | 2.2049 | 0.5814 |
| 0.0002 | 39.0 | 234 | 2.2075 | 0.5814 |
| 0.0002 | 40.0 | 240 | 2.2099 | 0.5814 |
| 0.0002 | 41.0 | 246 | 2.2110 | 0.5814 |
| 0.0002 | 42.0 | 252 | 2.2114 | 0.5814 |
| 0.0002 | 43.0 | 258 | 2.2114 | 0.5814 |
| 0.0002 | 44.0 | 264 | 2.2114 | 0.5814 |
| 0.0002 | 45.0 | 270 | 2.2114 | 0.5814 |
| 0.0002 | 46.0 | 276 | 2.2114 | 0.5814 |
| 0.0002 | 47.0 | 282 | 2.2114 | 0.5814 |
| 0.0002 | 48.0 | 288 | 2.2114 | 0.5814 |
| 0.0002 | 49.0 | 294 | 2.2114 | 0.5814 |
| 0.0002 | 50.0 | 300 | 2.2114 | 0.5814 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_0001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3903
- Accuracy: 0.5714

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.5699 | 0.2619 |
| 2.0801 | 2.0 | 12 | 1.5693 | 0.2381 |
| 2.0801 | 3.0 | 18 | 1.6087 | 0.2619 |
| 1.5352 | 4.0 | 24 | 1.4372 | 0.2619 |
| 1.4323 | 5.0 | 30 | 1.3212 | 0.3095 |
| 1.4323 | 6.0 | 36 | 1.3803 | 0.2381 |
| 1.3894 | 7.0 | 42 | 1.4606 | 0.4524 |
| 1.3894 | 8.0 | 48 | 1.5543 | 0.2619 |
| 1.294 | 9.0 | 54 | 1.1365 | 0.5 |
| 1.1627 | 10.0 | 60 | 1.3219 | 0.3571 |
| 1.1627 | 11.0 | 66 | 1.0508 | 0.5714 |
| 1.0159 | 12.0 | 72 | 1.0736 | 0.5 |
| 1.0159 | 13.0 | 78 | 1.6175 | 0.3571 |
| 0.8051 | 14.0 | 84 | 1.4409 | 0.4524 |
| 0.5869 | 15.0 | 90 | 2.1188 | 0.4286 |
| 0.5869 | 16.0 | 96 | 1.8546 | 0.5476 |
| 0.3044 | 17.0 | 102 | 1.7485 | 0.5 |
| 0.3044 | 18.0 | 108 | 1.6544 | 0.5476 |
| 0.2005 | 19.0 | 114 | 1.7817 | 0.5714 |
| 0.0634 | 20.0 | 120 | 2.6836 | 0.5238 |
| 0.0634 | 21.0 | 126 | 2.3476 | 0.5714 |
| 0.0488 | 22.0 | 132 | 2.3551 | 0.5476 |
| 0.0488 | 23.0 | 138 | 2.4123 | 0.5714 |
| 0.0014 | 24.0 | 144 | 2.3855 | 0.5714 |
| 0.0006 | 25.0 | 150 | 2.3709 | 0.5714 |
| 0.0006 | 26.0 | 156 | 2.3623 | 0.5714 |
| 0.0004 | 27.0 | 162 | 2.3621 | 0.5714 |
| 0.0004 | 28.0 | 168 | 2.3646 | 0.5952 |
| 0.0003 | 29.0 | 174 | 2.3639 | 0.5952 |
| 0.0003 | 30.0 | 180 | 2.3665 | 0.5952 |
| 0.0003 | 31.0 | 186 | 2.3692 | 0.5952 |
| 0.0002 | 32.0 | 192 | 2.3723 | 0.5952 |
| 0.0002 | 33.0 | 198 | 2.3750 | 0.5952 |
| 0.0002 | 34.0 | 204 | 2.3777 | 0.5714 |
| 0.0002 | 35.0 | 210 | 2.3806 | 0.5714 |
| 0.0002 | 36.0 | 216 | 2.3834 | 0.5714 |
| 0.0002 | 37.0 | 222 | 2.3855 | 0.5714 |
| 0.0002 | 38.0 | 228 | 2.3872 | 0.5714 |
| 0.0001 | 39.0 | 234 | 2.3885 | 0.5714 |
| 0.0001 | 40.0 | 240 | 2.3895 | 0.5714 |
| 0.0001 | 41.0 | 246 | 2.3902 | 0.5714 |
| 0.0001 | 42.0 | 252 | 2.3903 | 0.5714 |
| 0.0001 | 43.0 | 258 | 2.3903 | 0.5714 |
| 0.0001 | 44.0 | 264 | 2.3903 | 0.5714 |
| 0.0001 | 45.0 | 270 | 2.3903 | 0.5714 |
| 0.0001 | 46.0 | 276 | 2.3903 | 0.5714 |
| 0.0001 | 47.0 | 282 | 2.3903 | 0.5714 |
| 0.0001 | 48.0 | 288 | 2.3903 | 0.5714 |
| 0.0001 | 49.0 | 294 | 2.3903 | 0.5714 |
| 0.0001 | 50.0 | 300 | 2.3903 | 0.5714 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_0001_fold5

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7903
- Accuracy: 0.6098

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.6663 | 0.2683 |
| 2.1037 | 2.0 | 12 | 1.5022 | 0.2439 |
| 2.1037 | 3.0 | 18 | 1.3886 | 0.2439 |
| 1.5578 | 4.0 | 24 | 1.5306 | 0.2683 |
| 1.3692 | 5.0 | 30 | 1.1860 | 0.4390 |
| 1.3692 | 6.0 | 36 | 1.1738 | 0.4634 |
| 1.2281 | 7.0 | 42 | 1.1634 | 0.4146 |
| 1.2281 | 8.0 | 48 | 1.0062 | 0.4878 |
| 1.0442 | 9.0 | 54 | 1.0814 | 0.5366 |
| 0.7932 | 10.0 | 60 | 1.0549 | 0.5366 |
| 0.7932 | 11.0 | 66 | 1.1757 | 0.5610 |
| 0.3677 | 12.0 | 72 | 1.3513 | 0.6829 |
| 0.3677 | 13.0 | 78 | 1.1722 | 0.6098 |
| 0.2156 | 14.0 | 84 | 1.5096 | 0.5854 |
| 0.0882 | 15.0 | 90 | 1.2491 | 0.6341 |
| 0.0882 | 16.0 | 96 | 1.4974 | 0.6098 |
| 0.0242 | 17.0 | 102 | 1.6715 | 0.6341 |
| 0.0242 | 18.0 | 108 | 1.6860 | 0.5854 |
| 0.0023 | 19.0 | 114 | 1.6856 | 0.5854 |
| 0.0006 | 20.0 | 120 | 1.6918 | 0.5854 |
| 0.0006 | 21.0 | 126 | 1.7001 | 0.5854 |
| 0.0004 | 22.0 | 132 | 1.7120 | 0.5854 |
| 0.0004 | 23.0 | 138 | 1.7178 | 0.5854 |
| 0.0003 | 24.0 | 144 | 1.7236 | 0.6098 |
| 0.0003 | 25.0 | 150 | 1.7313 | 0.6098 |
| 0.0003 | 26.0 | 156 | 1.7370 | 0.6098 |
| 0.0002 | 27.0 | 162 | 1.7449 | 0.6098 |
| 0.0002 | 28.0 | 168 | 1.7492 | 0.6098 |
| 0.0002 | 29.0 | 174 | 1.7547 | 0.6098 |
| 0.0002 | 30.0 | 180 | 1.7601 | 0.6098 |
| 0.0002 | 31.0 | 186 | 1.7659 | 0.6098 |
| 0.0002 | 32.0 | 192 | 1.7694 | 0.6098 |
| 0.0002 | 33.0 | 198 | 1.7734 | 0.6098 |
| 0.0002 | 34.0 | 204 | 1.7771 | 0.6098 |
| 0.0002 | 35.0 | 210 | 1.7802 | 0.6098 |
| 0.0002 | 36.0 | 216 | 1.7829 | 0.6098 |
| 0.0002 | 37.0 | 222 | 1.7850 | 0.6098 |
| 0.0002 | 38.0 | 228 | 1.7868 | 0.6098 |
| 0.0002 | 39.0 | 234 | 1.7883 | 0.6098 |
| 0.0001 | 40.0 | 240 | 1.7895 | 0.6098 |
| 0.0001 | 41.0 | 246 | 1.7900 | 0.6098 |
| 0.0002 | 42.0 | 252 | 1.7903 | 0.6098 |
| 0.0002 | 43.0 | 258 | 1.7903 | 0.6098 |
| 0.0002 | 44.0 | 264 | 1.7903 | 0.6098 |
| 0.0002 | 45.0 | 270 | 1.7903 | 0.6098 |
| 0.0002 | 46.0 | 276 | 1.7903 | 0.6098 |
| 0.0002 | 47.0 | 282 | 1.7903 | 0.6098 |
| 0.0002 | 48.0 | 288 | 1.7903 | 0.6098 |
| 0.0002 | 49.0 | 294 | 1.7903 | 0.6098 |
| 0.0001 | 50.0 | 300 | 1.7903 | 0.6098 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_00001_fold1

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2620
- Accuracy: 0.6222

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.3330 | 0.3111 |
| 1.415 | 2.0 | 12 | 1.1496 | 0.4222 |
| 1.415 | 3.0 | 18 | 1.0095 | 0.6 |
| 0.6844 | 4.0 | 24 | 1.0528 | 0.5333 |
| 0.3289 | 5.0 | 30 | 0.8970 | 0.6 |
| 0.3289 | 6.0 | 36 | 1.2025 | 0.5111 |
| 0.1275 | 7.0 | 42 | 0.9016 | 0.6 |
| 0.1275 | 8.0 | 48 | 1.0450 | 0.5778 |
| 0.049 | 9.0 | 54 | 1.1767 | 0.5556 |
| 0.0201 | 10.0 | 60 | 1.2285 | 0.5333 |
| 0.0201 | 11.0 | 66 | 1.0471 | 0.6 |
| 0.0071 | 12.0 | 72 | 0.9300 | 0.6444 |
| 0.0071 | 13.0 | 78 | 1.1280 | 0.5778 |
| 0.0042 | 14.0 | 84 | 1.1318 | 0.5556 |
| 0.0029 | 15.0 | 90 | 1.1503 | 0.5556 |
| 0.0029 | 16.0 | 96 | 1.0998 | 0.5778 |
| 0.0023 | 17.0 | 102 | 1.1889 | 0.5778 |
| 0.0023 | 18.0 | 108 | 1.2431 | 0.5778 |
| 0.0018 | 19.0 | 114 | 1.2158 | 0.5778 |
| 0.0016 | 20.0 | 120 | 1.2220 | 0.6 |
| 0.0016 | 21.0 | 126 | 1.1974 | 0.6 |
| 0.0014 | 22.0 | 132 | 1.2207 | 0.6 |
| 0.0014 | 23.0 | 138 | 1.2242 | 0.6 |
| 0.0013 | 24.0 | 144 | 1.2118 | 0.6 |
| 0.0011 | 25.0 | 150 | 1.2264 | 0.6222 |
| 0.0011 | 26.0 | 156 | 1.2250 | 0.6 |
| 0.0011 | 27.0 | 162 | 1.2237 | 0.6 |
| 0.0011 | 28.0 | 168 | 1.2290 | 0.6 |
| 0.001 | 29.0 | 174 | 1.2254 | 0.6222 |
| 0.0009 | 30.0 | 180 | 1.2294 | 0.6222 |
| 0.0009 | 31.0 | 186 | 1.2336 | 0.6222 |
| 0.0009 | 32.0 | 192 | 1.2394 | 0.6222 |
| 0.0009 | 33.0 | 198 | 1.2441 | 0.6222 |
| 0.0008 | 34.0 | 204 | 1.2483 | 0.6 |
| 0.0008 | 35.0 | 210 | 1.2484 | 0.6 |
| 0.0008 | 36.0 | 216 | 1.2564 | 0.6 |
| 0.0008 | 37.0 | 222 | 1.2583 | 0.6222 |
| 0.0008 | 38.0 | 228 | 1.2617 | 0.6222 |
| 0.0007 | 39.0 | 234 | 1.2626 | 0.6222 |
| 0.0007 | 40.0 | 240 | 1.2627 | 0.6222 |
| 0.0007 | 41.0 | 246 | 1.2621 | 0.6222 |
| 0.0007 | 42.0 | 252 | 1.2620 | 0.6222 |
| 0.0007 | 43.0 | 258 | 1.2620 | 0.6222 |
| 0.0007 | 44.0 | 264 | 1.2620 | 0.6222 |
| 0.0007 | 45.0 | 270 | 1.2620 | 0.6222 |
| 0.0007 | 46.0 | 276 | 1.2620 | 0.6222 |
| 0.0007 | 47.0 | 282 | 1.2620 | 0.6222 |
| 0.0007 | 48.0 | 288 | 1.2620 | 0.6222 |
| 0.0007 | 49.0 | 294 | 1.2620 | 0.6222 |
| 0.0007 | 50.0 | 300 | 1.2620 | 0.6222 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
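Every run above uses `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` over 6 steps/epoch × 50 epochs = 300 optimizer steps, which is why the last several table rows are frozen: the learning rate decays linearly to zero by step 300, so the weights stop moving. A small sketch of that schedule (intended to mirror, under our assumptions, what a linear warmup-then-decay scheduler computes; `linear_schedule_with_warmup` is an illustrative helper, not a library function):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio, base_lr):
    """Learning rate after `step` optimizer steps: linear warmup up to
    base_lr, then linear decay down to zero at total_steps."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# 6 optimizer steps per epoch x 50 epochs = 300 total steps, as in the tables.
total, base = 300, 1e-5
peak = max(linear_schedule_with_warmup(s, total, 0.1, base) for s in range(total + 1))
final = linear_schedule_with_warmup(total, total, 0.1, base)
```

With `warmup_ratio=0.1` the peak learning rate (`base_lr`) is reached at step 30 and the rate is exactly zero at step 300, matching the plateaued final epochs in the tables.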
hkivancoral/hushem_1x_deit_tiny_rms_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_00001_fold2

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4676
- Accuracy: 0.6222

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.3067 | 0.4222 |
| 1.3733 | 2.0 | 12 | 1.3951 | 0.4444 |
| 1.3733 | 3.0 | 18 | 1.3740 | 0.4222 |
| 0.6558 | 4.0 | 24 | 1.2467 | 0.5333 |
| 0.3343 | 5.0 | 30 | 1.5107 | 0.4667 |
| 0.3343 | 6.0 | 36 | 1.6079 | 0.4444 |
| 0.1446 | 7.0 | 42 | 1.2227 | 0.5333 |
| 0.1446 | 8.0 | 48 | 1.2018 | 0.5333 |
| 0.0575 | 9.0 | 54 | 1.2408 | 0.5111 |
| 0.0237 | 10.0 | 60 | 1.2581 | 0.5111 |
| 0.0237 | 11.0 | 66 | 1.4007 | 0.6 |
| 0.0072 | 12.0 | 72 | 1.2676 | 0.6444 |
| 0.0072 | 13.0 | 78 | 1.2933 | 0.5778 |
| 0.0036 | 14.0 | 84 | 1.3326 | 0.6222 |
| 0.0025 | 15.0 | 90 | 1.3074 | 0.6444 |
| 0.0025 | 16.0 | 96 | 1.3484 | 0.6222 |
| 0.002 | 17.0 | 102 | 1.3984 | 0.6222 |
| 0.002 | 18.0 | 108 | 1.3916 | 0.6222 |
| 0.0017 | 19.0 | 114 | 1.3871 | 0.6222 |
| 0.0014 | 20.0 | 120 | 1.4171 | 0.6222 |
| 0.0014 | 21.0 | 126 | 1.4207 | 0.6222 |
| 0.0012 | 22.0 | 132 | 1.4218 | 0.6222 |
| 0.0012 | 23.0 | 138 | 1.4371 | 0.6222 |
| 0.0011 | 24.0 | 144 | 1.4404 | 0.6222 |
| 0.001 | 25.0 | 150 | 1.4321 | 0.6222 |
| 0.001 | 26.0 | 156 | 1.4218 | 0.6222 |
| 0.0009 | 27.0 | 162 | 1.4367 | 0.6222 |
| 0.0009 | 28.0 | 168 | 1.4359 | 0.6222 |
| 0.0008 | 29.0 | 174 | 1.4387 | 0.6222 |
| 0.0008 | 30.0 | 180 | 1.4566 | 0.6222 |
| 0.0008 | 31.0 | 186 | 1.4528 | 0.6222 |
| 0.0007 | 32.0 | 192 | 1.4517 | 0.6222 |
| 0.0007 | 33.0 | 198 | 1.4535 | 0.6222 |
| 0.0007 | 34.0 | 204 | 1.4488 | 0.6444 |
| 0.0007 | 35.0 | 210 | 1.4494 | 0.6444 |
| 0.0007 | 36.0 | 216 | 1.4561 | 0.6444 |
| 0.0007 | 37.0 | 222 | 1.4595 | 0.6444 |
| 0.0007 | 38.0 | 228 | 1.4667 | 0.6222 |
| 0.0006 | 39.0 | 234 | 1.4671 | 0.6222 |
| 0.0007 | 40.0 | 240 | 1.4686 | 0.6222 |
| 0.0007 | 41.0 | 246 | 1.4681 | 0.6222 |
| 0.0006 | 42.0 | 252 | 1.4676 | 0.6222 |
| 0.0006 | 43.0 | 258 | 1.4676 | 0.6222 |
| 0.0006 | 44.0 | 264 | 1.4676 | 0.6222 |
| 0.0006 | 45.0 | 270 | 1.4676 | 0.6222 |
| 0.0006 | 46.0 | 276 | 1.4676 | 0.6222 |
| 0.0006 | 47.0 | 282 | 1.4676 | 0.6222 |
| 0.0006 | 48.0 | 288 | 1.4676 | 0.6222 |
| 0.0006 | 49.0 | 294 | 1.4676 | 0.6222 |
| 0.0006 | 50.0 | 300 | 1.4676 | 0.6222 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_00001_fold3

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6755
- Accuracy: 0.7674

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.2450 | 0.3953 |
| 1.3266 | 2.0 | 12 | 1.0282 | 0.4884 |
| 1.3266 | 3.0 | 18 | 0.8766 | 0.6512 |
| 0.6113 | 4.0 | 24 | 0.8143 | 0.6279 |
| 0.301 | 5.0 | 30 | 0.9703 | 0.6047 |
| 0.301 | 6.0 | 36 | 0.7894 | 0.7209 |
| 0.1194 | 7.0 | 42 | 0.8712 | 0.6512 |
| 0.1194 | 8.0 | 48 | 0.7416 | 0.6744 |
| 0.0478 | 9.0 | 54 | 0.7289 | 0.6744 |
| 0.0192 | 10.0 | 60 | 0.6181 | 0.7209 |
| 0.0192 | 11.0 | 66 | 0.7194 | 0.6977 |
| 0.007 | 12.0 | 72 | 0.6519 | 0.6744 |
| 0.007 | 13.0 | 78 | 0.6428 | 0.7209 |
| 0.0038 | 14.0 | 84 | 0.6323 | 0.6977 |
| 0.0027 | 15.0 | 90 | 0.6303 | 0.7209 |
| 0.0027 | 16.0 | 96 | 0.6496 | 0.7209 |
| 0.0021 | 17.0 | 102 | 0.6367 | 0.7209 |
| 0.0021 | 18.0 | 108 | 0.6386 | 0.7209 |
| 0.0018 | 19.0 | 114 | 0.6562 | 0.7442 |
| 0.0015 | 20.0 | 120 | 0.6541 | 0.7442 |
| 0.0015 | 21.0 | 126 | 0.6493 | 0.7442 |
| 0.0014 | 22.0 | 132 | 0.6669 | 0.7442 |
| 0.0014 | 23.0 | 138 | 0.6543 | 0.7674 |
| 0.0012 | 24.0 | 144 | 0.6581 | 0.7442 |
| 0.0011 | 25.0 | 150 | 0.6534 | 0.7442 |
| 0.0011 | 26.0 | 156 | 0.6644 | 0.7442 |
| 0.001 | 27.0 | 162 | 0.6622 | 0.7674 |
| 0.001 | 28.0 | 168 | 0.6583 | 0.7442 |
| 0.001 | 29.0 | 174 | 0.6594 | 0.7674 |
| 0.0009 | 30.0 | 180 | 0.6672 | 0.7674 |
| 0.0009 | 31.0 | 186 | 0.6681 | 0.7674 |
| 0.0008 | 32.0 | 192 | 0.6656 | 0.7674 |
| 0.0008 | 33.0 | 198 | 0.6699 | 0.7674 |
| 0.0008 | 34.0 | 204 | 0.6718 | 0.7674 |
| 0.0008 | 35.0 | 210 | 0.6718 | 0.7674 |
| 0.0008 | 36.0 | 216 | 0.6735 | 0.7674 |
| 0.0008 | 37.0 | 222 | 0.6740 | 0.7674 |
| 0.0008 | 38.0 | 228 | 0.6754 | 0.7674 |
| 0.0007 | 39.0 | 234 | 0.6750 | 0.7674 |
| 0.0007 | 40.0 | 240 | 0.6751 | 0.7674 |
| 0.0007 | 41.0 | 246 | 0.6753 | 0.7674 |
| 0.0007 | 42.0 | 252 | 0.6755 | 0.7674 |
| 0.0007 | 43.0 | 258 | 0.6755 | 0.7674 |
| 0.0007 | 44.0 | 264 | 0.6755 | 0.7674 |
| 0.0007 | 45.0 | 270 | 0.6755 | 0.7674 |
| 0.0007 | 46.0 | 276 | 0.6755 | 0.7674 |
| 0.0007 | 47.0 | 282 | 0.6755 | 0.7674 |
| 0.0007 | 48.0 | 288 | 0.6755 | 0.7674 |
| 0.0007 | 49.0 | 294 | 0.6755 | 0.7674 |
| 0.0007 | 50.0 | 300 | 0.6755 | 0.7674 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_00001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4549
- Accuracy: 0.8571

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.4075 | 0.2857 |
| 1.4145 | 2.0 | 12 | 1.3443 | 0.3571 |
| 1.4145 | 3.0 | 18 | 0.8612 | 0.6667 |
| 0.7818 | 4.0 | 24 | 0.9127 | 0.6190 |
| 0.3833 | 5.0 | 30 | 0.5998 | 0.8810 |
| 0.3833 | 6.0 | 36 | 0.5796 | 0.7857 |
| 0.1457 | 7.0 | 42 | 0.5756 | 0.8333 |
| 0.1457 | 8.0 | 48 | 0.5188 | 0.7857 |
| 0.0559 | 9.0 | 54 | 0.5146 | 0.8571 |
| 0.0198 | 10.0 | 60 | 0.5290 | 0.7857 |
| 0.0198 | 11.0 | 66 | 0.4513 | 0.8571 |
| 0.007 | 12.0 | 72 | 0.4696 | 0.8571 |
| 0.007 | 13.0 | 78 | 0.4668 | 0.8333 |
| 0.0039 | 14.0 | 84 | 0.4642 | 0.8333 |
| 0.0028 | 15.0 | 90 | 0.4519 | 0.8571 |
| 0.0028 | 16.0 | 96 | 0.4562 | 0.8333 |
| 0.0022 | 17.0 | 102 | 0.4543 | 0.8571 |
| 0.0022 | 18.0 | 108 | 0.4588 | 0.8571 |
| 0.0018 | 19.0 | 114 | 0.4546 | 0.8571 |
| 0.0016 | 20.0 | 120 | 0.4551 | 0.8333 |
| 0.0016 | 21.0 | 126 | 0.4570 | 0.8333 |
| 0.0013 | 22.0 | 132 | 0.4556 | 0.8333 |
| 0.0013 | 23.0 | 138 | 0.4547 | 0.8333 |
| 0.0012 | 24.0 | 144 | 0.4556 | 0.8571 |
| 0.0011 | 25.0 | 150 | 0.4547 | 0.8571 |
| 0.0011 | 26.0 | 156 | 0.4538 | 0.8571 |
| 0.001 | 27.0 | 162 | 0.4593 | 0.8333 |
| 0.001 | 28.0 | 168 | 0.4560 | 0.8333 |
| 0.0009 | 29.0 | 174 | 0.4555 | 0.8333 |
| 0.0009 | 30.0 | 180 | 0.4554 | 0.8333 |
| 0.0009 | 31.0 | 186 | 0.4563 | 0.8333 |
| 0.0008 | 32.0 | 192 | 0.4547 | 0.8571 |
| 0.0008 | 33.0 | 198 | 0.4545 | 0.8571 |
| 0.0008 | 34.0 | 204 | 0.4547 | 0.8571 |
| 0.0007 | 35.0 | 210 | 0.4541 | 0.8571 |
| 0.0007 | 36.0 | 216 | 0.4545 | 0.8571 |
| 0.0007 | 37.0 | 222 | 0.4550 | 0.8571 |
| 0.0007 | 38.0 | 228 | 0.4547 | 0.8571 |
| 0.0007 | 39.0 | 234 | 0.4549 | 0.8571 |
| 0.0007 | 40.0 | 240 | 0.4549 | 0.8571 |
| 0.0007 | 41.0 | 246 | 0.4549 | 0.8571 |
| 0.0007 | 42.0 | 252 | 0.4549 | 0.8571 |
| 0.0007 | 43.0 | 258 | 0.4549 | 0.8571 |
| 0.0007 | 44.0 | 264 | 0.4549 | 0.8571 |
| 0.0007 | 45.0 | 270 | 0.4549 | 0.8571 |
| 0.0007 | 46.0 | 276 | 0.4549 | 0.8571 |
| 0.0007 | 47.0 | 282 | 0.4549 | 0.8571 |
| 0.0007 | 48.0 | 288 | 0.4549 | 0.8571 |
| 0.0007 | 49.0 | 294 | 0.4549 | 0.8571 |
| 0.0007 | 50.0 | 300 | 0.4549 | 0.8571 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_tiny_rms_00001_fold5

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1280
- Accuracy: 0.6585

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.2888 | 0.4390 |
| 1.3565 | 2.0 | 12 | 1.0130 | 0.5366 |
| 1.3565 | 3.0 | 18 | 0.9361 | 0.5366 |
| 0.667 | 4.0 | 24 | 0.8831 | 0.6585 |
| 0.2929 | 5.0 | 30 | 0.8739 | 0.5854 |
| 0.2929 | 6.0 | 36 | 0.9329 | 0.5854 |
| 0.1055 | 7.0 | 42 | 0.9159 | 0.6585 |
| 0.1055 | 8.0 | 48 | 1.0700 | 0.5854 |
| 0.04 | 9.0 | 54 | 1.0357 | 0.5854 |
| 0.013 | 10.0 | 60 | 0.9379 | 0.6585 |
| 0.013 | 11.0 | 66 | 0.9964 | 0.6341 |
| 0.0046 | 12.0 | 72 | 1.0009 | 0.6585 |
| 0.0046 | 13.0 | 78 | 0.9889 | 0.6585 |
| 0.0029 | 14.0 | 84 | 1.0074 | 0.6585 |
| 0.0023 | 15.0 | 90 | 1.0258 | 0.6585 |
| 0.0023 | 16.0 | 96 | 1.0330 | 0.6585 |
| 0.0018 | 17.0 | 102 | 1.0391 | 0.6585 |
| 0.0018 | 18.0 | 108 | 1.0476 | 0.6585 |
| 0.0015 | 19.0 | 114 | 1.0552 | 0.6585 |
| 0.0013 | 20.0 | 120 | 1.0615 | 0.6585 |
| 0.0013 | 21.0 | 126 | 1.0642 | 0.6585 |
| 0.0011 | 22.0 | 132 | 1.0600 | 0.6585 |
| 0.0011 | 23.0 | 138 | 1.0791 | 0.6341 |
| 0.001 | 24.0 | 144 | 1.0890 | 0.6585 |
| 0.001 | 25.0 | 150 | 1.0948 | 0.6585 |
| 0.001 | 26.0 | 156 | 1.1067 | 0.6585 |
| 0.0008 | 27.0 | 162 | 1.0949 | 0.6585 |
| 0.0008 | 28.0 | 168 | 1.1017 | 0.6585 |
| 0.0008 | 29.0 | 174 | 1.1094 | 0.6585 |
| 0.0007 | 30.0 | 180 | 1.1105 | 0.6585 |
| 0.0007 | 31.0 | 186 | 1.1156 | 0.6585 |
| 0.0007 | 32.0 | 192 | 1.1158 | 0.6585 |
| 0.0007 | 33.0 | 198 | 1.1174 | 0.6585 |
| 0.0007 | 34.0 | 204 | 1.1167 | 0.6585 |
| 0.0006 | 35.0 | 210 | 1.1206 | 0.6585 |
| 0.0006 | 36.0 | 216 | 1.1224 | 0.6585 |
| 0.0006 | 37.0 | 222 | 1.1230 | 0.6585 |
| 0.0006 | 38.0 | 228 | 1.1253 | 0.6585 |
| 0.0006 | 39.0 | 234 | 1.1272 | 0.6585 |
| 0.0006 | 40.0 | 240 | 1.1276 | 0.6585 |
| 0.0006 | 41.0 | 246 | 1.1278 | 0.6585 |
| 0.0006 | 42.0 | 252 | 1.1280 | 0.6585 |
| 0.0006 | 43.0 | 258 | 1.1280 | 0.6585 |
| 0.0006 | 44.0 | 264 | 1.1280 | 0.6585 |
| 0.0006 | 45.0 | 270 | 1.1280 | 0.6585 |
| 0.0006 | 46.0 | 276 | 1.1280 | 0.6585 |
| 0.0006 | 47.0 | 282 | 1.1280 | 0.6585 |
| 0.0006 | 48.0 | 288 | 1.1280 | 0.6585 |
| 0.0006 | 49.0 | 294 | 1.1280 | 0.6585 |
| 0.0006 | 50.0 | 300 | 1.1280 | 0.6585 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
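A recurring pattern in these tables is that the validation loss repeats an identical value for the final epochs once the linearly decayed learning rate reaches zero. A quick way to spot such a frozen tail when post-processing logs like these is a small plateau check (`plateau_length` is an illustrative helper written for this sketch; the `tail` values below are copied from the last ten epochs of the fold5 table above):

```python
def plateau_length(losses, tol=1e-9):
    """Count how many trailing entries equal the final value (within tol)."""
    n = 1
    for prev in reversed(losses[:-1]):
        if abs(prev - losses[-1]) > tol:
            break
        n += 1
    return n

# Validation losses for the last ten epochs of the fold5 table above:
# the linearly decayed LR hits zero, so the loss freezes at 1.1280.
tail = [1.1278, 1.1280, 1.1280, 1.1280, 1.1280,
        1.1280, 1.1280, 1.1280, 1.1280, 1.1280]
stuck = plateau_length(tail)
```

Here `stuck` comes out as 9, i.e. the last nine epochs contributed no change, which suggests these runs could have stopped earlier without affecting the reported metrics.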
hkivancoral/hushem_1x_deit_small_adamax_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_small_adamax_001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0215
- Accuracy: 0.4667

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 2.2870 | 0.2444 |
| 2.1668 | 2.0 | 12 | 1.4669 | 0.2444 |
| 2.1668 | 3.0 | 18 | 1.4980 | 0.2444 |
| 1.4102 | 4.0 | 24 | 1.4751 | 0.2444 |
| 1.4394 | 5.0 | 30 | 1.4286 | 0.2444 |
| 1.4394 | 6.0 | 36 | 1.6019 | 0.2444 |
| 1.3171 | 7.0 | 42 | 1.7291 | 0.2222 |
| 1.3171 | 8.0 | 48 | 1.5314 | 0.3556 |
| 1.2906 | 9.0 | 54 | 1.7281 | 0.2667 |
| 1.2151 | 10.0 | 60 | 1.6012 | 0.2444 |
| 1.2151 | 11.0 | 66 | 1.5621 | 0.4444 |
| 1.1016 | 12.0 | 72 | 1.5069 | 0.2 |
| 1.1016 | 13.0 | 78 | 1.5452 | 0.4222 |
| 1.1085 | 14.0 | 84 | 1.5457 | 0.2889 |
| 0.9838 | 15.0 | 90 | 1.7131 | 0.4 |
| 0.9838 | 16.0 | 96 | 1.9947 | 0.2889 |
| 1.003 | 17.0 | 102 | 1.7538 | 0.4222 |
| 1.003 | 18.0 | 108 | 1.3632 | 0.4444 |
| 0.846 | 19.0 | 114 | 1.7633 | 0.4 |
| 0.7432 | 20.0 | 120 | 1.5259 | 0.4222 |
| 0.7432 | 21.0 | 126 | 1.6982 | 0.4 |
| 0.8111 | 22.0 | 132 | 1.4722 | 0.4 |
| 0.8111 | 23.0 | 138 | 1.5772 | 0.4222 |
| 0.6268 | 24.0 | 144 | 1.6621 | 0.4222 |
| 0.5956 | 25.0 | 150 | 2.2283 | 0.4 |
| 0.5956 | 26.0 | 156 | 1.5965 | 0.4667 |
| 0.863 | 27.0 | 162 | 2.0067 | 0.4 |
| 0.863 | 28.0 | 168 | 2.2609 | 0.3778 |
| 0.575 | 29.0 | 174 | 1.7339 | 0.4222 |
| 0.3505 | 30.0 | 180 | 1.6059 | 0.3778 |
| 0.3505 | 31.0 | 186 | 1.7578 | 0.4444 |
| 0.3884 | 32.0 | 192 | 1.8785 | 0.4444 |
| 0.3884 | 33.0 | 198 | 1.5952 | 0.4222 |
| 0.3742 | 34.0 | 204 | 1.9834 | 0.4444 |
| 0.3113 | 35.0 | 210 | 1.8134 | 0.4222 |
| 0.3113 | 36.0 | 216 | 2.1491 | 0.4 |
| 0.4478 | 37.0 | 222 | 1.9419 | 0.4667 |
| 0.4478 | 38.0 | 228 | 1.8426 | 0.4444 |
| 0.1746 | 39.0 | 234 | 1.9349 | 0.4222 |
| 0.1737 | 40.0 | 240 | 2.0085 | 0.4667 |
| 0.1737 | 41.0 | 246 | 2.0238 | 0.4667 |
| 0.1448 | 42.0 | 252 | 2.0215 | 0.4667 |
| 0.1448 | 43.0 | 258 | 2.0215 | 0.4667 |
| 0.1495 | 44.0 | 264 | 2.0215 | 0.4667 |
| 0.1326 | 45.0 | 270 | 2.0215 | 0.4667 |
| 0.1326 | 46.0 | 276 | 2.0215 | 0.4667 |
| 0.1487 | 47.0 | 282 | 2.0215 | 0.4667 |
| 0.1487 | 48.0 | 288 | 2.0215 | 0.4667 |
| 0.1112 | 49.0 | 294 | 2.0215 | 0.4667 |
| 0.1501 | 50.0 | 300 | 2.0215 | 0.4667 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.0653 - Accuracy: 0.5778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5866 | 0.2444 | | 1.9023 | 2.0 | 12 | 1.3764 | 0.2444 | | 1.9023 | 3.0 | 18 | 1.3051 | 0.4222 | | 1.349 | 4.0 | 24 | 1.1457 | 0.4889 | | 1.2765 | 5.0 | 30 | 1.1296 | 0.5333 | | 1.2765 | 6.0 | 36 | 1.0799 | 0.4667 | | 0.9532 | 7.0 | 42 | 0.9251 | 0.5778 | | 0.9532 | 8.0 | 48 | 0.9697 | 0.6 | | 0.606 | 9.0 | 54 | 1.3926 | 0.4889 | | 0.572 | 10.0 | 60 | 1.7732 | 0.5778 | | 0.572 | 11.0 | 66 | 1.3882 | 0.5556 | | 0.5961 | 12.0 | 72 | 1.7835 | 0.5333 | | 0.5961 | 13.0 | 78 | 1.6876 | 0.5111 | | 0.36 | 14.0 | 84 | 2.6292 | 0.5556 | | 0.1021 | 15.0 | 90 | 3.3955 | 0.4444 | | 0.1021 | 16.0 | 96 | 2.7199 | 0.5333 | | 0.0705 | 17.0 | 102 | 3.2188 | 0.5778 | | 0.0705 | 18.0 | 108 | 2.9572 | 0.5778 | | 0.1408 | 19.0 | 114 | 3.4311 | 0.6222 | | 0.0481 | 20.0 | 120 | 3.3680 | 0.5111 | | 0.0481 | 21.0 | 126 | 3.9440 | 0.4889 | | 0.0285 | 22.0 | 132 | 3.0805 | 0.5111 | | 0.0285 | 
23.0 | 138 | 3.2788 | 0.4889 | | 0.0077 | 24.0 | 144 | 3.3798 | 0.5111 | | 0.0144 | 25.0 | 150 | 3.3118 | 0.5333 | | 0.0144 | 26.0 | 156 | 3.1251 | 0.5111 | | 0.0005 | 27.0 | 162 | 2.9134 | 0.5778 | | 0.0005 | 28.0 | 168 | 2.8352 | 0.6 | | 0.0006 | 29.0 | 174 | 2.7529 | 0.5778 | | 0.0002 | 30.0 | 180 | 2.8235 | 0.6 | | 0.0002 | 31.0 | 186 | 2.8802 | 0.6 | | 0.0001 | 32.0 | 192 | 2.9253 | 0.5778 | | 0.0001 | 33.0 | 198 | 2.9651 | 0.5778 | | 0.0001 | 34.0 | 204 | 2.9943 | 0.5778 | | 0.0001 | 35.0 | 210 | 3.0146 | 0.5778 | | 0.0001 | 36.0 | 216 | 3.0314 | 0.5778 | | 0.0001 | 37.0 | 222 | 3.0446 | 0.5778 | | 0.0001 | 38.0 | 228 | 3.0538 | 0.5778 | | 0.0001 | 39.0 | 234 | 3.0596 | 0.5778 | | 0.0001 | 40.0 | 240 | 3.0631 | 0.5778 | | 0.0001 | 41.0 | 246 | 3.0649 | 0.5778 | | 0.0001 | 42.0 | 252 | 3.0653 | 0.5778 | | 0.0001 | 43.0 | 258 | 3.0653 | 0.5778 | | 0.0001 | 44.0 | 264 | 3.0653 | 0.5778 | | 0.0001 | 45.0 | 270 | 3.0653 | 0.5778 | | 0.0001 | 46.0 | 276 | 3.0653 | 0.5778 | | 0.0001 | 47.0 | 282 | 3.0653 | 0.5778 | | 0.0001 | 48.0 | 288 | 3.0653 | 0.5778 | | 0.0001 | 49.0 | 294 | 3.0653 | 0.5778 | | 0.0001 | 50.0 | 300 | 3.0653 | 0.5778 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.7699 - Accuracy: 0.4651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4218 | 0.2558 | | 1.7221 | 2.0 | 12 | 1.4061 | 0.3953 | | 1.7221 | 3.0 | 18 | 1.4801 | 0.3256 | | 1.2972 | 4.0 | 24 | 1.5453 | 0.3023 | | 1.2115 | 5.0 | 30 | 1.2993 | 0.3953 | | 1.2115 | 6.0 | 36 | 1.4486 | 0.3721 | | 1.1196 | 7.0 | 42 | 1.4881 | 0.3721 | | 1.1196 | 8.0 | 48 | 1.2031 | 0.4419 | | 1.0394 | 9.0 | 54 | 1.1825 | 0.4651 | | 0.9076 | 10.0 | 60 | 1.3831 | 0.3953 | | 0.9076 | 11.0 | 66 | 1.5606 | 0.3953 | | 0.8351 | 12.0 | 72 | 1.6879 | 0.3721 | | 0.8351 | 13.0 | 78 | 1.5744 | 0.5581 | | 0.7325 | 14.0 | 84 | 2.1220 | 0.5116 | | 0.5767 | 15.0 | 90 | 2.2458 | 0.4884 | | 0.5767 | 16.0 | 96 | 2.4745 | 0.3953 | | 0.487 | 17.0 | 102 | 2.9255 | 0.3953 | | 0.487 | 18.0 | 108 | 2.8169 | 0.4186 | | 0.265 | 19.0 | 114 | 2.9600 | 0.4419 | | 0.2739 | 20.0 | 120 | 3.0131 | 0.3953 | | 0.2739 | 21.0 | 126 | 3.2413 | 0.4186 | | 0.1684 | 22.0 | 132 | 4.9920 | 0.3953 | | 
0.1684 | 23.0 | 138 | 3.1514 | 0.5116 | | 0.3265 | 24.0 | 144 | 4.1598 | 0.3953 | | 0.2652 | 25.0 | 150 | 3.3248 | 0.4651 | | 0.2652 | 26.0 | 156 | 3.1898 | 0.4884 | | 0.1992 | 27.0 | 162 | 3.7937 | 0.3953 | | 0.1992 | 28.0 | 168 | 3.9838 | 0.4884 | | 0.1826 | 29.0 | 174 | 3.5764 | 0.3721 | | 0.124 | 30.0 | 180 | 4.1231 | 0.4419 | | 0.124 | 31.0 | 186 | 4.1455 | 0.4186 | | 0.1353 | 32.0 | 192 | 3.9925 | 0.4186 | | 0.1353 | 33.0 | 198 | 3.7016 | 0.5581 | | 0.0743 | 34.0 | 204 | 3.7997 | 0.5349 | | 0.0362 | 35.0 | 210 | 3.6073 | 0.4884 | | 0.0362 | 36.0 | 216 | 3.6198 | 0.4651 | | 0.0082 | 37.0 | 222 | 3.6509 | 0.4651 | | 0.0082 | 38.0 | 228 | 3.7081 | 0.4651 | | 0.003 | 39.0 | 234 | 3.7432 | 0.4651 | | 0.002 | 40.0 | 240 | 3.7616 | 0.4651 | | 0.002 | 41.0 | 246 | 3.7690 | 0.4651 | | 0.0018 | 42.0 | 252 | 3.7699 | 0.4651 | | 0.0018 | 43.0 | 258 | 3.7699 | 0.4651 | | 0.0016 | 44.0 | 264 | 3.7699 | 0.4651 | | 0.0017 | 45.0 | 270 | 3.7699 | 0.4651 | | 0.0017 | 46.0 | 276 | 3.7699 | 0.4651 | | 0.0017 | 47.0 | 282 | 3.7699 | 0.4651 | | 0.0017 | 48.0 | 288 | 3.7699 | 0.4651 | | 0.0018 | 49.0 | 294 | 3.7699 | 0.4651 | | 0.0017 | 50.0 | 300 | 3.7699 | 0.4651 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.1234 - Accuracy: 0.5952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5633 | 0.2381 | | 1.7898 | 2.0 | 12 | 1.3816 | 0.2381 | | 1.7898 | 3.0 | 18 | 1.3607 | 0.2619 | | 1.4334 | 4.0 | 24 | 1.3501 | 0.2619 | | 1.3732 | 5.0 | 30 | 1.3553 | 0.2381 | | 1.3732 | 6.0 | 36 | 1.1841 | 0.4762 | | 1.3036 | 7.0 | 42 | 1.0576 | 0.5952 | | 1.3036 | 8.0 | 48 | 1.0689 | 0.5952 | | 1.2142 | 9.0 | 54 | 1.2296 | 0.5 | | 1.056 | 10.0 | 60 | 0.7879 | 0.6429 | | 1.056 | 11.0 | 66 | 0.7199 | 0.7143 | | 0.921 | 12.0 | 72 | 0.9775 | 0.6190 | | 0.921 | 13.0 | 78 | 0.8809 | 0.5952 | | 0.6456 | 14.0 | 84 | 1.0792 | 0.5476 | | 0.6348 | 15.0 | 90 | 1.0335 | 0.6190 | | 0.6348 | 16.0 | 96 | 1.7853 | 0.5714 | | 0.4743 | 17.0 | 102 | 1.5872 | 0.5714 | | 0.4743 | 18.0 | 108 | 2.0651 | 0.5 | | 0.2408 | 19.0 | 114 | 2.8369 | 0.4762 | | 0.2271 | 20.0 | 120 | 2.1149 | 0.6190 | | 0.2271 | 21.0 | 126 | 1.5722 | 0.6190 | | 0.3385 | 22.0 | 132 | 2.8555 | 0.5476 | | 0.3385 | 
23.0 | 138 | 2.2068 | 0.6667 | | 0.0822 | 24.0 | 144 | 2.2969 | 0.6190 | | 0.0932 | 25.0 | 150 | 1.8785 | 0.7143 | | 0.0932 | 26.0 | 156 | 3.2275 | 0.5714 | | 0.0807 | 27.0 | 162 | 2.8847 | 0.5952 | | 0.0807 | 28.0 | 168 | 3.1184 | 0.5952 | | 0.0424 | 29.0 | 174 | 2.4583 | 0.6190 | | 0.0287 | 30.0 | 180 | 2.8305 | 0.5714 | | 0.0287 | 31.0 | 186 | 3.5171 | 0.5476 | | 0.0333 | 32.0 | 192 | 3.2119 | 0.5952 | | 0.0333 | 33.0 | 198 | 2.9811 | 0.5952 | | 0.0008 | 34.0 | 204 | 3.0451 | 0.5952 | | 0.0004 | 35.0 | 210 | 3.0670 | 0.5952 | | 0.0004 | 36.0 | 216 | 3.0857 | 0.5952 | | 0.0003 | 37.0 | 222 | 3.1009 | 0.5952 | | 0.0003 | 38.0 | 228 | 3.1113 | 0.5952 | | 0.0003 | 39.0 | 234 | 3.1177 | 0.5952 | | 0.0003 | 40.0 | 240 | 3.1213 | 0.5952 | | 0.0003 | 41.0 | 246 | 3.1231 | 0.5952 | | 0.0002 | 42.0 | 252 | 3.1234 | 0.5952 | | 0.0002 | 43.0 | 258 | 3.1234 | 0.5952 | | 0.0002 | 44.0 | 264 | 3.1234 | 0.5952 | | 0.0002 | 45.0 | 270 | 3.1234 | 0.5952 | | 0.0002 | 46.0 | 276 | 3.1234 | 0.5952 | | 0.0002 | 47.0 | 282 | 3.1234 | 0.5952 | | 0.0002 | 48.0 | 288 | 3.1234 | 0.5952 | | 0.0002 | 49.0 | 294 | 3.1234 | 0.5952 | | 0.0002 | 50.0 | 300 | 3.1234 | 0.5952 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.9089 - Accuracy: 0.5854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5709 | 0.2683 | | 1.6365 | 2.0 | 12 | 1.3231 | 0.3171 | | 1.6365 | 3.0 | 18 | 1.0858 | 0.4878 | | 1.2777 | 4.0 | 24 | 1.0527 | 0.4634 | | 1.0819 | 5.0 | 30 | 2.4025 | 0.4878 | | 1.0819 | 6.0 | 36 | 1.0776 | 0.6098 | | 1.1957 | 7.0 | 42 | 1.2491 | 0.4878 | | 1.1957 | 8.0 | 48 | 1.2390 | 0.4878 | | 1.0582 | 9.0 | 54 | 2.4696 | 0.3659 | | 0.9645 | 10.0 | 60 | 0.9800 | 0.6585 | | 0.9645 | 11.0 | 66 | 1.4465 | 0.4878 | | 0.7158 | 12.0 | 72 | 1.3709 | 0.4146 | | 0.7158 | 13.0 | 78 | 1.8787 | 0.5610 | | 0.4707 | 14.0 | 84 | 2.2003 | 0.4878 | | 0.3746 | 15.0 | 90 | 2.7652 | 0.4390 | | 0.3746 | 16.0 | 96 | 1.4738 | 0.6098 | | 0.3815 | 17.0 | 102 | 2.1297 | 0.4878 | | 0.3815 | 18.0 | 108 | 2.7358 | 0.4634 | | 0.2562 | 19.0 | 114 | 2.1602 | 0.6341 | | 0.215 | 20.0 | 120 | 2.4495 | 0.5122 | | 0.215 | 21.0 | 126 | 2.2161 | 0.5366 | | 0.0855 | 22.0 | 132 | 2.6756 | 0.5610 | | 
0.0855 | 23.0 | 138 | 3.4355 | 0.5366 | | 0.0976 | 24.0 | 144 | 2.8453 | 0.6098 | | 0.0588 | 25.0 | 150 | 2.9043 | 0.5854 | | 0.0588 | 26.0 | 156 | 3.0589 | 0.4878 | | 0.0051 | 27.0 | 162 | 2.7256 | 0.6098 | | 0.0051 | 28.0 | 168 | 2.6655 | 0.6098 | | 0.0018 | 29.0 | 174 | 2.6795 | 0.6098 | | 0.005 | 30.0 | 180 | 2.7568 | 0.6098 | | 0.005 | 31.0 | 186 | 2.8042 | 0.6098 | | 0.0004 | 32.0 | 192 | 2.8224 | 0.6098 | | 0.0004 | 33.0 | 198 | 2.8428 | 0.6098 | | 0.0002 | 34.0 | 204 | 2.8628 | 0.5854 | | 0.0002 | 35.0 | 210 | 2.8783 | 0.5854 | | 0.0002 | 36.0 | 216 | 2.8881 | 0.5854 | | 0.0002 | 37.0 | 222 | 2.8950 | 0.5854 | | 0.0002 | 38.0 | 228 | 2.9002 | 0.5854 | | 0.0002 | 39.0 | 234 | 2.9045 | 0.5854 | | 0.0002 | 40.0 | 240 | 2.9071 | 0.5854 | | 0.0002 | 41.0 | 246 | 2.9084 | 0.5854 | | 0.0001 | 42.0 | 252 | 2.9089 | 0.5854 | | 0.0001 | 43.0 | 258 | 2.9089 | 0.5854 | | 0.0002 | 44.0 | 264 | 2.9089 | 0.5854 | | 0.0001 | 45.0 | 270 | 2.9089 | 0.5854 | | 0.0001 | 46.0 | 276 | 2.9089 | 0.5854 | | 0.0002 | 47.0 | 282 | 2.9089 | 0.5854 | | 0.0002 | 48.0 | 288 | 2.9089 | 0.5854 | | 0.0001 | 49.0 | 294 | 2.9089 | 0.5854 | | 0.0001 | 50.0 | 300 | 2.9089 | 0.5854 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_0001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3048 - Accuracy: 0.5778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3083 | 0.3111 | | 1.2395 | 2.0 | 12 | 1.1116 | 0.5556 | | 1.2395 | 3.0 | 18 | 0.9370 | 0.6444 | | 0.5222 | 4.0 | 24 | 0.8939 | 0.6222 | | 0.1257 | 5.0 | 30 | 1.0613 | 0.6222 | | 0.1257 | 6.0 | 36 | 1.1544 | 0.6667 | | 0.0192 | 7.0 | 42 | 1.0970 | 0.6222 | | 0.0192 | 8.0 | 48 | 1.3834 | 0.5778 | | 0.0034 | 9.0 | 54 | 1.4273 | 0.6222 | | 0.0011 | 10.0 | 60 | 1.2955 | 0.6222 | | 0.0011 | 11.0 | 66 | 1.1578 | 0.6222 | | 0.0006 | 12.0 | 72 | 1.1209 | 0.6 | | 0.0006 | 13.0 | 78 | 1.1439 | 0.6 | | 0.0005 | 14.0 | 84 | 1.1840 | 0.6 | | 0.0004 | 15.0 | 90 | 1.2222 | 0.5778 | | 0.0004 | 16.0 | 96 | 1.2485 | 0.5778 | | 0.0003 | 17.0 | 102 | 1.2638 | 0.5778 | | 0.0003 | 18.0 | 108 | 1.2689 | 0.5778 | | 0.0003 | 19.0 | 114 | 1.2732 | 0.5778 | | 0.0003 | 20.0 | 120 | 1.2771 | 0.5778 | | 0.0003 | 21.0 | 126 | 1.2803 | 0.5778 | | 0.0003 | 22.0 | 132 | 1.2805 | 0.5778 | | 0.0003 
| 23.0 | 138 | 1.2805 | 0.5778 | | 0.0002 | 24.0 | 144 | 1.2807 | 0.5778 | | 0.0002 | 25.0 | 150 | 1.2825 | 0.5778 | | 0.0002 | 26.0 | 156 | 1.2850 | 0.5778 | | 0.0002 | 27.0 | 162 | 1.2856 | 0.5778 | | 0.0002 | 28.0 | 168 | 1.2878 | 0.5778 | | 0.0002 | 29.0 | 174 | 1.2904 | 0.5778 | | 0.0002 | 30.0 | 180 | 1.2922 | 0.5778 | | 0.0002 | 31.0 | 186 | 1.2931 | 0.5778 | | 0.0002 | 32.0 | 192 | 1.2945 | 0.5778 | | 0.0002 | 33.0 | 198 | 1.2963 | 0.5778 | | 0.0002 | 34.0 | 204 | 1.2983 | 0.5778 | | 0.0002 | 35.0 | 210 | 1.2995 | 0.5778 | | 0.0002 | 36.0 | 216 | 1.3007 | 0.5778 | | 0.0002 | 37.0 | 222 | 1.3018 | 0.5778 | | 0.0002 | 38.0 | 228 | 1.3034 | 0.5778 | | 0.0002 | 39.0 | 234 | 1.3042 | 0.5778 | | 0.0002 | 40.0 | 240 | 1.3046 | 0.5778 | | 0.0002 | 41.0 | 246 | 1.3047 | 0.5778 | | 0.0002 | 42.0 | 252 | 1.3048 | 0.5778 | | 0.0002 | 43.0 | 258 | 1.3048 | 0.5778 | | 0.0002 | 44.0 | 264 | 1.3048 | 0.5778 | | 0.0002 | 45.0 | 270 | 1.3048 | 0.5778 | | 0.0002 | 46.0 | 276 | 1.3048 | 0.5778 | | 0.0002 | 47.0 | 282 | 1.3048 | 0.5778 | | 0.0002 | 48.0 | 288 | 1.3048 | 0.5778 | | 0.0002 | 49.0 | 294 | 1.3048 | 0.5778 | | 0.0002 | 50.0 | 300 | 1.3048 | 0.5778 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_0001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.0748 - Accuracy: 0.6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.2729 | 0.4 | | 1.2121 | 2.0 | 12 | 1.1250 | 0.6222 | | 1.2121 | 3.0 | 18 | 1.2362 | 0.5556 | | 0.4291 | 4.0 | 24 | 1.2042 | 0.6444 | | 0.1116 | 5.0 | 30 | 1.1861 | 0.6 | | 0.1116 | 6.0 | 36 | 1.6632 | 0.5556 | | 0.0196 | 7.0 | 42 | 1.7499 | 0.6 | | 0.0196 | 8.0 | 48 | 1.7915 | 0.5556 | | 0.0051 | 9.0 | 54 | 1.8168 | 0.5778 | | 0.0016 | 10.0 | 60 | 1.8187 | 0.6222 | | 0.0016 | 11.0 | 66 | 1.8480 | 0.6222 | | 0.0008 | 12.0 | 72 | 1.8621 | 0.6222 | | 0.0008 | 13.0 | 78 | 1.8730 | 0.6222 | | 0.0006 | 14.0 | 84 | 1.8908 | 0.6222 | | 0.0005 | 15.0 | 90 | 1.9136 | 0.6222 | | 0.0005 | 16.0 | 96 | 1.9335 | 0.6222 | | 0.0004 | 17.0 | 102 | 1.9501 | 0.6222 | | 0.0004 | 18.0 | 108 | 1.9655 | 0.6222 | | 0.0004 | 19.0 | 114 | 1.9783 | 0.6222 | | 0.0003 | 20.0 | 120 | 1.9900 | 0.6222 | | 0.0003 | 21.0 | 126 | 1.9990 | 0.6222 | | 0.0003 | 22.0 | 132 | 2.0067 | 0.6222 | | 0.0003 | 
23.0 | 138 | 2.0139 | 0.6 | | 0.0003 | 24.0 | 144 | 2.0208 | 0.6 | | 0.0003 | 25.0 | 150 | 2.0271 | 0.6 | | 0.0003 | 26.0 | 156 | 2.0322 | 0.6 | | 0.0003 | 27.0 | 162 | 2.0367 | 0.6 | | 0.0003 | 28.0 | 168 | 2.0419 | 0.6 | | 0.0003 | 29.0 | 174 | 2.0471 | 0.6 | | 0.0003 | 30.0 | 180 | 2.0520 | 0.6 | | 0.0003 | 31.0 | 186 | 2.0560 | 0.6 | | 0.0002 | 32.0 | 192 | 2.0593 | 0.6 | | 0.0002 | 33.0 | 198 | 2.0621 | 0.6 | | 0.0003 | 34.0 | 204 | 2.0649 | 0.6 | | 0.0003 | 35.0 | 210 | 2.0672 | 0.6 | | 0.0003 | 36.0 | 216 | 2.0692 | 0.6 | | 0.0002 | 37.0 | 222 | 2.0710 | 0.6 | | 0.0002 | 38.0 | 228 | 2.0723 | 0.6 | | 0.0002 | 39.0 | 234 | 2.0735 | 0.6 | | 0.0002 | 40.0 | 240 | 2.0742 | 0.6 | | 0.0002 | 41.0 | 246 | 2.0747 | 0.6 | | 0.0002 | 42.0 | 252 | 2.0748 | 0.6 | | 0.0002 | 43.0 | 258 | 2.0748 | 0.6 | | 0.0002 | 44.0 | 264 | 2.0748 | 0.6 | | 0.0002 | 45.0 | 270 | 2.0748 | 0.6 | | 0.0002 | 46.0 | 276 | 2.0748 | 0.6 | | 0.0002 | 47.0 | 282 | 2.0748 | 0.6 | | 0.0002 | 48.0 | 288 | 2.0748 | 0.6 | | 0.0002 | 49.0 | 294 | 2.0748 | 0.6 | | 0.0002 | 50.0 | 300 | 2.0748 | 0.6 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_0001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4128 - Accuracy: 0.8837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.1575 | 0.6047 | | 1.0751 | 2.0 | 12 | 0.7147 | 0.8140 | | 1.0751 | 3.0 | 18 | 0.4940 | 0.8372 | | 0.3179 | 4.0 | 24 | 0.5279 | 0.7674 | | 0.0721 | 5.0 | 30 | 0.4829 | 0.8140 | | 0.0721 | 6.0 | 36 | 0.3704 | 0.8837 | | 0.0092 | 7.0 | 42 | 0.4306 | 0.8605 | | 0.0092 | 8.0 | 48 | 0.4406 | 0.8605 | | 0.0018 | 9.0 | 54 | 0.4181 | 0.8605 | | 0.0009 | 10.0 | 60 | 0.4015 | 0.8837 | | 0.0009 | 11.0 | 66 | 0.3932 | 0.8837 | | 0.0006 | 12.0 | 72 | 0.3944 | 0.8837 | | 0.0006 | 13.0 | 78 | 0.3986 | 0.8837 | | 0.0005 | 14.0 | 84 | 0.4037 | 0.8837 | | 0.0004 | 15.0 | 90 | 0.4072 | 0.8837 | | 0.0004 | 16.0 | 96 | 0.4099 | 0.8837 | | 0.0004 | 17.0 | 102 | 0.4111 | 0.8837 | | 0.0004 | 18.0 | 108 | 0.4129 | 0.8837 | | 0.0003 | 19.0 | 114 | 0.4147 | 0.8837 | | 0.0003 | 20.0 | 120 | 0.4143 | 0.8837 | | 0.0003 | 21.0 | 126 | 0.4146 | 0.8837 | | 0.0003 | 22.0 | 132 | 0.4136 | 0.8837 | 
| 0.0003 | 23.0 | 138 | 0.4136 | 0.8837 | | 0.0003 | 24.0 | 144 | 0.4120 | 0.8837 | | 0.0003 | 25.0 | 150 | 0.4117 | 0.8837 | | 0.0003 | 26.0 | 156 | 0.4120 | 0.8837 | | 0.0003 | 27.0 | 162 | 0.4120 | 0.8837 | | 0.0003 | 28.0 | 168 | 0.4117 | 0.8837 | | 0.0003 | 29.0 | 174 | 0.4121 | 0.8837 | | 0.0003 | 30.0 | 180 | 0.4118 | 0.8837 | | 0.0003 | 31.0 | 186 | 0.4116 | 0.8837 | | 0.0002 | 32.0 | 192 | 0.4115 | 0.8837 | | 0.0002 | 33.0 | 198 | 0.4116 | 0.8837 | | 0.0002 | 34.0 | 204 | 0.4120 | 0.8837 | | 0.0002 | 35.0 | 210 | 0.4121 | 0.8837 | | 0.0002 | 36.0 | 216 | 0.4123 | 0.8837 | | 0.0002 | 37.0 | 222 | 0.4125 | 0.8837 | | 0.0002 | 38.0 | 228 | 0.4126 | 0.8837 | | 0.0002 | 39.0 | 234 | 0.4127 | 0.8837 | | 0.0002 | 40.0 | 240 | 0.4128 | 0.8837 | | 0.0002 | 41.0 | 246 | 0.4128 | 0.8837 | | 0.0002 | 42.0 | 252 | 0.4128 | 0.8837 | | 0.0002 | 43.0 | 258 | 0.4128 | 0.8837 | | 0.0002 | 44.0 | 264 | 0.4128 | 0.8837 | | 0.0002 | 45.0 | 270 | 0.4128 | 0.8837 | | 0.0002 | 46.0 | 276 | 0.4128 | 0.8837 | | 0.0002 | 47.0 | 282 | 0.4128 | 0.8837 | | 0.0002 | 48.0 | 288 | 0.4128 | 0.8837 | | 0.0002 | 49.0 | 294 | 0.4128 | 0.8837 | | 0.0002 | 50.0 | 300 | 0.4128 | 0.8837 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_0001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3532 - Accuracy: 0.8571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3457 | 0.4048 | | 1.2924 | 2.0 | 12 | 0.9334 | 0.5952 | | 1.2924 | 3.0 | 18 | 0.6841 | 0.6667 | | 0.6444 | 4.0 | 24 | 0.7302 | 0.6429 | | 0.1694 | 5.0 | 30 | 0.5848 | 0.7619 | | 0.1694 | 6.0 | 36 | 0.6575 | 0.7619 | | 0.0382 | 7.0 | 42 | 0.4727 | 0.8333 | | 0.0382 | 8.0 | 48 | 0.7729 | 0.7381 | | 0.0125 | 9.0 | 54 | 0.6089 | 0.7619 | | 0.0034 | 10.0 | 60 | 0.3189 | 0.9048 | | 0.0034 | 11.0 | 66 | 0.2852 | 0.8810 | | 0.0011 | 12.0 | 72 | 0.3340 | 0.8571 | | 0.0011 | 13.0 | 78 | 0.3522 | 0.8571 | | 0.0007 | 14.0 | 84 | 0.3495 | 0.8571 | | 0.0005 | 15.0 | 90 | 0.3442 | 0.8571 | | 0.0005 | 16.0 | 96 | 0.3406 | 0.8571 | | 0.0004 | 17.0 | 102 | 0.3391 | 0.8571 | | 0.0004 | 18.0 | 108 | 0.3391 | 0.8571 | | 0.0004 | 19.0 | 114 | 0.3401 | 0.8571 | | 0.0004 | 20.0 | 120 | 0.3412 | 0.8571 | | 0.0004 | 21.0 | 126 | 0.3433 | 0.8571 | | 0.0003 | 22.0 | 132 | 0.3444 | 0.8571 | 
| 0.0003 | 23.0 | 138 | 0.3456 | 0.8571 | | 0.0003 | 24.0 | 144 | 0.3474 | 0.8571 | | 0.0003 | 25.0 | 150 | 0.3486 | 0.8571 | | 0.0003 | 26.0 | 156 | 0.3489 | 0.8571 | | 0.0003 | 27.0 | 162 | 0.3489 | 0.8571 | | 0.0003 | 28.0 | 168 | 0.3500 | 0.8571 | | 0.0003 | 29.0 | 174 | 0.3510 | 0.8571 | | 0.0003 | 30.0 | 180 | 0.3511 | 0.8571 | | 0.0003 | 31.0 | 186 | 0.3517 | 0.8571 | | 0.0003 | 32.0 | 192 | 0.3522 | 0.8571 | | 0.0003 | 33.0 | 198 | 0.3523 | 0.8571 | | 0.0003 | 34.0 | 204 | 0.3526 | 0.8571 | | 0.0003 | 35.0 | 210 | 0.3526 | 0.8571 | | 0.0003 | 36.0 | 216 | 0.3527 | 0.8571 | | 0.0003 | 37.0 | 222 | 0.3530 | 0.8571 | | 0.0003 | 38.0 | 228 | 0.3531 | 0.8571 | | 0.0002 | 39.0 | 234 | 0.3531 | 0.8571 | | 0.0003 | 40.0 | 240 | 0.3531 | 0.8571 | | 0.0003 | 41.0 | 246 | 0.3531 | 0.8571 | | 0.0003 | 42.0 | 252 | 0.3532 | 0.8571 | | 0.0003 | 43.0 | 258 | 0.3532 | 0.8571 | | 0.0002 | 44.0 | 264 | 0.3532 | 0.8571 | | 0.0003 | 45.0 | 270 | 0.3532 | 0.8571 | | 0.0003 | 46.0 | 276 | 0.3532 | 0.8571 | | 0.0003 | 47.0 | 282 | 0.3532 | 0.8571 | | 0.0003 | 48.0 | 288 | 0.3532 | 0.8571 | | 0.0002 | 49.0 | 294 | 0.3532 | 0.8571 | | 0.0003 | 50.0 | 300 | 0.3532 | 0.8571 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_0001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8185 - Accuracy: 0.8049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.1559 | 0.4878 | | 1.2441 | 2.0 | 12 | 0.8253 | 0.6829 | | 1.2441 | 3.0 | 18 | 0.7434 | 0.6098 | | 0.6071 | 4.0 | 24 | 0.5080 | 0.8293 | | 0.2296 | 5.0 | 30 | 0.6693 | 0.6829 | | 0.2296 | 6.0 | 36 | 0.4300 | 0.8293 | | 0.0509 | 7.0 | 42 | 0.7493 | 0.7317 | | 0.0509 | 8.0 | 48 | 0.5064 | 0.8537 | | 0.0088 | 9.0 | 54 | 0.6021 | 0.8780 | | 0.0021 | 10.0 | 60 | 0.7408 | 0.7805 | | 0.0021 | 11.0 | 66 | 0.9234 | 0.7073 | | 0.0009 | 12.0 | 72 | 0.9965 | 0.6829 | | 0.0009 | 13.0 | 78 | 0.9607 | 0.7317 | | 0.0006 | 14.0 | 84 | 0.8998 | 0.7561 | | 0.0004 | 15.0 | 90 | 0.8548 | 0.7561 | | 0.0004 | 16.0 | 96 | 0.8258 | 0.7561 | | 0.0004 | 17.0 | 102 | 0.8107 | 0.7805 | | 0.0004 | 18.0 | 108 | 0.7999 | 0.8049 | | 0.0003 | 19.0 | 114 | 0.7972 | 0.8049 | | 0.0003 | 20.0 | 120 | 0.7983 | 0.8049 | | 0.0003 | 21.0 | 126 | 0.8011 | 0.8049 | | 0.0003 | 22.0 | 132 | 0.8040 | 0.8049 | 
| 0.0003 | 23.0 | 138 | 0.8052 | 0.8049 | | 0.0003 | 24.0 | 144 | 0.8067 | 0.8049 | | 0.0003 | 25.0 | 150 | 0.8086 | 0.8049 | | 0.0003 | 26.0 | 156 | 0.8104 | 0.8049 | | 0.0003 | 27.0 | 162 | 0.8133 | 0.8049 | | 0.0003 | 28.0 | 168 | 0.8150 | 0.8049 | | 0.0003 | 29.0 | 174 | 0.8155 | 0.8049 | | 0.0002 | 30.0 | 180 | 0.8162 | 0.8049 | | 0.0002 | 31.0 | 186 | 0.8167 | 0.8049 | | 0.0002 | 32.0 | 192 | 0.8175 | 0.8049 | | 0.0002 | 33.0 | 198 | 0.8178 | 0.8049 | | 0.0002 | 34.0 | 204 | 0.8183 | 0.8049 | | 0.0002 | 35.0 | 210 | 0.8179 | 0.8049 | | 0.0002 | 36.0 | 216 | 0.8182 | 0.8049 | | 0.0002 | 37.0 | 222 | 0.8182 | 0.8049 | | 0.0002 | 38.0 | 228 | 0.8181 | 0.8049 | | 0.0002 | 39.0 | 234 | 0.8183 | 0.8049 | | 0.0002 | 40.0 | 240 | 0.8184 | 0.8049 | | 0.0002 | 41.0 | 246 | 0.8184 | 0.8049 | | 0.0002 | 42.0 | 252 | 0.8185 | 0.8049 | | 0.0002 | 43.0 | 258 | 0.8185 | 0.8049 | | 0.0002 | 44.0 | 264 | 0.8185 | 0.8049 | | 0.0002 | 45.0 | 270 | 0.8185 | 0.8049 | | 0.0002 | 46.0 | 276 | 0.8185 | 0.8049 | | 0.0002 | 47.0 | 282 | 0.8185 | 0.8049 | | 0.0002 | 48.0 | 288 | 0.8185 | 0.8049 | | 0.0002 | 49.0 | 294 | 0.8185 | 0.8049 | | 0.0002 | 50.0 | 300 | 0.8185 | 0.8049 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_00001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1270 - Accuracy: 0.6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3199 | 0.3333 | | 1.3414 | 2.0 | 12 | 1.2923 | 0.4667 | | 1.3414 | 3.0 | 18 | 1.2886 | 0.4667 | | 1.0791 | 4.0 | 24 | 1.2761 | 0.4667 | | 0.9244 | 5.0 | 30 | 1.2453 | 0.4889 | | 0.9244 | 6.0 | 36 | 1.2252 | 0.4667 | | 0.7694 | 7.0 | 42 | 1.2158 | 0.5111 | | 0.7694 | 8.0 | 48 | 1.2163 | 0.4667 | | 0.6552 | 9.0 | 54 | 1.2081 | 0.5111 | | 0.5314 | 10.0 | 60 | 1.1883 | 0.5556 | | 0.5314 | 11.0 | 66 | 1.1802 | 0.5556 | | 0.4407 | 12.0 | 72 | 1.1737 | 0.5778 | | 0.4407 | 13.0 | 78 | 1.1623 | 0.6222 | | 0.3864 | 14.0 | 84 | 1.1625 | 0.6222 | | 0.3093 | 15.0 | 90 | 1.1653 | 0.6222 | | 0.3093 | 16.0 | 96 | 1.1658 | 0.6222 | | 0.2597 | 17.0 | 102 | 1.1519 | 0.6444 | | 0.2597 | 18.0 | 108 | 1.1466 | 0.6222 | | 0.2099 | 19.0 | 114 | 1.1591 | 0.6 | | 0.1766 | 20.0 | 120 | 1.1509 | 0.5778 | | 0.1766 | 21.0 | 126 | 1.1488 | 0.5778 | | 0.1537 | 22.0 | 132 | 1.1482 | 0.5778 | | 
0.1537 | 23.0 | 138 | 1.1427 | 0.6222 | | 0.1244 | 24.0 | 144 | 1.1370 | 0.6 | | 0.103 | 25.0 | 150 | 1.1285 | 0.6 | | 0.103 | 26.0 | 156 | 1.1323 | 0.6 | | 0.089 | 27.0 | 162 | 1.1268 | 0.6 | | 0.089 | 28.0 | 168 | 1.1377 | 0.6 | | 0.0777 | 29.0 | 174 | 1.1346 | 0.6 | | 0.068 | 30.0 | 180 | 1.1274 | 0.6 | | 0.068 | 31.0 | 186 | 1.1199 | 0.6 | | 0.0597 | 32.0 | 192 | 1.1245 | 0.6 | | 0.0597 | 33.0 | 198 | 1.1296 | 0.6 | | 0.0547 | 34.0 | 204 | 1.1270 | 0.6 | | 0.0493 | 35.0 | 210 | 1.1241 | 0.6 | | 0.0493 | 36.0 | 216 | 1.1250 | 0.6 | | 0.0441 | 37.0 | 222 | 1.1253 | 0.6 | | 0.0441 | 38.0 | 228 | 1.1296 | 0.6 | | 0.0409 | 39.0 | 234 | 1.1287 | 0.6 | | 0.0405 | 40.0 | 240 | 1.1275 | 0.6 | | 0.0405 | 41.0 | 246 | 1.1272 | 0.6 | | 0.0391 | 42.0 | 252 | 1.1270 | 0.6 | | 0.0391 | 43.0 | 258 | 1.1270 | 0.6 | | 0.0395 | 44.0 | 264 | 1.1270 | 0.6 | | 0.0377 | 45.0 | 270 | 1.1270 | 0.6 | | 0.0377 | 46.0 | 276 | 1.1270 | 0.6 | | 0.0388 | 47.0 | 282 | 1.1270 | 0.6 | | 0.0388 | 48.0 | 288 | 1.1270 | 0.6 | | 0.0366 | 49.0 | 294 | 1.1270 | 0.6 | | 0.0396 | 50.0 | 300 | 1.1270 | 0.6 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
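Each of these deit-small cards trains with a linear scheduler, a 0.1 warmup ratio, and 300 total optimization steps (50 epochs × 6 steps). A minimal sketch of what that schedule looks like — ramp to the peak learning rate over the first 10% of steps, then decay linearly to zero. The function name and structure are illustrative, not taken from the training code:

```python
def linear_schedule_with_warmup(step, base_lr=1e-5, total_steps=300, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to zero (illustrative sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 30 steps for these runs
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up from 0
    # decay from base_lr at the end of warmup down to 0 at total_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# Peak learning rate is reached at step 30 (end of epoch 5),
# and the rate is back to zero at step 300 (end of epoch 50).
peak = linear_schedule_with_warmup(30)
halfway = linear_schedule_with_warmup(165)  # midpoint of the decay phase
```

By the final epochs the rate is tiny, which is consistent with the validation loss freezing at its last value in the tables above.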
hkivancoral/hushem_1x_deit_small_adamax_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3101 - Accuracy: 0.6222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3638 | 0.3556 | | 1.331 | 2.0 | 12 | 1.3133 | 0.4222 | | 1.331 | 3.0 | 18 | 1.2851 | 0.4222 | | 1.0997 | 4.0 | 24 | 1.2715 | 0.4 | | 0.9418 | 5.0 | 30 | 1.2498 | 0.4444 | | 0.9418 | 6.0 | 36 | 1.2371 | 0.5111 | | 0.7701 | 7.0 | 42 | 1.2279 | 0.5111 | | 0.7701 | 8.0 | 48 | 1.2223 | 0.5556 | | 0.6624 | 9.0 | 54 | 1.2136 | 0.5333 | | 0.5468 | 10.0 | 60 | 1.2047 | 0.5111 | | 0.5468 | 11.0 | 66 | 1.2129 | 0.5333 | | 0.4638 | 12.0 | 72 | 1.2131 | 0.5556 | | 0.4638 | 13.0 | 78 | 1.2055 | 0.5778 | | 0.375 | 14.0 | 84 | 1.2059 | 0.5778 | | 0.3096 | 15.0 | 90 | 1.2025 | 0.5778 | | 0.3096 | 16.0 | 96 | 1.2062 | 0.5778 | | 0.2535 | 17.0 | 102 | 1.2103 | 0.6 | | 0.2535 | 18.0 | 108 | 1.2313 | 0.5778 | | 0.2168 | 19.0 | 114 | 1.2293 | 0.5778 | | 0.1735 | 20.0 | 120 | 1.2169 | 0.6222 | | 0.1735 | 21.0 | 126 | 1.2306 | 0.6222 | | 0.1492 | 22.0 | 132 | 1.2370 | 0.6222 | | 0.1492 
| 23.0 | 138 | 1.2467 | 0.6222 | | 0.1264 | 24.0 | 144 | 1.2411 | 0.6222 | | 0.1012 | 25.0 | 150 | 1.2438 | 0.6222 | | 0.1012 | 26.0 | 156 | 1.2523 | 0.6222 | | 0.0887 | 27.0 | 162 | 1.2537 | 0.6 | | 0.0887 | 28.0 | 168 | 1.2661 | 0.6222 | | 0.0734 | 29.0 | 174 | 1.2715 | 0.6222 | | 0.0647 | 30.0 | 180 | 1.2745 | 0.6 | | 0.0647 | 31.0 | 186 | 1.2817 | 0.6222 | | 0.0577 | 32.0 | 192 | 1.2861 | 0.6222 | | 0.0577 | 33.0 | 198 | 1.2908 | 0.6222 | | 0.0525 | 34.0 | 204 | 1.2935 | 0.6222 | | 0.048 | 35.0 | 210 | 1.2969 | 0.6222 | | 0.048 | 36.0 | 216 | 1.2990 | 0.6 | | 0.0443 | 37.0 | 222 | 1.3015 | 0.6 | | 0.0443 | 38.0 | 228 | 1.3052 | 0.6222 | | 0.0404 | 39.0 | 234 | 1.3082 | 0.6222 | | 0.0394 | 40.0 | 240 | 1.3089 | 0.6222 | | 0.0394 | 41.0 | 246 | 1.3101 | 0.6222 | | 0.0387 | 42.0 | 252 | 1.3101 | 0.6222 | | 0.0387 | 43.0 | 258 | 1.3101 | 0.6222 | | 0.0369 | 44.0 | 264 | 1.3101 | 0.6222 | | 0.0381 | 45.0 | 270 | 1.3101 | 0.6222 | | 0.0381 | 46.0 | 276 | 1.3101 | 0.6222 | | 0.0382 | 47.0 | 282 | 1.3101 | 0.6222 | | 0.0382 | 48.0 | 288 | 1.3101 | 0.6222 | | 0.037 | 49.0 | 294 | 1.3101 | 0.6222 | | 0.0386 | 50.0 | 300 | 1.3101 | 0.6222 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
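In every fold card the Step column advances by six per epoch, which pins down the trainer geometry: with `train_batch_size` 32, six steps per epoch means at most 192 training images per fold, and 50 epochs give the 300 total steps the scheduler runs over. A quick sanity check of that arithmetic (pure bookkeeping derived from the tables, not code from the training runs):

```python
steps_per_epoch = 6   # from the Step column: 6, 12, 18, ...
batch_size = 32       # train_batch_size in the hyperparameters
epochs = 50

total_steps = steps_per_epoch * epochs
max_train_images = steps_per_epoch * batch_size  # last batch may be partial

print(total_steps)       # 300, matching the final row of each table
print(max_train_images)  # 192, an upper bound on the fold's training set
```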
hkivancoral/hushem_1x_deit_small_adamax_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_00001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6553 - Accuracy: 0.6512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3641 | 0.3953 | | 1.3358 | 2.0 | 12 | 1.2934 | 0.4186 | | 1.3358 | 3.0 | 18 | 1.2307 | 0.4419 | | 1.1053 | 4.0 | 24 | 1.1728 | 0.5814 | | 0.9503 | 5.0 | 30 | 1.1200 | 0.5814 | | 0.9503 | 6.0 | 36 | 1.0691 | 0.5814 | | 0.8249 | 7.0 | 42 | 1.0268 | 0.6047 | | 0.8249 | 8.0 | 48 | 1.0002 | 0.6279 | | 0.6991 | 9.0 | 54 | 0.9588 | 0.6279 | | 0.62 | 10.0 | 60 | 0.9254 | 0.6279 | | 0.62 | 11.0 | 66 | 0.8988 | 0.6744 | | 0.5003 | 12.0 | 72 | 0.8718 | 0.6279 | | 0.5003 | 13.0 | 78 | 0.8636 | 0.6279 | | 0.4251 | 14.0 | 84 | 0.8486 | 0.6279 | | 0.3584 | 15.0 | 90 | 0.8228 | 0.6279 | | 0.3584 | 16.0 | 96 | 0.8029 | 0.6512 | | 0.2955 | 17.0 | 102 | 0.7980 | 0.6279 | | 0.2955 | 18.0 | 108 | 0.7871 | 0.6047 | | 0.2345 | 19.0 | 114 | 0.7646 | 0.6279 | | 0.2022 | 20.0 | 120 | 0.7571 | 0.6279 | | 0.2022 | 21.0 | 126 | 0.7433 | 0.6512 | | 0.1667 | 22.0 | 132 | 0.7314 | 0.6744 | | 
0.1667 | 23.0 | 138 | 0.7263 | 0.6279 | | 0.1461 | 24.0 | 144 | 0.7221 | 0.6744 | | 0.1251 | 25.0 | 150 | 0.7120 | 0.6512 | | 0.1251 | 26.0 | 156 | 0.6954 | 0.6512 | | 0.1033 | 27.0 | 162 | 0.6904 | 0.6512 | | 0.1033 | 28.0 | 168 | 0.6870 | 0.6744 | | 0.0941 | 29.0 | 174 | 0.6821 | 0.6744 | | 0.0792 | 30.0 | 180 | 0.6785 | 0.6744 | | 0.0792 | 31.0 | 186 | 0.6761 | 0.6744 | | 0.0681 | 32.0 | 192 | 0.6723 | 0.6744 | | 0.0681 | 33.0 | 198 | 0.6679 | 0.6744 | | 0.0621 | 34.0 | 204 | 0.6648 | 0.6512 | | 0.0554 | 35.0 | 210 | 0.6628 | 0.6512 | | 0.0554 | 36.0 | 216 | 0.6584 | 0.6744 | | 0.0533 | 37.0 | 222 | 0.6569 | 0.6744 | | 0.0533 | 38.0 | 228 | 0.6569 | 0.6512 | | 0.0487 | 39.0 | 234 | 0.6565 | 0.6512 | | 0.0478 | 40.0 | 240 | 0.6552 | 0.6512 | | 0.0478 | 41.0 | 246 | 0.6553 | 0.6512 | | 0.0459 | 42.0 | 252 | 0.6553 | 0.6512 | | 0.0459 | 43.0 | 258 | 0.6553 | 0.6512 | | 0.0488 | 44.0 | 264 | 0.6553 | 0.6512 | | 0.0454 | 45.0 | 270 | 0.6553 | 0.6512 | | 0.0454 | 46.0 | 276 | 0.6553 | 0.6512 | | 0.0445 | 47.0 | 282 | 0.6553 | 0.6512 | | 0.0445 | 48.0 | 288 | 0.6553 | 0.6512 | | 0.0487 | 49.0 | 294 | 0.6553 | 0.6512 | | 0.0463 | 50.0 | 300 | 0.6553 | 0.6512 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_00001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7508 - Accuracy: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3350 | 0.3571 | | 1.346 | 2.0 | 12 | 1.2810 | 0.3810 | | 1.346 | 3.0 | 18 | 1.2346 | 0.4048 | | 1.107 | 4.0 | 24 | 1.1917 | 0.4048 | | 0.9637 | 5.0 | 30 | 1.1623 | 0.3571 | | 0.9637 | 6.0 | 36 | 1.1357 | 0.4048 | | 0.8241 | 7.0 | 42 | 1.1137 | 0.4286 | | 0.8241 | 8.0 | 48 | 1.0906 | 0.4286 | | 0.6746 | 9.0 | 54 | 1.0721 | 0.4286 | | 0.594 | 10.0 | 60 | 1.0502 | 0.4286 | | 0.594 | 11.0 | 66 | 1.0303 | 0.4286 | | 0.4897 | 12.0 | 72 | 1.0072 | 0.4524 | | 0.4897 | 13.0 | 78 | 0.9837 | 0.4762 | | 0.4223 | 14.0 | 84 | 0.9800 | 0.4762 | | 0.3482 | 15.0 | 90 | 0.9580 | 0.5 | | 0.3482 | 16.0 | 96 | 0.9315 | 0.5238 | | 0.2808 | 17.0 | 102 | 0.9182 | 0.5238 | | 0.2808 | 18.0 | 108 | 0.9032 | 0.5714 | | 0.2441 | 19.0 | 114 | 0.8918 | 0.6190 | | 0.2119 | 20.0 | 120 | 0.8729 | 0.6190 | | 0.2119 | 21.0 | 126 | 0.8574 | 0.6190 | | 0.1699 | 22.0 | 132 | 0.8454 | 0.6190 | | 0.1699 
| 23.0 | 138 | 0.8308 | 0.6190 | | 0.1443 | 24.0 | 144 | 0.8166 | 0.6190 | | 0.1255 | 25.0 | 150 | 0.8066 | 0.6905 | | 0.1255 | 26.0 | 156 | 0.8088 | 0.6905 | | 0.1078 | 27.0 | 162 | 0.7901 | 0.6905 | | 0.1078 | 28.0 | 168 | 0.7892 | 0.6667 | | 0.094 | 29.0 | 174 | 0.7900 | 0.6667 | | 0.0785 | 30.0 | 180 | 0.7761 | 0.6667 | | 0.0785 | 31.0 | 186 | 0.7673 | 0.6667 | | 0.071 | 32.0 | 192 | 0.7632 | 0.6667 | | 0.071 | 33.0 | 198 | 0.7572 | 0.6667 | | 0.066 | 34.0 | 204 | 0.7549 | 0.6667 | | 0.0595 | 35.0 | 210 | 0.7582 | 0.6667 | | 0.0595 | 36.0 | 216 | 0.7573 | 0.6667 | | 0.0553 | 37.0 | 222 | 0.7569 | 0.6667 | | 0.0553 | 38.0 | 228 | 0.7526 | 0.6667 | | 0.0524 | 39.0 | 234 | 0.7502 | 0.6667 | | 0.0501 | 40.0 | 240 | 0.7502 | 0.6667 | | 0.0501 | 41.0 | 246 | 0.7508 | 0.6667 | | 0.0507 | 42.0 | 252 | 0.7508 | 0.6667 | | 0.0507 | 43.0 | 258 | 0.7508 | 0.6667 | | 0.0466 | 44.0 | 264 | 0.7508 | 0.6667 | | 0.0501 | 45.0 | 270 | 0.7508 | 0.6667 | | 0.0501 | 46.0 | 276 | 0.7508 | 0.6667 | | 0.0512 | 47.0 | 282 | 0.7508 | 0.6667 | | 0.0512 | 48.0 | 288 | 0.7508 | 0.6667 | | 0.0478 | 49.0 | 294 | 0.7508 | 0.6667 | | 0.0501 | 50.0 | 300 | 0.7508 | 0.6667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_adamax_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_adamax_00001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7730 - Accuracy: 0.6585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3080 | 0.3171 | | 1.348 | 2.0 | 12 | 1.2421 | 0.3659 | | 1.348 | 3.0 | 18 | 1.1840 | 0.4634 | | 1.1221 | 4.0 | 24 | 1.1443 | 0.4634 | | 0.9962 | 5.0 | 30 | 1.1209 | 0.4634 | | 0.9962 | 6.0 | 36 | 1.0884 | 0.5366 | | 0.8532 | 7.0 | 42 | 1.0512 | 0.5122 | | 0.8532 | 8.0 | 48 | 1.0147 | 0.5366 | | 0.73 | 9.0 | 54 | 0.9886 | 0.5366 | | 0.61 | 10.0 | 60 | 0.9683 | 0.5610 | | 0.61 | 11.0 | 66 | 0.9452 | 0.5854 | | 0.5241 | 12.0 | 72 | 0.9201 | 0.6341 | | 0.5241 | 13.0 | 78 | 0.9013 | 0.6341 | | 0.4293 | 14.0 | 84 | 0.8851 | 0.6341 | | 0.3674 | 15.0 | 90 | 0.8707 | 0.6341 | | 0.3674 | 16.0 | 96 | 0.8542 | 0.6341 | | 0.304 | 17.0 | 102 | 0.8474 | 0.6341 | | 0.304 | 18.0 | 108 | 0.8370 | 0.6341 | | 0.2449 | 19.0 | 114 | 0.8233 | 0.6341 | | 0.2119 | 20.0 | 120 | 0.8193 | 0.6341 | | 0.2119 | 21.0 | 126 | 0.8116 | 0.6341 | | 0.1788 | 22.0 | 132 | 0.8051 | 0.6341 | | 0.1788 | 
23.0 | 138 | 0.7954 | 0.6341 | | 0.1445 | 24.0 | 144 | 0.7897 | 0.6341 | | 0.1262 | 25.0 | 150 | 0.7881 | 0.6829 | | 0.1262 | 26.0 | 156 | 0.7818 | 0.6585 | | 0.1066 | 27.0 | 162 | 0.7872 | 0.6829 | | 0.1066 | 28.0 | 168 | 0.7762 | 0.6585 | | 0.0891 | 29.0 | 174 | 0.7687 | 0.6585 | | 0.0806 | 30.0 | 180 | 0.7658 | 0.6829 | | 0.0806 | 31.0 | 186 | 0.7688 | 0.6829 | | 0.0692 | 32.0 | 192 | 0.7732 | 0.6829 | | 0.0692 | 33.0 | 198 | 0.7763 | 0.6585 | | 0.0592 | 34.0 | 204 | 0.7749 | 0.6585 | | 0.0587 | 35.0 | 210 | 0.7694 | 0.6829 | | 0.0587 | 36.0 | 216 | 0.7701 | 0.6829 | | 0.0549 | 37.0 | 222 | 0.7733 | 0.6585 | | 0.0549 | 38.0 | 228 | 0.7741 | 0.6585 | | 0.0463 | 39.0 | 234 | 0.7744 | 0.6585 | | 0.0481 | 40.0 | 240 | 0.7732 | 0.6585 | | 0.0481 | 41.0 | 246 | 0.7732 | 0.6585 | | 0.0468 | 42.0 | 252 | 0.7730 | 0.6585 | | 0.0468 | 43.0 | 258 | 0.7730 | 0.6585 | | 0.0455 | 44.0 | 264 | 0.7730 | 0.6585 | | 0.0473 | 45.0 | 270 | 0.7730 | 0.6585 | | 0.0473 | 46.0 | 276 | 0.7730 | 0.6585 | | 0.0444 | 47.0 | 282 | 0.7730 | 0.6585 | | 0.0444 | 48.0 | 288 | 0.7730 | 0.6585 | | 0.048 | 49.0 | 294 | 0.7730 | 0.6585 | | 0.0476 | 50.0 | 300 | 0.7730 | 0.6585 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
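Taken together, the five adamax_00001 folds give a cross-validated picture of the run. A small sketch that aggregates the reported evaluation accuracies (the values are copied from the cards above; the aggregation itself is ours, not part of the original experiments):

```python
# Evaluation accuracy per fold, from the five adamax_00001 model cards
fold_accuracy = {
    "fold1": 0.6,
    "fold2": 0.6222,
    "fold3": 0.6512,
    "fold4": 0.6667,
    "fold5": 0.6585,
}

mean_acc = sum(fold_accuracy.values()) / len(fold_accuracy)
spread = max(fold_accuracy.values()) - min(fold_accuracy.values())

print(f"mean accuracy over 5 folds: {mean_acc:.4f}")  # ~0.6397
print(f"fold-to-fold spread:        {spread:.4f}")    # ~0.0667
```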
hkivancoral/hushem_1x_deit_small_sgd_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2536 - Accuracy: 0.4667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3733 | 0.3778 | | 1.5546 | 2.0 | 12 | 1.3601 | 0.4 | | 1.5546 | 3.0 | 18 | 1.3490 | 0.4222 | | 1.5316 | 4.0 | 24 | 1.3414 | 0.4222 | | 1.4864 | 5.0 | 30 | 1.3332 | 0.4222 | | 1.4864 | 6.0 | 36 | 1.3258 | 0.4222 | | 1.4723 | 7.0 | 42 | 1.3198 | 0.4222 | | 1.4723 | 8.0 | 48 | 1.3148 | 0.4 | | 1.4485 | 9.0 | 54 | 1.3096 | 0.4 | | 1.4339 | 10.0 | 60 | 1.3042 | 0.4 | | 1.4339 | 11.0 | 66 | 1.3005 | 0.4222 | | 1.4182 | 12.0 | 72 | 1.2965 | 0.4222 | | 1.4182 | 13.0 | 78 | 1.2931 | 0.4 | | 1.3944 | 14.0 | 84 | 1.2902 | 0.4222 | | 1.3955 | 15.0 | 90 | 1.2868 | 0.4444 | | 1.3955 | 16.0 | 96 | 1.2841 | 0.4444 | | 1.3685 | 17.0 | 102 | 1.2813 | 0.4444 | | 1.3685 | 18.0 | 108 | 1.2791 | 0.4444 | | 1.351 | 19.0 | 114 | 1.2769 | 0.4444 | | 1.3583 | 20.0 | 120 | 1.2750 | 0.4667 | | 1.3583 | 21.0 | 126 | 1.2734 | 0.4444 | | 1.3432 | 22.0 | 132 | 1.2719 | 0.4444 | | 1.3432 | 23.0 | 138 
| 1.2696 | 0.4444 | | 1.3309 | 24.0 | 144 | 1.2677 | 0.4444 | | 1.3166 | 25.0 | 150 | 1.2667 | 0.4444 | | 1.3166 | 26.0 | 156 | 1.2651 | 0.4667 | | 1.3168 | 27.0 | 162 | 1.2639 | 0.4667 | | 1.3168 | 28.0 | 168 | 1.2624 | 0.4667 | | 1.3102 | 29.0 | 174 | 1.2615 | 0.4667 | | 1.3034 | 30.0 | 180 | 1.2602 | 0.4667 | | 1.3034 | 31.0 | 186 | 1.2590 | 0.4667 | | 1.3106 | 32.0 | 192 | 1.2580 | 0.4667 | | 1.3106 | 33.0 | 198 | 1.2570 | 0.4667 | | 1.2903 | 34.0 | 204 | 1.2562 | 0.4667 | | 1.2915 | 35.0 | 210 | 1.2554 | 0.4667 | | 1.2915 | 36.0 | 216 | 1.2549 | 0.4667 | | 1.2913 | 37.0 | 222 | 1.2546 | 0.4667 | | 1.2913 | 38.0 | 228 | 1.2542 | 0.4667 | | 1.2715 | 39.0 | 234 | 1.2539 | 0.4667 | | 1.2929 | 40.0 | 240 | 1.2538 | 0.4667 | | 1.2929 | 41.0 | 246 | 1.2537 | 0.4667 | | 1.2815 | 42.0 | 252 | 1.2536 | 0.4667 | | 1.2815 | 43.0 | 258 | 1.2536 | 0.4667 | | 1.2834 | 44.0 | 264 | 1.2536 | 0.4667 | | 1.2687 | 45.0 | 270 | 1.2536 | 0.4667 | | 1.2687 | 46.0 | 276 | 1.2536 | 0.4667 | | 1.2845 | 47.0 | 282 | 1.2536 | 0.4667 | | 1.2845 | 48.0 | 288 | 1.2536 | 0.4667 | | 1.2639 | 49.0 | 294 | 1.2536 | 0.4667 | | 1.2911 | 50.0 | 300 | 1.2536 | 0.4667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3470 - Accuracy: 0.3556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4885 | 0.2 | | 1.5055 | 2.0 | 12 | 1.4667 | 0.2667 | | 1.5055 | 3.0 | 18 | 1.4496 | 0.2444 | | 1.4394 | 4.0 | 24 | 1.4374 | 0.2444 | | 1.4154 | 5.0 | 30 | 1.4269 | 0.2444 | | 1.4154 | 6.0 | 36 | 1.4185 | 0.2667 | | 1.3643 | 7.0 | 42 | 1.4107 | 0.3333 | | 1.3643 | 8.0 | 48 | 1.4053 | 0.3556 | | 1.3559 | 9.0 | 54 | 1.4001 | 0.3556 | | 1.3227 | 10.0 | 60 | 1.3952 | 0.3556 | | 1.3227 | 11.0 | 66 | 1.3910 | 0.3556 | | 1.3197 | 12.0 | 72 | 1.3872 | 0.3556 | | 1.3197 | 13.0 | 78 | 1.3837 | 0.3556 | | 1.2846 | 14.0 | 84 | 1.3804 | 0.3556 | | 1.2901 | 15.0 | 90 | 1.3773 | 0.3556 | | 1.2901 | 16.0 | 96 | 1.3743 | 0.3333 | | 1.2643 | 17.0 | 102 | 1.3716 | 0.3333 | | 1.2643 | 18.0 | 108 | 1.3691 | 0.3333 | | 1.2844 | 19.0 | 114 | 1.3667 | 0.3333 | | 1.2293 | 20.0 | 120 | 1.3643 | 0.3333 | | 1.2293 | 21.0 | 126 | 1.3623 | 0.3333 | | 1.2404 | 22.0 | 132 | 1.3607 | 0.3333 | | 1.2404 
| 23.0 | 138 | 1.3587 | 0.3333 | | 1.2359 | 24.0 | 144 | 1.3573 | 0.3333 | | 1.2062 | 25.0 | 150 | 1.3561 | 0.3333 | | 1.2062 | 26.0 | 156 | 1.3548 | 0.3333 | | 1.2199 | 27.0 | 162 | 1.3536 | 0.3333 | | 1.2199 | 28.0 | 168 | 1.3527 | 0.3333 | | 1.2151 | 29.0 | 174 | 1.3520 | 0.3556 | | 1.2005 | 30.0 | 180 | 1.3511 | 0.3556 | | 1.2005 | 31.0 | 186 | 1.3504 | 0.3556 | | 1.1928 | 32.0 | 192 | 1.3498 | 0.3556 | | 1.1928 | 33.0 | 198 | 1.3492 | 0.3556 | | 1.1891 | 34.0 | 204 | 1.3487 | 0.3556 | | 1.1974 | 35.0 | 210 | 1.3482 | 0.3556 | | 1.1974 | 36.0 | 216 | 1.3478 | 0.3556 | | 1.1657 | 37.0 | 222 | 1.3476 | 0.3556 | | 1.1657 | 38.0 | 228 | 1.3474 | 0.3556 | | 1.1722 | 39.0 | 234 | 1.3472 | 0.3556 | | 1.2031 | 40.0 | 240 | 1.3471 | 0.3556 | | 1.2031 | 41.0 | 246 | 1.3470 | 0.3556 | | 1.1899 | 42.0 | 252 | 1.3470 | 0.3556 | | 1.1899 | 43.0 | 258 | 1.3470 | 0.3556 | | 1.1761 | 44.0 | 264 | 1.3470 | 0.3556 | | 1.1715 | 45.0 | 270 | 1.3470 | 0.3556 | | 1.1715 | 46.0 | 276 | 1.3470 | 0.3556 | | 1.1816 | 47.0 | 282 | 1.3470 | 0.3556 | | 1.1816 | 48.0 | 288 | 1.3470 | 0.3556 | | 1.1504 | 49.0 | 294 | 1.3470 | 0.3556 | | 1.1896 | 50.0 | 300 | 1.3470 | 0.3556 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3255 - Accuracy: 0.2558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5642 | 0.2326 | | 1.4806 | 2.0 | 12 | 1.5332 | 0.3256 | | 1.4806 | 3.0 | 18 | 1.5110 | 0.3256 | | 1.4127 | 4.0 | 24 | 1.4910 | 0.3256 | | 1.3859 | 5.0 | 30 | 1.4734 | 0.3256 | | 1.3859 | 6.0 | 36 | 1.4581 | 0.3256 | | 1.372 | 7.0 | 42 | 1.4448 | 0.3256 | | 1.372 | 8.0 | 48 | 1.4360 | 0.3256 | | 1.3407 | 9.0 | 54 | 1.4268 | 0.3256 | | 1.3476 | 10.0 | 60 | 1.4184 | 0.3256 | | 1.3476 | 11.0 | 66 | 1.4115 | 0.3256 | | 1.3176 | 12.0 | 72 | 1.4055 | 0.3488 | | 1.3176 | 13.0 | 78 | 1.3989 | 0.3488 | | 1.3009 | 14.0 | 84 | 1.3926 | 0.3256 | | 1.3032 | 15.0 | 90 | 1.3870 | 0.3256 | | 1.3032 | 16.0 | 96 | 1.3815 | 0.3256 | | 1.2893 | 17.0 | 102 | 1.3768 | 0.3256 | | 1.2893 | 18.0 | 108 | 1.3723 | 0.3023 | | 1.252 | 19.0 | 114 | 1.3680 | 0.3023 | | 1.2643 | 20.0 | 120 | 1.3638 | 0.3023 | | 1.2643 | 21.0 | 126 | 1.3601 | 0.2791 | | 1.2642 | 22.0 | 132 | 1.3567 | 0.2791 | | 1.2642 
| 23.0 | 138 | 1.3535 | 0.2791 | | 1.2369 | 24.0 | 144 | 1.3502 | 0.2791 | | 1.2315 | 25.0 | 150 | 1.3476 | 0.2791 | | 1.2315 | 26.0 | 156 | 1.3450 | 0.2791 | | 1.2236 | 27.0 | 162 | 1.3424 | 0.2558 | | 1.2236 | 28.0 | 168 | 1.3403 | 0.2558 | | 1.2327 | 29.0 | 174 | 1.3382 | 0.2558 | | 1.2254 | 30.0 | 180 | 1.3363 | 0.2558 | | 1.2254 | 31.0 | 186 | 1.3347 | 0.2558 | | 1.2165 | 32.0 | 192 | 1.3331 | 0.2558 | | 1.2165 | 33.0 | 198 | 1.3315 | 0.2558 | | 1.2003 | 34.0 | 204 | 1.3303 | 0.2558 | | 1.2034 | 35.0 | 210 | 1.3292 | 0.2558 | | 1.2034 | 36.0 | 216 | 1.3282 | 0.2558 | | 1.2052 | 37.0 | 222 | 1.3273 | 0.2558 | | 1.2052 | 38.0 | 228 | 1.3266 | 0.2558 | | 1.2216 | 39.0 | 234 | 1.3261 | 0.2558 | | 1.2003 | 40.0 | 240 | 1.3258 | 0.2558 | | 1.2003 | 41.0 | 246 | 1.3256 | 0.2558 | | 1.1856 | 42.0 | 252 | 1.3255 | 0.2558 | | 1.1856 | 43.0 | 258 | 1.3255 | 0.2558 | | 1.2091 | 44.0 | 264 | 1.3255 | 0.2558 | | 1.1987 | 45.0 | 270 | 1.3255 | 0.2558 | | 1.1987 | 46.0 | 276 | 1.3255 | 0.2558 | | 1.1885 | 47.0 | 282 | 1.3255 | 0.2558 | | 1.1885 | 48.0 | 288 | 1.3255 | 0.2558 | | 1.2076 | 49.0 | 294 | 1.3255 | 0.2558 | | 1.2139 | 50.0 | 300 | 1.3255 | 0.2558 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3002 - Accuracy: 0.3333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4504 | 0.2857 | | 1.4996 | 2.0 | 12 | 1.4256 | 0.2619 | | 1.4996 | 3.0 | 18 | 1.4065 | 0.3095 | | 1.4661 | 4.0 | 24 | 1.3909 | 0.3333 | | 1.4137 | 5.0 | 30 | 1.3815 | 0.3333 | | 1.4137 | 6.0 | 36 | 1.3736 | 0.3810 | | 1.3923 | 7.0 | 42 | 1.3662 | 0.3571 | | 1.3923 | 8.0 | 48 | 1.3602 | 0.3095 | | 1.3511 | 9.0 | 54 | 1.3552 | 0.3333 | | 1.3471 | 10.0 | 60 | 1.3505 | 0.3333 | | 1.3471 | 11.0 | 66 | 1.3464 | 0.3333 | | 1.3212 | 12.0 | 72 | 1.3425 | 0.3333 | | 1.3212 | 13.0 | 78 | 1.3391 | 0.3333 | | 1.3151 | 14.0 | 84 | 1.3358 | 0.3333 | | 1.2949 | 15.0 | 90 | 1.3328 | 0.3333 | | 1.2949 | 16.0 | 96 | 1.3296 | 0.3333 | | 1.282 | 17.0 | 102 | 1.3270 | 0.3333 | | 1.282 | 18.0 | 108 | 1.3243 | 0.3333 | | 1.2637 | 19.0 | 114 | 1.3223 | 0.3333 | | 1.2828 | 20.0 | 120 | 1.3203 | 0.3333 | | 1.2828 | 21.0 | 126 | 1.3182 | 0.3333 | | 1.2384 | 22.0 | 132 | 1.3165 | 0.3333 | | 
1.2384 | 23.0 | 138 | 1.3149 | 0.3333 | | 1.2419 | 24.0 | 144 | 1.3133 | 0.3333 | | 1.2404 | 25.0 | 150 | 1.3117 | 0.3571 | | 1.2404 | 26.0 | 156 | 1.3102 | 0.3571 | | 1.2294 | 27.0 | 162 | 1.3091 | 0.3571 | | 1.2294 | 28.0 | 168 | 1.3080 | 0.3571 | | 1.2327 | 29.0 | 174 | 1.3070 | 0.3571 | | 1.2115 | 30.0 | 180 | 1.3061 | 0.3571 | | 1.2115 | 31.0 | 186 | 1.3052 | 0.3333 | | 1.2091 | 32.0 | 192 | 1.3043 | 0.3333 | | 1.2091 | 33.0 | 198 | 1.3036 | 0.3333 | | 1.2111 | 34.0 | 204 | 1.3028 | 0.3333 | | 1.2001 | 35.0 | 210 | 1.3022 | 0.3333 | | 1.2001 | 36.0 | 216 | 1.3016 | 0.3333 | | 1.2048 | 37.0 | 222 | 1.3012 | 0.3333 | | 1.2048 | 38.0 | 228 | 1.3009 | 0.3333 | | 1.1981 | 39.0 | 234 | 1.3006 | 0.3333 | | 1.1973 | 40.0 | 240 | 1.3004 | 0.3333 | | 1.1973 | 41.0 | 246 | 1.3003 | 0.3333 | | 1.2009 | 42.0 | 252 | 1.3002 | 0.3333 | | 1.2009 | 43.0 | 258 | 1.3002 | 0.3333 | | 1.1848 | 44.0 | 264 | 1.3002 | 0.3333 | | 1.2 | 45.0 | 270 | 1.3002 | 0.3333 | | 1.2 | 46.0 | 276 | 1.3002 | 0.3333 | | 1.2026 | 47.0 | 282 | 1.3002 | 0.3333 | | 1.2026 | 48.0 | 288 | 1.3002 | 0.3333 | | 1.1883 | 49.0 | 294 | 1.3002 | 0.3333 | | 1.2097 | 50.0 | 300 | 1.3002 | 0.3333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2874 - Accuracy: 0.3171 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4789 | 0.2683 | | 1.5098 | 2.0 | 12 | 1.4475 | 0.2927 | | 1.5098 | 3.0 | 18 | 1.4244 | 0.2683 | | 1.4415 | 4.0 | 24 | 1.4086 | 0.2683 | | 1.4228 | 5.0 | 30 | 1.3943 | 0.2927 | | 1.4228 | 6.0 | 36 | 1.3837 | 0.2683 | | 1.3818 | 7.0 | 42 | 1.3755 | 0.2439 | | 1.3818 | 8.0 | 48 | 1.3687 | 0.2195 | | 1.3662 | 9.0 | 54 | 1.3625 | 0.2439 | | 1.3382 | 10.0 | 60 | 1.3567 | 0.2439 | | 1.3382 | 11.0 | 66 | 1.3518 | 0.2439 | | 1.3324 | 12.0 | 72 | 1.3466 | 0.2439 | | 1.3324 | 13.0 | 78 | 1.3420 | 0.2439 | | 1.3002 | 14.0 | 84 | 1.3382 | 0.2439 | | 1.2845 | 15.0 | 90 | 1.3339 | 0.2683 | | 1.2845 | 16.0 | 96 | 1.3305 | 0.2683 | | 1.2783 | 17.0 | 102 | 1.3271 | 0.2927 | | 1.2783 | 18.0 | 108 | 1.3237 | 0.3171 | | 1.2896 | 19.0 | 114 | 1.3207 | 0.3171 | | 1.2581 | 20.0 | 120 | 1.3176 | 0.3171 | | 1.2581 | 21.0 | 126 | 1.3151 | 0.3415 | | 1.2555 | 22.0 | 132 | 1.3123 | 0.3415 | | 
1.2555 | 23.0 | 138 | 1.3099 | 0.3415 | | 1.2563 | 24.0 | 144 | 1.3076 | 0.3415 | | 1.2461 | 25.0 | 150 | 1.3050 | 0.3415 | | 1.2461 | 26.0 | 156 | 1.3029 | 0.3171 | | 1.2294 | 27.0 | 162 | 1.3009 | 0.3171 | | 1.2294 | 28.0 | 168 | 1.2991 | 0.3171 | | 1.2223 | 29.0 | 174 | 1.2975 | 0.3171 | | 1.2396 | 30.0 | 180 | 1.2961 | 0.3171 | | 1.2396 | 31.0 | 186 | 1.2948 | 0.3171 | | 1.2235 | 32.0 | 192 | 1.2934 | 0.3171 | | 1.2235 | 33.0 | 198 | 1.2923 | 0.3171 | | 1.2018 | 34.0 | 204 | 1.2911 | 0.3171 | | 1.2131 | 35.0 | 210 | 1.2902 | 0.3171 | | 1.2131 | 36.0 | 216 | 1.2895 | 0.3171 | | 1.2105 | 37.0 | 222 | 1.2888 | 0.3171 | | 1.2105 | 38.0 | 228 | 1.2883 | 0.3171 | | 1.1724 | 39.0 | 234 | 1.2879 | 0.3171 | | 1.2168 | 40.0 | 240 | 1.2876 | 0.3171 | | 1.2168 | 41.0 | 246 | 1.2875 | 0.3171 | | 1.1977 | 42.0 | 252 | 1.2874 | 0.3171 | | 1.1977 | 43.0 | 258 | 1.2874 | 0.3171 | | 1.1916 | 44.0 | 264 | 1.2874 | 0.3171 | | 1.21 | 45.0 | 270 | 1.2874 | 0.3171 | | 1.21 | 46.0 | 276 | 1.2874 | 0.3171 | | 1.1885 | 47.0 | 282 | 1.2874 | 0.3171 | | 1.1885 | 48.0 | 288 | 1.2874 | 0.3171 | | 1.2083 | 49.0 | 294 | 1.2874 | 0.3171 | | 1.2106 | 50.0 | 300 | 1.2874 | 0.3171 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
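Run over the same five-fold split, the sgd_001 configuration lands well below the adamax_00001 runs earlier in this section; averaging the reported accuracies makes the gap explicit (accuracies copied from the cards, the comparison is ours):

```python
# hushem_1x_deit_small evaluation accuracies, folds 1-5
adamax_folds = [0.6, 0.6222, 0.6512, 0.6667, 0.6585]    # adamax_00001
sgd_folds = [0.4667, 0.3556, 0.2558, 0.3333, 0.3171]    # sgd_001

def mean(xs):
    return sum(xs) / len(xs)

print(f"adamax_00001 mean accuracy: {mean(adamax_folds):.4f}")  # ~0.6397
print(f"sgd_001 mean accuracy:      {mean(sgd_folds):.4f}")     # ~0.3457
```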
camiloTel0410/bean-classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bean-classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0051 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0592 | 3.85 | 500 | 0.0051 | 1.0 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.13.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
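The `model_labels` list above fixes the class order for the bean classifier. As an illustrative sketch (not part of the generated card), this is how a raw logits vector from such a classifier is typically turned into one of those labels, assuming index order matches the list; the logits values are hypothetical:

```python
import math

# Class order assumed to match the model_labels list above.
LABELS = ["angular_leaf_spot", "bean_rust", "healthy"]

def softmax(logits):
    # Numerically stable softmax over a plain list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    # Return (label, probability) for the highest-scoring class.
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]

label, prob = predict([0.2, 0.1, 3.4])  # hypothetical logits
print(label, round(prob, 3))  # -> healthy 0.928
```

In the `transformers` API the same mapping is normally driven by the model config's `id2label` dict, which is built from exactly this kind of label list.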
hkivancoral/hushem_1x_deit_small_sgd_0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_0001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4520 - Accuracy: 0.2667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5076 | 0.2889 | | 1.5379 | 2.0 | 12 | 1.5042 | 0.2889 | | 1.5379 | 3.0 | 18 | 1.5010 | 0.2889 | | 1.5099 | 4.0 | 24 | 1.4983 | 0.2889 | | 1.521 | 5.0 | 30 | 1.4955 | 0.2889 | | 1.521 | 6.0 | 36 | 1.4929 | 0.2889 | | 1.4972 | 7.0 | 42 | 1.4902 | 0.2889 | | 1.4972 | 8.0 | 48 | 1.4879 | 0.2889 | | 1.5152 | 9.0 | 54 | 1.4855 | 0.2889 | | 1.4839 | 10.0 | 60 | 1.4831 | 0.2889 | | 1.4839 | 11.0 | 66 | 1.4810 | 0.2889 | | 1.478 | 12.0 | 72 | 1.4788 | 0.2889 | | 1.478 | 13.0 | 78 | 1.4768 | 0.2889 | | 1.4972 | 14.0 | 84 | 1.4749 | 0.2889 | | 1.4822 | 15.0 | 90 | 1.4732 | 0.2889 | | 1.4822 | 16.0 | 96 | 1.4714 | 0.2889 | | 1.4784 | 17.0 | 102 | 1.4698 | 0.2889 | | 1.4784 | 18.0 | 108 | 1.4684 | 0.2889 | | 1.4862 | 19.0 | 114 | 1.4671 | 0.2889 | | 1.4536 | 20.0 | 120 | 1.4657 | 0.2889 | | 1.4536 | 21.0 | 126 | 1.4645 | 0.2889 | | 1.4751 | 22.0 | 132 | 1.4634 | 0.2889 | | 
1.4751 | 23.0 | 138 | 1.4621 | 0.2889 | | 1.4645 | 24.0 | 144 | 1.4610 | 0.2889 | | 1.4518 | 25.0 | 150 | 1.4602 | 0.2889 | | 1.4518 | 26.0 | 156 | 1.4592 | 0.2889 | | 1.4648 | 27.0 | 162 | 1.4583 | 0.2889 | | 1.4648 | 28.0 | 168 | 1.4575 | 0.2667 | | 1.47 | 29.0 | 174 | 1.4568 | 0.2667 | | 1.4692 | 30.0 | 180 | 1.4560 | 0.2667 | | 1.4692 | 31.0 | 186 | 1.4553 | 0.2667 | | 1.4701 | 32.0 | 192 | 1.4547 | 0.2667 | | 1.4701 | 33.0 | 198 | 1.4541 | 0.2667 | | 1.4745 | 34.0 | 204 | 1.4536 | 0.2667 | | 1.4582 | 35.0 | 210 | 1.4532 | 0.2667 | | 1.4582 | 36.0 | 216 | 1.4528 | 0.2667 | | 1.4443 | 37.0 | 222 | 1.4526 | 0.2667 | | 1.4443 | 38.0 | 228 | 1.4523 | 0.2667 | | 1.44 | 39.0 | 234 | 1.4522 | 0.2667 | | 1.4727 | 40.0 | 240 | 1.4521 | 0.2667 | | 1.4727 | 41.0 | 246 | 1.4520 | 0.2667 | | 1.4651 | 42.0 | 252 | 1.4520 | 0.2667 | | 1.4651 | 43.0 | 258 | 1.4520 | 0.2667 | | 1.4764 | 44.0 | 264 | 1.4520 | 0.2667 | | 1.4313 | 45.0 | 270 | 1.4520 | 0.2667 | | 1.4313 | 46.0 | 276 | 1.4520 | 0.2667 | | 1.4565 | 47.0 | 282 | 1.4520 | 0.2667 | | 1.4565 | 48.0 | 288 | 1.4520 | 0.2667 | | 1.4277 | 49.0 | 294 | 1.4520 | 0.2667 | | 1.4569 | 50.0 | 300 | 1.4520 | 0.2667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
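The fold cards above all train with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` over 50 epochs of 6 steps (300 steps total, per the training-results tables). A minimal sketch of that schedule, written from those card fields rather than from the exact Trainer source:

```python
def linear_schedule_lr(step, base_lr=1e-4, total_steps=300, warmup_ratio=0.1):
    """Learning rate at a given optimizer step: linear warmup, then linear decay.

    base_lr, total_steps, and warmup_ratio mirror the hyperparameters listed
    in the card above; this is an illustrative sketch, not the Trainer's
    exact implementation.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 30 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up from 0 to base_lr
    # Decay linearly from base_lr back to 0 over the remaining steps.
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))

print(linear_schedule_lr(30))   # peak of the schedule: 0.0001
print(linear_schedule_lr(300))  # end of training: 0.0
```

With a rate this small, the schedule spends most of training below 1e-4, which is consistent with the very slow loss movement visible in these tables.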
hkivancoral/hushem_1x_deit_small_sgd_0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_0001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4641 - Accuracy: 0.2667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5104 | 0.1778 | | 1.5321 | 2.0 | 12 | 1.5075 | 0.1778 | | 1.5321 | 3.0 | 18 | 1.5047 | 0.1778 | | 1.5118 | 4.0 | 24 | 1.5023 | 0.2 | | 1.5295 | 5.0 | 30 | 1.4998 | 0.2 | | 1.5295 | 6.0 | 36 | 1.4976 | 0.2 | | 1.4893 | 7.0 | 42 | 1.4953 | 0.2 | | 1.4893 | 8.0 | 48 | 1.4932 | 0.2 | | 1.5068 | 9.0 | 54 | 1.4912 | 0.2 | | 1.4876 | 10.0 | 60 | 1.4893 | 0.2 | | 1.4876 | 11.0 | 66 | 1.4876 | 0.2222 | | 1.4872 | 12.0 | 72 | 1.4858 | 0.2222 | | 1.4872 | 13.0 | 78 | 1.4842 | 0.2444 | | 1.482 | 14.0 | 84 | 1.4826 | 0.2444 | | 1.4925 | 15.0 | 90 | 1.4811 | 0.2444 | | 1.4925 | 16.0 | 96 | 1.4797 | 0.2444 | | 1.4692 | 17.0 | 102 | 1.4783 | 0.2444 | | 1.4692 | 18.0 | 108 | 1.4772 | 0.2444 | | 1.4971 | 19.0 | 114 | 1.4761 | 0.2444 | | 1.4368 | 20.0 | 120 | 1.4750 | 0.2444 | | 1.4368 | 21.0 | 126 | 1.4740 | 0.2444 | | 1.4645 | 22.0 | 132 | 1.4731 | 0.2444 | | 1.4645 | 23.0 | 138 | 
1.4721 | 0.2444 | | 1.4558 | 24.0 | 144 | 1.4712 | 0.2667 | | 1.4397 | 25.0 | 150 | 1.4705 | 0.2667 | | 1.4397 | 26.0 | 156 | 1.4698 | 0.2667 | | 1.4566 | 27.0 | 162 | 1.4691 | 0.2667 | | 1.4566 | 28.0 | 168 | 1.4684 | 0.2667 | | 1.4686 | 29.0 | 174 | 1.4678 | 0.2667 | | 1.4549 | 30.0 | 180 | 1.4672 | 0.2667 | | 1.4549 | 31.0 | 186 | 1.4667 | 0.2667 | | 1.4527 | 32.0 | 192 | 1.4662 | 0.2667 | | 1.4527 | 33.0 | 198 | 1.4658 | 0.2667 | | 1.4549 | 34.0 | 204 | 1.4654 | 0.2667 | | 1.4704 | 35.0 | 210 | 1.4650 | 0.2667 | | 1.4704 | 36.0 | 216 | 1.4648 | 0.2667 | | 1.4264 | 37.0 | 222 | 1.4646 | 0.2667 | | 1.4264 | 38.0 | 228 | 1.4644 | 0.2667 | | 1.4286 | 39.0 | 234 | 1.4642 | 0.2667 | | 1.4743 | 40.0 | 240 | 1.4642 | 0.2667 | | 1.4743 | 41.0 | 246 | 1.4641 | 0.2667 | | 1.4713 | 42.0 | 252 | 1.4641 | 0.2667 | | 1.4713 | 43.0 | 258 | 1.4641 | 0.2667 | | 1.4345 | 44.0 | 264 | 1.4641 | 0.2667 | | 1.4282 | 45.0 | 270 | 1.4641 | 0.2667 | | 1.4282 | 46.0 | 276 | 1.4641 | 0.2667 | | 1.4413 | 47.0 | 282 | 1.4641 | 0.2667 | | 1.4413 | 48.0 | 288 | 1.4641 | 0.2667 | | 1.4233 | 49.0 | 294 | 1.4641 | 0.2667 | | 1.4542 | 50.0 | 300 | 1.4641 | 0.2667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_0001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5323 - Accuracy: 0.3023 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5973 | 0.1860 | | 1.5067 | 2.0 | 12 | 1.5935 | 0.1860 | | 1.5067 | 3.0 | 18 | 1.5902 | 0.1860 | | 1.4808 | 4.0 | 24 | 1.5868 | 0.2326 | | 1.4805 | 5.0 | 30 | 1.5835 | 0.2326 | | 1.4805 | 6.0 | 36 | 1.5802 | 0.2558 | | 1.4884 | 7.0 | 42 | 1.5770 | 0.2558 | | 1.4884 | 8.0 | 48 | 1.5745 | 0.2558 | | 1.4701 | 9.0 | 54 | 1.5717 | 0.2558 | | 1.4909 | 10.0 | 60 | 1.5691 | 0.2558 | | 1.4909 | 11.0 | 66 | 1.5667 | 0.2558 | | 1.4719 | 12.0 | 72 | 1.5643 | 0.2558 | | 1.4719 | 13.0 | 78 | 1.5620 | 0.2558 | | 1.4695 | 14.0 | 84 | 1.5598 | 0.3023 | | 1.4633 | 15.0 | 90 | 1.5576 | 0.3023 | | 1.4633 | 16.0 | 96 | 1.5555 | 0.3023 | | 1.4805 | 17.0 | 102 | 1.5536 | 0.3023 | | 1.4805 | 18.0 | 108 | 1.5518 | 0.3023 | | 1.4265 | 19.0 | 114 | 1.5500 | 0.3023 | | 1.4558 | 20.0 | 120 | 1.5483 | 0.3023 | | 1.4558 | 21.0 | 126 | 1.5468 | 0.3023 | | 1.4538 | 22.0 | 132 | 1.5454 | 0.3023 | | 
1.4538 | 23.0 | 138 | 1.5441 | 0.3023 | | 1.4345 | 24.0 | 144 | 1.5427 | 0.3023 | | 1.435 | 25.0 | 150 | 1.5416 | 0.3023 | | 1.435 | 26.0 | 156 | 1.5405 | 0.3023 | | 1.4381 | 27.0 | 162 | 1.5394 | 0.3023 | | 1.4381 | 28.0 | 168 | 1.5384 | 0.3023 | | 1.4397 | 29.0 | 174 | 1.5376 | 0.3023 | | 1.4251 | 30.0 | 180 | 1.5368 | 0.3023 | | 1.4251 | 31.0 | 186 | 1.5361 | 0.3023 | | 1.4272 | 32.0 | 192 | 1.5354 | 0.3023 | | 1.4272 | 33.0 | 198 | 1.5348 | 0.3023 | | 1.4277 | 34.0 | 204 | 1.5343 | 0.3023 | | 1.4249 | 35.0 | 210 | 1.5338 | 0.3023 | | 1.4249 | 36.0 | 216 | 1.5334 | 0.3023 | | 1.4476 | 37.0 | 222 | 1.5330 | 0.3023 | | 1.4476 | 38.0 | 228 | 1.5328 | 0.3023 | | 1.4487 | 39.0 | 234 | 1.5326 | 0.3023 | | 1.4294 | 40.0 | 240 | 1.5324 | 0.3023 | | 1.4294 | 41.0 | 246 | 1.5324 | 0.3023 | | 1.4087 | 42.0 | 252 | 1.5323 | 0.3023 | | 1.4087 | 43.0 | 258 | 1.5323 | 0.3023 | | 1.4561 | 44.0 | 264 | 1.5323 | 0.3023 | | 1.4317 | 45.0 | 270 | 1.5323 | 0.3023 | | 1.4317 | 46.0 | 276 | 1.5323 | 0.3023 | | 1.4154 | 47.0 | 282 | 1.5323 | 0.3023 | | 1.4154 | 48.0 | 288 | 1.5323 | 0.3023 | | 1.4386 | 49.0 | 294 | 1.5323 | 0.3023 | | 1.4625 | 50.0 | 300 | 1.5323 | 0.3023 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_0001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4227 - Accuracy: 0.2619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4804 | 0.2619 | | 1.5213 | 2.0 | 12 | 1.4770 | 0.2857 | | 1.5213 | 3.0 | 18 | 1.4737 | 0.2857 | | 1.5439 | 4.0 | 24 | 1.4702 | 0.2857 | | 1.5226 | 5.0 | 30 | 1.4673 | 0.2857 | | 1.5226 | 6.0 | 36 | 1.4646 | 0.2857 | | 1.52 | 7.0 | 42 | 1.4618 | 0.2857 | | 1.52 | 8.0 | 48 | 1.4591 | 0.2857 | | 1.5076 | 9.0 | 54 | 1.4566 | 0.2857 | | 1.5003 | 10.0 | 60 | 1.4541 | 0.2857 | | 1.5003 | 11.0 | 66 | 1.4520 | 0.2857 | | 1.4856 | 12.0 | 72 | 1.4497 | 0.2857 | | 1.4856 | 13.0 | 78 | 1.4476 | 0.2857 | | 1.5104 | 14.0 | 84 | 1.4457 | 0.2857 | | 1.4726 | 15.0 | 90 | 1.4438 | 0.2857 | | 1.4726 | 16.0 | 96 | 1.4420 | 0.2857 | | 1.4844 | 17.0 | 102 | 1.4403 | 0.2857 | | 1.4844 | 18.0 | 108 | 1.4387 | 0.2619 | | 1.4456 | 19.0 | 114 | 1.4373 | 0.2619 | | 1.5242 | 20.0 | 120 | 1.4359 | 0.2619 | | 1.5242 | 21.0 | 126 | 1.4347 | 0.2619 | | 1.4484 | 22.0 | 132 | 1.4335 | 0.2619 | | 
1.4484 | 23.0 | 138 | 1.4324 | 0.2619 | | 1.4722 | 24.0 | 144 | 1.4314 | 0.2619 | | 1.4802 | 25.0 | 150 | 1.4303 | 0.2619 | | 1.4802 | 26.0 | 156 | 1.4294 | 0.2619 | | 1.4658 | 27.0 | 162 | 1.4284 | 0.2619 | | 1.4658 | 28.0 | 168 | 1.4276 | 0.2619 | | 1.4705 | 29.0 | 174 | 1.4269 | 0.2619 | | 1.4629 | 30.0 | 180 | 1.4263 | 0.2619 | | 1.4629 | 31.0 | 186 | 1.4256 | 0.2619 | | 1.4786 | 32.0 | 192 | 1.4251 | 0.2619 | | 1.4786 | 33.0 | 198 | 1.4246 | 0.2619 | | 1.4444 | 34.0 | 204 | 1.4242 | 0.2619 | | 1.435 | 35.0 | 210 | 1.4238 | 0.2619 | | 1.435 | 36.0 | 216 | 1.4235 | 0.2619 | | 1.4653 | 37.0 | 222 | 1.4232 | 0.2619 | | 1.4653 | 38.0 | 228 | 1.4230 | 0.2619 | | 1.4482 | 39.0 | 234 | 1.4228 | 0.2619 | | 1.4598 | 40.0 | 240 | 1.4227 | 0.2619 | | 1.4598 | 41.0 | 246 | 1.4227 | 0.2619 | | 1.4528 | 42.0 | 252 | 1.4227 | 0.2619 | | 1.4528 | 43.0 | 258 | 1.4227 | 0.2619 | | 1.4661 | 44.0 | 264 | 1.4227 | 0.2619 | | 1.4575 | 45.0 | 270 | 1.4227 | 0.2619 | | 1.4575 | 46.0 | 276 | 1.4227 | 0.2619 | | 1.4719 | 47.0 | 282 | 1.4227 | 0.2619 | | 1.4719 | 48.0 | 288 | 1.4227 | 0.2619 | | 1.4602 | 49.0 | 294 | 1.4227 | 0.2619 | | 1.465 | 50.0 | 300 | 1.4227 | 0.2619 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_0001_fold5 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4475 - Accuracy: 0.2683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5161 | 0.2439 | | 1.5359 | 2.0 | 12 | 1.5118 | 0.2439 | | 1.5359 | 3.0 | 18 | 1.5076 | 0.2439 | | 1.5171 | 4.0 | 24 | 1.5040 | 0.2439 | | 1.5208 | 5.0 | 30 | 1.5002 | 0.2683 | | 1.5208 | 6.0 | 36 | 1.4969 | 0.2683 | | 1.5066 | 7.0 | 42 | 1.4937 | 0.2683 | | 1.5066 | 8.0 | 48 | 1.4908 | 0.2683 | | 1.4941 | 9.0 | 54 | 1.4878 | 0.2683 | | 1.4953 | 10.0 | 60 | 1.4851 | 0.2683 | | 1.4953 | 11.0 | 66 | 1.4825 | 0.2683 | | 1.498 | 12.0 | 72 | 1.4798 | 0.2683 | | 1.498 | 13.0 | 78 | 1.4774 | 0.2683 | | 1.465 | 14.0 | 84 | 1.4753 | 0.2683 | | 1.4811 | 15.0 | 90 | 1.4730 | 0.2683 | | 1.4811 | 16.0 | 96 | 1.4709 | 0.2683 | | 1.476 | 17.0 | 102 | 1.4689 | 0.2683 | | 1.476 | 18.0 | 108 | 1.4672 | 0.2683 | | 1.4977 | 19.0 | 114 | 1.4656 | 0.2683 | | 1.4745 | 20.0 | 120 | 1.4639 | 0.2683 | | 1.4745 | 21.0 | 126 | 1.4624 | 0.2683 | | 1.4662 | 22.0 | 132 | 1.4609 | 0.2683 | | 1.4662 
| 23.0 | 138 | 1.4594 | 0.2683 | | 1.4905 | 24.0 | 144 | 1.4581 | 0.2683 | | 1.465 | 25.0 | 150 | 1.4568 | 0.2683 | | 1.465 | 26.0 | 156 | 1.4556 | 0.2683 | | 1.4499 | 27.0 | 162 | 1.4545 | 0.2683 | | 1.4499 | 28.0 | 168 | 1.4535 | 0.2683 | | 1.473 | 29.0 | 174 | 1.4527 | 0.2683 | | 1.4704 | 30.0 | 180 | 1.4520 | 0.2683 | | 1.4704 | 31.0 | 186 | 1.4512 | 0.2683 | | 1.4654 | 32.0 | 192 | 1.4506 | 0.2683 | | 1.4654 | 33.0 | 198 | 1.4500 | 0.2683 | | 1.4322 | 34.0 | 204 | 1.4494 | 0.2683 | | 1.459 | 35.0 | 210 | 1.4490 | 0.2683 | | 1.459 | 36.0 | 216 | 1.4486 | 0.2683 | | 1.4499 | 37.0 | 222 | 1.4482 | 0.2683 | | 1.4499 | 38.0 | 228 | 1.4480 | 0.2683 | | 1.4314 | 39.0 | 234 | 1.4477 | 0.2683 | | 1.4745 | 40.0 | 240 | 1.4476 | 0.2683 | | 1.4745 | 41.0 | 246 | 1.4476 | 0.2683 | | 1.4482 | 42.0 | 252 | 1.4475 | 0.2683 | | 1.4482 | 43.0 | 258 | 1.4475 | 0.2683 | | 1.4526 | 44.0 | 264 | 1.4475 | 0.2683 | | 1.4693 | 45.0 | 270 | 1.4475 | 0.2683 | | 1.4693 | 46.0 | 276 | 1.4475 | 0.2683 | | 1.4506 | 47.0 | 282 | 1.4475 | 0.2683 | | 1.4506 | 48.0 | 288 | 1.4475 | 0.2683 | | 1.4529 | 49.0 | 294 | 1.4475 | 0.2683 | | 1.4667 | 50.0 | 300 | 1.4475 | 0.2683 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_00001_fold1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5045 - Accuracy: 0.2889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5103 | 0.2889 | | 1.5406 | 2.0 | 12 | 1.5100 | 0.2889 | | 1.5406 | 3.0 | 18 | 1.5097 | 0.2889 | | 1.5187 | 4.0 | 24 | 1.5094 | 0.2889 | | 1.5371 | 5.0 | 30 | 1.5091 | 0.2889 | | 1.5371 | 6.0 | 36 | 1.5089 | 0.2889 | | 1.517 | 7.0 | 42 | 1.5086 | 0.2889 | | 1.517 | 8.0 | 48 | 1.5084 | 0.2889 | | 1.5407 | 9.0 | 54 | 1.5081 | 0.2889 | | 1.5157 | 10.0 | 60 | 1.5079 | 0.2889 | | 1.5157 | 11.0 | 66 | 1.5077 | 0.2889 | | 1.5121 | 12.0 | 72 | 1.5074 | 0.2889 | | 1.5121 | 13.0 | 78 | 1.5072 | 0.2889 | | 1.538 | 14.0 | 84 | 1.5070 | 0.2889 | | 1.5262 | 15.0 | 90 | 1.5068 | 0.2889 | | 1.5262 | 16.0 | 96 | 1.5066 | 0.2889 | | 1.5233 | 17.0 | 102 | 1.5064 | 0.2889 | | 1.5233 | 18.0 | 108 | 1.5063 | 0.2889 | | 1.5376 | 19.0 | 114 | 1.5061 | 0.2889 | | 1.5005 | 20.0 | 120 | 1.5060 | 0.2889 | | 1.5005 | 21.0 | 126 | 1.5058 | 0.2889 | | 1.5271 | 22.0 | 132 | 1.5057 | 0.2889 | | 
1.5271 | 23.0 | 138 | 1.5056 | 0.2889 | | 1.5205 | 24.0 | 144 | 1.5055 | 0.2889 | | 1.5085 | 25.0 | 150 | 1.5054 | 0.2889 | | 1.5085 | 26.0 | 156 | 1.5053 | 0.2889 | | 1.5221 | 27.0 | 162 | 1.5052 | 0.2889 | | 1.5221 | 28.0 | 168 | 1.5051 | 0.2889 | | 1.5344 | 29.0 | 174 | 1.5050 | 0.2889 | | 1.5325 | 30.0 | 180 | 1.5049 | 0.2889 | | 1.5325 | 31.0 | 186 | 1.5048 | 0.2889 | | 1.5365 | 32.0 | 192 | 1.5048 | 0.2889 | | 1.5365 | 33.0 | 198 | 1.5047 | 0.2889 | | 1.5421 | 34.0 | 204 | 1.5046 | 0.2889 | | 1.5276 | 35.0 | 210 | 1.5046 | 0.2889 | | 1.5276 | 36.0 | 216 | 1.5046 | 0.2889 | | 1.5101 | 37.0 | 222 | 1.5045 | 0.2889 | | 1.5101 | 38.0 | 228 | 1.5045 | 0.2889 | | 1.5025 | 39.0 | 234 | 1.5045 | 0.2889 | | 1.5405 | 40.0 | 240 | 1.5045 | 0.2889 | | 1.5405 | 41.0 | 246 | 1.5045 | 0.2889 | | 1.5373 | 42.0 | 252 | 1.5045 | 0.2889 | | 1.5373 | 43.0 | 258 | 1.5045 | 0.2889 | | 1.5465 | 44.0 | 264 | 1.5045 | 0.2889 | | 1.4924 | 45.0 | 270 | 1.5045 | 0.2889 | | 1.4924 | 46.0 | 276 | 1.5045 | 0.2889 | | 1.521 | 47.0 | 282 | 1.5045 | 0.2889 | | 1.521 | 48.0 | 288 | 1.5045 | 0.2889 | | 1.494 | 49.0 | 294 | 1.5045 | 0.2889 | | 1.5268 | 50.0 | 300 | 1.5045 | 0.2889 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
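Every card in this group reports a single `Accuracy` number per evaluation. As a sketch of that metric: accuracy is the fraction of predicted class ids matching the reference ids. The 45-sample split below is an assumption (chosen because the repeated 0.2889 values above equal 13/45), not something the cards state:

```python
def accuracy(predictions, references):
    # Fraction of predicted class ids that match the reference ids.
    if len(predictions) != len(references):
        raise ValueError("prediction/reference length mismatch")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical eval split of 45 images: a degenerate model that predicts
# one class for everything scores 13/45, matching the constant 0.2889
# accuracy seen in the fold-1 tables above.
preds = [0] * 45
refs = [0] * 13 + [1] * 11 + [2] * 11 + [3] * 10
print(round(accuracy(preds, refs), 4))  # -> 0.2889
```

A flat accuracy across 50 epochs, as in these tables, usually means the model has collapsed to a single class under a learning rate too small to escape it.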
hkivancoral/hushem_1x_deit_small_sgd_00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_00001_fold2 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5076 - Accuracy: 0.1778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5128 | 0.1778 | | 1.5351 | 2.0 | 12 | 1.5126 | 0.1778 | | 1.5351 | 3.0 | 18 | 1.5123 | 0.1778 | | 1.521 | 4.0 | 24 | 1.5120 | 0.1778 | | 1.5462 | 5.0 | 30 | 1.5118 | 0.1778 | | 1.5462 | 6.0 | 36 | 1.5116 | 0.1778 | | 1.5099 | 7.0 | 42 | 1.5113 | 0.1778 | | 1.5099 | 8.0 | 48 | 1.5111 | 0.1778 | | 1.5333 | 9.0 | 54 | 1.5109 | 0.1778 | | 1.5206 | 10.0 | 60 | 1.5106 | 0.1778 | | 1.5206 | 11.0 | 66 | 1.5105 | 0.1778 | | 1.5227 | 12.0 | 72 | 1.5103 | 0.1778 | | 1.5227 | 13.0 | 78 | 1.5101 | 0.1778 | | 1.5256 | 14.0 | 84 | 1.5099 | 0.1778 | | 1.5395 | 15.0 | 90 | 1.5097 | 0.1778 | | 1.5395 | 16.0 | 96 | 1.5095 | 0.1778 | | 1.5169 | 17.0 | 102 | 1.5094 | 0.1778 | | 1.5169 | 18.0 | 108 | 1.5092 | 0.1778 | | 1.5502 | 19.0 | 114 | 1.5091 | 0.1778 | | 1.4882 | 20.0 | 120 | 1.5090 | 0.1778 | | 1.4882 | 21.0 | 126 | 1.5088 | 0.1778 | | 1.5202 | 22.0 | 132 | 1.5087 | 0.1778 | | 
1.5202 | 23.0 | 138 | 1.5086 | 0.1778 | | 1.5139 | 24.0 | 144 | 1.5085 | 0.1778 | | 1.4995 | 25.0 | 150 | 1.5084 | 0.1778 | | 1.4995 | 26.0 | 156 | 1.5083 | 0.1778 | | 1.5175 | 27.0 | 162 | 1.5082 | 0.1778 | | 1.5175 | 28.0 | 168 | 1.5081 | 0.1778 | | 1.5365 | 29.0 | 174 | 1.5081 | 0.1778 | | 1.5232 | 30.0 | 180 | 1.5080 | 0.1778 | | 1.5232 | 31.0 | 186 | 1.5079 | 0.1778 | | 1.5236 | 32.0 | 192 | 1.5079 | 0.1778 | | 1.5236 | 33.0 | 198 | 1.5078 | 0.1778 | | 1.5292 | 34.0 | 204 | 1.5078 | 0.1778 | | 1.544 | 35.0 | 210 | 1.5077 | 0.1778 | | 1.544 | 36.0 | 216 | 1.5077 | 0.1778 | | 1.4971 | 37.0 | 222 | 1.5077 | 0.1778 | | 1.4971 | 38.0 | 228 | 1.5077 | 0.1778 | | 1.4951 | 39.0 | 234 | 1.5076 | 0.1778 | | 1.5452 | 40.0 | 240 | 1.5076 | 0.1778 | | 1.5452 | 41.0 | 246 | 1.5076 | 0.1778 | | 1.5473 | 42.0 | 252 | 1.5076 | 0.1778 | | 1.5473 | 43.0 | 258 | 1.5076 | 0.1778 | | 1.5095 | 44.0 | 264 | 1.5076 | 0.1778 | | 1.495 | 45.0 | 270 | 1.5076 | 0.1778 | | 1.495 | 46.0 | 276 | 1.5076 | 0.1778 | | 1.5118 | 47.0 | 282 | 1.5076 | 0.1778 | | 1.5118 | 48.0 | 288 | 1.5076 | 0.1778 | | 1.493 | 49.0 | 294 | 1.5076 | 0.1778 | | 1.528 | 50.0 | 300 | 1.5076 | 0.1778 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_00001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5939 - Accuracy: 0.1860 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6009 | 0.1860 | | 1.5096 | 2.0 | 12 | 1.6005 | 0.1860 | | 1.5096 | 3.0 | 18 | 1.6002 | 0.1860 | | 1.4896 | 4.0 | 24 | 1.5998 | 0.1860 | | 1.4946 | 5.0 | 30 | 1.5995 | 0.1860 | | 1.4946 | 6.0 | 36 | 1.5992 | 0.1860 | | 1.508 | 7.0 | 42 | 1.5988 | 0.1860 | | 1.508 | 8.0 | 48 | 1.5986 | 0.1860 | | 1.4945 | 9.0 | 54 | 1.5983 | 0.1860 | | 1.5205 | 10.0 | 60 | 1.5980 | 0.1860 | | 1.5205 | 11.0 | 66 | 1.5977 | 0.1860 | | 1.5058 | 12.0 | 72 | 1.5975 | 0.1860 | | 1.5058 | 13.0 | 78 | 1.5972 | 0.1860 | | 1.5082 | 14.0 | 84 | 1.5970 | 0.1860 | | 1.502 | 15.0 | 90 | 1.5967 | 0.1860 | | 1.502 | 16.0 | 96 | 1.5965 | 0.1860 | | 1.5281 | 17.0 | 102 | 1.5963 | 0.1860 | | 1.5281 | 18.0 | 108 | 1.5961 | 0.1860 | | 1.4713 | 19.0 | 114 | 1.5959 | 0.1860 | | 1.5067 | 20.0 | 120 | 1.5957 | 0.1860 | | 1.5067 | 21.0 | 126 | 1.5955 | 0.1860 | | 1.5046 | 22.0 | 132 | 1.5953 | 0.1860 | | 
1.5046 | 23.0 | 138 | 1.5952 | 0.1860 | | 1.4884 | 24.0 | 144 | 1.5950 | 0.1860 | | 1.4923 | 25.0 | 150 | 1.5949 | 0.1860 | | 1.4923 | 26.0 | 156 | 1.5948 | 0.1860 | | 1.4973 | 27.0 | 162 | 1.5947 | 0.1860 | | 1.4973 | 28.0 | 168 | 1.5945 | 0.1860 | | 1.5002 | 29.0 | 174 | 1.5945 | 0.1860 | | 1.4807 | 30.0 | 180 | 1.5944 | 0.1860 | | 1.4807 | 31.0 | 186 | 1.5943 | 0.1860 | | 1.486 | 32.0 | 192 | 1.5942 | 0.1860 | | 1.486 | 33.0 | 198 | 1.5941 | 0.1860 | | 1.4927 | 34.0 | 204 | 1.5941 | 0.1860 | | 1.4875 | 35.0 | 210 | 1.5940 | 0.1860 | | 1.4875 | 36.0 | 216 | 1.5940 | 0.1860 | | 1.5166 | 37.0 | 222 | 1.5940 | 0.1860 | | 1.5166 | 38.0 | 228 | 1.5939 | 0.1860 | | 1.5127 | 39.0 | 234 | 1.5939 | 0.1860 | | 1.4974 | 40.0 | 240 | 1.5939 | 0.1860 | | 1.4974 | 41.0 | 246 | 1.5939 | 0.1860 | | 1.4716 | 42.0 | 252 | 1.5939 | 0.1860 | | 1.4716 | 43.0 | 258 | 1.5939 | 0.1860 | | 1.5277 | 44.0 | 264 | 1.5939 | 0.1860 | | 1.501 | 45.0 | 270 | 1.5939 | 0.1860 | | 1.501 | 46.0 | 276 | 1.5939 | 0.1860 | | 1.4805 | 47.0 | 282 | 1.5939 | 0.1860 | | 1.4805 | 48.0 | 288 | 1.5939 | 0.1860 | | 1.5052 | 49.0 | 294 | 1.5939 | 0.1860 | | 1.536 | 50.0 | 300 | 1.5939 | 0.1860 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_small_sgd_00001_fold4 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4772 - Accuracy: 0.2857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.4838 | 0.2619 | | 1.5238 | 2.0 | 12 | 1.4835 | 0.2619 | | 1.5238 | 3.0 | 18 | 1.4831 | 0.2619 | | 1.5534 | 4.0 | 24 | 1.4828 | 0.2619 | | 1.5381 | 5.0 | 30 | 1.4825 | 0.2619 | | 1.5381 | 6.0 | 36 | 1.4822 | 0.2619 | | 1.5402 | 7.0 | 42 | 1.4819 | 0.2619 | | 1.5402 | 8.0 | 48 | 1.4816 | 0.2619 | | 1.5343 | 9.0 | 54 | 1.4813 | 0.2619 | | 1.5296 | 10.0 | 60 | 1.4811 | 0.2619 | | 1.5296 | 11.0 | 66 | 1.4808 | 0.2619 | | 1.5185 | 12.0 | 72 | 1.4805 | 0.2619 | | 1.5185 | 13.0 | 78 | 1.4803 | 0.2619 | | 1.5511 | 14.0 | 84 | 1.4801 | 0.2619 | | 1.5137 | 15.0 | 90 | 1.4798 | 0.2619 | | 1.5137 | 16.0 | 96 | 1.4796 | 0.2619 | | 1.5299 | 17.0 | 102 | 1.4794 | 0.2619 | | 1.5299 | 18.0 | 108 | 1.4792 | 0.2857 | | 1.4899 | 19.0 | 114 | 1.4790 | 0.2857 | | 1.5822 | 20.0 | 120 | 1.4789 | 0.2857 | | 1.5822 | 21.0 | 126 | 1.4787 | 0.2857 | | 1.5002 | 22.0 | 132 | 1.4786 | 0.2857 | | 
1.5002 | 23.0 | 138 | 1.4784 | 0.2857 | | 1.5297 | 24.0 | 144 | 1.4783 | 0.2857 | | 1.5406 | 25.0 | 150 | 1.4781 | 0.2857 | | 1.5406 | 26.0 | 156 | 1.4780 | 0.2857 | | 1.5241 | 27.0 | 162 | 1.4779 | 0.2857 | | 1.5241 | 28.0 | 168 | 1.4778 | 0.2857 | | 1.5379 | 29.0 | 174 | 1.4777 | 0.2857 | | 1.5253 | 30.0 | 180 | 1.4776 | 0.2857 | | 1.5253 | 31.0 | 186 | 1.4775 | 0.2857 | | 1.549 | 32.0 | 192 | 1.4775 | 0.2857 | | 1.549 | 33.0 | 198 | 1.4774 | 0.2857 | | 1.5016 | 34.0 | 204 | 1.4774 | 0.2857 | | 1.4996 | 35.0 | 210 | 1.4773 | 0.2857 | | 1.4996 | 36.0 | 216 | 1.4773 | 0.2857 | | 1.533 | 37.0 | 222 | 1.4772 | 0.2857 | | 1.533 | 38.0 | 228 | 1.4772 | 0.2857 | | 1.5136 | 39.0 | 234 | 1.4772 | 0.2857 | | 1.5288 | 40.0 | 240 | 1.4772 | 0.2857 | | 1.5288 | 41.0 | 246 | 1.4772 | 0.2857 | | 1.5195 | 42.0 | 252 | 1.4772 | 0.2857 | | 1.5195 | 43.0 | 258 | 1.4772 | 0.2857 | | 1.5432 | 44.0 | 264 | 1.4772 | 0.2857 | | 1.5238 | 45.0 | 270 | 1.4772 | 0.2857 | | 1.5238 | 46.0 | 276 | 1.4772 | 0.2857 | | 1.544 | 47.0 | 282 | 1.4772 | 0.2857 | | 1.544 | 48.0 | 288 | 1.4772 | 0.2857 | | 1.5337 | 49.0 | 294 | 1.4772 | 0.2857 | | 1.5345 | 50.0 | 300 | 1.4772 | 0.2857 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_sgd_00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_small_sgd_00001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5124
- Accuracy: 0.2439

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.5202 | 0.2195 |
| 1.5388 | 2.0 | 12 | 1.5198 | 0.2195 |
| 1.5388 | 3.0 | 18 | 1.5194 | 0.2195 |
| 1.5264 | 4.0 | 24 | 1.5190 | 0.2195 |
| 1.5345 | 5.0 | 30 | 1.5186 | 0.2195 |
| 1.5345 | 6.0 | 36 | 1.5183 | 0.2195 |
| 1.526 | 7.0 | 42 | 1.5179 | 0.2195 |
| 1.526 | 8.0 | 48 | 1.5176 | 0.2195 |
| 1.5157 | 9.0 | 54 | 1.5173 | 0.2195 |
| 1.5235 | 10.0 | 60 | 1.5170 | 0.2195 |
| 1.5235 | 11.0 | 66 | 1.5167 | 0.2195 |
| 1.5297 | 12.0 | 72 | 1.5164 | 0.2439 |
| 1.5297 | 13.0 | 78 | 1.5161 | 0.2439 |
| 1.4988 | 14.0 | 84 | 1.5158 | 0.2439 |
| 1.5228 | 15.0 | 90 | 1.5155 | 0.2439 |
| 1.5228 | 16.0 | 96 | 1.5153 | 0.2439 |
| 1.5206 | 17.0 | 102 | 1.5150 | 0.2439 |
| 1.5206 | 18.0 | 108 | 1.5148 | 0.2439 |
| 1.5425 | 19.0 | 114 | 1.5146 | 0.2439 |
| 1.5252 | 20.0 | 120 | 1.5144 | 0.2439 |
| 1.5252 | 21.0 | 126 | 1.5142 | 0.2439 |
| 1.5165 | 22.0 | 132 | 1.5140 | 0.2439 |
| 1.5165 | 23.0 | 138 | 1.5139 | 0.2439 |
| 1.5451 | 24.0 | 144 | 1.5137 | 0.2439 |
| 1.5198 | 25.0 | 150 | 1.5135 | 0.2439 |
| 1.5198 | 26.0 | 156 | 1.5134 | 0.2439 |
| 1.5047 | 27.0 | 162 | 1.5132 | 0.2439 |
| 1.5047 | 28.0 | 168 | 1.5131 | 0.2439 |
| 1.5384 | 29.0 | 174 | 1.5130 | 0.2439 |
| 1.5271 | 30.0 | 180 | 1.5129 | 0.2439 |
| 1.5271 | 31.0 | 186 | 1.5128 | 0.2439 |
| 1.5283 | 32.0 | 192 | 1.5127 | 0.2439 |
| 1.5283 | 33.0 | 198 | 1.5127 | 0.2439 |
| 1.4864 | 34.0 | 204 | 1.5126 | 0.2439 |
| 1.5229 | 35.0 | 210 | 1.5125 | 0.2439 |
| 1.5229 | 36.0 | 216 | 1.5125 | 0.2439 |
| 1.513 | 37.0 | 222 | 1.5125 | 0.2439 |
| 1.513 | 38.0 | 228 | 1.5124 | 0.2439 |
| 1.4969 | 39.0 | 234 | 1.5124 | 0.2439 |
| 1.5399 | 40.0 | 240 | 1.5124 | 0.2439 |
| 1.5399 | 41.0 | 246 | 1.5124 | 0.2439 |
| 1.5142 | 42.0 | 252 | 1.5124 | 0.2439 |
| 1.5142 | 43.0 | 258 | 1.5124 | 0.2439 |
| 1.5226 | 44.0 | 264 | 1.5124 | 0.2439 |
| 1.538 | 45.0 | 270 | 1.5124 | 0.2439 |
| 1.538 | 46.0 | 276 | 1.5124 | 0.2439 |
| 1.5217 | 47.0 | 282 | 1.5124 | 0.2439 |
| 1.5217 | 48.0 | 288 | 1.5124 | 0.2439 |
| 1.5124 | 49.0 | 294 | 1.5124 | 0.2439 |
| 1.5354 | 50.0 | 300 | 1.5124 | 0.2439 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
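The hyperparameters above combine a linear scheduler with `lr_scheduler_warmup_ratio: 0.1` over 50 epochs of 6 steps each (300 optimizer steps total, as the Step column shows). A minimal pure-Python sketch of how such a schedule behaves — this helper is illustrative, not the Trainer's actual implementation:

```python
def linear_schedule_lr(step, base_lr=1e-05, total_steps=300, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to zero (illustrative sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 30 of 300 steps here
    if step < warmup_steps:
        # Ramp up from 0 to base_lr during warmup.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr at the end of warmup to 0 at total_steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_lr(30))   # peak learning rate, reached at end of warmup
print(linear_schedule_lr(300))  # decayed to zero at the final step
```

With a base learning rate of 1e-05 (this card) the peak is tiny, which is consistent with the nearly flat validation loss in the table; the rms_001 runs below use 0.001 under the same shape.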
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_rms_001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_small_rms_001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5587
- Accuracy: 0.3556

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 5.6616 | 0.2444 |
| 4.5403 | 2.0 | 12 | 1.9139 | 0.2444 |
| 4.5403 | 3.0 | 18 | 1.7372 | 0.2444 |
| 1.8724 | 4.0 | 24 | 1.4323 | 0.2667 |
| 1.5505 | 5.0 | 30 | 1.5541 | 0.2444 |
| 1.5505 | 6.0 | 36 | 1.5305 | 0.2444 |
| 1.4992 | 7.0 | 42 | 1.5286 | 0.2444 |
| 1.4992 | 8.0 | 48 | 1.5617 | 0.2444 |
| 1.4899 | 9.0 | 54 | 1.4717 | 0.2444 |
| 1.4501 | 10.0 | 60 | 1.4440 | 0.2444 |
| 1.4501 | 11.0 | 66 | 1.4155 | 0.2667 |
| 1.4052 | 12.0 | 72 | 1.3606 | 0.2444 |
| 1.4052 | 13.0 | 78 | 1.4215 | 0.3333 |
| 1.4555 | 14.0 | 84 | 1.3356 | 0.3333 |
| 1.4209 | 15.0 | 90 | 1.4688 | 0.2667 |
| 1.4209 | 16.0 | 96 | 1.2956 | 0.4444 |
| 1.4079 | 17.0 | 102 | 1.4012 | 0.2444 |
| 1.4079 | 18.0 | 108 | 1.4817 | 0.2444 |
| 1.4101 | 19.0 | 114 | 1.4296 | 0.2667 |
| 1.6129 | 20.0 | 120 | 1.5601 | 0.2444 |
| 1.6129 | 21.0 | 126 | 1.8216 | 0.2667 |
| 1.5349 | 22.0 | 132 | 1.6109 | 0.2667 |
| 1.5349 | 23.0 | 138 | 1.6663 | 0.2444 |
| 1.4443 | 24.0 | 144 | 1.4166 | 0.2444 |
| 1.3949 | 25.0 | 150 | 1.5159 | 0.2444 |
| 1.3949 | 26.0 | 156 | 1.5557 | 0.2444 |
| 1.2549 | 27.0 | 162 | 1.2710 | 0.3333 |
| 1.2549 | 28.0 | 168 | 1.4661 | 0.3333 |
| 1.2756 | 29.0 | 174 | 1.3759 | 0.3111 |
| 1.2244 | 30.0 | 180 | 1.3243 | 0.4222 |
| 1.2244 | 31.0 | 186 | 1.1877 | 0.4222 |
| 1.1482 | 32.0 | 192 | 1.1943 | 0.4667 |
| 1.1482 | 33.0 | 198 | 1.3644 | 0.3111 |
| 1.0904 | 34.0 | 204 | 1.3812 | 0.3778 |
| 1.051 | 35.0 | 210 | 1.3131 | 0.4444 |
| 1.051 | 36.0 | 216 | 1.7518 | 0.2667 |
| 1.0583 | 37.0 | 222 | 1.8440 | 0.3556 |
| 1.0583 | 38.0 | 228 | 1.7450 | 0.2889 |
| 0.8766 | 39.0 | 234 | 1.5767 | 0.3556 |
| 0.9084 | 40.0 | 240 | 1.5052 | 0.3778 |
| 0.9084 | 41.0 | 246 | 1.5534 | 0.3556 |
| 0.8553 | 42.0 | 252 | 1.5587 | 0.3556 |
| 0.8553 | 43.0 | 258 | 1.5587 | 0.3556 |
| 0.8404 | 44.0 | 264 | 1.5587 | 0.3556 |
| 0.8432 | 45.0 | 270 | 1.5587 | 0.3556 |
| 0.8432 | 46.0 | 276 | 1.5587 | 0.3556 |
| 0.8133 | 47.0 | 282 | 1.5587 | 0.3556 |
| 0.8133 | 48.0 | 288 | 1.5587 | 0.3556 |
| 0.8467 | 49.0 | 294 | 1.5587 | 0.3556 |
| 0.8396 | 50.0 | 300 | 1.5587 | 0.3556 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_rms_001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_small_rms_001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2524
- Accuracy: 0.4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 7.5665 | 0.2444 |
| 4.6598 | 2.0 | 12 | 1.8034 | 0.2444 |
| 4.6598 | 3.0 | 18 | 1.7719 | 0.2444 |
| 1.754 | 4.0 | 24 | 1.5619 | 0.2667 |
| 1.5561 | 5.0 | 30 | 1.5155 | 0.2444 |
| 1.5561 | 6.0 | 36 | 1.5905 | 0.2444 |
| 1.5161 | 7.0 | 42 | 1.4606 | 0.2444 |
| 1.5161 | 8.0 | 48 | 1.5057 | 0.2667 |
| 1.4837 | 9.0 | 54 | 1.4997 | 0.2444 |
| 1.456 | 10.0 | 60 | 1.4411 | 0.2444 |
| 1.456 | 11.0 | 66 | 1.4980 | 0.2667 |
| 1.4256 | 12.0 | 72 | 1.4097 | 0.2444 |
| 1.4256 | 13.0 | 78 | 1.4518 | 0.2667 |
| 1.4488 | 14.0 | 84 | 1.3937 | 0.2667 |
| 1.4354 | 15.0 | 90 | 1.4044 | 0.2444 |
| 1.4354 | 16.0 | 96 | 1.3767 | 0.2667 |
| 1.4383 | 17.0 | 102 | 1.4222 | 0.2444 |
| 1.4383 | 18.0 | 108 | 1.4806 | 0.2444 |
| 1.4107 | 19.0 | 114 | 1.4789 | 0.2444 |
| 1.3761 | 20.0 | 120 | 1.2485 | 0.4444 |
| 1.3761 | 21.0 | 126 | 1.3600 | 0.2667 |
| 1.3385 | 22.0 | 132 | 1.4500 | 0.4 |
| 1.3385 | 23.0 | 138 | 1.3814 | 0.3778 |
| 1.3465 | 24.0 | 144 | 1.4692 | 0.2667 |
| 1.323 | 25.0 | 150 | 1.1674 | 0.4667 |
| 1.323 | 26.0 | 156 | 1.3636 | 0.2889 |
| 1.2871 | 27.0 | 162 | 1.3963 | 0.4 |
| 1.2871 | 28.0 | 168 | 1.3023 | 0.4444 |
| 1.1938 | 29.0 | 174 | 1.2034 | 0.4222 |
| 1.2252 | 30.0 | 180 | 1.2237 | 0.4444 |
| 1.2252 | 31.0 | 186 | 1.2906 | 0.4 |
| 1.2127 | 32.0 | 192 | 1.2853 | 0.4 |
| 1.2127 | 33.0 | 198 | 1.3006 | 0.3556 |
| 1.131 | 34.0 | 204 | 1.3803 | 0.2889 |
| 1.1689 | 35.0 | 210 | 1.2981 | 0.3556 |
| 1.1689 | 36.0 | 216 | 1.4728 | 0.2889 |
| 1.1285 | 37.0 | 222 | 1.3455 | 0.3333 |
| 1.1285 | 38.0 | 228 | 1.2593 | 0.4 |
| 1.0174 | 39.0 | 234 | 1.2539 | 0.3556 |
| 1.0651 | 40.0 | 240 | 1.2296 | 0.4 |
| 1.0651 | 41.0 | 246 | 1.2510 | 0.3778 |
| 1.0297 | 42.0 | 252 | 1.2524 | 0.4 |
| 1.0297 | 43.0 | 258 | 1.2524 | 0.4 |
| 0.9982 | 44.0 | 264 | 1.2524 | 0.4 |
| 1.047 | 45.0 | 270 | 1.2524 | 0.4 |
| 1.047 | 46.0 | 276 | 1.2524 | 0.4 |
| 0.9969 | 47.0 | 282 | 1.2524 | 0.4 |
| 0.9969 | 48.0 | 288 | 1.2524 | 0.4 |
| 1.0686 | 49.0 | 294 | 1.2524 | 0.4 |
| 1.0034 | 50.0 | 300 | 1.2524 | 0.4 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_rms_001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_small_rms_001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4032
- Accuracy: 0.2791

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 7.8981 | 0.2558 |
| 4.5842 | 2.0 | 12 | 2.6640 | 0.2558 |
| 4.5842 | 3.0 | 18 | 1.6613 | 0.2558 |
| 1.9697 | 4.0 | 24 | 1.9731 | 0.2558 |
| 1.667 | 5.0 | 30 | 1.6222 | 0.2558 |
| 1.667 | 6.0 | 36 | 1.5650 | 0.2558 |
| 1.5493 | 7.0 | 42 | 1.7126 | 0.2326 |
| 1.5493 | 8.0 | 48 | 1.7198 | 0.2558 |
| 1.5158 | 9.0 | 54 | 1.4567 | 0.2558 |
| 1.4878 | 10.0 | 60 | 1.4210 | 0.2558 |
| 1.4878 | 11.0 | 66 | 1.4952 | 0.2558 |
| 1.4957 | 12.0 | 72 | 1.3917 | 0.2558 |
| 1.4957 | 13.0 | 78 | 1.4819 | 0.2558 |
| 1.4611 | 14.0 | 84 | 1.4450 | 0.2558 |
| 1.4163 | 15.0 | 90 | 1.4214 | 0.2558 |
| 1.4163 | 16.0 | 96 | 1.4612 | 0.2326 |
| 1.4195 | 17.0 | 102 | 1.4036 | 0.2558 |
| 1.4195 | 18.0 | 108 | 1.5088 | 0.2558 |
| 1.4297 | 19.0 | 114 | 1.4009 | 0.2558 |
| 1.4404 | 20.0 | 120 | 1.3994 | 0.2558 |
| 1.4404 | 21.0 | 126 | 1.4434 | 0.2326 |
| 1.4296 | 22.0 | 132 | 1.4098 | 0.2326 |
| 1.4296 | 23.0 | 138 | 1.4135 | 0.2558 |
| 1.4052 | 24.0 | 144 | 1.4035 | 0.3023 |
| 1.3982 | 25.0 | 150 | 1.3795 | 0.3023 |
| 1.3982 | 26.0 | 156 | 1.3917 | 0.2558 |
| 1.358 | 27.0 | 162 | 1.5442 | 0.2326 |
| 1.358 | 28.0 | 168 | 1.3715 | 0.3256 |
| 1.3753 | 29.0 | 174 | 1.4626 | 0.2791 |
| 1.3737 | 30.0 | 180 | 1.4033 | 0.3023 |
| 1.3737 | 31.0 | 186 | 1.4221 | 0.3488 |
| 1.2553 | 32.0 | 192 | 1.5495 | 0.2791 |
| 1.2553 | 33.0 | 198 | 1.4332 | 0.2791 |
| 1.2089 | 34.0 | 204 | 1.4065 | 0.2791 |
| 1.2158 | 35.0 | 210 | 1.4613 | 0.2791 |
| 1.2158 | 36.0 | 216 | 1.4360 | 0.3256 |
| 1.1733 | 37.0 | 222 | 1.4966 | 0.3256 |
| 1.1733 | 38.0 | 228 | 1.4024 | 0.2791 |
| 1.1359 | 39.0 | 234 | 1.3752 | 0.2791 |
| 1.1239 | 40.0 | 240 | 1.4121 | 0.3023 |
| 1.1239 | 41.0 | 246 | 1.4047 | 0.2791 |
| 1.0932 | 42.0 | 252 | 1.4032 | 0.2791 |
| 1.0932 | 43.0 | 258 | 1.4032 | 0.2791 |
| 1.0875 | 44.0 | 264 | 1.4032 | 0.2791 |
| 1.102 | 45.0 | 270 | 1.4032 | 0.2791 |
| 1.102 | 46.0 | 276 | 1.4032 | 0.2791 |
| 1.0783 | 47.0 | 282 | 1.4032 | 0.2791 |
| 1.0783 | 48.0 | 288 | 1.4032 | 0.2791 |
| 1.1264 | 49.0 | 294 | 1.4032 | 0.2791 |
| 1.0785 | 50.0 | 300 | 1.4032 | 0.2791 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_rms_001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_small_rms_001_fold4

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2235
- Accuracy: 0.4048

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 6.3906 | 0.2381 |
| 3.8063 | 2.0 | 12 | 1.7015 | 0.2619 |
| 3.8063 | 3.0 | 18 | 2.0641 | 0.2619 |
| 1.9221 | 4.0 | 24 | 1.7697 | 0.2381 |
| 1.6782 | 5.0 | 30 | 1.4022 | 0.2619 |
| 1.6782 | 6.0 | 36 | 1.7511 | 0.2381 |
| 1.5442 | 7.0 | 42 | 1.4627 | 0.2381 |
| 1.5442 | 8.0 | 48 | 1.4402 | 0.2619 |
| 1.4869 | 9.0 | 54 | 1.4717 | 0.2619 |
| 1.4572 | 10.0 | 60 | 1.4285 | 0.2381 |
| 1.4572 | 11.0 | 66 | 1.4073 | 0.2619 |
| 1.4861 | 12.0 | 72 | 1.4071 | 0.3095 |
| 1.4861 | 13.0 | 78 | 1.3676 | 0.3095 |
| 1.4283 | 14.0 | 84 | 1.4281 | 0.2381 |
| 1.4135 | 15.0 | 90 | 1.4437 | 0.2381 |
| 1.4135 | 16.0 | 96 | 1.3561 | 0.3095 |
| 1.375 | 17.0 | 102 | 1.3574 | 0.2857 |
| 1.375 | 18.0 | 108 | 1.2368 | 0.2857 |
| 1.3639 | 19.0 | 114 | 1.4601 | 0.2857 |
| 1.2891 | 20.0 | 120 | 1.7927 | 0.2381 |
| 1.2891 | 21.0 | 126 | 1.2451 | 0.4048 |
| 1.3173 | 22.0 | 132 | 1.1578 | 0.4762 |
| 1.3173 | 23.0 | 138 | 1.3222 | 0.3095 |
| 1.2505 | 24.0 | 144 | 1.3748 | 0.2381 |
| 1.263 | 25.0 | 150 | 1.3699 | 0.2857 |
| 1.263 | 26.0 | 156 | 1.2508 | 0.3810 |
| 1.2132 | 27.0 | 162 | 1.1843 | 0.4048 |
| 1.2132 | 28.0 | 168 | 1.4161 | 0.2619 |
| 1.1485 | 29.0 | 174 | 1.1305 | 0.4524 |
| 1.181 | 30.0 | 180 | 1.1818 | 0.4524 |
| 1.181 | 31.0 | 186 | 1.2906 | 0.4048 |
| 1.131 | 32.0 | 192 | 1.1623 | 0.4762 |
| 1.131 | 33.0 | 198 | 1.2826 | 0.4524 |
| 1.164 | 34.0 | 204 | 1.1932 | 0.4524 |
| 1.0879 | 35.0 | 210 | 1.1104 | 0.4286 |
| 1.0879 | 36.0 | 216 | 1.0661 | 0.5714 |
| 1.1012 | 37.0 | 222 | 1.2594 | 0.4048 |
| 1.1012 | 38.0 | 228 | 1.1459 | 0.4286 |
| 1.0505 | 39.0 | 234 | 1.1918 | 0.4524 |
| 1.0052 | 40.0 | 240 | 1.2662 | 0.4286 |
| 1.0052 | 41.0 | 246 | 1.2165 | 0.4048 |
| 0.9631 | 42.0 | 252 | 1.2235 | 0.4048 |
| 0.9631 | 43.0 | 258 | 1.2235 | 0.4048 |
| 0.9397 | 44.0 | 264 | 1.2235 | 0.4048 |
| 0.9545 | 45.0 | 270 | 1.2235 | 0.4048 |
| 0.9545 | 46.0 | 276 | 1.2235 | 0.4048 |
| 0.9591 | 47.0 | 282 | 1.2235 | 0.4048 |
| 0.9591 | 48.0 | 288 | 1.2235 | 0.4048 |
| 0.9579 | 49.0 | 294 | 1.2235 | 0.4048 |
| 0.9362 | 50.0 | 300 | 1.2235 | 0.4048 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_small_rms_001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hushem_1x_deit_small_rms_001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0780
- Accuracy: 0.5854

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 5.8408 | 0.2683 |
| 4.7719 | 2.0 | 12 | 1.7477 | 0.2683 |
| 4.7719 | 3.0 | 18 | 1.7384 | 0.2683 |
| 1.932 | 4.0 | 24 | 1.4924 | 0.2439 |
| 1.6238 | 5.0 | 30 | 1.5127 | 0.2439 |
| 1.6238 | 6.0 | 36 | 1.7015 | 0.2683 |
| 1.5494 | 7.0 | 42 | 1.5012 | 0.2439 |
| 1.5494 | 8.0 | 48 | 1.4936 | 0.2683 |
| 1.5044 | 9.0 | 54 | 1.4046 | 0.2683 |
| 1.5179 | 10.0 | 60 | 1.4005 | 0.2439 |
| 1.5179 | 11.0 | 66 | 1.4179 | 0.2683 |
| 1.475 | 12.0 | 72 | 1.4436 | 0.2439 |
| 1.475 | 13.0 | 78 | 1.6356 | 0.2439 |
| 1.4371 | 14.0 | 84 | 1.3958 | 0.2683 |
| 1.4353 | 15.0 | 90 | 1.3814 | 0.2439 |
| 1.4353 | 16.0 | 96 | 1.5489 | 0.2439 |
| 1.4229 | 17.0 | 102 | 1.4209 | 0.2439 |
| 1.4229 | 18.0 | 108 | 1.3777 | 0.2439 |
| 1.4337 | 19.0 | 114 | 1.3611 | 0.3171 |
| 1.399 | 20.0 | 120 | 1.3904 | 0.4146 |
| 1.399 | 21.0 | 126 | 1.3106 | 0.3415 |
| 1.3792 | 22.0 | 132 | 1.5271 | 0.3171 |
| 1.3792 | 23.0 | 138 | 1.4354 | 0.3171 |
| 1.353 | 24.0 | 144 | 1.2465 | 0.3171 |
| 1.3278 | 25.0 | 150 | 1.4779 | 0.2683 |
| 1.3278 | 26.0 | 156 | 1.1238 | 0.6585 |
| 1.2815 | 27.0 | 162 | 1.0700 | 0.4878 |
| 1.2815 | 28.0 | 168 | 1.4309 | 0.2927 |
| 1.2766 | 29.0 | 174 | 1.1073 | 0.5854 |
| 1.2458 | 30.0 | 180 | 1.0518 | 0.5366 |
| 1.2458 | 31.0 | 186 | 1.0678 | 0.5366 |
| 1.196 | 32.0 | 192 | 1.0365 | 0.5122 |
| 1.196 | 33.0 | 198 | 1.0762 | 0.4878 |
| 1.1298 | 34.0 | 204 | 1.1843 | 0.4390 |
| 1.1053 | 35.0 | 210 | 0.9867 | 0.5610 |
| 1.1053 | 36.0 | 216 | 0.9844 | 0.5854 |
| 1.0778 | 37.0 | 222 | 1.2314 | 0.4878 |
| 1.0778 | 38.0 | 228 | 0.9827 | 0.5854 |
| 1.0269 | 39.0 | 234 | 1.0882 | 0.5854 |
| 0.9486 | 40.0 | 240 | 1.0901 | 0.5854 |
| 0.9486 | 41.0 | 246 | 1.0899 | 0.5854 |
| 0.9443 | 42.0 | 252 | 1.0780 | 0.5854 |
| 0.9443 | 43.0 | 258 | 1.0780 | 0.5854 |
| 0.8824 | 44.0 | 264 | 1.0780 | 0.5854 |
| 0.8971 | 45.0 | 270 | 1.0780 | 0.5854 |
| 0.8971 | 46.0 | 276 | 1.0780 | 0.5854 |
| 0.8963 | 47.0 | 282 | 1.0780 | 0.5854 |
| 0.8963 | 48.0 | 288 | 1.0780 | 0.5854 |
| 0.9026 | 49.0 | 294 | 1.0780 | 0.5854 |
| 0.9008 | 50.0 | 300 | 1.0780 | 0.5854 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
Akshay0706/Rice-Plant-50-Epochs-Model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Rice-Plant-50-Epochs-Model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- Accuracy: 0.9688
- F1: 0.9686

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0399 | 1.0 | 115 | 0.6185 | 0.8910 | 0.8933 |
| 0.3392 | 2.0 | 230 | 0.2849 | 0.9502 | 0.9497 |
| 0.1633 | 3.0 | 345 | 0.2230 | 0.9439 | 0.9440 |
| 0.104 | 4.0 | 460 | 0.2022 | 0.9502 | 0.9495 |
| 0.0828 | 5.0 | 575 | 0.2081 | 0.9408 | 0.9406 |
| 0.0603 | 6.0 | 690 | 0.2301 | 0.9408 | 0.9403 |
| 0.0513 | 7.0 | 805 | 0.1704 | 0.9595 | 0.9593 |
| 0.042 | 8.0 | 920 | 0.1587 | 0.9626 | 0.9626 |
| 0.0356 | 9.0 | 1035 | 0.1606 | 0.9626 | 0.9625 |
| 0.0299 | 10.0 | 1150 | 0.1608 | 0.9657 | 0.9656 |
| 0.0262 | 11.0 | 1265 | 0.1553 | 0.9626 | 0.9625 |
| 0.0232 | 12.0 | 1380 | 0.1582 | 0.9657 | 0.9656 |
| 0.0207 | 13.0 | 1495 | 0.1588 | 0.9657 | 0.9656 |
| 0.0186 | 14.0 | 1610 | 0.1618 | 0.9657 | 0.9656 |
| 0.0168 | 15.0 | 1725 | 0.1618 | 0.9657 | 0.9656 |
| 0.0152 | 16.0 | 1840 | 0.1639 | 0.9657 | 0.9656 |
| 0.0139 | 17.0 | 1955 | 0.1649 | 0.9688 | 0.9686 |
| 0.0127 | 18.0 | 2070 | 0.1676 | 0.9657 | 0.9656 |
| 0.0117 | 19.0 | 2185 | 0.1688 | 0.9688 | 0.9686 |
| 0.0108 | 20.0 | 2300 | 0.1710 | 0.9626 | 0.9622 |
| 0.01 | 21.0 | 2415 | 0.1723 | 0.9657 | 0.9654 |
| 0.0093 | 22.0 | 2530 | 0.1739 | 0.9657 | 0.9654 |
| 0.0087 | 23.0 | 2645 | 0.1758 | 0.9626 | 0.9622 |
| 0.0081 | 24.0 | 2760 | 0.1776 | 0.9626 | 0.9622 |
| 0.0076 | 25.0 | 2875 | 0.1777 | 0.9657 | 0.9654 |
| 0.0071 | 26.0 | 2990 | 0.1792 | 0.9657 | 0.9654 |
| 0.0067 | 27.0 | 3105 | 0.1808 | 0.9657 | 0.9654 |
| 0.0063 | 28.0 | 3220 | 0.1822 | 0.9657 | 0.9654 |
| 0.006 | 29.0 | 3335 | 0.1834 | 0.9657 | 0.9654 |
| 0.0057 | 30.0 | 3450 | 0.1840 | 0.9657 | 0.9654 |
| 0.0054 | 31.0 | 3565 | 0.1855 | 0.9657 | 0.9654 |
| 0.0051 | 32.0 | 3680 | 0.1868 | 0.9657 | 0.9654 |
| 0.0049 | 33.0 | 3795 | 0.1877 | 0.9657 | 0.9654 |
| 0.0047 | 34.0 | 3910 | 0.1892 | 0.9657 | 0.9654 |
| 0.0045 | 35.0 | 4025 | 0.1900 | 0.9657 | 0.9654 |
| 0.0043 | 36.0 | 4140 | 0.1914 | 0.9657 | 0.9654 |
| 0.0042 | 37.0 | 4255 | 0.1919 | 0.9657 | 0.9654 |
| 0.004 | 38.0 | 4370 | 0.1929 | 0.9657 | 0.9654 |
| 0.0039 | 39.0 | 4485 | 0.1938 | 0.9657 | 0.9654 |
| 0.0037 | 40.0 | 4600 | 0.1953 | 0.9657 | 0.9654 |
| 0.0036 | 41.0 | 4715 | 0.1956 | 0.9657 | 0.9654 |
| 0.0035 | 42.0 | 4830 | 0.1965 | 0.9657 | 0.9654 |
| 0.0035 | 43.0 | 4945 | 0.1974 | 0.9657 | 0.9654 |
| 0.0034 | 44.0 | 5060 | 0.1981 | 0.9657 | 0.9654 |
| 0.0033 | 45.0 | 5175 | 0.1984 | 0.9657 | 0.9654 |
| 0.0032 | 46.0 | 5290 | 0.1986 | 0.9657 | 0.9654 |
| 0.0032 | 47.0 | 5405 | 0.1989 | 0.9657 | 0.9654 |
| 0.0032 | 48.0 | 5520 | 0.1993 | 0.9657 | 0.9654 |
| 0.0031 | 49.0 | 5635 | 0.1993 | 0.9657 | 0.9654 |
| 0.0031 | 50.0 | 5750 | 0.1993 | 0.9657 | 0.9654 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
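The card above reports Accuracy and F1 per epoch. As an illustrative sketch of how such multiclass metrics are typically computed (accuracy plus an unweighted macro average of per-class F1; the actual card may use a different averaging mode, and this helper is not taken from the training script):

```python
def accuracy_and_macro_f1(y_true, y_pred):
    """Accuracy plus the unweighted (macro) average of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return acc, sum(f1s) / len(f1s)

# Toy example with label ids 0-5, as in this card's label list.
y_true = [0, 1, 2, 3, 4, 5, 0, 1]
y_pred = [0, 1, 2, 3, 4, 5, 1, 1]
acc, macro_f1 = accuracy_and_macro_f1(y_true, y_pred)
```

Because macro F1 weights every class equally, it tracks accuracy closely only when the classes are balanced, which is consistent with the two columns staying within ~0.001 of each other in the table.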
[ "0", "1", "2", "3", "4", "5" ]
zkdeng/resnet-50-finetuned-combinedSpiders
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# resnet-50-finetuned-combinedSpiders

This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3794
- eval_accuracy: 0.8996
- eval_precision: 0.8983
- eval_recall: 0.8934
- eval_f1: 0.8943
- eval_runtime: 14.9052
- eval_samples_per_second: 181.145
- eval_steps_per_second: 11.338
- step: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
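This card's hyperparameters show how `total_train_batch_size: 64` follows from `train_batch_size: 16` and `gradient_accumulation_steps: 4`: gradients from several micro-batches are combined before each optimizer step. A minimal pure-Python sketch of the idea (scalar "gradients" stand in for real tensors; this is not the Trainer's implementation):

```python
def effective_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    """Total number of examples that contribute to one optimizer update."""
    return per_device_batch * grad_accum_steps * num_devices

def accumulate(micro_batch_grads, grad_accum_steps):
    """Average micro-batch gradients over grad_accum_steps, then 'step'.

    Each element of micro_batch_grads is a scalar standing in for one
    micro-batch's gradient; each returned value is one optimizer update.
    """
    updates = []
    for i in range(0, len(micro_batch_grads), grad_accum_steps):
        chunk = micro_batch_grads[i:i + grad_accum_steps]
        updates.append(sum(chunk) / len(chunk))  # one update per full chunk
    return updates

# 16 examples per device x 4 accumulation steps = 64, matching the card.
print(effective_batch_size(16, 4))
```

Accumulation trades memory for wall-clock time: the optimizer sees the statistics of a batch of 64 while only 16 examples ever reside on the device at once.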
[ "annual crop", "forest", "herbaceous vegetation", "highway", "industrial", "pasture", "permanent crop", "residential", "river", "sea or lake" ]