Columns: model_id (string, length 7 to 105), model_card (string, length 1 to 130k), model_labels (list, length 2 to 80k)
Ivanrs/vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SEC
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SEC This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3066 - Accuracy: 0.9417 - Precision: 0.9483 - Recall: 0.9417 - F1: 0.9388 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.056 | 0.6667 | 100 | 0.4899 | 0.85 | 0.8963 | 0.85 | 0.8476 | | 0.0229 | 1.3333 | 200 | 0.5003 | 0.8792 | 0.9087 | 0.8792 | 0.8645 | | 0.0082 | 2.0 | 300 | 0.3076 | 0.8883 | 0.9190 | 0.8883 | 0.8891 | | 0.0049 | 2.6667 | 400 | 0.4297 | 0.9067 | 0.9307 | 0.9067 | 0.9055 | | 0.0355 | 3.3333 | 500 | 0.7084 | 0.8325 | 0.9102 | 0.8325 | 0.8265 | | 0.0752 | 4.0 | 600 | 0.5323 | 0.875 | 0.8919 | 0.875 | 0.8602 | | 0.0025 | 4.6667 | 700 | 0.4350 | 0.8983 | 0.9142 | 0.8983 | 0.8952 | | 0.0018 | 5.3333 | 800 | 0.3244 | 0.935 | 0.9428 | 0.935 | 0.9310 | | 0.0014 | 6.0 | 900 | 0.3183 | 0.9367 | 0.9443 | 0.9367 | 0.9328 | | 0.0012 | 6.6667 | 1000 | 0.3114 | 0.9367 | 0.9441 | 0.9367 | 0.9330 | | 0.0011 | 7.3333 | 1100 | 0.3090 | 0.9367 | 0.9442 | 0.9367 | 0.9330 | | 0.0009 | 8.0 | 1200 | 0.3078 | 0.9392 | 0.9463 | 0.9392 | 0.9359 | | 0.0008 | 8.6667 | 1300 | 0.3077 | 0.94 | 0.9470 | 0.94 | 0.9369 | | 0.0008 | 9.3333 | 1400 | 0.3068 | 0.9408 | 0.9476 | 0.9408 | 0.9378 | | 0.0007 | 10.0 | 1500 | 0.3068 | 0.9417 | 0.9483 | 0.9417 | 0.9388 | | 0.0007 | 10.6667 | 1600 | 0.3066 | 0.9417 | 0.9483 | 0.9417 | 0.9388 | | 0.0006 | 11.3333 | 1700 | 0.3078 | 0.9425 | 0.9490 | 0.9425 | 0.9398 | | 0.0006 | 12.0 | 1800 | 0.3080 | 0.9425 | 0.9490 | 0.9425 | 0.9398 | | 0.0006 | 12.6667 | 1900 | 0.3086 | 0.9433 | 0.9499 | 0.9433 | 0.9406 | | 0.0005 | 13.3333 | 2000 | 0.3091 | 0.9433 | 0.9499 | 0.9433 | 0.9406 | | 0.0005 | 14.0 | 2100 | 0.3093 | 0.9433 | 0.9499 | 0.9433 | 0.9406 | | 0.0005 | 14.6667 | 2200 | 0.3095 | 0.9433 | 0.9499 | 0.9433 | 0.9406 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sec-subtype_iiia", "sec-subtype_iia", "sec-subtype_ivc", "sec-subtype_ivd", "sec-subtype_ia", "sec-subtype_va" ]
Ivanrs/vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4946 - Accuracy: 0.9075 - Precision: 0.9136 - Recall: 0.9075 - F1: 0.9046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2895 | 0.6667 | 100 | 0.5586 | 0.795 | 0.8452 | 0.795 | 0.7997 | | 0.0848 | 1.3333 | 200 | 0.8609 | 0.7975 | 0.8401 | 0.7975 | 0.7883 | | 0.0782 | 2.0 | 300 | 0.7032 | 0.81 | 0.8414 | 0.81 | 0.8116 | | 0.0158 | 2.6667 | 400 | 0.7198 | 0.8342 | 0.8570 | 0.8342 | 0.8336 | | 0.0327 | 3.3333 | 500 | 0.7624 | 0.8458 | 0.8484 | 0.8458 | 0.8448 | | 0.0044 | 4.0 | 600 | 0.6172 | 0.8792 | 0.8926 | 0.8792 | 0.8769 | | 0.0032 | 4.6667 | 700 | 0.7772 | 0.8517 | 0.8589 | 0.8517 | 0.8496 | | 0.0026 | 5.3333 | 800 | 0.8897 | 0.8375 | 0.8478 | 0.8375 | 0.8351 | | 0.0033 | 6.0 | 900 | 0.4946 | 0.9075 | 0.9136 | 0.9075 | 0.9046 | | 0.0019 | 6.6667 | 1000 | 0.6971 | 0.8725 | 0.8727 | 0.8725 | 0.8716 | | 0.0016 | 7.3333 | 1100 | 0.7355 | 0.8692 | 0.8711 | 0.8692 | 0.8685 | | 0.0136 | 8.0 | 1200 | 0.9004 | 0.8675 | 0.8900 | 0.8675 | 0.8613 | | 0.0013 | 8.6667 | 1300 | 0.7646 | 0.875 | 0.8837 | 0.875 | 0.8715 | | 0.0011 | 9.3333 | 1400 | 0.7833 | 0.875 | 0.8786 | 0.875 | 0.8729 | | 0.0009 | 10.0 | 1500 | 0.7968 | 0.8767 | 0.8800 | 0.8767 | 0.8747 | | 0.0009 | 10.6667 | 1600 | 0.8085 | 0.8758 | 0.8790 | 0.8758 | 0.8738 | | 0.0008 | 11.3333 | 1700 | 0.8175 | 0.8758 | 0.8790 | 0.8758 | 0.8738 | | 0.0008 | 12.0 | 1800 | 0.8242 | 0.8767 | 0.8801 | 0.8767 | 0.8746 | | 0.0007 | 12.6667 | 1900 | 0.8292 | 0.8767 | 0.8801 | 0.8767 | 0.8746 | | 0.0007 | 13.3333 | 2000 | 0.8335 | 0.8775 | 0.8812 | 0.8775 | 0.8754 | | 0.0007 | 14.0 | 2100 | 0.8363 | 0.8775 | 0.8812 | 0.8775 | 0.8754 | | 0.0007 | 14.6667 | 2200 | 0.8376 | 0.8775 | 0.8812 | 0.8775 | 0.8754 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sur-subtype_iiia", "sur-subtype_iia", "sur-subtype_ivc", "sur-subtype_ivd", "sur-subtype_ia", "sur-subtype_va" ]
Ivanrs/vit-base-kidney-stone-Michel_Daudon_-w256_1k_v1-_MIX
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-kidney-stone-Michel_Daudon_-w256_1k_v1-_MIX This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4940 - Accuracy: 0.8337 - Precision: 0.8589 - Recall: 0.8337 - F1: 0.8356 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1919 | 0.3333 | 100 | 0.4940 | 0.8337 | 0.8589 | 0.8337 | 0.8356 | | 0.1697 | 0.6667 | 200 | 0.6993 | 0.8092 | 0.8485 | 0.8092 | 0.8059 | | 0.1514 | 1.0 | 300 | 0.5555 | 0.8442 | 0.8565 | 0.8442 | 0.8443 | | 0.0991 | 1.3333 | 400 | 0.5918 | 0.8467 | 0.8741 | 0.8467 | 0.8453 | | 0.0415 | 1.6667 | 500 | 0.6080 | 0.8558 | 0.8690 | 0.8558 | 0.8553 | | 0.1112 | 2.0 | 600 | 0.9788 | 0.7983 | 0.8485 | 0.7983 | 0.8028 | | 0.0658 | 2.3333 | 700 | 1.0272 | 0.8004 | 0.8310 | 0.8004 | 0.8002 | | 0.0977 | 2.6667 | 800 | 0.6861 | 0.8479 | 0.8570 | 0.8479 | 0.8482 | | 0.03 | 3.0 | 900 | 0.8317 | 0.8025 | 0.8225 | 0.8025 | 0.8048 | | 0.0253 | 3.3333 | 1000 | 0.8574 | 0.8242 | 0.8408 | 0.8242 | 0.8254 | | 0.0564 | 3.6667 | 1100 | 0.8591 | 0.8392 | 0.8513 | 0.8392 | 0.8343 | | 0.0285 | 4.0 | 1200 | 1.3453 | 0.7512 | 0.8090 | 0.7512 | 0.7484 | | 0.002 | 4.3333 | 1300 | 0.9746 | 0.8192 | 0.8381 | 0.8192 | 0.8227 | | 0.0214 | 4.6667 | 1400 | 0.7404 | 0.8646 | 0.8641 | 0.8646 | 0.8572 | | 0.0282 | 5.0 | 1500 | 1.0063 | 0.8233 | 0.8486 | 0.8233 | 0.8219 | | 0.03 | 5.3333 | 1600 | 1.0066 | 0.8025 | 0.8376 | 0.8025 | 0.8058 | | 0.028 | 5.6667 | 1700 | 1.1451 | 0.8108 | 0.8325 | 0.8108 | 0.8067 | | 0.0078 | 6.0 | 1800 | 1.0700 | 0.805 | 0.8220 | 0.805 | 0.8045 | | 0.0008 | 6.3333 | 1900 | 1.0180 | 0.8146 | 0.8303 | 0.8146 | 0.8165 | | 0.0008 | 6.6667 | 2000 | 0.9882 | 0.8246 | 0.8401 | 0.8246 | 0.8236 | | 0.0006 | 7.0 | 2100 | 1.0366 | 0.8283 | 0.8424 | 0.8283 | 0.8270 | | 0.0009 | 7.3333 | 2200 | 1.1136 | 0.8121 | 0.8309 | 0.8121 | 0.8143 | | 0.0068 | 7.6667 | 2300 | 1.0873 | 0.8117 | 0.8128 | 0.8117 | 0.8015 | | 0.0006 | 8.0 | 2400 | 0.8601 | 0.8325 | 0.8383 | 0.8325 | 0.8292 | | 0.0187 | 8.3333 | 2500 | 0.9700 | 0.8258 | 0.8375 | 0.8258 | 0.8241 | | 0.0005 | 8.6667 | 2600 | 0.8825 | 0.8175 | 0.8339 | 0.8175 | 0.8199 | | 0.0005 | 9.0 | 2700 | 1.0314 | 0.8242 | 0.8455 | 0.8242 | 0.8230 | | 0.0004 | 9.3333 | 2800 | 1.0323 | 0.8233 | 0.8443 | 0.8233 | 0.8230 | | 0.0003 | 9.6667 | 2900 | 1.0397 | 0.8229 | 0.8433 | 0.8229 | 0.8229 | | 0.0003 | 10.0 | 3000 | 1.0473 | 0.8237 | 0.8437 | 0.8237 | 0.8239 | | 0.0003 | 10.3333 | 3100 | 1.0536 | 0.8229 | 0.8428 | 0.8229 | 0.8233 | | 0.0003 | 10.6667 | 3200 | 1.0605 | 0.8229 | 0.8429 | 0.8229 | 0.8234 | | 0.0003 | 11.0 | 3300 | 1.0667 | 0.8229 | 0.8429 | 0.8229 | 0.8234 | | 0.0002 | 11.3333 | 3400 | 1.0711 | 0.8237 | 0.8436 | 0.8237 | 0.8243 | | 0.0002 | 11.6667 | 3500 | 1.0750 | 0.8246 | 0.8441 | 0.8246 | 0.8251 | | 0.0002 | 12.0 | 3600 | 1.0804 | 0.825 | 0.8443 | 0.825 | 0.8257 | | 0.0002 | 12.3333 | 3700 | 1.0839 | 0.825 | 0.8440 | 0.825 | 0.8257 | | 0.0002 | 12.6667 | 3800 | 1.0875 | 0.8246 | 0.8436 | 0.8246 | 0.8253 | | 0.0002 | 13.0 | 3900 | 1.0909 | 0.8246 | 0.8436 | 0.8246 | 0.8253 | | 0.0002 | 13.3333 | 4000 | 1.0930 | 0.8246 | 0.8436 | 0.8246 | 0.8253 | | 0.0002 | 13.6667 | 4100 | 1.0954 | 0.8237 | 0.8429 | 0.8237 | 0.8246 | | 0.0002 | 14.0 | 4200 | 1.0975 | 0.8237 | 0.8429 | 0.8237 | 0.8246 | | 0.0002 | 14.3333 | 4300 | 1.0988 | 0.8237 | 0.8429 | 0.8237 | 0.8246 | | 0.0002 | 14.6667 | 4400 | 1.0997 | 0.8237 | 0.8429 | 0.8237 | 0.8246 | | 0.0002 | 15.0 | 4500 | 1.1000 | 0.8237 | 0.8429 | 0.8237 | 0.8246 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "mix-subtype_iva", "mix-subtype_iva2", "mix-subtype_ivc", "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va" ]
Ivanrs/vit-base-kidney-stone-Michel_Daudon_-w256_1k_v1-_SEC
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-kidney-stone-Michel_Daudon_-w256_1k_v1-_SEC This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3802 - Accuracy: 0.8975 - Precision: 0.9004 - Recall: 0.8975 - F1: 0.8961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1982 | 0.6667 | 100 | 0.5328 | 0.8342 | 0.8678 | 0.8342 | 0.8304 | | 0.103 | 1.3333 | 200 | 0.5614 | 0.8342 | 0.8518 | 0.8342 | 0.8235 | | 0.0646 | 2.0 | 300 | 0.3802 | 0.8975 | 0.9004 | 0.8975 | 0.8961 | | 0.0206 | 2.6667 | 400 | 0.5236 | 0.8908 | 0.8932 | 0.8908 | 0.8910 | | 0.0073 | 3.3333 | 500 | 0.4848 | 0.885 | 0.9037 | 0.885 | 0.8879 | | 0.0237 | 4.0 | 600 | 0.6534 | 0.8617 | 0.8872 | 0.8617 | 0.8633 | | 0.0414 | 4.6667 | 700 | 0.5937 | 0.8808 | 0.8914 | 0.8808 | 0.8782 | | 0.0027 | 5.3333 | 800 | 0.5129 | 0.8933 | 0.8992 | 0.8933 | 0.8953 | | 0.0023 | 6.0 | 900 | 0.6645 | 0.8867 | 0.9012 | 0.8867 | 0.8876 | | 0.0017 | 6.6667 | 1000 | 0.4428 | 0.9158 | 0.9162 | 0.9158 | 0.9158 | | 0.0014 | 7.3333 | 1100 | 0.4490 | 0.9183 | 0.9188 | 0.9183 | 0.9183 | | 0.0012 | 8.0 | 1200 | 0.4573 | 0.9183 | 0.9188 | 0.9183 | 0.9183 | | 0.0011 | 8.6667 | 1300 | 0.4643 | 0.9183 | 0.9186 | 0.9183 | 0.9182 | | 0.001 | 9.3333 | 1400 | 0.4724 | 0.9175 | 0.9178 | 0.9175 | 0.9174 | | 0.0009 | 10.0 | 1500 | 0.4783 | 0.9192 | 0.9196 | 0.9192 | 0.9191 | | 0.0008 | 10.6667 | 1600 | 0.4834 | 0.92 | 0.9205 | 0.92 | 0.9200 | | 0.0008 | 11.3333 | 1700 | 0.4880 | 0.9183 | 0.9188 | 0.9183 | 0.9183 | | 0.0007 | 12.0 | 1800 | 0.4913 | 0.9192 | 0.9196 | 0.9192 | 0.9191 | | 0.0007 | 12.6667 | 1900 | 0.4946 | 0.9192 | 0.9196 | 0.9192 | 0.9191 | | 0.0007 | 13.3333 | 2000 | 0.4967 | 0.9192 | 0.9196 | 0.9192 | 0.9191 | | 0.0006 | 14.0 | 2100 | 0.4982 | 0.9192 | 0.9196 | 0.9192 | 0.9191 | | 0.0006 | 14.6667 | 2200 | 0.4990 | 0.9192 | 0.9196 | 0.9192 | 0.9191 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sec-subtype_iva", "sec-subtype_iva2", "sec-subtype_ivc", "sec-subtype_ivd", "sec-subtype_ia", "sec-subtype_va" ]
ericakcc/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0422 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0686 | 1.5385 | 100 | 0.0757 | 0.9774 | | 0.0152 | 3.0769 | 200 | 0.0422 | 0.9850 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
alyzbane/2025-02-05-21-58-41-resnet-50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-02-05-21-58-41-resnet-50 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0762 - Precision: 0.9810 - Recall: 0.9805 - F1: 0.9804 - Accuracy: 0.9766 - Top1 Accuracy: 0.9805 - Error Rate: 0.0234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 2.4636 | 1.0 | 103 | 2.1548 | 0.6867 | 0.6293 | 0.5929 | 0.5824 | 0.6293 | 0.4176 | | 1.3967 | 2.0 | 206 | 0.5586 | 0.8893 | 0.8780 | 0.8770 | 0.8743 | 0.8780 | 0.1257 | | 0.4328 | 3.0 | 309 | 0.2100 | 0.9565 | 0.9512 | 0.9518 | 0.9524 | 0.9512 | 0.0476 | | 0.2544 | 4.0 | 412 | 0.1414 | 0.9628 | 0.9610 | 0.9613 | 0.9588 | 0.9610 | 0.0412 | | 0.171 | 5.0 | 515 | 0.1127 | 0.9690 | 0.9683 | 0.9683 | 0.9638 | 0.9683 | 0.0362 | | 0.1556 | 6.0 | 618 | 0.0976 | 0.9715 | 0.9707 | 0.9706 | 0.9681 | 0.9707 | 0.0319 | | 0.118 | 7.0 | 721 | 0.0762 | 0.9810 | 0.9805 | 0.9804 | 0.9766 | 0.9805 | 0.0234 | | 0.1142 | 8.0 | 824 | 0.0853 | 0.9809 | 0.9805 | 0.9804 | 0.9813 | 0.9805 | 0.0187 | | 0.0978 | 9.0 | 927 | 0.0798 | 0.9808 | 0.9805 | 0.9803 | 0.9788 | 0.9805 | 0.0212 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "acacia", "coconut", "dau", "dita", "ilang-ilang", "macarthur", "mango", "mulawin", "narra", "palmera", "royal palm", "santol", "tabebuia" ]
alyzbane/2025-02-05-14-22-36-vit-base-patch16-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-02-05-14-22-36-vit-base-patch16-224 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0410 - Precision: 0.9931 - Recall: 0.9927 - F1: 0.9927 - Accuracy: 0.9931 - Top1 Accuracy: 0.9927 - Error Rate: 0.0069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 0.9229 | 1.0 | 103 | 0.2079 | 0.9541 | 0.9366 | 0.9386 | 0.9333 | 0.9366 | 0.0667 | | 0.1707 | 2.0 | 206 | 0.1222 | 0.9726 | 0.9707 | 0.9688 | 0.9664 | 0.9707 | 0.0336 | | 0.0937 | 3.0 | 309 | 0.1807 | 0.9552 | 0.9512 | 0.9505 | 0.9561 | 0.9512 | 0.0439 | | 0.0619 | 4.0 | 412 | 0.1122 | 0.9764 | 0.9756 | 0.9754 | 0.9778 | 0.9756 | 0.0222 | | 0.0309 | 5.0 | 515 | 0.0803 | 0.9884 | 0.9878 | 0.9875 | 0.9863 | 0.9878 | 0.0137 | | 0.0275 | 6.0 | 618 | 0.0437 | 0.9904 | 0.9902 | 0.9902 | 0.9905 | 0.9902 | 0.0095 | | 0.0103 | 7.0 | 721 | 0.0426 | 0.9905 | 0.9902 | 0.9903 | 0.9908 | 0.9902 | 0.0092 | | 0.0065 | 8.0 | 824 | 0.0414 | 0.9953 | 0.9951 | 0.9951 | 0.9949 | 0.9951 | 0.0051 | | 0.004 | 9.0 | 927 | 0.0410 | 0.9931 | 0.9927 | 0.9927 | 0.9931 | 0.9927 | 0.0069 | | 0.0007 | 10.0 | 1030 | 0.0416 | 0.9953 | 0.9951 | 0.9951 | 0.9949 | 0.9951 | 0.0051 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "acacia", "coconut", "dau", "dita", "ilang-ilang", "macarthur", "mango", "mulawin", "narra", "palmera", "royal palm", "santol", "tabebuia" ]
alyzbane/2025-02-05-15-01-55-swin-base-patch4-window7-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-02-05-15-01-55-swin-base-patch4-window7-224 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0147 - Precision: 0.9953 - Recall: 0.9951 - F1: 0.9951 - Accuracy: 0.9947 - Top1 Accuracy: 0.9951 - Error Rate: 0.0053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 0.8621 | 1.0 | 103 | 0.2049 | 0.9405 | 0.9220 | 0.9233 | 0.9284 | 0.9220 | 0.0716 | | 0.3112 | 2.0 | 206 | 0.1944 | 0.9579 | 0.9537 | 0.9511 | 0.9419 | 0.9537 | 0.0581 | | 0.1598 | 3.0 | 309 | 0.1673 | 0.9635 | 0.9610 | 0.9610 | 0.9627 | 0.9610 | 0.0373 | | 0.1019 | 4.0 | 412 | 0.0472 | 0.9856 | 0.9854 | 0.9853 | 0.9858 | 0.9854 | 0.0142 | | 0.0779 | 5.0 | 515 | 0.3869 | 0.9388 | 0.9268 | 0.9246 | 0.9236 | 0.9268 | 0.0764 | | 0.0519 | 6.0 | 618 | 0.0224 | 0.9858 | 0.9854 | 0.9852 | 0.9852 | 0.9854 | 0.0148 | | 0.0477 | 7.0 | 721 | 0.0402 | 0.9887 | 0.9878 | 0.9879 | 0.9885 | 0.9878 | 0.0115 | | 0.0086 | 8.0 | 824 | 0.0147 | 0.9953 | 0.9951 | 0.9951 | 0.9947 | 0.9951 | 0.0053 | | 0.0052 | 9.0 | 927 | 0.0177 | 0.9953 | 0.9951 | 0.9951 | 0.9947 | 0.9951 | 0.0053 | | 0.0022 | 10.0 | 1030 | 0.0180 | 0.9953 | 0.9951 | 0.9951 | 0.9947 | 0.9951 | 0.0053 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "acacia", "coconut", "dau", "dita", "ilang-ilang", "macarthur", "mango", "mulawin", "narra", "palmera", "royal palm", "santol", "tabebuia" ]
Logistic12/vit-base-patch16-224-in21k
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV40
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV40 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7428 - Accuracy: 0.7614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 4 | 1.5349 | 0.3409 | | No log | 2.0 | 8 | 1.3213 | 0.4432 | | 4.7629 | 3.0 | 12 | 1.2541 | 0.4432 | | 4.7629 | 4.0 | 16 | 1.2072 | 0.6023 | | 4.7629 | 5.0 | 20 | 1.1313 | 0.6364 | | 3.7987 | 6.0 | 24 | 1.0712 | 0.6477 | | 3.7987 | 7.0 | 28 | 0.9677 | 0.6591 | | 3.7987 | 8.0 | 32 | 0.8655 | 0.7159 | | 3.0437 | 9.0 | 36 | 0.8564 | 0.6818 | | 3.0437 | 10.0 | 40 | 0.8003 | 0.6818 | | 3.0437 | 11.0 | 44 | 0.7987 | 0.7386 | | 2.4867 | 12.0 | 48 | 0.7619 | 0.7159 | | 2.4867 | 13.0 | 52 | 0.7426 | 0.7386 | | 2.4867 | 14.0 | 56 | 0.7492 | 0.6932 | | 2.147 | 15.0 | 60 | 0.7827 | 0.7159 | | 2.147 | 16.0 | 64 | 0.7509 | 0.7045 | | 2.147 | 17.0 | 68 | 0.7364 | 0.7386 | | 1.8443 | 18.0 | 72 | 0.7705 | 0.7159 | | 1.8443 | 19.0 | 76 | 0.7515 | 0.7273 | | 1.8443 | 20.0 | 80 | 0.7470 | 0.7386 | | 1.659 | 21.0 | 84 | 0.7495 | 0.75 | | 1.659 | 22.0 | 88 | 0.7237 | 0.75 | | 1.659 | 23.0 | 92 | 0.7440 | 0.75 | | 1.5303 | 24.0 | 96 | 0.7367 | 0.75 | | 1.5303 | 25.0 | 100 | 0.7428 | 0.7614 | | 1.5303 | 26.0 | 104 | 0.7407 | 0.75 | | 1.4305 | 27.0 | 108 | 0.7406 | 0.75 | | 1.4305 | 28.0 | 112 | 0.7423 | 0.75 | | 1.4305 | 29.0 | 116 | 0.7427 | 0.75 | | 1.3529 | 30.0 | 120 | 0.7428 | 0.75 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV41
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV41 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9385 - Accuracy: 0.6818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 1.4874 | 0.4318 | | No log | 2.0 | 4 | 1.3560 | 0.4432 | | No log | 3.0 | 6 | 1.3117 | 0.4432 | | No log | 4.0 | 8 | 1.2763 | 0.4432 | | No log | 5.0 | 10 | 1.2602 | 0.5682 | | 8.9996 | 6.0 | 12 | 1.2348 | 0.6023 | | 8.9996 | 7.0 | 14 | 1.1982 | 0.5795 | | 8.9996 | 8.0 | 16 | 1.1592 | 0.6136 | | 8.9996 | 9.0 | 18 | 1.1142 | 0.625 | | 8.9996 | 10.0 | 20 | 1.0682 | 0.6364 | | 8.9996 | 11.0 | 22 | 1.0256 | 0.6477 | | 7.429 | 12.0 | 24 | 0.9843 | 0.6705 | | 7.429 | 13.0 | 26 | 0.9602 | 0.6705 | | 7.429 | 14.0 | 28 | 0.9452 | 0.6591 | | 7.429 | 15.0 | 30 | 0.9385 | 0.6818 | | 7.429 | 16.0 | 32 | 0.9320 | 0.6705 | | 7.429 | 17.0 | 34 | 0.9285 | 0.6477 | | 6.2752 | 18.0 | 36 | 0.9239 | 0.6591 | | 6.2752 | 19.0 | 38 | 0.9214 | 0.6818 | | 6.2752 | 20.0 | 40 | 0.9206 | 0.6818 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV42
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV42 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0626 - Accuracy: 0.6932 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 1.5027 | 0.4091 | | No log | 2.0 | 4 | 1.3949 | 0.4545 | | No log | 3.0 | 6 | 1.2983 | 0.4659 | | No log | 4.0 | 8 | 1.2602 | 0.4773 | | No log | 5.0 | 10 | 1.2465 | 0.5568 | | 8.9015 | 6.0 | 12 | 1.2463 | 0.6136 | | 8.9015 | 7.0 | 14 | 1.2369 | 0.6136 | | 8.9015 | 8.0 | 16 | 1.2061 | 0.6136 | | 8.9015 | 9.0 | 18 | 1.1656 | 0.6477 | | 8.9015 | 10.0 | 20 | 1.1330 | 0.6705 | | 8.9015 | 11.0 | 22 | 1.1127 | 0.6818 | | 7.6818 | 12.0 | 24 | 1.0981 | 0.6818 | | 7.6818 | 13.0 | 26 | 1.0913 | 0.7045 | | 7.6818 | 14.0 | 28 | 1.0857 | 0.6932 | | 7.6818 | 15.0 | 30 | 1.0804 | 0.6932 | | 7.6818 | 16.0 | 32 | 1.0732 | 0.6932 | | 7.6818 | 17.0 | 34 | 1.0684 | 0.6932 | | 7.0174 | 18.0 | 36 | 1.0644 | 0.6932 | | 7.0174 | 19.0 | 38 | 1.0629 | 0.6932 | | 7.0174 | 20.0 | 40 | 1.0626 | 0.6932 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV43
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV43 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0777 - Accuracy: 0.6374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 1.5998 | 0.1319 | | No log | 2.0 | 4 | 1.4960 | 0.4176 | | No log | 3.0 | 6 | 1.3659 | 0.4396 | | No log | 4.0 | 8 | 1.3074 | 0.4286 | | No log | 5.0 | 10 | 1.2850 | 0.4286 | | 9.4768 | 6.0 | 12 | 1.2592 | 0.4396 | | 9.4768 | 7.0 | 14 | 1.2446 | 0.4505 | | 9.4768 | 8.0 | 16 | 1.2326 | 0.5495 | | 9.4768 | 9.0 | 18 | 1.2084 | 0.5714 | | 9.4768 | 10.0 | 20 | 1.1779 | 0.5604 | | 9.4768 | 11.0 | 22 | 1.1483 | 0.5604 | | 7.6512 | 12.0 | 24 | 1.1232 | 0.5824 | | 7.6512 | 13.0 | 26 | 1.1042 | 0.6044 | | 7.6512 | 14.0 | 28 | 1.0891 | 0.6154 | | 7.6512 | 15.0 | 30 | 1.0777 | 0.6374 | | 7.6512 | 16.0 | 32 | 1.0686 | 0.6264 | | 7.6512 | 17.0 | 34 | 1.0623 | 0.6264 | | 6.5906 | 18.0 | 36 | 1.0582 | 0.6264 | | 6.5906 | 19.0 | 38 | 1.0559 | 0.6154 | | 6.5906 | 20.0 | 40 | 1.0550 | 0.6154 | | 6.5906 | 21.0 | 42 | 1.0548 | 0.6154 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV44
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV44 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7302 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 1.0 | 4 | 1.4782 | 0.4659 | | No log | 2.0 | 8 | 1.3298 | 0.4432 | | 4.6468 | 3.0 | 12 | 1.2295 | 0.5341 | | 4.6468 | 4.0 | 16 | 1.1639 | 0.6477 | | 4.6468 | 5.0 | 20 | 1.0070 | 0.6477 | | 3.6233 | 6.0 | 24 | 0.9560 | 0.6477 | | 3.6233 | 7.0 | 28 | 0.8686 | 0.6932 | | 3.6233 | 8.0 | 32 | 0.8405 | 0.7045 | | 2.7642 | 9.0 | 36 | 0.8296 | 0.7045 | | 2.7642 | 10.0 | 40 | 0.8147 | 0.7159 | | 2.7642 | 11.0 | 44 | 0.8032 | 0.7386 | | 2.2276 | 12.0 | 48 | 0.7302 | 0.75 | | 2.2276 | 13.0 | 52 | 0.7815 | 0.75 | | 2.2276 | 14.0 | 56 | 0.7365 | 0.7273 | | 2.0873 | 15.0 | 60 | 0.7417 | 0.75 | | 2.0873 | 16.0 | 64 | 0.7103 | 0.75 | | 2.0873 | 17.0 | 68 | 0.7166 | 0.75 | | 1.7268 | 18.0 | 72 | 0.7360 | 0.7386 | | 1.7268 | 19.0 | 76 | 0.7432 | 0.7159 | | 1.7268 | 20.0 | 80 | 0.7206 | 0.7273 | | 1.602 | 21.0 | 84 | 0.7302 | 0.75 | | 1.602 | 22.0 | 88 | 0.7332 | 0.7159 | | 1.602 | 23.0 | 92 | 0.7401 | 0.7045 | | 1.4229 | 24.0 | 96 | 0.7472 | 0.7273 | | 1.4229 | 25.0 | 100 | 0.7525 | 0.7273 | | 1.4229 | 26.0 | 104 | 0.7436 | 0.7273 | | 1.3233 | 27.0 | 108 | 0.7411 | 0.7273 | | 1.3233 | 28.0 | 112 | 0.7398 | 0.7273 | | 1.3233 | 29.0 | 116 | 0.7398 | 0.7159 | | 1.2076 | 30.0 | 120 | 0.7407 | 0.7273 | | 1.2076 | 31.0 | 124 | 0.7412 | 0.7273 | | 1.2076 | 31.6154 | 126 | 0.7412 | 0.7273 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV45
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV45 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7812 - Accuracy: 0.7841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 6.1945 | 1.0 | 20 | 1.2588 | 0.4545 | | 4.5836 | 2.0 | 40 | 0.9658 | 0.7159 | | 2.9056 | 3.0 | 60 | 0.7737 | 0.7273 | | 2.8061 | 4.0 | 80 | 0.6738 | 0.7727 | | 1.9405 | 5.0 | 100 | 0.6261 | 0.7614 | | 1.4425 | 6.0 | 120 | 0.8127 | 0.75 | | 1.3554 | 7.0 | 140 | 0.7812 | 0.7841 | | 1.2975 | 8.0 | 160 | 0.8405 | 0.75 | | 0.812 | 9.0 | 180 | 1.0777 | 0.7159 | | 0.7984 | 10.0 | 200 | 0.9404 | 0.7159 | | 0.7895 | 11.0 | 220 | 1.0902 | 0.7045 | | 0.7333 | 12.0 | 240 | 1.0998 | 0.75 | | 0.6073 | 13.0 | 260 | 1.2734 | 0.7386 | | 0.6548 | 14.0 | 280 | 1.3034 | 0.7159 | | 0.5538 | 15.0 | 300 | 1.1890 | 0.75 | | 0.556 | 16.0 | 320 | 1.3662 | 0.75 | | 0.5273 | 17.0 | 340 | 1.2833 | 0.7273 | | 0.3863 | 18.0 | 360 | 1.2976 | 0.7159 | | 0.5185 | 19.0 | 380 | 1.2461 | 0.7386 | | 0.475 | 20.0 | 400 | 1.2543 | 0.7386 | | 0.3021 | 21.0 | 420 | 1.3143 | 0.7727 | | 0.3334 | 22.0 | 440 | 1.2873 | 0.75 | | 0.3773 | 23.0 | 460 | 1.3992 | 0.7386 | | 0.2606 | 24.0 | 480 | 1.5181 | 0.7159 | | 0.3344 | 25.0 | 500 | 1.4330 | 0.7614 | | 0.3349 | 26.0 | 520 | 1.4165 | 0.7841 | | 0.3246 | 27.0 | 540 | 1.3634 | 0.7614 | | 0.3395 | 28.0 | 560 | 1.3985 | 0.7614 | | 0.2606 | 29.0 | 580 | 1.3866 | 0.7614 | | 0.2212 | 30.0 | 600 | 1.4849 | 0.75 | | 0.2266 | 31.0 | 620 | 1.4230 | 0.7727 | | 0.2525 | 32.0 | 640 | 1.4288 | 0.7727 | | 0.2241 | 33.0 | 660 | 1.4497 | 0.7614 | | 0.1816 | 34.0 | 680 | 1.4347 | 0.7614 | | 0.2529 | 35.0 | 700 | 1.4278 | 0.75 | | 0.189 | 36.0 | 720 | 1.4290 | 0.75 | | 0.2491 | 37.0 | 740 | 1.4449 | 0.7614 | | 0.2562 | 38.0 | 760 | 1.4514 | 0.75 | | 0.1872 | 39.0 | 780 | 1.4522 | 0.75 | | 0.223 | 39.9351 | 798 | 1.4527 | 0.75 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
otaku840726/autotrain-ds5v9-t4tki
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.6861165165901184 f1: 0.8 precision: 0.6666666666666666 recall: 1.0 auc: 0.375 accuracy: 0.6666666666666666
[ "fake", "real" ]
Dmitry43243242/banana-disease-leaf-model
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV47
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV47 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9284 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9412 | 8 | 1.5602 | 0.3409 | | 1.6237 | 1.9412 | 16 | 1.3767 | 0.4432 | | 1.4913 | 2.9412 | 24 | 1.3316 | 0.6136 | | 1.4913 | 3.9412 | 32 | 1.0605 | 0.6591 | | 1.2218 | 4.9412 | 40 | 0.9235 | 0.6932 | | 0.9148 | 5.9412 | 48 | 0.8240 | 0.75 | | 0.9148 | 6.9412 | 56 | 0.7359 | 0.6932 | | 0.7686 | 7.9412 | 64 | 0.7190 | 0.6932 | | 0.6291 | 8.9412 | 72 | 0.6824 | 0.7273 | | 0.6291 | 9.9412 | 80 | 0.7034 | 0.7614 | | 0.5546 | 10.9412 | 88 | 0.6911 | 0.7727 | | 0.4494 | 11.9412 | 96 | 0.6893 | 0.75 | | 0.4494 | 12.9412 | 104 | 0.6927 | 0.7727 | | 0.3719 | 13.9412 | 112 | 0.7180 | 0.7955 | | 0.3478 | 14.9412 | 120 | 0.7574 | 0.7159 | | 0.3478 | 15.9412 | 128 | 0.7665 | 0.7159 | | 0.3212 | 16.9412 | 136 | 0.8369 | 0.7386 | | 0.3184 | 17.9412 | 144 | 0.7906 | 0.7159 | | 0.3184 | 18.9412 | 152 | 0.8438 | 0.7273 | | 0.2873 | 19.9412 | 160 | 0.8233 | 0.7273 | | 0.2553 | 20.9412 | 168 | 0.8062 | 0.7386 | | 0.2553 | 21.9412 | 176 | 0.8711 | 0.7159 | | 0.2373 | 22.9412 | 184 | 0.8673 | 0.7386 | | 0.2208 | 23.9412 | 192 | 0.8600 | 0.7273 | | 0.2208 | 24.9412 | 200 | 0.8984 | 0.7159 | | 0.2353 | 25.9412 | 208 | 0.8848 | 0.7273 | | 0.2187 | 26.9412 | 216 | 0.8569 | 0.75 | | 0.2187 | 27.9412 | 224 | 0.8817 | 0.7386 | | 0.1943 | 28.9412 | 232 | 0.8949 | 0.75 | | 0.1926 | 29.9412 | 240 | 0.9077 | 0.7159 | | 0.1926 | 30.9412 | 248 | 0.9200 | 0.7159 | | 0.1816 | 31.9412 | 256 | 0.9233 | 0.7386 | | 0.1744 | 32.9412 | 264 | 0.9231 | 0.7386 | | 0.1744 | 33.9412 | 272 | 0.9329 | 0.7273 | | 0.1718 | 34.9412 | 280 | 0.9277 | 0.7386 | | 0.1701 | 35.9412 | 288 | 0.9258 | 0.75 | | 0.1701 | 36.9412 | 296 | 0.9262 | 0.75 | | 0.1921 | 37.9412 | 304 | 0.9274 | 0.75 | | 0.161 | 38.9412 | 312 | 0.9282 | 0.75 | | 0.161 | 39.9412 | 320 | 0.9284 | 0.75 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV48
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV48 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7283 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: reduce_lr_on_plateau - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9412 | 8 | 1.4387 | 0.4432 | | 1.5789 | 1.9412 | 16 | 1.3131 | 0.5568 | | 1.3907 | 2.9412 | 24 | 1.1805 | 0.5909 | | 1.3907 | 3.9412 | 32 | 1.0386 | 0.6136 | | 1.1967 | 4.9412 | 40 | 1.0065 | 0.6136 | | 1.0098 | 5.9412 | 48 | 0.8786 | 0.6477 | | 1.0098 | 6.9412 | 56 | 0.8264 | 0.6932 | | 0.863 | 7.9412 | 64 | 0.8026 | 0.7273 | | 0.7309 | 8.9412 | 72 | 0.7853 | 0.7159 | | 0.7309 | 9.9412 | 80 | 0.7649 | 0.7273 | | 0.6597 | 10.9412 | 88 | 0.7671 | 0.7386 | | 0.56 | 11.9412 | 96 | 0.7551 | 0.7159 | | 0.56 | 12.9412 | 104 | 0.7428 | 0.7273 | | 0.5207 | 13.9412 | 112 | 0.7396 | 0.7273 | | 0.5108 | 14.9412 | 120 | 0.7368 | 0.7273 | | 0.5108 | 15.9412 | 128 | 0.7366 | 0.7386 | | 0.5062 | 16.9412 | 136 | 0.7364 | 0.7273 | | 0.5069 | 17.9412 | 144 | 0.7329 | 0.7386 | | 0.5069 | 18.9412 | 152 | 0.7285 | 0.7273 | | 0.4952 | 19.9412 | 160 | 0.7371 | 0.7386 | | 0.4979 | 20.9412 | 168 | 0.7436 | 0.7386 | | 0.4979 | 21.9412 | 176 | 0.7338 | 0.7386 | | 0.4745 | 22.9412 | 184 | 0.7291 | 0.75 | | 0.4735 | 23.9412 | 192 | 0.7305 | 0.75 | | 0.4735 | 24.9412 | 200 | 0.7301 | 0.75 | | 0.4862 | 25.9412 | 208 | 0.7283 | 0.75 | | 0.4955 | 26.9412 | 216 | 0.7273 | 0.75 | | 0.4955 | 27.9412 | 224 | 0.7275 | 0.75 | | 0.4602 | 28.9412 | 232 | 0.7280 | 0.75 | | 0.4714 | 29.9412 | 240 | 0.7291 | 0.75 | | 0.4714 | 30.9412 | 248 | 0.7298 | 0.75 | | 0.4727 | 31.9412 | 256 | 0.7301 | 0.75 | | 0.4689 | 32.9412 | 264 | 0.7293 | 0.75 | | 0.4689 | 33.9412 | 272 | 0.7287 | 0.75 | | 0.4725 | 34.9412 | 280 | 0.7287 | 0.75 | | 0.4747 | 35.9412 | 288 | 0.7284 | 0.75 | | 0.4747 | 36.9412 | 296 | 0.7284 | 0.75 | | 0.5012 | 37.9412 | 304 | 0.7284 | 0.75 | | 0.462 | 38.9412 | 312 | 0.7286 | 0.75 | | 0.462 | 39.9412 | 320 | 0.7283 | 0.75 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV49
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV49 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7811 - Accuracy: 0.7386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9412 | 8 | 1.5166 | 0.4318 | | 1.5823 | 1.9412 | 16 | 1.4043 | 0.4432 | | 1.5029 | 2.9412 | 24 | 1.3230 | 0.5 | | 1.5029 | 3.9412 | 32 | 1.2373 | 0.5795 | | 1.3569 | 4.9412 | 40 | 1.0701 | 0.6023 | | 1.1064 | 5.9412 | 48 | 0.9832 | 0.6023 | | 1.1064 | 6.9412 | 56 | 0.9004 | 0.6705 | | 0.941 | 7.9412 | 64 | 0.8323 | 0.6591 | | 0.7975 | 8.9412 | 72 | 0.7830 | 0.6818 | | 0.7975 | 9.9412 | 80 | 0.7657 | 0.7045 | | 0.7242 | 10.9412 | 88 | 0.7484 | 0.7386 | | 0.6308 | 11.9412 | 96 | 0.7143 | 0.7386 | | 0.6308 | 12.9412 | 104 | 0.6923 | 0.7273 | | 0.5782 | 13.9412 | 112 | 0.6776 | 0.7386 | | 0.5333 | 14.9412 | 120 | 0.6889 | 0.7614 | | 0.5333 | 15.9412 | 128 | 0.6799 | 0.7841 | | 0.495 | 16.9412 | 136 | 0.6794 | 0.7614 | | 0.4931 | 17.9412 | 144 | 0.6921 | 0.7614 | | 0.4931 | 18.9412 | 152 | 0.7162 | 0.7273 | | 0.435 | 19.9412 | 160 | 0.7128 | 0.7386 | | 0.4109 | 20.9412 | 168 | 0.7157 | 0.75 | | 0.4109 | 21.9412 | 176 | 0.7404 | 0.7386 | | 0.3897 | 22.9412 | 184 | 0.7275 | 0.7386 | | 0.3718 | 23.9412 | 192 | 0.7492 | 0.7727 | | 0.3718 | 24.9412 | 200 | 0.7520 | 0.7386 | | 0.3866 | 25.9412 | 208 | 0.7550 | 0.7273 | | 0.366 | 26.9412 | 216 | 0.7395 | 0.7386 | | 0.366 | 27.9412 | 224 | 0.7340 | 0.7386 | | 0.3454 | 28.9412 | 232 | 0.7578 | 0.7273 | | 0.346 | 29.9412 | 240 | 0.7679 | 0.7273 | | 0.346 | 30.9412 | 248 | 0.7546 | 0.75 | | 0.3325 | 31.9412 | 256 | 0.7600 | 0.75 | | 0.3117 | 32.9412 | 264 | 0.7798 | 0.7386 | | 0.3117 | 33.9412 | 272 | 0.7944 | 0.7273 | | 0.3177 | 34.9412 | 280 | 0.7856 | 0.7386 | | 0.3263 | 35.9412 | 288 | 0.7813 | 0.7386 | | 0.3263 | 36.9412 | 296 | 0.7798 | 0.7386 | | 0.3305 | 37.9412 | 304 | 0.7804 | 0.7386 | | 0.2999 | 38.9412 | 312 | 0.7810 | 0.7386 | | 0.2999 | 39.9412 | 320 | 0.7811 | 0.7386 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
adnananouzla/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2197 - Accuracy: 0.9283 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3702 | 1.0 | 370 | 0.2809 | 0.9350 | | 0.2176 | 2.0 | 740 | 0.2133 | 0.9432 | | 0.1556 | 3.0 | 1110 | 0.1965 | 0.9486 | | 0.1443 | 4.0 | 1480 | 0.1872 | 0.9499 | | 0.1294 | 5.0 | 1850 | 0.1843 | 0.9486 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
ssale2/nsfw_base_model_trained
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nsfw_base_model_trained This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sfw", "nsfw" ]
ssale2/nsfw_base_model_finetuned_model_v2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nsfw_base_model_finetuned_model_v2 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sfw", "nsfw" ]
dima806/clothes_image_detection
Predicts the clothing category of an input image with about 78% accuracy. See https://www.kaggle.com/code/dima806/clothes-image-detection-vit for details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/nHatsaQxttuXETO_3XM8I.png) ``` Classification report: precision recall f1-score support Blazer 0.7419 0.6900 0.7150 200 Coat 0.7512 0.7550 0.7531 200 Denim Jacket 0.8592 0.9150 0.8862 200 Dresses 0.8603 0.7700 0.8127 200 Hoodie 0.6985 0.9500 0.8051 200 Jacket 0.7686 0.4650 0.5794 200 Jeans 0.8657 0.8700 0.8678 200 Long Pants 0.8112 0.7950 0.8030 200 Polo 0.7929 0.5550 0.6529 200 Shirt 0.7430 0.7950 0.7681 200 Shorts 0.9149 0.8600 0.8866 200 Skirt 0.8102 0.8750 0.8413 200 Sports Jacket 0.6562 0.7350 0.6934 200 Sweater 0.7758 0.8650 0.8180 200 T-shirt 0.7743 0.8750 0.8216 200 accuracy 0.7847 3000 macro avg 0.7883 0.7847 0.7803 3000 weighted avg 0.7883 0.7847 0.7803 3000 ```
[ "blazer", "coat", "denim jacket", "dresses", "hoodie", "jacket", "jeans", "long pants", "polo", "shirt", "shorts", "skirt", "sports jacket", "sweater", "t-shirt" ]
FrogSpeed/ball_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ball_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 300700, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.48.3 - TensorFlow 2.16.1 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "glioma", "meningioma", "no-tumor", "pituitary" ]
platzi/platzi-vit-model-gis-professional
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-gis-professional This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0594 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1253 | 3.8462 | 500 | 0.0594 | 0.9850 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
yanjunliu/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0648 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2847 | 1.0 | 130 | 0.2224 | 0.9624 | | 0.1328 | 2.0 | 260 | 0.1294 | 0.9699 | | 0.1384 | 3.0 | 390 | 0.0990 | 0.9774 | | 0.0844 | 4.0 | 520 | 0.0648 | 0.9925 | | 0.1204 | 5.0 | 650 | 0.0841 | 0.9699 | ### Framework versions - Transformers 4.49.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Manhkun/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1926 - Accuracy: 0.9364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4058 | 1.0 | 370 | 0.2814 | 0.9337 | | 0.2013 | 2.0 | 740 | 0.2054 | 0.9459 | | 0.1703 | 3.0 | 1110 | 0.1872 | 0.9418 | | 0.1522 | 4.0 | 1480 | 0.1775 | 0.9472 | | 0.1265 | 5.0 | 1850 | 0.1758 | 0.9432 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Cesar727/platzi_vit_test_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi_vit_test_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0077 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1284 | 3.8462 | 500 | 0.0077 | 1.0 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV50 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8354 - Accuracy: 0.7273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 5 | 1.5451 | 0.3864 | | No log | 2.0 | 10 | 1.5220 | 0.3864 | | 1.4177 | 3.0 | 15 | 1.4938 | 0.4205 | | 1.4177 | 4.0 | 20 | 1.4111 | 0.4432 | | 1.2671 | 5.0 | 25 | 1.2941 | 0.4545 | | 1.2671 | 6.0 | 30 | 1.2036 | 0.4545 | | 1.2671 | 7.0 | 35 | 1.0816 | 0.5114 | | 0.9869 | 8.0 | 40 | 1.0452 | 0.5795 | | 0.9869 | 9.0 | 45 | 0.9876 | 0.625 | | 0.8456 | 10.0 | 50 | 0.9791 | 0.5909 | | 0.8456 | 11.0 | 55 | 0.9662 | 0.6023 | | 0.7126 | 12.0 | 60 | 0.9302 | 0.6364 | | 0.7126 | 13.0 | 65 | 0.9379 | 0.625 | | 0.7126 | 14.0 | 70 | 0.9036 | 0.6705 | | 0.6561 | 15.0 | 75 | 0.8846 | 0.6591 | | 0.6561 | 16.0 | 80 | 0.8689 | 0.6591 | | 0.6367 | 17.0 | 85 | 0.8543 | 0.6591 | | 0.6367 | 18.0 | 90 | 0.8342 | 0.6932 | | 0.6367 | 19.0 | 95 | 0.8185 | 0.6705 | | 0.5463 | 20.0 | 100 | 0.8290 | 0.7159 | | 0.5463 | 21.0 | 105 | 0.8354 | 0.7273 | | 0.5504 | 22.0 | 110 | 0.8160 | 0.7159 | | 0.5504 | 23.0 | 115 | 0.8073 | 0.7159 | | 0.507 | 24.0 | 120 | 0.8071 | 0.7045 | | 0.507 | 25.0 | 125 | 0.8071 | 0.6932 | | 0.507 | 26.0 | 130 | 0.8047 | 0.7045 | | 0.5226 | 27.0 | 135 | 0.8000 | 0.7045 | | 0.5226 | 28.0 | 140 | 0.7987 | 0.7159 | | 0.5144 | 29.0 | 145 | 0.8000 | 0.7159 | | 0.5144 | 30.0 | 150 | 0.8002 | 0.7159 | | 0.5144 | 31.0 | 155 | 0.8008 | 0.7159 | | 0.4862 | 32.0 | 160 | 0.8008 | 0.7159 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV51
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV51 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6495 - Accuracy: 0.8182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 5 | 1.5572 | 0.3409 | | No log | 2.0 | 10 | 1.3890 | 0.4545 | | 1.4186 | 3.0 | 15 | 1.2638 | 0.5795 | | 1.4186 | 4.0 | 20 | 1.0291 | 0.6023 | | 1.0199 | 5.0 | 25 | 1.0125 | 0.5909 | | 1.0199 | 6.0 | 30 | 0.8328 | 0.6477 | | 1.0199 | 7.0 | 35 | 0.8662 | 0.625 | | 0.7093 | 8.0 | 40 | 0.7048 | 0.7045 | | 0.7093 | 9.0 | 45 | 0.8032 | 0.6818 | | 0.576 | 10.0 | 50 | 0.6944 | 0.7273 | | 0.576 | 11.0 | 55 | 0.7730 | 0.6932 | | 0.4817 | 12.0 | 60 | 0.6605 | 0.7386 | | 0.4817 | 13.0 | 65 | 0.7316 | 0.75 | | 0.4817 | 14.0 | 70 | 0.6380 | 0.7727 | | 0.413 | 15.0 | 75 | 0.6573 | 0.7727 | | 0.413 | 16.0 | 80 | 0.6570 | 0.75 | | 0.3959 | 17.0 | 85 | 0.6173 | 0.7955 | | 0.3959 | 18.0 | 90 | 0.6293 | 0.7841 | | 0.3959 | 19.0 | 95 | 0.6491 | 0.7727 | | 0.3043 | 20.0 | 100 | 0.6382 | 0.7955 | | 0.3043 | 21.0 | 105 | 0.6272 | 0.7955 | | 0.295 | 22.0 | 110 | 0.6423 | 0.8068 | | 0.295 | 23.0 | 115 | 0.6413 | 0.8068 | | 0.2365 | 24.0 | 120 | 0.6388 | 0.7841 | | 0.2365 | 25.0 | 125 | 0.6457 | 0.7841 | | 0.2365 | 26.0 | 130 | 0.6513 | 0.7955 | | 0.2507 | 27.0 | 135 | 0.6495 | 0.8182 | | 0.2507 | 28.0 | 140 | 0.6463 | 0.8182 | | 0.2385 | 29.0 | 145 | 0.6468 | 0.8068 | | 0.2385 | 30.0 | 150 | 0.6480 | 0.8068 | | 0.2385 | 31.0 | 155 | 0.6484 | 0.8068 | | 0.2432 | 32.0 | 160 | 0.6486 | 0.8068 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV52
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV52 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7948 - Accuracy: 0.7841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6162 | 0.2045 | | 1.3978 | 2.0 | 12 | 1.6140 | 0.1818 | | 1.3978 | 3.0 | 18 | 1.4029 | 0.4318 | | 1.0539 | 4.0 | 24 | 1.2503 | 0.5455 | | 1.0539 | 5.0 | 30 | 1.0014 | 0.625 | | 0.7171 | 6.0 | 36 | 0.9539 | 0.6364 | | 0.7171 | 7.0 | 42 | 0.9958 | 0.6136 | | 0.5557 | 8.0 | 48 | 0.8233 | 0.7386 | | 0.5557 | 9.0 | 54 | 0.8813 | 0.6136 | | 0.4942 | 10.0 | 60 | 0.8385 | 0.7159 | | 0.4942 | 11.0 | 66 | 0.7914 | 0.7614 | | 0.3957 | 12.0 | 72 | 0.7742 | 0.7273 | | 0.3957 | 13.0 | 78 | 0.8122 | 0.7045 | | 0.3664 | 14.0 | 84 | 0.7981 | 0.75 | | 0.3664 | 15.0 | 90 | 0.7852 | 0.7159 | | 0.3042 | 16.0 | 96 | 0.8829 | 0.7159 | | 0.3042 | 17.0 | 102 | 0.7630 | 0.7386 | | 0.2673 | 18.0 | 108 | 0.7936 | 0.75 | | 0.2673 | 19.0 | 114 | 0.7491 | 0.7727 | | 0.2308 | 20.0 | 120 | 0.7948 | 0.7841 | | 0.2308 | 21.0 | 126 | 0.7798 | 0.7841 | | 0.2113 | 22.0 | 132 | 0.7635 | 0.7614 | | 0.2113 | 23.0 | 138 | 0.8521 | 0.7159 | | 0.1852 | 24.0 | 144 | 0.8660 | 0.7386 | | 0.1852 | 25.0 | 150 | 0.7984 | 0.7386 | | 0.1765 | 26.0 | 156 | 0.7750 | 0.7614 | | 0.1765 | 27.0 | 162 | 0.7935 | 0.7386 | | 0.1969 | 28.0 | 168 | 0.7956 | 0.75 | | 0.1969 | 29.0 | 174 | 0.7902 | 0.7727 | | 0.1502 | 30.0 | 180 | 0.7868 | 0.7614 | | 0.1502 | 31.0 | 186 | 0.7842 | 0.7614 | | 0.1621 | 32.0 | 192 | 0.7836 | 0.75 | | 0.1621 | 33.0 | 198 | 0.7837 | 0.75 | | 0.1621 | 33.3810 | 200 | 0.7838 | 0.75 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV53
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV53 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8283 - Accuracy: 0.7045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6493 | 0.0909 | | No log | 2.0 | 12 | 1.5699 | 0.3864 | | No log | 3.0 | 18 | 1.4384 | 0.4205 | | No log | 4.0 | 24 | 1.2748 | 0.4091 | | No log | 5.0 | 30 | 1.2428 | 0.5114 | | No log | 6.0 | 36 | 1.0682 | 0.6023 | | No log | 7.0 | 42 | 1.2919 | 0.5 | | No log | 8.0 | 48 | 0.9125 | 0.6591 | | No log | 9.0 | 54 | 1.0308 | 0.5568 | | No log | 10.0 | 60 | 0.8505 | 0.6705 | | No log | 11.0 | 66 | 0.9354 | 0.625 | | No log | 12.0 | 72 | 0.8283 | 0.7045 | | No log | 13.0 | 78 | 0.8508 | 0.6705 | | No log | 14.0 | 84 | 0.8072 | 0.6477 | | No log | 15.0 | 90 | 0.8574 | 0.6477 | | No log | 16.0 | 96 | 0.8278 | 0.625 | | 0.7213 | 17.0 | 102 | 0.8671 | 0.6364 | | 0.7213 | 18.0 | 108 | 0.8787 | 0.6364 | | 0.7213 | 19.0 | 114 | 0.8215 | 0.6818 | | 0.7213 | 20.0 | 120 | 0.8018 | 0.6932 | | 0.7213 | 21.0 | 126 | 0.8278 | 0.6477 | | 0.7213 | 22.0 | 132 | 0.8424 | 0.6364 | | 0.7213 | 23.0 | 138 | 0.8392 | 0.625 | | 0.7213 | 24.0 | 144 | 0.8371 | 0.625 | | 0.7213 | 25.0 | 150 | 0.8373 | 0.625 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
FrankCCCCC/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3773 - Accuracy: 0.845 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.5081 | 1.0 | 32 | 3.2203 | 0.828 | | 2.7052 | 2.0 | 64 | 2.5499 | 0.839 | | 2.4221 | 2.928 | 93 | 2.3773 | 0.845 | ### Framework versions - Transformers 4.49.0.dev0 - Pytorch 2.5.1+cu118 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Mingmingchenxin/img_cls
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # img_cls This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7114 - Accuracy: 0.636 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 4.2554 | 0.475 | | 4.4485 | 2.0 | 16 | 3.8491 | 0.573 | | 3.9145 | 3.0 | 24 | 3.7114 | 0.636 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
FrankCCCCC/my_awesome_cifar10_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_cifar10_model This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7268 - Accuracy: 0.748 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 512 - eval_batch_size: 512 - seed: 0 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 24 ### Training results ### Framework versions - Transformers 4.49.0.dev0 - Pytorch 2.5.1+cu118 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV54
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV54 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7753 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 1.0 | 3 | 1.5852 | 0.1932 | | No log | 2.0 | 6 | 1.5784 | 0.3182 | | No log | 3.0 | 9 | 1.5374 | 0.4318 | | 1.3768 | 4.0 | 12 | 1.4629 | 0.4091 | | 1.3768 | 5.0 | 15 | 1.2222 | 0.5341 | | 1.3768 | 6.0 | 18 | 1.2437 | 0.5455 | | 0.942 | 7.0 | 21 | 1.2428 | 0.5341 | | 0.942 | 8.0 | 24 | 1.1751 | 0.5341 | | 0.942 | 9.0 | 27 | 1.1279 | 0.5795 | | 0.6265 | 10.0 | 30 | 0.9868 | 0.6477 | | 0.6265 | 11.0 | 33 | 0.9661 | 0.6364 | | 0.6265 | 12.0 | 36 | 0.9892 | 0.6136 | | 0.6265 | 13.0 | 39 | 0.8716 | 0.6818 | | 0.5106 | 14.0 | 42 | 0.8010 | 0.7273 | | 0.5106 | 15.0 | 45 | 0.8596 | 0.6818 | | 0.5106 | 16.0 | 48 | 0.8257 | 0.6932 | | 0.4183 | 17.0 | 51 | 0.8190 | 0.7045 | | 0.4183 | 18.0 | 54 | 0.7628 | 0.7273 | | 0.4183 | 19.0 | 57 | 0.7802 | 0.7159 | | 0.3267 | 20.0 | 60 | 0.7753 | 0.75 | | 0.3267 | 21.0 | 63 | 0.7771 | 0.7386 | | 0.3267 | 22.0 | 66 | 0.7770 | 0.75 | | 0.3267 | 23.0 | 69 | 0.7941 | 0.7273 | | 0.3008 | 24.0 | 72 | 0.7921 | 0.7273 | | 0.3008 | 25.0 | 75 | 0.7899 | 0.7386 | | 0.3008 | 26.0 | 78 | 0.7849 | 0.75 | | 0.2795 | 27.0 | 81 | 0.7891 | 0.75 | | 0.2795 | 28.0 | 84 | 0.7973 | 0.7386 | | 0.2795 | 29.0 | 87 | 0.8068 | 0.7386 | | 0.2526 | 30.0 | 90 | 0.8088 | 0.7386 | | 0.2526 | 31.0 | 93 | 0.8098 | 0.7386 | | 0.2526 | 32.0 | 96 | 0.8096 | 0.7386 | | 0.2526 | 33.0 | 99 | 0.8095 | 0.7386 | | 0.2544 | 33.3810 | 100 | 0.8094 | 0.7386 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV55
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV55 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7678 - Accuracy: 0.8068 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7159 | 0.0682 | | 1.4016 | 2.0 | 12 | 1.6803 | 0.125 | | 1.4016 | 3.0 | 18 | 1.5242 | 0.2955 | | 1.105 | 4.0 | 24 | 1.2325 | 0.4318 | | 1.105 | 5.0 | 30 | 1.2531 | 0.5114 | | 0.8077 | 6.0 | 36 | 1.1026 | 0.5 | | 0.8077 | 7.0 | 42 | 0.9852 | 0.5909 | | 0.6405 | 8.0 | 48 | 0.8993 | 0.7045 | | 0.6405 | 9.0 | 54 | 0.9155 | 0.6364 | | 0.5807 | 10.0 | 60 | 1.0444 | 0.5568 | | 0.5807 | 11.0 | 66 | 0.7696 | 0.7727 | | 0.511 | 12.0 | 72 | 0.9046 | 0.7159 | | 0.511 | 13.0 | 78 | 0.7709 | 0.7727 | | 0.4578 | 14.0 | 84 | 0.8067 | 0.7273 | | 0.4578 | 15.0 | 90 | 0.8394 | 0.7614 | | 0.357 | 16.0 | 96 | 0.7473 | 0.7727 | | 0.357 | 17.0 | 102 | 0.7678 | 0.8068 | | 0.313 | 18.0 | 108 | 0.7593 | 0.7614 | | 0.313 | 19.0 | 114 | 0.7597 | 0.7841 | | 0.2741 | 20.0 | 120 | 0.7861 | 0.75 | | 0.2741 | 21.0 | 126 | 0.7381 | 0.7386 | | 0.2508 | 22.0 | 132 | 0.7422 | 0.7955 | | 0.2508 | 23.0 | 138 | 0.7751 | 0.75 | | 0.1992 | 24.0 | 144 | 0.7758 | 0.7386 | | 0.1992 | 25.0 | 150 | 0.7272 | 0.7841 | | 0.1897 | 26.0 | 156 | 0.7843 | 0.7841 | | 0.1897 | 27.0 | 162 | 0.7606 | 0.7727 | | 0.2024 | 28.0 | 168 | 0.7456 | 0.7955 | | 0.2024 | 29.0 | 174 | 0.7653 | 0.7955 | | 0.172 | 30.0 | 180 | 0.7677 | 0.7727 | | 0.172 | 31.0 | 186 | 0.7421 | 0.7614 | | 0.1561 | 32.0 | 192 | 0.7326 | 0.7614 | | 0.1561 | 33.0 | 198 | 0.7541 | 0.7614 | | 0.1472 | 34.0 | 204 | 0.7635 | 0.7727 | | 0.1472 | 35.0 | 210 | 0.7504 | 0.7727 | | 0.1402 | 36.0 | 216 | 0.7601 | 0.7727 | | 0.1402 | 37.0 | 222 | 0.7683 | 0.7841 | | 0.1414 | 38.0 | 228 | 0.7707 | 0.7841 | | 0.1414 | 39.0 | 234 | 0.7727 | 0.7727 | | 0.1344 | 40.0 | 240 | 0.7721 | 0.7727 | | 0.1344 | 41.0 | 246 | 0.7715 | 0.7727 | | 0.1344 | 41.7619 | 250 | 0.7712 | 0.7727 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV56
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV56 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8748 - Accuracy: 0.7308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 1.5618 | 0.3269 | | No log | 1.8421 | 8 | 1.5396 | 0.4423 | | 1.7209 | 2.8421 | 12 | 1.5124 | 0.3077 | | 1.7209 | 3.8421 | 16 | 1.4679 | 0.3269 | | 1.7209 | 4.8421 | 20 | 1.4130 | 0.3462 | | 1.5864 | 5.8421 | 24 | 1.3107 | 0.5385 | | 1.5864 | 6.8421 | 28 | 1.2112 | 0.5385 | | 1.5864 | 7.8421 | 32 | 1.1194 | 0.5962 | | 1.2629 | 8.8421 | 36 | 1.0422 | 0.5962 | | 1.2629 | 9.8421 | 40 | 0.9706 | 0.6538 | | 1.2629 | 10.8421 | 44 | 0.9638 | 0.6538 | | 0.951 | 11.8421 | 48 | 0.9906 | 0.6154 | | 0.951 | 12.8421 | 52 | 0.9890 | 0.5962 | | 0.951 | 13.8421 | 56 | 0.9110 | 0.6538 | | 0.7947 | 14.8421 | 60 | 0.9282 | 0.6731 | | 0.7947 | 15.8421 | 64 | 0.9315 | 0.6538 | | 0.7947 | 16.8421 | 68 | 0.9230 | 0.6154 | | 0.7143 | 17.8421 | 72 | 0.9068 | 0.6538 | | 0.7143 | 18.8421 | 76 | 0.8997 | 0.6154 | | 0.7143 | 19.8421 | 80 | 0.8648 | 0.6923 | | 0.6329 | 20.8421 | 84 | 0.8624 | 0.6538 | | 0.6329 | 21.8421 | 88 | 0.8737 | 0.6154 | | 0.6329 | 22.8421 | 92 | 0.8636 | 0.6731 | | 0.5508 | 23.8421 | 96 | 0.8545 | 0.6538 | | 0.5508 | 24.8421 | 100 | 0.8617 | 0.6731 | | 0.5508 | 25.8421 | 104 | 0.8635 | 0.6346 | | 0.5009 | 26.8421 | 108 | 0.8650 | 0.6346 | | 0.5009 | 27.8421 | 112 | 0.8638 | 0.6538 | | 0.5009 | 28.8421 | 116 | 0.8730 | 0.6538 | | 0.5286 | 29.8421 | 120 | 0.8886 | 0.6346 | | 0.5286 | 30.8421 | 124 | 0.8827 | 0.6538 | | 0.5286 | 31.8421 | 128 | 0.8748 | 0.7308 | | 0.4559 | 32.8421 | 132 | 0.8671 | 0.7115 | | 0.4559 | 33.8421 | 136 | 0.8727 | 0.6731 | | 0.4559 | 34.8421 | 140 | 0.8755 | 0.7115 | | 0.4704 | 35.8421 | 144 | 0.8760 | 0.7308 | | 0.4704 | 36.8421 | 148 | 0.8786 | 0.7308 | | 0.4704 | 37.8421 | 152 | 0.8781 | 0.7308 | | 0.4582 | 38.8421 | 156 | 0.8771 | 0.7308 | | 0.4582 | 39.8421 | 160 | 0.8754 | 0.7308 | | 0.4582 | 40.8421 | 164 | 0.8741 | 0.7308 | | 0.4538 | 41.8421 | 168 | 0.8742 | 0.7308 | | 0.4538 | 42.8421 | 172 | 0.8740 | 0.7308 | | 0.4538 | 43.8421 | 176 | 0.8740 | 0.7308 | | 0.4476 | 44.8421 | 180 | 0.8741 | 0.7308 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV57
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV57 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8727 - Accuracy: 0.7308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 1.5807 | 0.2885 | | No log | 1.8421 | 8 | 1.4549 | 0.4423 | | 1.6981 | 2.8421 | 12 | 1.3224 | 0.4231 | | 1.6981 | 3.8421 | 16 | 1.2079 | 0.5 | | 1.6981 | 4.8421 | 20 | 1.0541 | 0.5769 | | 1.2915 | 5.8421 | 24 | 0.9398 | 0.6346 | | 1.2915 | 6.8421 | 28 | 0.9887 | 0.5577 | | 1.2915 | 7.8421 | 32 | 0.8991 | 0.6538 | | 0.8599 | 8.8421 | 36 | 0.9379 | 0.5577 | | 0.8599 | 9.8421 | 40 | 0.8260 | 0.6923 | | 0.8599 | 10.8421 | 44 | 0.9418 | 0.6731 | | 0.6803 | 11.8421 | 48 | 0.9368 | 0.5769 | | 0.6803 | 12.8421 | 52 | 0.9148 | 0.5962 | | 0.6803 | 13.8421 | 56 | 0.9135 | 0.6346 | | 0.5562 | 14.8421 | 60 | 0.8477 | 0.6731 | | 0.5562 | 15.8421 | 64 | 0.8730 | 0.5962 | | 0.5562 | 16.8421 | 68 | 0.8420 | 0.6923 | | 0.4696 | 17.8421 | 72 | 0.9168 | 0.5962 | | 0.4696 | 18.8421 | 76 | 0.9373 | 0.6538 | | 0.4696 | 19.8421 | 80 | 0.8634 | 0.6538 | | 0.3975 | 20.8421 | 84 | 0.8695 | 0.6538 | | 0.3975 | 21.8421 | 88 | 0.8958 | 0.6923 | | 0.3975 | 22.8421 | 92 | 0.8914 | 0.6731 | | 0.3185 | 23.8421 | 96 | 0.8727 | 0.7308 | | 0.3185 | 24.8421 | 100 | 0.9820 | 0.6923 | | 0.3185 | 25.8421 | 104 | 0.9263 | 0.6923 | | 0.2758 | 26.8421 | 108 | 1.0548 | 0.5962 | | 0.2758 | 27.8421 | 112 | 0.9833 | 0.6731 | | 0.2758 | 28.8421 | 116 | 0.9492 | 0.6923 | | 0.2667 | 29.8421 | 120 | 0.9466 | 0.6346 | | 0.2667 | 30.8421 | 124 | 0.9828 | 0.6923 | | 0.2667 | 31.8421 | 128 | 1.1056 | 0.6923 | | 0.2396 | 32.8421 | 132 | 1.0083 | 0.6731 | | 0.2396 | 33.8421 | 136 | 1.0040 | 0.6923 | | 0.2396 | 34.8421 | 140 | 1.0727 | 0.6731 | | 0.2173 | 35.8421 | 144 | 1.0953 | 0.6923 | | 0.2173 | 36.8421 | 148 | 1.0802 | 0.6538 | | 0.2173 | 37.8421 | 152 | 1.0446 | 0.6923 | | 0.2313 | 38.8421 | 156 | 1.0331 | 0.7115 | | 0.2313 | 39.8421 | 160 | 1.0334 | 0.6923 | | 0.2313 | 40.8421 | 164 | 1.0364 | 0.6923 | | 0.2129 | 41.8421 | 168 | 1.0413 | 0.6731 | | 0.2129 | 42.8421 | 172 | 1.0407 | 0.6731 | | 0.2129 | 43.8421 | 176 | 1.0405 | 0.6731 | | 0.2026 | 44.8421 | 180 | 1.0401 | 0.6731 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV58
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV58 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8236 - Accuracy: 0.7308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 1.5015 | 0.4423 | | No log | 1.8421 | 8 | 1.4596 | 0.4231 | | 1.7006 | 2.8421 | 12 | 1.3454 | 0.5192 | | 1.7006 | 3.8421 | 16 | 1.2081 | 0.5577 | | 1.7006 | 4.8421 | 20 | 1.0746 | 0.5577 | | 1.3198 | 5.8421 | 24 | 0.9099 | 0.5962 | | 1.3198 | 6.8421 | 28 | 0.8862 | 0.6346 | | 1.3198 | 7.8421 | 32 | 0.8236 | 0.7308 | | 0.86 | 8.8421 | 36 | 0.8757 | 0.6154 | | 0.86 | 9.8421 | 40 | 0.8234 | 0.7115 | | 0.86 | 10.8421 | 44 | 0.8238 | 0.7115 | | 0.6765 | 11.8421 | 48 | 0.9140 | 0.6538 | | 0.6765 | 12.8421 | 52 | 0.8608 | 0.6731 | | 0.6765 | 13.8421 | 56 | 0.8913 | 0.6538 | | 0.5549 | 14.8421 | 60 | 0.8184 | 0.7115 | | 0.5549 | 15.8421 | 64 | 0.8233 | 0.7308 | | 0.5549 | 16.8421 | 68 | 0.8051 | 0.7308 | | 0.4803 | 17.8421 | 72 | 0.8256 | 0.7115 | | 0.4803 | 18.8421 | 76 | 0.8907 | 0.6731 | | 0.4803 | 19.8421 | 80 | 0.9122 | 0.7115 | | 0.3757 | 20.8421 | 84 | 0.8812 | 0.7115 | | 0.3757 | 21.8421 | 88 | 0.9496 | 0.7308 | | 0.3757 | 22.8421 | 92 | 0.9228 | 0.7115 | | 0.3166 | 23.8421 | 96 | 0.9533 | 0.6538 | | 0.3166 | 24.8421 | 100 | 0.9486 | 0.6731 | | 0.3166 | 25.8421 | 104 | 0.9961 | 0.6731 | | 0.2869 | 26.8421 | 108 | 0.9953 | 0.6538 | | 0.2869 | 27.8421 | 112 | 0.9716 | 0.7308 | | 0.2869 | 28.8421 | 116 | 0.9851 | 0.7115 | | 0.2901 | 29.8421 | 120 | 1.0567 | 0.6731 | | 0.2901 | 30.8421 | 124 | 1.0905 | 0.7308 | | 0.2901 | 31.8421 | 128 | 1.0009 | 0.6731 | | 0.2548 | 32.8421 | 132 | 1.0158 | 0.6538 | | 0.2548 | 33.8421 | 136 | 1.0908 | 0.7308 | | 0.2548 | 34.8421 | 140 | 1.0802 | 0.6538 | | 0.2359 | 35.8421 | 144 | 1.0642 | 0.6346 | | 0.2359 | 36.8421 | 148 | 1.1139 | 0.6538 | | 0.2359 | 37.8421 | 152 | 1.0740 | 0.6731 | | 0.2295 | 38.8421 | 156 | 1.0772 | 0.7115 | | 0.2295 | 39.8421 | 160 | 1.0724 | 0.6731 | | 0.2295 | 40.8421 | 164 | 1.0859 | 0.6731 | | 0.2234 | 41.8421 | 168 | 1.0928 | 0.6923 | | 0.2234 | 42.8421 | 172 | 1.0850 | 0.6923 | | 0.2234 | 43.8421 | 176 | 1.0761 | 0.6923 | | 0.2086 | 44.8421 | 180 | 1.0766 | 0.6731 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
alyzbane/2025-02-10-08-48-20-convnextv2-tiny-1k-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-02-10-08-48-20-convnextv2-tiny-1k-224 This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0276 - Precision: 0.9953 - Recall: 0.9951 - F1: 0.9951 - Accuracy: 0.9952 - Top1 Accuracy: 0.9951 - Error Rate: 0.0048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 1.1857 | 1.0 | 103 | 0.2269 | 0.9480 | 0.9390 | 0.9396 | 0.9449 | 0.9390 | 0.0551 | | 0.2112 | 2.0 | 206 | 0.1580 | 0.9583 | 0.9561 | 0.9563 | 0.9554 | 0.9561 | 0.0446 | | 0.1067 | 3.0 | 309 | 0.1165 | 0.9634 | 0.9610 | 0.9613 | 0.9571 | 0.9610 | 0.0429 | | 0.0922 | 4.0 | 412 | 0.1750 | 0.9608 | 0.9537 | 0.9532 | 0.9549 | 0.9537 | 0.0451 | | 0.0346 | 5.0 | 515 | 0.0920 | 0.9827 | 0.9805 | 0.9805 | 0.9845 | 0.9805 | 0.0155 | | 0.018 | 6.0 | 618 | 0.0483 | 0.9881 | 0.9878 | 0.9877 | 0.9886 | 0.9878 | 0.0114 | | 0.0114 | 7.0 | 721 | 0.0276 | 0.9953 | 0.9951 | 0.9951 | 0.9952 | 0.9951 | 0.0048 | | 0.0067 | 8.0 | 824 | 0.0423 | 0.9906 | 0.9902 | 0.9903 | 0.9894 | 0.9902 | 0.0106 | | 0.0014 | 9.0 | 927 | 0.0310 | 0.9953 | 0.9951 | 0.9951 | 0.9952 | 0.9951 | 0.0048 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "acacia", "coconut", "dau", "dita", "ilang-ilang", "macarthur", "mango", "mulawin", "narra", "palmera", "royal palm", "santol", "tabebuia" ]
johnsett/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1862 - Accuracy: 0.9459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3886 | 1.0 | 370 | 0.3228 | 0.9147 | | 0.1861 | 2.0 | 740 | 0.2697 | 0.9215 | | 0.1505 | 3.0 | 1110 | 0.2472 | 0.9242 | | 0.1325 | 4.0 | 1480 | 0.2428 | 0.9242 | | 0.1343 | 5.0 | 1850 | 0.2405 | 0.9269 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
mvinzangelo/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0863 - Accuracy: 0.9698 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.4789 | 1.0 | 352 | 0.1508 | 0.9498 | | 0.3917 | 2.0 | 704 | 0.0987 | 0.9652 | | 0.2984 | 2.9922 | 1053 | 0.0863 | 0.9698 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "tench, tinca tinca", "goldfish, carassius auratus", "great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias", "tiger shark, galeocerdo cuvieri", "hammerhead, hammerhead shark", "electric ray, crampfish, numbfish, torpedo", "stingray", "cock", "hen", "ostrich, struthio camelus", "brambling, fringilla montifringilla", "goldfinch, carduelis carduelis", "house finch, linnet, carpodacus mexicanus", "junco, snowbird", "indigo bunting, indigo finch, indigo bird, passerina cyanea", "robin, american robin, turdus migratorius", "bulbul", "jay", "magpie", "chickadee", "water ouzel, dipper", "kite", "bald eagle, american eagle, haliaeetus leucocephalus", "vulture", "great grey owl, great gray owl, strix nebulosa", "european fire salamander, salamandra salamandra", "common newt, triturus vulgaris", "eft", "spotted salamander, ambystoma maculatum", "axolotl, mud puppy, ambystoma mexicanum", "bullfrog, rana catesbeiana", "tree frog, tree-frog", "tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui", "loggerhead, loggerhead turtle, caretta caretta", "leatherback turtle, leatherback, leathery turtle, dermochelys coriacea", "mud turtle", "terrapin", "box turtle, box tortoise", "banded gecko", "common iguana, iguana, iguana iguana", "american chameleon, anole, anolis carolinensis", "whiptail, whiptail lizard", "agama", "frilled lizard, chlamydosaurus kingi", "alligator lizard", "gila monster, heloderma suspectum", "green lizard, lacerta viridis", "african chameleon, chamaeleo chamaeleon", "komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis", "african crocodile, nile crocodile, crocodylus niloticus", "american alligator, alligator mississipiensis", "triceratops", "thunder snake, worm snake, carphophis amoenus", "ringneck snake, ring-necked snake, ring snake", "hognose snake, puff adder, sand viper", "green snake, grass snake", "king snake, kingsnake", "garter snake, grass snake", "water snake", "vine snake", "night snake, hypsiglena torquata", "boa constrictor, constrictor constrictor", "rock python, rock snake, python sebae", "indian cobra, naja naja", "green mamba", "sea snake", "horned viper, cerastes, sand viper, horned asp, cerastes cornutus", "diamondback, diamondback rattlesnake, crotalus adamanteus", "sidewinder, horned rattlesnake, crotalus cerastes", "trilobite", "harvestman, daddy longlegs, phalangium opilio", "scorpion", "black and gold garden spider, argiope aurantia", "barn spider, araneus cavaticus", "garden spider, aranea diademata", "black widow, latrodectus mactans", "tarantula", "wolf spider, hunting spider", "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse, partridge, bonasa umbellus", "prairie chicken, prairie grouse, prairie fowl", "peacock", "quail", "partridge", "african grey, african gray, psittacus erithacus", "macaw", "sulphur-crested cockatoo, kakatoe galerita, cacatua galerita", "lorikeet", "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "drake", "red-breasted merganser, mergus serrator", "goose", "black swan, cygnus atratus", "tusker", "echidna, spiny anteater, anteater", "platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus", "wallaby, brush kangaroo", "koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus", "wombat", "jellyfish", "sea anemone, anemone", "brain coral", "flatworm, platyhelminth", "nematode, nematode worm, roundworm", "conch", "snail", "slug", "sea slug, nudibranch", "chiton, coat-of-mail shell, sea 
cradle, polyplacophore", "chambered nautilus, pearly nautilus, nautilus", "dungeness crab, cancer magister", "rock crab, cancer irroratus", "fiddler crab", "king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica", "american lobster, northern lobster, maine lobster, homarus americanus", "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "crayfish, crawfish, crawdad, crawdaddy", "hermit crab", "isopod", "white stork, ciconia ciconia", "black stork, ciconia nigra", "spoonbill", "flamingo", "little blue heron, egretta caerulea", "american egret, great white heron, egretta albus", "bittern", "crane", "limpkin, aramus pictus", "european gallinule, porphyrio porphyrio", "american coot, marsh hen, mud hen, water hen, fulica americana", "bustard", "ruddy turnstone, arenaria interpres", "red-backed sandpiper, dunlin, erolia alpina", "redshank, tringa totanus", "dowitcher", "oystercatcher, oyster catcher", "pelican", "king penguin, aptenodytes patagonica", "albatross, mollymawk", "grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus", "killer whale, killer, orca, grampus, sea wolf, orcinus orca", "dugong, dugong dugon", "sea lion", "chihuahua", "japanese spaniel", "maltese dog, maltese terrier, maltese", "pekinese, pekingese, peke", "shih-tzu", "blenheim spaniel", "papillon", "toy terrier", "rhodesian ridgeback", "afghan hound, afghan", "basset, basset hound", "beagle", "bloodhound, sleuthhound", "bluetick", "black-and-tan coonhound", "walker hound, walker foxhound", "english foxhound", "redbone", "borzoi, russian wolfhound", "irish wolfhound", "italian greyhound", "whippet", "ibizan hound, ibizan podenco", "norwegian elkhound, elkhound", "otterhound, otter hound", "saluki, gazelle hound", "scottish deerhound, deerhound", "weimaraner", "staffordshire bullterrier, staffordshire bull terrier", "american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier", "bedlington terrier", "border terrier", "kerry blue terrier", "irish terrier", "norfolk terrier", "norwich terrier", "yorkshire terrier", "wire-haired fox terrier", "lakeland terrier", "sealyham terrier, sealyham", "airedale, airedale terrier", "cairn, cairn terrier", "australian terrier", "dandie dinmont, dandie dinmont terrier", "boston bull, boston terrier", "miniature schnauzer", "giant schnauzer", "standard schnauzer", "scotch terrier, scottish terrier, scottie", "tibetan terrier, chrysanthemum dog", "silky terrier, sydney silky", "soft-coated wheaten terrier", "west highland white terrier", "lhasa, lhasa apso", "flat-coated retriever", "curly-coated retriever", "golden retriever", "labrador retriever", "chesapeake bay retriever", "german short-haired pointer", "vizsla, hungarian pointer", "english setter", "irish setter, red setter", "gordon setter", "brittany spaniel", "clumber, clumber spaniel", "english springer, english springer spaniel", "welsh springer spaniel", "cocker spaniel, english cocker spaniel, cocker", "sussex spaniel", "irish water spaniel", "kuvasz", "schipperke", "groenendael", "malinois", "briard", "kelpie", "komondor", "old english sheepdog, bobtail", "shetland sheepdog, shetland sheep dog, shetland", "collie", "border collie", "bouvier des flandres, bouviers des flandres", "rottweiler", "german shepherd, german shepherd dog, german police dog, alsatian", "doberman, doberman pinscher", "miniature pinscher", "greater swiss mountain dog", "bernese mountain dog", "appenzeller", "entlebucher", "boxer", "bull 
mastiff", "tibetan mastiff", "french bulldog", "great dane", "saint bernard, st bernard", "eskimo dog, husky", "malamute, malemute, alaskan malamute", "siberian husky", "dalmatian, coach dog, carriage dog", "affenpinscher, monkey pinscher, monkey dog", "basenji", "pug, pug-dog", "leonberg", "newfoundland, newfoundland dog", "great pyrenees", "samoyed, samoyede", "pomeranian", "chow, chow chow", "keeshond", "brabancon griffon", "pembroke, pembroke welsh corgi", "cardigan, cardigan welsh corgi", "toy poodle", "miniature poodle", "standard poodle", "mexican hairless", "timber wolf, grey wolf, gray wolf, canis lupus", "white wolf, arctic wolf, canis lupus tundrarum", "red wolf, maned wolf, canis rufus, canis niger", "coyote, prairie wolf, brush wolf, canis latrans", "dingo, warrigal, warragal, canis dingo", "dhole, cuon alpinus", "african hunting dog, hyena dog, cape hunting dog, lycaon pictus", "hyena, hyaena", "red fox, vulpes vulpes", "kit fox, vulpes macrotis", "arctic fox, white fox, alopex lagopus", "grey fox, gray fox, urocyon cinereoargenteus", "tabby, tabby cat", "tiger cat", "persian cat", "siamese cat, siamese", "egyptian cat", "cougar, puma, catamount, mountain lion, painter, panther, felis concolor", "lynx, catamount", "leopard, panthera pardus", "snow leopard, ounce, panthera uncia", "jaguar, panther, panthera onca, felis onca", "lion, king of beasts, panthera leo", "tiger, panthera tigris", "cheetah, chetah, acinonyx jubatus", "brown bear, bruin, ursus arctos", "american black bear, black bear, ursus americanus, euarctos americanus", "ice bear, polar bear, ursus maritimus, thalarctos maritimus", "sloth bear, melursus ursinus, ursus ursinus", "mongoose", "meerkat, mierkat", "tiger beetle", "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "ground beetle, carabid beetle", "long-horned beetle, longicorn, longicorn beetle", "leaf beetle, chrysomelid", "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant, emmet, pismire", "grasshopper, hopper", "cricket", "walking stick, walkingstick, stick insect", "cockroach, roach", "mantis, mantid", "cicada, cicala", "leafhopper", "lacewing, lacewing fly", "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "damselfly", "admiral", "ringlet, ringlet butterfly", "monarch, monarch butterfly, milkweed butterfly, danaus plexippus", "cabbage butterfly", "sulphur butterfly, sulfur butterfly", "lycaenid, lycaenid butterfly", "starfish, sea star", "sea urchin", "sea cucumber, holothurian", "wood rabbit, cottontail, cottontail rabbit", "hare", "angora, angora rabbit", "hamster", "porcupine, hedgehog", "fox squirrel, eastern fox squirrel, sciurus niger", "marmot", "beaver", "guinea pig, cavia cobaya", "sorrel", "zebra", "hog, pig, grunter, squealer, sus scrofa", "wild boar, boar, sus scrofa", "warthog", "hippopotamus, hippo, river horse, hippopotamus amphibius", "ox", "water buffalo, water ox, asiatic buffalo, bubalus bubalis", "bison", "ram, tup", "bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis", "ibex, capra ibex", "hartebeest", "impala, aepyceros melampus", "gazelle", "arabian camel, dromedary, camelus dromedarius", "llama", "weasel", "mink", "polecat, fitch, foulmart, foumart, mustela putorius", "black-footed ferret, ferret, mustela nigripes", "otter", "skunk, polecat, wood pussy", "badger", "armadillo", "three-toed sloth, ai, bradypus tridactylus", "orangutan, orang, orangutang, pongo pygmaeus", "gorilla, 
gorilla gorilla", "chimpanzee, chimp, pan troglodytes", "gibbon, hylobates lar", "siamang, hylobates syndactylus, symphalangus syndactylus", "guenon, guenon monkey", "patas, hussar monkey, erythrocebus patas", "baboon", "macaque", "langur", "colobus, colobus monkey", "proboscis monkey, nasalis larvatus", "marmoset", "capuchin, ringtail, cebus capucinus", "howler monkey, howler", "titi, titi monkey", "spider monkey, ateles geoffroyi", "squirrel monkey, saimiri sciureus", "madagascar cat, ring-tailed lemur, lemur catta", "indri, indris, indri indri, indri brevicaudatus", "indian elephant, elephas maximus", "african elephant, loxodonta africana", "lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens", "giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca", "barracouta, snoek", "eel", "coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch", "rock beauty, holocanthus tricolor", "anemone fish", "sturgeon", "gar, garfish, garpike, billfish, lepisosteus osseus", "lionfish", "puffer, pufferfish, blowfish, globefish", "abacus", "abaya", "academic gown, academic robe, judge's robe", "accordion, piano accordion, squeeze box", "acoustic guitar", "aircraft carrier, carrier, flattop, attack aircraft carrier", "airliner", "airship, dirigible", "altar", "ambulance", "amphibian, amphibious vehicle", "analog clock", "apiary, bee house", "apron", "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "assault rifle, assault gun", "backpack, back pack, knapsack, packsack, rucksack, haversack", "bakery, bakeshop, bakehouse", "balance beam, beam", "balloon", "ballpoint, ballpoint pen, ballpen, biro", "band aid", "banjo", "bannister, banister, balustrade, balusters, handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel, cask", "barrow, garden cart, lawn cart, wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "bathing cap, swimming cap", "bath towel", "bathtub, bathing tub, bath, tub", "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "beacon, lighthouse, beacon light, pharos", "beaker", "bearskin, busby, shako", "beer bottle", "beer glass", "bell cote, bell cot", "bib", "bicycle-built-for-two, tandem bicycle, tandem", "bikini, two-piece", "binder, ring-binder", "binoculars, field glasses, opera glasses", "birdhouse", "boathouse", "bobsled, bobsleigh, bob", "bolo tie, bolo, bola tie, bola", "bonnet, poke bonnet", "bookcase", "bookshop, bookstore, bookstall", "bottlecap", "bow", "bow tie, bow-tie, bowtie", "brass, memorial tablet, plaque", "brassiere, bra, bandeau", "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "breastplate, aegis, egis", "broom", "bucket, pail", "buckle", "bulletproof vest", "bullet train, bullet", "butcher shop, meat market", "cab, hack, taxi, taxicab", "caldron, cauldron", "candle, taper, wax light", "cannon", "canoe", "can opener, tin opener", "cardigan", "car mirror", "carousel, carrousel, merry-go-round, roundabout, whirligig", "carpenter's kit, tool kit", "carton", "car wheel", "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm", "cassette", "cassette player", "castle", "catamaran", "cd player", "cello, violoncello", "cellular telephone, cellular phone, cellphone, cell, mobile phone", "chain", "chainlink fence", "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "chain saw, chainsaw", "chest", "chiffonier, 
commode", "chime, bell, gong", "china cabinet, china closet", "christmas stocking", "church, church building", "cinema, movie theater, movie theatre, movie house, picture palace", "cleaver, meat cleaver, chopper", "cliff dwelling", "cloak", "clog, geta, patten, sabot", "cocktail shaker", "coffee mug", "coffeepot", "coil, spiral, volute, whorl, helix", "combination lock", "computer keyboard, keypad", "confectionery, confectionary, candy store", "container ship, containership, container vessel", "convertible", "corkscrew, bottle screw", "cornet, horn, trumpet, trump", "cowboy boot", "cowboy hat, ten-gallon hat", "cradle", "crane", "crash helmet", "crate", "crib, cot", "crock pot", "croquet ball", "crutch", "cuirass", "dam, dike, dyke", "desk", "desktop computer", "dial telephone, dial phone", "diaper, nappy, napkin", "digital clock", "digital watch", "dining table, board", "dishrag, dishcloth", "dishwasher, dish washer, dishwashing machine", "disk brake, disc brake", "dock, dockage, docking facility", "dogsled, dog sled, dog sleigh", "dome", "doormat, welcome mat", "drilling platform, offshore rig", "drum, membranophone, tympan", "drumstick", "dumbbell", "dutch oven", "electric fan, blower", "electric guitar", "electric locomotive", "entertainment center", "envelope", "espresso maker", "face powder", "feather boa, boa", "file, file cabinet, filing cabinet", "fireboat", "fire engine, fire truck", "fire screen, fireguard", "flagpole, flagstaff", "flute, transverse flute", "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster", "freight car", "french horn, horn", "frying pan, frypan, skillet", "fur coat", "garbage truck, dustcart", "gasmask, respirator, gas helmet", "gas pump, gasoline pump, petrol pump, island dispenser", "goblet", "go-kart", "golf ball", "golfcart, golf cart", "gondola", "gong, tam-tam", "gown", "grand piano, grand", "greenhouse, nursery, glasshouse", "grille, radiator grille", "grocery store, grocery, food market, market", "guillotine", "hair slide", "hair spray", "half track", "hammer", "hamper", "hand blower, blow dryer, blow drier, hair dryer, hair drier", "hand-held computer, hand-held microcomputer", "handkerchief, hankie, hanky, hankey", "hard disc, hard disk, fixed disk", "harmonica, mouth organ, harp, mouth harp", "harp", "harvester, reaper", "hatchet", "holster", "home theater, home theatre", "honeycomb", "hook, claw", "hoopskirt, crinoline", "horizontal bar, high bar", "horse cart, horse-cart", "hourglass", "ipod", "iron, smoothing iron", "jack-o'-lantern", "jean, blue jean, denim", "jeep, landrover", "jersey, t-shirt, tee shirt", "jigsaw puzzle", "jinrikisha, ricksha, rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat, laboratory coat", "ladle", "lampshade, lamp shade", "laptop, laptop computer", "lawn mower, mower", "lens cap, lens cover", "letter opener, paper knife, paperknife", "library", "lifeboat", "lighter, light, igniter, ignitor", "limousine, limo", "liner, ocean liner", "lipstick, lip rouge", "loafer", "lotion", "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "loupe, jeweler's loupe", "lumbermill, sawmill", "magnetic compass", "mailbag, postbag", "mailbox, letter box", "maillot", "maillot, tank suit", "manhole cover", "maraca", "marimba, xylophone", "mask", "matchstick", "maypole", "maze, labyrinth", "measuring cup", "medicine chest, medicine cabinet", "megalith, megalithic structure", "microphone, mike", "microwave, microwave oven", "military uniform", "milk can", "minibus", 
"miniskirt, mini", "minivan", "missile", "mitten", "mixing bowl", "mobile home, manufactured home", "model t", "modem", "monastery", "monitor", "moped", "mortar", "mortarboard", "mosque", "mosquito net", "motor scooter, scooter", "mountain bike, all-terrain bike, off-roader", "mountain tent", "mouse, computer mouse", "mousetrap", "moving van", "muzzle", "nail", "neck brace", "necklace", "nipple", "notebook, notebook computer", "obelisk", "oboe, hautboy, hautbois", "ocarina, sweet potato", "odometer, hodometer, mileometer, milometer", "oil filter", "organ, pipe organ", "oscilloscope, scope, cathode-ray oscilloscope, cro", "overskirt", "oxcart", "oxygen mask", "packet", "paddle, boat paddle", "paddlewheel, paddle wheel", "padlock", "paintbrush", "pajama, pyjama, pj's, jammies", "palace", "panpipe, pandean pipe, syrinx", "paper towel", "parachute, chute", "parallel bars, bars", "park bench", "parking meter", "passenger car, coach, carriage", "patio, terrace", "pay-phone, pay-station", "pedestal, plinth, footstall", "pencil box, pencil case", "pencil sharpener", "perfume, essence", "petri dish", "photocopier", "pick, plectrum, plectron", "pickelhaube", "picket fence, paling", "pickup, pickup truck", "pier", "piggy bank, penny bank", "pill bottle", "pillow", "ping-pong ball", "pinwheel", "pirate, pirate ship", "pitcher, ewer", "plane, carpenter's plane, woodworking plane", "planetarium", "plastic bag", "plate rack", "plow, plough", "plunger, plumber's helper", "polaroid camera, polaroid land camera", "pole", "police van, police wagon, paddy wagon, patrol wagon, wagon, black maria", "poncho", "pool table, billiard table, snooker table", "pop bottle, soda bottle", "pot, flowerpot", "potter's wheel", "power drill", "prayer rug, prayer mat", "printer", "prison, prison house", "projectile, missile", "projector", "puck, hockey puck", "punching bag, punch bag, punching ball, punchball", "purse", "quill, quill pen", "quilt, comforter, comfort, puff", "racer, race car, racing car", "racket, racquet", "radiator", "radio, wireless", "radio telescope, radio reflector", "rain barrel", "recreational vehicle, rv, r.v.", "reel", "reflex camera", "refrigerator, icebox", "remote control, remote", "restaurant, eating house, eating place, eatery", "revolver, six-gun, six-shooter", "rifle", "rocking chair, rocker", "rotisserie", "rubber eraser, rubber, pencil eraser", "rugby ball", "rule, ruler", "running shoe", "safe", "safety pin", "saltshaker, salt shaker", "sandal", "sarong", "sax, saxophone", "scabbard", "scale, weighing machine", "school bus", "schooner", "scoreboard", "screen, crt screen", "screw", "screwdriver", "seat belt, seatbelt", "sewing machine", "shield, buckler", "shoe shop, shoe-shop, shoe store", "shoji", "shopping basket", "shopping cart", "shovel", "shower cap", "shower curtain", "ski", "ski mask", "sleeping bag", "slide rule, slipstick", "sliding door", "slot, one-armed bandit", "snorkel", "snowmobile", "snowplow, snowplough", "soap dispenser", "soccer ball", "sock", "solar dish, solar collector, solar furnace", "sombrero", "soup bowl", "space bar", "space heater", "space shuttle", "spatula", "speedboat", "spider web, spider's web", "spindle", "sports car, sport car", "spotlight, spot", "stage", "steam locomotive", "steel arch bridge", "steel drum", "stethoscope", "stole", "stone wall", "stopwatch, stop watch", "stove", "strainer", "streetcar, tram, tramcar, trolley, trolley car", "stretcher", "studio couch, day bed", "stupa, tope", "submarine, pigboat, sub, u-boat", "suit, suit of clothes", 
"sundial", "sunglass", "sunglasses, dark glasses, shades", "sunscreen, sunblock, sun blocker", "suspension bridge", "swab, swob, mop", "sweatshirt", "swimming trunks, bathing trunks", "swing", "switch, electric switch, electrical switch", "syringe", "table lamp", "tank, army tank, armored combat vehicle, armoured combat vehicle", "tape player", "teapot", "teddy, teddy bear", "television, television system", "tennis ball", "thatch, thatched roof", "theater curtain, theatre curtain", "thimble", "thresher, thrasher, threshing machine", "throne", "tile roof", "toaster", "tobacco shop, tobacconist shop, tobacconist", "toilet seat", "torch", "totem pole", "tow truck, tow car, wrecker", "toyshop", "tractor", "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "tray", "trench coat", "tricycle, trike, velocipede", "trimaran", "tripod", "triumphal arch", "trolleybus, trolley coach, trackless trolley", "trombone", "tub, vat", "turnstile", "typewriter keyboard", "umbrella", "unicycle, monocycle", "upright, upright piano", "vacuum, vacuum cleaner", "vase", "vault", "velvet", "vending machine", "vestment", "viaduct", "violin, fiddle", "volleyball", "waffle iron", "wall clock", "wallet, billfold, notecase, pocketbook", "wardrobe, closet, press", "warplane, military plane", "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "washer, automatic washer, washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", "wig", "window screen", "window shade", "windsor tie", "wine bottle", "wing", "wok", "wooden spoon", "wool, woolen, woollen", "worm fence, snake fence, snake-rail fence, virginia fence", "wreck", "yawl", "yurt", "web site, website, internet site, site", "comic book", "crossword puzzle, crossword", "street sign", "traffic light, traffic signal, stoplight", "book jacket, dust cover, dust jacket, dust wrapper", "menu", "plate", "guacamole", "consomme", "hot pot, hotpot", "trifle", "ice cream, icecream", "ice lolly, lolly, lollipop, popsicle", "french loaf", "bagel, beigel", "pretzel", "cheeseburger", "hotdog, hot dog, red hot", "mashed potato", "head cabbage", "broccoli", "cauliflower", "zucchini, courgette", "spaghetti squash", "acorn squash", "butternut squash", "cucumber, cuke", "artichoke, globe artichoke", "bell pepper", "cardoon", "mushroom", "granny smith", "strawberry", "orange", "lemon", "fig", "pineapple, ananas", "banana", "jackfruit, jak, jack", "custard apple", "pomegranate", "hay", "carbonara", "chocolate sauce, chocolate syrup", "dough", "meat loaf, meatloaf", "pizza, pizza pie", "potpie", "burrito", "red wine", "espresso", "cup", "eggnog", "alp", "bubble", "cliff, drop, drop-off", "coral reef", "geyser", "lakeside, lakeshore", "promontory, headland, head, foreland", "sandbar, sand bar", "seashore, coast, seacoast, sea-coast", "valley, vale", "volcano", "ballplayer, baseball player", "groom, bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum", "corn", "acorn", "hip, rose hip, rosehip", "buckeye, horse chestnut, conker", "coral fungus", "agaric", "gyromitra", "stinkhorn, carrion fungus", "earthstar", "hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa", "bolete", "ear, spike, capitulum", "toilet tissue, toilet paper, bathroom tissue" ]
elliemci/vit_tumor_classification_model
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "notumor", "tumor" ]
eitankon/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0513 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0752 | 1.5385 | 100 | 0.0531 | 0.9925 | | 0.0261 | 3.0769 | 200 | 0.0513 | 0.9850 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
k4tel/vit-historical-page
# Image classification using fine-tuned ViT - for historical :bowtie: documents sorting

### Goal: solve the task of sorting archive page images (for their further content-based processing)

**Scope:** processing of images, training and evaluation of the ViT model, input file/directory processing, class 🏷️ (category) output of the top-N predictions, summarizing of predictions into a tabular format, HF 😊 hub support for the model

## Versions 🏁

There are currently several versions of the model available for download; all of them share the same set of categories but differ in data annotations (and, for `v3.2` and `v5.2`, in model base). The latest approved `v2.1` is considered the default and can be found in the `main` branch of the HF 😊 hub [^1] 🔗

| Version | Base | Pages | PDFs | Description |
|--------:|------------------------|:-----:|:--------:|:--------------------------------------------------------------------------|
| `v2.0` | `vit-base-patch16-224` | 10073 | **3896** | annotations with mistakes, more heterogeneous data |
| `v2.1` | `vit-base-patch16-224` | 11940 | **5002** | `main`: more diverse pages in each category, fewer annotation mistakes |
| `v2.2` | `vit-base-patch16-224` | 15855 | **5730** | same data as `v2.1` + some restored pages from `v2.0` |
| `v3.2` | `vit-base-patch16-384` | 15855 | **5730** | same data as `v2.2`, but a slightly larger model base with higher resolution |
| `v5.2` | `vit-large-patch16-384` | 15855 | **5730** | same data as `v2.2`, but the largest model base with higher resolution |

## Model description 📇

🔲 Fine-tuned model repository: vit-historical-page [^1] 🔗

🔳 Base model repositories: Google's **vit-base-patch16-224**, **vit-base-patch16-384**, **vit-large-patch16-384** [^2] [^6] [^7] 🔗

### Data 📜

Training set of the model:

- **8950** images for `v2.0`
- **10745** images for `v2.1`
- **14565** images for `v2.2`, `v3.2` and `v5.2`

### Categories 🏷️

| Label️ | Description |
|----------:|:-----------------------------------------------------------------------------------------------------------------|
| `DRAW` | **📈 - drawings, maps, paintings, schematics, or graphics, potentially containing some text labels or captions** |
| `DRAW_L` | **📈📏 - drawings, etc., but presented within a table-like layout or including a legend formatted as a table** |
| `LINE_HW` | **✏️📏 - handwritten text organized in a tabular or form-like structure** |
| `LINE_P` | **📏 - printed text organized in a tabular or form-like structure** |
| `LINE_T` | **📏 - machine-typed text organized in a tabular or form-like structure** |
| `PHOTO` | **🌄 - photographs or photographic cutouts, potentially with text captions** |
| `PHOTO_L` | **🌄📏 - photos presented within a table-like layout or accompanied by tabular annotations** |
| `TEXT` | **📰 - mixtures of printed, handwritten, and/or typed text, potentially with minor graphical elements** |
| `TEXT_HW` | **✏️📄 - only handwritten text in paragraph or block form (non-tabular)** |
| `TEXT_P` | **📄 - only printed text in paragraph or block form (non-tabular)** |
| `TEXT_T` | **📄 - only machine-typed text in paragraph or block form (non-tabular)** |

Evaluation set: **1290** images (taken from the `v2.2` annotations)

#### Data preprocessing

During training the following transforms were applied randomly with a 50% chance:

* transforms.ColorJitter(brightness=0.5)
* transforms.ColorJitter(contrast=0.5)
* transforms.ColorJitter(saturation=0.5)
* transforms.ColorJitter(hue=0.5)
* transforms.Lambda(lambda img: ImageEnhance.Sharpness(img).enhance(random.uniform(0.5, 1.5)))
* transforms.Lambda(lambda img: img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2))))

### Training Hyperparameters

* eval_strategy "epoch"
* save_strategy "epoch"
* learning_rate 5e-5
* per_device_train_batch_size 8
* per_device_eval_batch_size 8
* num_train_epochs 3
* warmup_ratio 0.1
* logging_steps 10
* load_best_model_at_end True
* metric_for_best_model "accuracy"

### Results 📊

**v2.0** Evaluation set's accuracy (**Top-3**): **95.58%**

![TOP-3 confusion matrix - trained ViT](https://github.com/ufal/atrium-page-classification/blob/main/result/plots/20250526-1147_model_v20_conf_mat_TOP-3.png?raw=true)

**v2.1** Evaluation set's accuracy (**Top-3**): **99.84%**

![TOP-3 confusion matrix - trained ViT](https://github.com/ufal/atrium-page-classification/blob/main/result/plots/20250526-1157_model_v21_conf_mat_TOP-3.png?raw=true)

**v2.2** Evaluation set's accuracy (**Top-3**): **100.00%**

![TOP-3 confusion matrix - trained ViT](https://github.com/ufal/atrium-page-classification/blob/main/result/plots/20250526-1201_model_v22_conf_mat_TOP-3.png?raw=true)

**v2.0** Evaluation set's accuracy (**Top-1**): **84.96%**

![TOP-1 confusion matrix - trained ViT](https://github.com/ufal/atrium-page-classification/blob/main/result/plots/20250526-1152_model_v20_conf_mat_TOP-1.png?raw=true)

**v2.1** Evaluation set's accuracy (**Top-1**): **96.36%**

![TOP-1 confusion matrix - trained ViT](https://github.com/ufal/atrium-page-classification/blob/main/result/plots/20250526-1156_model_v21_conf_mat_TOP-1.png?raw=true)

**v2.2** Evaluation set's accuracy (**Top-1**): **99.61%**

![TOP-1 confusion matrix - trained ViT](https://github.com/ufal/atrium-page-classification/blob/main/result/plots/20250526-1202_model_v22_conf_mat_TOP-1.png?raw=true)

#### Result tables

- **v2.0** Manually ✍ **checked** evaluation dataset results (TOP-3): [model_TOP-3_EVAL.csv](https://github.com/ufal/atrium-page-classification/blob/main/result/tables/20250526-1142_model_v20_TOP-3_EVAL.csv) 🔗
- **v2.0** Manually ✍ **checked** evaluation dataset results (TOP-1): [model_TOP-1_EVAL.csv](https://github.com/ufal/atrium-page-classification/blob/main/result/tables/20250526-1148_model_v20_TOP-1_EVAL.csv) 🔗
- **v2.1** Manually ✍ **checked** evaluation dataset results (TOP-3): [model_TOP-3_EVAL.csv](https://github.com/ufal/atrium-page-classification/blob/main/result/tables/20250526-1153_model_v21_TOP-3_EVAL.csv) 🔗
- **v2.1** Manually ✍ **checked** evaluation dataset results (TOP-1): [model_TOP-1_EVAL.csv](https://github.com/ufal/atrium-page-classification/blob/main/result/tables/20250526-1151_model_v21_TOP-1_EVAL.csv) 🔗
- **v2.2** Manually ✍ **checked** evaluation dataset results (TOP-3): [model_TOP-3_EVAL.csv](https://github.com/ufal/atrium-page-classification/blob/main/result/tables/20250526-1156_model_v22_TOP-3_EVAL.csv) 🔗
- **v2.2** Manually ✍ **checked** evaluation dataset results (TOP-1): [model_TOP-1_EVAL.csv](https://github.com/ufal/atrium-page-classification/blob/main/result/tables/20250526-1158_model_v22_TOP-1_EVAL.csv) 🔗

#### Table columns

- **FILE** - name of the file
- **PAGE** - number of the page
- **CLASS-N** - label of the category 🏷️, guess TOP-N
- **SCORE-N** - score of the category 🏷️, guess TOP-N
- **TRUE** - actual label of the category 🏷️

### Contacts 📧

For support write to 📧 [email protected] 📧

Official repository: UFAL [^3]

### Acknowledgements 🙏

- **Developed by** UFAL [^5] 👥
- **Funded by** ATRIUM [^4] 💰
- **Shared by** ATRIUM [^4] & UFAL [^5]
- **Model type:** fine-tuned ViT with a 224x224 [^2] 🔗 or 384x384 [^6] [^7] 🔗 input resolution

**©️ 2022 UFAL & ATRIUM**

[^1]: https://huggingface.co/k4tel/vit-historical-page
[^2]: https://huggingface.co/google/vit-base-patch16-224
[^3]: https://github.com/ufal/atrium-page-classification
[^4]: https://atrium-research.eu/
[^5]: https://ufal.mff.cuni.cz/home-page
[^6]: https://huggingface.co/google/vit-base-patch16-384
[^7]: https://huggingface.co/google/vit-large-patch16-384
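A minimal sketch of the augmentation pipeline described in the Data preprocessing section above, assuming torchvision is used and that the 50% application chance is wired up per transform via `RandomApply` (that wiring is an assumption; the jitter strengths and the sharpness/blur ranges are taken from the list above).

```python
import random

from PIL import ImageEnhance, ImageFilter
from torchvision import transforms

# Each augmentation fires independently with probability 0.5, matching the
# "applied randomly with a 50% chance" note; parameter values follow the list above.
train_augmentations = transforms.Compose([
    transforms.RandomApply([transforms.ColorJitter(brightness=0.5)], p=0.5),
    transforms.RandomApply([transforms.ColorJitter(contrast=0.5)], p=0.5),
    transforms.RandomApply([transforms.ColorJitter(saturation=0.5)], p=0.5),
    transforms.RandomApply([transforms.ColorJitter(hue=0.5)], p=0.5),
    transforms.RandomApply(
        [transforms.Lambda(lambda img: ImageEnhance.Sharpness(img).enhance(random.uniform(0.5, 1.5)))],
        p=0.5,
    ),
    transforms.RandomApply(
        [transforms.Lambda(lambda img: img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2))))],
        p=0.5,
    ),
])
```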
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10" ]
faaany/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0634 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2819 | 1.0 | 130 | 0.2152 | 0.9624 | | 0.1301 | 2.0 | 260 | 0.1301 | 0.9699 | | 0.138 | 3.0 | 390 | 0.0965 | 0.9774 | | 0.087 | 4.0 | 520 | 0.0634 | 0.9925 | | 0.1113 | 5.0 | 650 | 0.0788 | 0.9850 | ### Framework versions - Transformers 4.49.0.dev0 - Pytorch 2.6.0+xpu - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
SarangChouguley/manual_classification_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # manual_classification_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6400 - Accuracy: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.6780 | 0.75 | | No log | 2.0 | 2 | 0.6748 | 0.625 | | No log | 3.0 | 3 | 0.6400 | 0.875 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.1.0+cu118 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "safety_and_warning", "procedural" ]
kustyk97/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6044 - Accuracy: 0.893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6648 | 0.992 | 62 | 2.5226 | 0.786 | | 1.7965 | 2.0 | 125 | 1.7668 | 0.876 | | 1.5804 | 2.976 | 186 | 1.6044 | 0.893 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1 - Datasets 2.19.2 - Tokenizers 0.20.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Srujan012/autotrain-chest-xray-demo-1677859324-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # autotrain-chest-xray-demo-1677859324-finetuned-eurosat This model is a fine-tuned version of [juliensimon/autotrain-chest-xray-demo-1677859324](https://huggingface.co/juliensimon/autotrain-chest-xray-demo-1677859324) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0731 - Accuracy: 0.9713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.268 | 0.9796 | 36 | 0.0862 | 0.9617 | | 0.1635 | 1.9796 | 72 | 0.0767 | 0.9674 | | 0.1039 | 2.9796 | 108 | 0.0731 | 0.9713 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "normal", "pneumonia" ]
MathiasB/WargonInnovation-ViT-brand
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WargonInnovation-ViT-brand This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 56 - eval_batch_size: 56 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5828 | 1.0 | 540 | 3.0740 | | 3.0225 | 2.0 | 1080 | 2.9955 | | 2.9336 | 3.0 | 1620 | 2.9478 | | 2.8204 | 4.0 | 2160 | 2.9420 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu118 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "not in the list", "kappahl", "pernilla wahlgren", "nudie jeans", "these glory days", "lee", "everest", "reebok", "bläck", "high mountain", "havana", "park lane", "bondelid", "tenson", "signature", "wico", "svea", "primark", "odd molly", "polo", "crocker", "atlant (dressman)", "gerry weber", "cream", "miss milla", "bitte kai rand", "edc (esprit de corps)", "sweet sktbs", "extended (lindex)", "ralph lauren", "tutti frutti", "atmosphere", "carl william", "riley", "barbour", "jack and jones", "part two", "saint tropez", "cellbes", "mexx", "balmain", "stockhlm", "dobber", "fbsisters (new yorker)", "fransa", "batistini (dressman)", "hamton republic (kappahl)", "object", "j.lindeberg", "alice bizous (joy)", "river island", "never denim låg", "twist & tango", "newbody", "pull and bear", "in motion", "anna field", "isolde", "next låg", "fix (lindex)", "desigual", "sand", "hollister (abercomb & fitch)", "soul river", "hennes collection", "zara man", "pieces", "melka (gammalt svenskt)?", "race marine", "inwear", "carhartt", "boohoo", "hillevi", "puma", "hope", "frk", "rut & circle", "bodyflirt", "j-crew", "east west", "banana republic", "cecil", "365", "o’neill", "noisy may", "maison scotch", "swedemount", "fjällräven", "mon row", "dolce & gabbana (d&g)", "esmara (lidl)", "alexander wang", "asos", "samsö", "kappa", "soulmate", "dagmar", "tiger of sweden", "jc", "juicy couture", "frank dandy", "mywear (ica)", "g-star raw", "gucci", "blendshe", "billabong", "patagonia", "second female", "rappson (gubb)", "cue", "melly moda", "amalia", "brandtex", "nike", "dunderon", "nordiska kompaniet", "newhouse mellan", "mcgordon (dressmann)", "even & odd", "pure instinct", "and other stories", "malene birger", "benneton", "stenströms (skjorta)", "zara", "dorothy perkins", "stormberg", "kari traa", "oscar of sweden", "prêt", "hummel", "tretorn", "bubbleroom", "marcs & spencer", "kello", "free quent", "eton", "tessie", "scotch and soda", "lapidus", "nelly", "indigo", "rodebjer", "shirt factory", "rönisch", "louis vuitton", "lab industries (kappahl)", "hunkydory", "dry lake", "emilio cavallini", "choklate", "hangmatta norhult", "chiquelle", "miss sixty", "trofé", "dkny", "lacoste", "casall", "mac scott", "jasmine", "size & needle", "betty barkley", "coast", "rip curl", "glamorous", "busnel", "promod", "humör (skate)", "röhnish", "savage", "jsfn", "fred perry", "new balance mellan", "oui", "acne", "hunkemöller", "madlady", "isabel marant", "ermenegildo zegna", "indiska", "amelie&me", "kriss", "jofama", "dustin", "columbia", "pulz", "laura ashley mellan", "micha", "haglöfs", "marc'o polo", "forever 21", "murrey", "the north face", "paon", "chelsea", "our legacy", "sailracing", "cavaliere", "kookai", "sharp (lindex)", "fendi", "holly and whyte (lindex)", "me too", "by marlene birger", "burberry", "american apparel", "pure", "sfera", "henri loyd", "stefanel", "michael kors", "laurel", "amisu (new yorker)", "not applicable ", "holly and whyte (lindex)", "kappahl", "xlnt (kappahl)", "logg (h&m)", "stenströms (skjorta)", "jean paul gaultier", "free quent", "vila", "tutti frutti ", "lindex", "na-kd", "massimo dutti", "mexx ", "weekday", "vero moda", "stylein ", "moschino ", "on the peak", "micha ", "amisu (new yorker)", "river island ", "only", "mango ", "soya/soyaconcept", "edc (esprit de corps)", "na-kd ", "soaked in luxury", "greenhouse", "koola anna", "f&f", "björn borg", "dahlin", "nly trend", "fbsisters (new yorker)", "bikbok ", "miss milla ", "mckinley (intersport) ", "monki ", "cos", "tiger of sweden", "masai ", 
"bondelid ", "new look ", "gina tricot", "frk", "oui ", "pull and bear", "craft", "dagmar ", "bershka (zara)", "stockhlm ", "park lane", "marimekko ", "saint tropez", "etirel", "not applicable", "peak performance ", "gant", "mywear (ica) ", "ljung", "soc (stadium)", "jack and jones", "selected homme/femme", "vans", "westerlind", "cross sportwear", "busnel ", "betty barkley ", "scotch and soda", "mon row ", "jc", "röhnish ", "morris ", "dr denim ", "marc'o polo ", "sharp (lindex)", "dressman", "benneton ", "us polo ass", "mari phillipe ", "chica london", "levi's", "lexington", "missoni", "balenciaga", "converse (kläder)", "censored (new yorker)", "guess", "kenzo", "ann taylor", "maison martin margiela", "houdini", "logg (h&m)", "soya/soyaconcept", "missing", "gudrun sjödén", "meyer", "cheap monday", "marimekko", "didriksons", "mckinley (intersport)", "skill", "day by birger mikkelsen", "dr denim", "oscar jacobsson", "adidas", "filippa k", "boss, hugo boss", "selected homme/femme", "cappuchini", "deval", "on the peak", "gant", "calvin klein", "cubus", "ichi", "abercomb & fitch", "peak performance", "esprit", "monki", "junkyard", "flash", "soc (stadium)", "adrian hammond", "newbie (kappahl)", "name it", "ellos", "h&m", "cacharel", "polarn och pyret", "me & i", "rosebud", "pomp de lux", "noa noa mellan", "gap", "cos", "fila", "craft", "diesel", "helly hansen", "masai", "bikbok", "happy holly", "bershka (zara)", "mingeljeans", "almia", "morris", "massimo dutti", "boomerang", "lager 157", "levi's", "new look", "jackpot", "soaked in luxury", "carin wester", "five seasons", "mango", "lyle & scott", "armani", "chica london" ]
LaLegumbreArtificial/NEO_MUL_EXP5_0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP5_0 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0437 - Accuracy: 0.9827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3154 | 0.9818 | 27 | 0.1826 | 0.9221 | | 0.1202 | 2.0 | 55 | 0.0801 | 0.9681 | | 0.0784 | 2.9818 | 82 | 0.0586 | 0.9788 | | 0.0438 | 4.0 | 110 | 0.0393 | 0.9843 | | 0.0288 | 4.9091 | 135 | 0.0437 | 0.9827 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_MUL_EXP5_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP5_1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0445 - Accuracy: 0.9827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2451 | 0.9818 | 27 | 0.1498 | 0.9363 | | 0.1317 | 2.0 | 55 | 0.0899 | 0.9650 | | 0.0785 | 2.9818 | 82 | 0.0529 | 0.9804 | | 0.0478 | 4.0 | 110 | 0.0426 | 0.9839 | | 0.0278 | 4.9091 | 135 | 0.0445 | 0.9827 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_BIN_EXP1_0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_BIN_EXP1_0 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0216 - Accuracy: 0.9915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0771 | 0.992 | 93 | 0.0871 | 0.97 | | 0.0652 | 1.9947 | 187 | 0.0403 | 0.986 | | 0.0443 | 2.9973 | 281 | 0.0353 | 0.987 | | 0.0239 | 4.0 | 375 | 0.0346 | 0.988 | | 0.0165 | 4.96 | 465 | 0.0216 | 0.9915 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_MUL_EXP5_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP5_2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0483 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.244 | 0.9818 | 27 | 0.1837 | 0.9159 | | 0.1241 | 2.0 | 55 | 0.0787 | 0.9694 | | 0.0904 | 2.9818 | 82 | 0.0562 | 0.9789 | | 0.0634 | 4.0 | 110 | 0.0615 | 0.9768 | | 0.0313 | 4.9091 | 135 | 0.0483 | 0.9824 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_BIN_EXP1_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_BIN_EXP1_1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0184 - Accuracy: 0.991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0896 | 0.992 | 93 | 0.0766 | 0.9695 | | 0.0526 | 1.9947 | 187 | 0.0661 | 0.9745 | | 0.0329 | 2.9973 | 281 | 0.0339 | 0.9885 | | 0.0235 | 4.0 | 375 | 0.0319 | 0.986 | | 0.0174 | 4.96 | 465 | 0.0184 | 0.991 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_BIN_EXP1_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_BIN_EXP1_2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0309 - Accuracy: 0.9905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0718 | 0.992 | 93 | 0.0621 | 0.976 | | 0.0503 | 1.9947 | 187 | 0.0541 | 0.9765 | | 0.0351 | 2.9973 | 281 | 0.0460 | 0.981 | | 0.0262 | 4.0 | 375 | 0.0296 | 0.9895 | | 0.0181 | 4.96 | 465 | 0.0309 | 0.9905 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_MUL_EXP5_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP5_3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0483 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.244 | 0.9818 | 27 | 0.1837 | 0.9159 | | 0.1241 | 2.0 | 55 | 0.0787 | 0.9694 | | 0.0904 | 2.9818 | 82 | 0.0562 | 0.9789 | | 0.0634 | 4.0 | 110 | 0.0615 | 0.9768 | | 0.0313 | 4.9091 | 135 | 0.0483 | 0.9824 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_MUL_EXP5_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP5_4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0483 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.244 | 0.9818 | 27 | 0.1837 | 0.9159 | | 0.1241 | 2.0 | 55 | 0.0787 | 0.9694 | | 0.0904 | 2.9818 | 82 | 0.0562 | 0.9789 | | 0.0634 | 4.0 | 110 | 0.0615 | 0.9768 | | 0.0313 | 4.9091 | 135 | 0.0483 | 0.9824 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_BIN_EXP1_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_BIN_EXP1_3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0261 - Accuracy: 0.991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1026 | 0.992 | 93 | 0.0706 | 0.9705 | | 0.0558 | 1.9947 | 187 | 0.0491 | 0.9815 | | 0.0342 | 2.9973 | 281 | 0.0335 | 0.9875 | | 0.03 | 4.0 | 375 | 0.0334 | 0.9875 | | 0.026 | 4.96 | 465 | 0.0261 | 0.991 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_MUL_EXP4_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP4_1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1276 - Accuracy: 0.955 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.4897 | 0.96 | 18 | 0.3910 | 0.8664 | | 0.3343 | 1.9733 | 37 | 0.2107 | 0.9243 | | 0.2419 | 2.9867 | 56 | 0.1616 | 0.9435 | | 0.1567 | 4.0 | 75 | 0.1292 | 0.955 | | 0.1029 | 4.8 | 90 | 0.1276 | 0.955 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/NEO_BIN_EXP1_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_BIN_EXP1_4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0184 - Accuracy: 0.991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0896 | 0.992 | 93 | 0.0766 | 0.9695 | | 0.0526 | 1.9947 | 187 | 0.0661 | 0.9745 | | 0.0329 | 2.9973 | 281 | 0.0339 | 0.9885 | | 0.0235 | 4.0 | 375 | 0.0319 | 0.986 | | 0.0174 | 4.96 | 465 | 0.0184 | 0.991 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "good", "damage" ]
LaLegumbreArtificial/NEO_MUL_EXP4_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP4_2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1079 - Accuracy: 0.9640 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.4909 | 0.96 | 18 | 0.3776 | 0.8707 | | 0.3058 | 1.9733 | 37 | 0.2133 | 0.9256 | | 0.1882 | 2.9867 | 56 | 0.1731 | 0.9385 | | 0.1573 | 4.0 | 75 | 0.1270 | 0.9563 | | 0.111 | 4.8 | 90 | 0.1079 | 0.9640 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/NEO_MUL_EXP4_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP4_3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1079 - Accuracy: 0.9640 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.4909 | 0.96 | 18 | 0.3776 | 0.8707 | | 0.3058 | 1.9733 | 37 | 0.2133 | 0.9256 | | 0.1882 | 2.9867 | 56 | 0.1731 | 0.9385 | | 0.1573 | 4.0 | 75 | 0.1270 | 0.9563 | | 0.111 | 4.8 | 90 | 0.1079 | 0.9640 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/NEO_MUL_EXP4_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP4_4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1079 - Accuracy: 0.9640 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.4909 | 0.96 | 18 | 0.3776 | 0.8707 | | 0.3058 | 1.9733 | 37 | 0.2133 | 0.9256 | | 0.1882 | 2.9867 | 56 | 0.1731 | 0.9385 | | 0.1573 | 4.0 | 75 | 0.1270 | 0.9563 | | 0.111 | 4.8 | 90 | 0.1079 | 0.9640 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/NEO_MUL_EXP3_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP3_1 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0818 - Accuracy: 0.9721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3128 | 0.9867 | 37 | 0.2369 | 0.9156 | | 0.1552 | 2.0 | 75 | 0.1170 | 0.9577 | | 0.1183 | 2.9867 | 112 | 0.1072 | 0.9602 | | 0.0866 | 4.0 | 150 | 0.0967 | 0.9656 | | 0.0849 | 4.9333 | 185 | 0.0818 | 0.9721 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
thenewsupercell/DF_Image_VIT_V1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DF_Image_VIT_V1 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - F1: 0.6780 - Loss: 0.0492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | F1 | Validation Loss | |:-------------:|:-----:|:-----:|:------:|:---------------:| | 0.0462 | 1.0 | 4604 | 0.4793 | 0.0570 | | 0.0006 | 2.0 | 9208 | 0.4261 | 0.0783 | | 0.0006 | 3.0 | 13812 | 0.6780 | 0.0492 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
LaLegumbreArtificial/NEO_MUL_EXP3_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP3_2 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0885 - Accuracy: 0.9708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3871 | 0.9867 | 37 | 0.2672 | 0.9071 | | 0.1877 | 2.0 | 75 | 0.1377 | 0.9552 | | 0.1302 | 2.9867 | 112 | 0.1129 | 0.9577 | | 0.1012 | 4.0 | 150 | 0.1165 | 0.9565 | | 0.0934 | 4.9333 | 185 | 0.0885 | 0.9708 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/NEO_MUL_EXP3_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP3_3 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0885 - Accuracy: 0.9708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3871 | 0.9867 | 37 | 0.2672 | 0.9071 | | 0.1877 | 2.0 | 75 | 0.1377 | 0.9552 | | 0.1302 | 2.9867 | 112 | 0.1129 | 0.9577 | | 0.1012 | 4.0 | 150 | 0.1165 | 0.9565 | | 0.0934 | 4.9333 | 185 | 0.0885 | 0.9708 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/NEO_MUL_EXP3_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP3_4 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0785 - Accuracy: 0.9725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3738 | 0.9867 | 37 | 0.2391 | 0.9208 | | 0.1783 | 2.0 | 75 | 0.1439 | 0.95 | | 0.1213 | 2.9867 | 112 | 0.1164 | 0.9623 | | 0.0892 | 4.0 | 150 | 0.0962 | 0.9673 | | 0.0851 | 4.9333 | 185 | 0.0785 | 0.9725 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
nitasha1996/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0643 - Accuracy: 0.9796 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.1783 | 1.0 | 14 | 0.4320 | 0.9133 | | 0.4347 | 2.0 | 28 | 0.1270 | 0.9643 | | 0.1229 | 3.0 | 42 | 0.0643 | 0.9796 | | 0.0789 | 4.0 | 56 | 0.0731 | 0.9745 | | 0.0683 | 4.6545 | 65 | 0.0569 | 0.9745 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "diseased cotton leaf", "diseased cotton plant", "fresh cotton leaf", "fresh cotton plant" ]
Rgullon/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2072 - Accuracy: 0.9364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3717 | 1.0 | 370 | 0.3139 | 0.9283 | | 0.2016 | 2.0 | 740 | 0.2460 | 0.9364 | | 0.1706 | 3.0 | 1110 | 0.2287 | 0.9364 | | 0.1529 | 4.0 | 1480 | 0.2222 | 0.9432 | | 0.1308 | 5.0 | 1850 | 0.2188 | 0.9418 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Eymardh7/finetuned-indian-food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-indian-food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "burger", "butter_naan", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi" ]
LaLegumbreArtificial/NEO_MUL_EXP3_0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP3_0 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0818 - Accuracy: 0.9742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.4038 | 0.9867 | 37 | 0.2211 | 0.9235 | | 0.1786 | 2.0 | 75 | 0.1419 | 0.9504 | | 0.1145 | 2.9867 | 112 | 0.1062 | 0.9633 | | 0.0822 | 4.0 | 150 | 0.0968 | 0.9669 | | 0.0797 | 4.9333 | 185 | 0.0818 | 0.9742 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
luisbetto/beans-final-model-luis_blanco
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beans-final-model-luis_blanco This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
LaLegumbreArtificial/NEO_MUL_EXP4_0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEO_MUL_EXP4_0 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1273 - Accuracy: 0.9553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.4221 | 0.96 | 18 | 0.4061 | 0.8656 | | 0.3048 | 1.9733 | 37 | 0.2111 | 0.9265 | | 0.1764 | 2.9867 | 56 | 0.1473 | 0.9508 | | 0.1433 | 4.0 | 75 | 0.1214 | 0.9593 | | 0.1156 | 4.8 | 90 | 0.1273 | 0.9553 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
QiyaoWei/CheXpert-5-convnextv2-tiny-384
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CheXpert-5-convnextv2-tiny-384 This model is a fine-tuned version of [facebook/convnextv2-tiny-22k-384](https://huggingface.co/facebook/convnextv2-tiny-22k-384) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1644 - Auroc Atelectasis: 0.4756 - Auroc Cardiomegaly: 0.6956 - Auroc Consolidation: 0.5969 - Auroc Edema: 0.8883 - Auroc Pleural effusion: 0.8692 - Specificity Atelectasis: 0.0816 - Specificity Cardiomegaly: 0.7215 - Specificity Consolidation: 0.7068 - Specificity Edema: 0.4835 - Specificity Pleural effusion: 0.2532 - Exact Match: 0.0402 - Hamming Distance: 0.4554 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2500 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Auroc Atelectasis | Auroc Cardiomegaly | Auroc Consolidation | Auroc Edema | Auroc Pleural effusion | Specificity Atelectasis | Specificity Cardiomegaly | Specificity Consolidation | Specificity Edema | Specificity Pleural effusion | Exact Match | Hamming Distance | |:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:------------------:|:-------------------:|:-----------:|:----------------------:|:-----------------------:|:------------------------:|:-------------------------:|:-----------------:|:----------------------------:|:-----------:|:----------------:| | 0.1201 | 1.0 | 2010 | 0.1180 | 0.7583 | 0.8187 | 0.6532 | 0.8171 | 0.8295 | 0.1447 | 0.4273 | 0.9971 | 0.3333 | 0.0811 | 0.1346 | 0.3061 | | 0.1177 | 2.0 | 4020 | 0.1176 | 0.7586 | 0.8387 | 0.6655 | 0.8196 | 0.8411 | 0.3666 | 0.6803 | 0.7962 | 0.3031 | 0.2830 | 0.1541 | 0.2769 | | 0.1145 | 3.0 | 6030 | 0.1124 | 0.7736 | 0.8423 | 0.6789 | 0.8345 | 0.8541 | 0.4841 | 0.6914 | 0.9640 | 0.2092 | 0.2477 | 0.2013 | 0.2523 | | 0.1098 | 4.0 | 8040 | 0.1094 | 0.7852 | 0.8619 | 0.6923 | 0.8407 | 0.8629 | 0.3826 | 0.8147 | 0.7786 | 0.3836 | 0.1550 | 0.1719 | 0.2590 | | 0.104 | 5.0 | 10050 | 0.1062 | 0.7904 | 0.8632 | 0.7045 | 0.8515 | 0.8736 | 0.4176 | 0.7128 | 0.8683 | 0.3822 | 0.2096 | 0.1955 | 0.2496 | | 0.0984 | 6.0 | 12060 | 0.1061 | 0.7944 | 0.8652 | 0.7057 | 0.8536 | 0.8741 | 0.4495 | 0.7491 | 0.8963 | 0.4333 | 0.2879 | 0.2398 | 0.2349 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.1.2 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "atelectasis", "cardiomegaly", "consolidation", "edema", "pleural effusion" ]
MingPass/vit-base-patch16-224-in21k-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5799 - Accuracy: 0.8899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 0.7453 | 0.8648 | | 0.8416 | 2.0 | 16 | 0.6232 | 0.8773 | | 0.6293 | 3.0 | 24 | 0.5799 | 0.8899 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "grab", "lineman", "normal", "win" ]
aharley2/elephant-expl-detector
--- license: MIT pipeline_tag: image-classification --- # Model Card: Fine-Tuned Vision Transformer (ViT) for expl Image Classification ## Model Description The **Fine-Tuned Vision Transformer (ViT)** is a variant of the transformer encoder architecture, similar to BERT, that has been adapted for image classification tasks. This specific model, named "google/vit-base-patch16-224-in21k," is pre-trained on a substantial collection of images in a supervised manner, leveraging the ImageNet-21k dataset. The images in the pre-training dataset are resized to a resolution of 224x224 pixels, making it suitable for a wide range of image recognition tasks. During the training phase, meticulous attention was given to hyperparameter settings to ensure optimal model performance. The model was fine-tuned with a judiciously chosen batch size of 16. This choice not only balanced computational efficiency but also allowed for the model to effectively process and learn from a diverse array of images. To facilitate this fine-tuning process, a learning rate of 5e-5 was employed. The learning rate serves as a critical tuning parameter that dictates the magnitude of adjustments made to the model's parameters during training. In this case, a learning rate of 5e-5 was selected to strike a harmonious balance between rapid convergence and steady optimization, resulting in a model that not only learns swiftly but also steadily refines its capabilities throughout the training process. This training phase was executed using a proprietary dataset containing an extensive collection of 80,000 images, each characterized by a substantial degree of variability. The dataset was thoughtfully curated to include two distinct classes, namely "normal" and "expl." This diversity allowed the model to grasp nuanced visual patterns, equipping it with the competence to accurately differentiate between safe and explicit content. The overarching objective of this meticulous training process was to impart the model with a deep understanding of visual cues, ensuring its robustness and competence in tackling the specific task of expl image classification. The result is a model that stands ready to contribute significantly to content safety and moderation, all while maintaining the highest standards of accuracy and reliability. ## Intended Uses & Limitations ### Intended Uses - **expl Image Classification**: The primary intended use of this model is for the classification of expl images. It has been fine-tuned for this purpose, making it suitable for filtering explicit or inappropriate content in various applications. 
### How to use Here is how to use this model to classify an image as one of two classes (normal, expl): ```python # Use a pipeline as a high-level helper from PIL import Image from transformers import pipeline img = Image.open("<path_to_image_file>") classifier = pipeline("image-classification", model="Falconsai/expl_image_detection") classifier(img) ``` <hr> ```python # Load model directly import torch from PIL import Image from transformers import AutoModelForImageClassification, ViTImageProcessor img = Image.open("<path_to_image_file>") model = AutoModelForImageClassification.from_pretrained("Falconsai/expl_image_detection") processor = ViTImageProcessor.from_pretrained('Falconsai/expl_image_detection') with torch.no_grad(): inputs = processor(images=img, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_label = logits.argmax(-1).item() model.config.id2label[predicted_label] ``` <hr> ### Limitations - **Specialized Task Fine-Tuning**: While the model is adept at expl image classification, its performance may vary when applied to other tasks. - Users interested in employing this model for different tasks should explore fine-tuned versions available in the model hub for optimal results. ## Training Data The model's training data includes a proprietary dataset comprising approximately 80,000 images. This dataset encompasses a significant amount of variability and consists of two distinct classes: "normal" and "expl." The training process on this data aimed to equip the model with the ability to distinguish between safe and explicit content effectively. ### Training Stats ``` - 'eval_loss': 0.07463177293539047, - 'eval_accuracy': 0.980375, - 'eval_runtime': 304.9846, - 'eval_samples_per_second': 52.462, - 'eval_steps_per_second': 3.279 ``` <hr> **Note:** It's essential to use this model responsibly and ethically, adhering to content guidelines and applicable regulations when implementing it in real-world applications, particularly those involving potentially sensitive content. For more details on model fine-tuning and usage, please refer to the model's documentation and the model hub. ## References - [Hugging Face Model Hub](https://huggingface.co/models) - [Vision Transformer (ViT) Paper](https://arxiv.org/abs/2010.11929) - [ImageNet-21k Dataset](http://www.image-net.org/) **Disclaimer:** The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.
[ "normal", "nsfw" ]
jjsprockel/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6599 - Accuracy: 0.5614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 4 | 0.6771 | 0.5789 | | No log | 2.0 | 8 | 0.6415 | 0.6140 | | 0.6754 | 3.0 | 12 | 0.6351 | 0.6667 | | 0.6754 | 4.0 | 16 | 0.6629 | 0.5614 | | 0.6122 | 5.0 | 20 | 0.6631 | 0.5789 | | 0.6122 | 6.0 | 24 | 0.6529 | 0.5789 | | 0.6122 | 7.0 | 28 | 0.6541 | 0.5614 | | 0.5894 | 8.0 | 32 | 0.6599 | 0.5614 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "fxmp", "nofx" ]
jjsprockel/swin-tiny-patch4-window7-224-FxMaleoloPost
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-FxMaleoloPost This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7064 - Accuracy: 0.5263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 4 | 0.7583 | 0.4561 | | No log | 2.0 | 8 | 0.7019 | 0.5439 | | 0.6836 | 3.0 | 12 | 0.7403 | 0.3860 | | 0.6836 | 4.0 | 16 | 0.7064 | 0.5263 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "fxmp", "nofx" ]
quentintaranpino/nsfw-image-classifier
# NSFW Image Classification Model - Fine-Tuned FocalNet ## Overview This repository contains a **fine-tuned NSFW image classification model** based on **FocalNet**, optimized for content moderation tasks. The model classifies images into three categories: **Safe, Questionable, and Unsafe**. This model is an improved version of **[`MichalMlodawski/nsfw-image-detection-large`](https://huggingface.co/MichalMlodawski/nsfw-image-detection-large)**, which was originally built on **[`microsoft/focalnet-base`](https://huggingface.co/microsoft/focalnet-base)**. It has been further **fine-tuned with additional image data obtained from nostrcheck.me**, enhancing its accuracy and robustness in identifying inappropriate content. ## Model Details - **Base Model:** `microsoft/focalnet-base` - **Fine-Tuned From:** `MichalMlodawski/nsfw-image-detection-large` - **Architecture:** `FocalNetForImageClassification` - **Image Input Size:** `224x224 pixels` - **Classification Labels:** - **Safe**: Appropriate content - **Questionable**: Requires manual review - **Unsafe**: NSFW or inappropriate content - **Training Framework:** `PyTorch` - **Model Format:** `safetensors` ## Limitations - **False Positives/Negatives**: The model is highly accurate but may still produce incorrect classifications. - **Contextual Understanding**: The model analyzes images in isolation, without considering accompanying text or metadata. - **License Restrictions**: This model is released under **CC BY-NC-SA 4.0**, which requires attribution, prohibits commercial use, and mandates sharing under the same license. ## Acknowledgments This model is a fine-tuned version of **[`MichalMlodawski/nsfw-image-detection-large`](https://huggingface.co/MichalMlodawski/nsfw-image-detection-large)**, originally trained on **[`microsoft/focalnet-base`](https://huggingface.co/microsoft/focalnet-base)**. Additional training data was obtained from **nostrcheck.me** to further improve its performance. ## References - [FocalNet Official Repository](https://github.com/microsoft/FocalNet) - [Transformers Documentation](https://huggingface.co/docs/transformers/index) - [Hugging Face Model Hub](https://huggingface.co/models) This model is part of an ongoing effort to improve **content moderation through AI**. Contributions and feedback are welcome.
[ "safe", "questionable", "unsafe" ]
JOSEFELDIB/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1662 - Accuracy: 0.9459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1718 | 1.0 | 370 | 0.1980 | 0.9472 | | 0.1649 | 2.0 | 740 | 0.1902 | 0.9459 | | 0.1238 | 3.0 | 1110 | 0.1822 | 0.9499 | | 0.12 | 4.0 | 1480 | 0.1786 | 0.9513 | | 0.1031 | 5.0 | 1850 | 0.1776 | 0.9513 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
AldoSN/biomedical_model_aldosn
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "benign", "malignant", "normal" ]
Andrew-Finch/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0434 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0803 | 1.5385 | 100 | 0.0434 | 0.9925 | | 0.0179 | 3.0769 | 200 | 0.0762 | 0.9774 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
princeGedeon/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1976 - Accuracy: 0.9337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3741 | 1.0 | 370 | 0.2627 | 0.9337 | | 0.2308 | 2.0 | 740 | 0.2047 | 0.9459 | | 0.176 | 3.0 | 1110 | 0.1815 | 0.9486 | | 0.1253 | 4.0 | 1480 | 0.1753 | 0.9499 | | 0.1366 | 5.0 | 1850 | 0.1734 | 0.9513 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
kiranteja/mri_brain_tumour_vision_transformers
## Model Details This is a fine-tuned Vision Transformer (ViT) model for detecting brain tumours in MRI scans.
[ "glioma", "meningioma", "notumour", "pituitary" ]
muslimaziz/image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5834 - Accuracy: 0.907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6657 | 1.0 | 63 | 2.4887 | 0.855 | | 1.8334 | 2.0 | 126 | 1.7698 | 0.885 | | 1.563 | 2.96 | 186 | 1.5854 | 0.905 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
afifai/image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6224 - Accuracy: 0.903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6983 | 1.0 | 63 | 2.5181 | 0.823 | | 1.8565 | 2.0 | 126 | 1.7855 | 0.875 | | 1.5998 | 2.96 | 186 | 1.6226 | 0.889 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
MarfinF/emotion_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.9752 - Accuracy: 0.3063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0584 | 1.0 | 10 | 2.0231 | 0.275 | | 1.9785 | 2.0 | 20 | 1.9722 | 0.3063 | | 1.9134 | 3.0 | 30 | 1.9484 | 0.275 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
corranm/square_run_second_vote_full_pic_stratified
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_second_vote_full_pic_stratified This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8527 - F1 Macro: 0.6046 - F1 Micro: 0.7121 - F1 Weighted: 0.6979 - Precision Macro: 0.6148 - Precision Micro: 0.7121 - Precision Weighted: 0.7042 - Recall Macro: 0.6157 - Recall Micro: 0.7121 - Recall Weighted: 0.7121 - Accuracy: 0.7121 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.8233 | 1.0 | 58 | 1.8311 | 0.1942 | 0.2727 | 0.2274 | 0.1804 | 0.2727 | 0.2139 | 0.2367 | 0.2727 | 0.2727 | 0.2727 | | 1.8698 | 2.0 | 116 | 1.9889 | 0.0840 | 0.1742 | 0.0989 | 0.0796 | 0.1742 | 0.1011 | 0.1719 | 0.1742 | 0.1742 | 0.1742 | | 1.9152 | 3.0 | 174 | 1.7868 | 0.1549 | 0.2727 | 0.1775 | 0.3078 | 0.2727 | 0.3022 | 0.2068 | 0.2727 | 0.2727 | 0.2727 | | 1.6147 | 4.0 | 232 | 1.9341 | 0.2055 | 0.2955 | 0.2523 | 0.3212 | 0.2955 | 0.3791 | 0.2645 | 0.2955 | 0.2955 | 0.2955 | | 1.6414 | 5.0 | 290 | 1.4736 | 0.3384 | 0.4470 | 0.3905 | 0.3487 | 0.4470 | 0.3844 | 0.3667 | 0.4470 | 0.4470 | 0.4470 | | 1.1299 | 6.0 | 348 | 1.2117 | 0.4394 | 0.5227 | 0.5100 | 0.4779 | 0.5227 | 0.5389 | 0.4409 | 0.5227 | 0.5227 | 0.5227 | | 1.493 | 7.0 | 406 | 1.3023 | 0.4501 | 0.5227 | 0.5130 | 0.4975 | 0.5227 | 0.5811 | 0.4699 | 0.5227 | 0.5227 | 0.5227 | | 1.191 | 8.0 | 464 | 1.1375 | 0.5136 | 0.6136 | 0.5937 | 0.5140 | 0.6136 | 0.5852 | 0.5227 | 0.6136 | 0.6136 | 0.6136 | | 1.6657 | 9.0 | 522 | 0.9972 | 0.5421 | 0.6439 | 0.6344 | 0.5400 | 0.6439 | 0.6329 | 0.5526 | 0.6439 | 0.6439 | 0.6439 | | 0.6272 | 10.0 | 580 | 1.1733 | 0.4743 | 0.5833 | 0.5586 | 0.4990 | 0.5833 | 0.5887 | 0.4983 | 0.5833 | 0.5833 | 0.5833 | | 0.3887 | 11.0 | 638 | 1.2098 | 0.4849 | 0.5833 | 0.5713 | 0.4956 | 0.5833 | 0.5931 | 0.5094 | 0.5833 | 0.5833 | 0.5833 | | 0.5232 | 12.0 | 696 | 1.1906 | 0.5205 | 0.6212 | 0.6061 | 0.5470 | 0.6212 | 0.6370 | 0.5378 | 0.6212 | 0.6212 | 0.6212 | | 1.1531 | 13.0 | 754 | 1.1958 | 0.5960 | 0.6439 | 0.6364 | 0.6959 | 0.6439 | 0.6619 | 0.5765 | 0.6439 | 0.6439 | 0.6439 | | 0.4566 | 14.0 | 812 | 1.2707 | 0.5381 | 0.6061 | 0.5919 | 0.5768 | 0.6061 | 0.6055 | 0.5356 | 0.6061 | 0.6061 | 0.6061 | | 0.6865 | 15.0 | 870 | 1.3936 | 0.5478 | 0.6364 | 0.6304 | 0.5668 | 0.6364 | 0.6634 | 0.5658 | 0.6364 | 0.6364 | 0.6364 | | 0.4421 | 16.0 | 928 | 1.3080 | 0.5732 | 0.6591 | 0.6616 | 
0.5855 | 0.6591 | 0.6781 | 0.5718 | 0.6591 | 0.6591 | 0.6591 | | 0.3093 | 17.0 | 986 | 1.7662 | 0.4753 | 0.5682 | 0.5524 | 0.5212 | 0.5682 | 0.6010 | 0.4840 | 0.5682 | 0.5682 | 0.5682 | | 0.134 | 18.0 | 1044 | 1.4813 | 0.5427 | 0.6364 | 0.6250 | 0.5700 | 0.6364 | 0.6476 | 0.5510 | 0.6364 | 0.6364 | 0.6364 | | 0.0406 | 19.0 | 1102 | 1.5180 | 0.5583 | 0.6515 | 0.6354 | 0.5572 | 0.6515 | 0.6338 | 0.5736 | 0.6515 | 0.6515 | 0.6515 | | 0.0198 | 20.0 | 1160 | 1.8253 | 0.5403 | 0.6212 | 0.6226 | 0.5579 | 0.6212 | 0.6497 | 0.5487 | 0.6212 | 0.6212 | 0.6212 | | 0.0054 | 21.0 | 1218 | 1.6285 | 0.5574 | 0.6515 | 0.6478 | 0.5722 | 0.6515 | 0.6595 | 0.5596 | 0.6515 | 0.6515 | 0.6515 | | 0.2816 | 22.0 | 1276 | 1.7743 | 0.5174 | 0.6212 | 0.6028 | 0.5400 | 0.6212 | 0.6203 | 0.5257 | 0.6212 | 0.6212 | 0.6212 | | 0.0076 | 23.0 | 1334 | 1.7558 | 0.5353 | 0.6212 | 0.6174 | 0.5506 | 0.6212 | 0.6324 | 0.5389 | 0.6212 | 0.6212 | 0.6212 | | 0.0046 | 24.0 | 1392 | 1.7770 | 0.5518 | 0.6364 | 0.6346 | 0.5720 | 0.6364 | 0.6581 | 0.5580 | 0.6364 | 0.6364 | 0.6364 | | 0.0018 | 25.0 | 1450 | 1.5917 | 0.5926 | 0.6818 | 0.6812 | 0.6030 | 0.6818 | 0.6864 | 0.5869 | 0.6818 | 0.6818 | 0.6818 | | 0.0041 | 26.0 | 1508 | 1.7247 | 0.5866 | 0.6667 | 0.6725 | 0.6022 | 0.6667 | 0.6874 | 0.5788 | 0.6667 | 0.6667 | 0.6667 | | 0.0013 | 27.0 | 1566 | 1.6674 | 0.5977 | 0.6894 | 0.6852 | 0.6010 | 0.6894 | 0.6861 | 0.5997 | 0.6894 | 0.6894 | 0.6894 | | 0.0022 | 28.0 | 1624 | 1.7056 | 0.5938 | 0.6818 | 0.6794 | 0.5964 | 0.6818 | 0.6828 | 0.5975 | 0.6818 | 0.6818 | 0.6818 | | 0.001 | 29.0 | 1682 | 1.6834 | 0.5978 | 0.6894 | 0.6849 | 0.5975 | 0.6894 | 0.6828 | 0.6006 | 0.6894 | 0.6894 | 0.6894 | | 0.0006 | 30.0 | 1740 | 1.6730 | 0.6075 | 0.6970 | 0.6932 | 0.6149 | 0.6970 | 0.6968 | 0.6081 | 0.6970 | 0.6970 | 0.6970 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_second_vote_full_pic_age_gender
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_second_vote_full_pic_age_gender This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0942 - F1 Macro: 0.4946 - F1 Micro: 0.5909 - F1 Weighted: 0.5567 - Precision Macro: 0.5352 - Precision Micro: 0.5909 - Precision Weighted: 0.6194 - Recall Macro: 0.5362 - Recall Micro: 0.5909 - Recall Weighted: 0.5909 - Accuracy: 0.5909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.7962 | 1.0 | 58 | 1.7841 | 0.2169 | 0.3409 | 0.2791 | 0.2076 | 0.3409 | 0.2573 | 0.2540 | 0.3409 | 0.3409 | 0.3409 | | 1.7195 | 2.0 | 116 | 1.8915 | 0.2134 | 0.3409 | 0.2372 | 0.1677 | 0.3409 | 0.1872 | 0.3116 | 0.3409 | 0.3409 | 0.3409 | | 1.4056 | 3.0 | 174 | 1.5382 | 0.3310 | 0.4470 | 0.3785 | 0.4080 | 0.4470 | 0.4904 | 0.3773 | 0.4470 | 0.4470 | 0.4470 | | 1.3472 | 4.0 | 232 | 1.3224 | 0.4005 | 0.5530 | 0.4824 | 0.4535 | 0.5530 | 0.5284 | 0.4615 | 0.5530 | 0.5530 | 0.5530 | | 1.3266 | 5.0 | 290 | 1.3801 | 0.3815 | 0.5 | 0.4467 | 0.3655 | 0.5 | 0.4261 | 0.4203 | 0.5 | 0.5 | 0.5 | | 1.3514 | 6.0 | 348 | 1.4009 | 0.3982 | 0.4697 | 0.4703 | 0.5142 | 0.4697 | 0.6060 | 0.4123 | 0.4697 | 0.4697 | 0.4697 | | 1.0789 | 7.0 | 406 | 1.0679 | 0.5153 | 0.6136 | 0.5974 | 0.5624 | 0.6136 | 0.6350 | 0.5174 | 0.6136 | 0.6136 | 0.6136 | | 0.9054 | 8.0 | 464 | 1.0248 | 0.5610 | 0.6591 | 0.6392 | 0.5675 | 0.6591 | 0.6528 | 0.5790 | 0.6591 | 0.6591 | 0.6591 | | 0.9475 | 9.0 | 522 | 1.0533 | 0.5533 | 0.6439 | 0.6313 | 0.5630 | 0.6439 | 0.6377 | 0.5627 | 0.6439 | 0.6439 | 0.6439 | | 0.7595 | 10.0 | 580 | 1.2404 | 0.5064 | 0.6061 | 0.5985 | 0.5585 | 0.6061 | 0.6540 | 0.5220 | 0.6061 | 0.6061 | 0.6061 | | 0.6635 | 11.0 | 638 | 1.2577 | 0.5481 | 0.6515 | 0.6358 | 0.5697 | 0.6515 | 0.6554 | 0.5644 | 0.6515 | 0.6515 | 0.6515 | | 0.6638 | 12.0 | 696 | 1.1971 | 0.5943 | 0.6894 | 0.6847 | 0.6031 | 0.6894 | 0.6966 | 0.6035 | 0.6894 | 0.6894 | 0.6894 | | 1.3747 | 13.0 | 754 | 1.3014 | 0.5376 | 0.6136 | 0.6094 | 0.5734 | 0.6136 | 0.6522 | 0.5461 | 0.6136 | 0.6136 | 0.6136 | | 0.3888 | 14.0 | 812 | 1.3645 | 0.5671 | 0.6364 | 0.6212 | 0.6146 | 0.6364 | 0.6701 | 0.5834 | 0.6364 | 0.6364 | 0.6364 | | 0.3119 | 15.0 | 870 | 1.4637 | 0.5753 | 0.6591 | 0.6482 | 0.5955 | 0.6591 | 0.6694 | 0.5839 | 0.6591 | 0.6591 | 0.6591 | | 0.1874 | 16.0 | 928 | 1.4016 | 0.5387 | 0.6288 | 0.6220 | 0.5523 | 
0.6288 | 0.6300 | 0.5383 | 0.6288 | 0.6288 | 0.6288 | | 0.0585 | 17.0 | 986 | 1.5412 | 0.5895 | 0.6894 | 0.6834 | 0.6173 | 0.6894 | 0.7082 | 0.5938 | 0.6894 | 0.6894 | 0.6894 | | 0.0425 | 18.0 | 1044 | 1.5022 | 0.6341 | 0.6970 | 0.6980 | 0.6602 | 0.6970 | 0.7305 | 0.6383 | 0.6970 | 0.6970 | 0.6970 | | 0.1893 | 19.0 | 1102 | 1.5766 | 0.6294 | 0.6818 | 0.6736 | 0.6630 | 0.6818 | 0.6847 | 0.6225 | 0.6818 | 0.6818 | 0.6818 | | 0.0059 | 20.0 | 1160 | 1.5288 | 0.6187 | 0.7273 | 0.7173 | 0.6260 | 0.7273 | 0.7246 | 0.6302 | 0.7273 | 0.7273 | 0.7273 | | 0.0019 | 21.0 | 1218 | 1.5794 | 0.6116 | 0.7121 | 0.7044 | 0.6149 | 0.7121 | 0.7040 | 0.6158 | 0.7121 | 0.7121 | 0.7121 | | 0.0043 | 22.0 | 1276 | 1.6290 | 0.5979 | 0.6970 | 0.6910 | 0.6144 | 0.6970 | 0.6984 | 0.5944 | 0.6970 | 0.6970 | 0.6970 | | 0.0012 | 23.0 | 1334 | 1.6983 | 0.6387 | 0.6894 | 0.6835 | 0.6647 | 0.6894 | 0.6874 | 0.6310 | 0.6894 | 0.6894 | 0.6894 | | 0.0007 | 24.0 | 1392 | 1.6381 | 0.6084 | 0.6970 | 0.6986 | 0.6195 | 0.6970 | 0.7039 | 0.6007 | 0.6970 | 0.6970 | 0.6970 | | 0.0035 | 25.0 | 1450 | 1.6691 | 0.6100 | 0.6970 | 0.6975 | 0.6162 | 0.6970 | 0.7018 | 0.6077 | 0.6970 | 0.6970 | 0.6970 | | 0.0318 | 26.0 | 1508 | 1.6443 | 0.6116 | 0.7045 | 0.7036 | 0.6223 | 0.7045 | 0.7080 | 0.6055 | 0.7045 | 0.7045 | 0.7045 | | 0.0005 | 27.0 | 1566 | 1.6647 | 0.6203 | 0.7121 | 0.7117 | 0.6312 | 0.7121 | 0.7167 | 0.6139 | 0.7121 | 0.7121 | 0.7121 | | 0.0059 | 28.0 | 1624 | 1.6387 | 0.6243 | 0.7273 | 0.7198 | 0.6269 | 0.7273 | 0.7211 | 0.6309 | 0.7273 | 0.7273 | 0.7273 | | 0.0009 | 29.0 | 1682 | 1.6511 | 0.5960 | 0.6894 | 0.6864 | 0.5988 | 0.6894 | 0.6864 | 0.5963 | 0.6894 | 0.6894 | 0.6894 | | 0.003 | 30.0 | 1740 | 1.6608 | 0.6046 | 0.6970 | 0.6945 | 0.6080 | 0.6970 | 0.6954 | 0.6047 | 0.6970 | 0.6970 | 0.6970 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
javiergrandat/vit-base-patch16-224-in21k_jgrandat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k_jgrandat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1472 - Accuracy: 0.9624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0784 | 3.8462 | 500 | 0.1472 | 0.9624 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
corranm/square_run_first_vote_full_pic_75
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_first_vote_full_pic_75 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7586 - F1 Macro: 0.4234 - F1 Micro: 0.5152 - F1 Weighted: 0.4789 - Precision Macro: 0.4562 - Precision Micro: 0.5152 - Precision Weighted: 0.5061 - Recall Macro: 0.4488 - Recall Micro: 0.5152 - Recall Weighted: 0.5152 - Accuracy: 0.5152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9675 | 1.0 | 58 | 1.9322 | 0.1051 | 0.1894 | 0.1204 | 0.1047 | 0.1894 | 0.1191 | 0.1661 | 0.1894 | 0.1894 | 0.1894 | | 1.8921 | 2.0 | 116 | 1.9534 | 0.0786 | 0.1818 | 0.0855 | 0.0551 | 0.1818 | 0.0597 | 0.1656 | 0.1818 | 0.1818 | 0.1818 | | 1.9081 | 3.0 | 174 | 1.8370 | 0.1526 | 0.2803 | 0.1976 | 0.1283 | 0.2803 | 0.1642 | 0.2117 | 0.2803 | 0.2803 | 0.2803 | | 1.5193 | 4.0 | 232 | 1.7240 | 0.1963 | 0.3258 | 0.2445 | 0.2948 | 0.3258 | 0.3289 | 0.2476 | 0.3258 | 0.3258 | 0.3258 | | 1.7743 | 5.0 | 290 | 1.5478 | 0.3382 | 0.4318 | 0.3920 | 0.3494 | 0.4318 | 0.4204 | 0.3837 | 0.4318 | 0.4318 | 0.4318 | | 1.9879 | 6.0 | 348 | 1.5070 | 0.3157 | 0.4470 | 0.3865 | 0.4597 | 0.4470 | 0.5200 | 0.3499 | 0.4470 | 0.4470 | 0.4470 | | 1.9096 | 7.0 | 406 | 1.4281 | 0.3859 | 0.4545 | 0.4410 | 0.4248 | 0.4545 | 0.4763 | 0.3931 | 0.4545 | 0.4545 | 0.4545 | | 1.4577 | 8.0 | 464 | 1.4558 | 0.3862 | 0.4773 | 0.4381 | 0.3827 | 0.4773 | 0.4425 | 0.4346 | 0.4773 | 0.4773 | 0.4773 | | 1.9664 | 9.0 | 522 | 1.5863 | 0.3757 | 0.4773 | 0.4227 | 0.3967 | 0.4773 | 0.4530 | 0.4288 | 0.4773 | 0.4773 | 0.4773 | | 0.7655 | 10.0 | 580 | 1.3785 | 0.4015 | 0.5 | 0.4621 | 0.5175 | 0.5 | 0.5866 | 0.4427 | 0.5 | 0.5 | 0.5 | | 0.707 | 11.0 | 638 | 1.3441 | 0.4772 | 0.5530 | 0.5356 | 0.4915 | 0.5530 | 0.5453 | 0.4861 | 0.5530 | 0.5530 | 0.5530 | | 0.782 | 12.0 | 696 | 1.3983 | 0.4716 | 0.5530 | 0.5325 | 0.4860 | 0.5530 | 0.5432 | 0.4877 | 0.5530 | 0.5530 | 0.5530 | | 0.7316 | 13.0 | 754 | 1.6155 | 0.4880 | 0.5530 | 0.5497 | 0.5085 | 0.5530 | 0.5892 | 0.5080 | 0.5530 | 0.5530 | 0.5530 | | 1.0819 | 14.0 | 812 | 1.4869 | 0.4936 | 0.5379 | 0.5312 | 0.5124 | 0.5379 | 0.5370 | 0.4900 | 0.5379 | 0.5379 | 0.5379 | | 0.8757 | 15.0 | 870 | 1.6936 | 0.4741 | 0.5303 | 0.5300 | 0.4809 | 0.5303 | 0.5481 | 0.4847 | 0.5303 | 0.5303 | 0.5303 | | 0.7228 | 16.0 | 928 | 1.7370 | 0.4442 | 0.5227 | 0.4986 | 0.4401 | 0.5227 | 
0.4939 | 0.4646 | 0.5227 | 0.5227 | 0.5227 | | 0.3016 | 17.0 | 986 | 1.6977 | 0.5279 | 0.5682 | 0.5642 | 0.6353 | 0.5682 | 0.5994 | 0.5176 | 0.5682 | 0.5682 | 0.5682 | | 0.2097 | 18.0 | 1044 | 1.9026 | 0.4769 | 0.5606 | 0.5414 | 0.5384 | 0.5606 | 0.5783 | 0.4819 | 0.5606 | 0.5606 | 0.5606 | | 0.0388 | 19.0 | 1102 | 1.8276 | 0.5259 | 0.6136 | 0.5981 | 0.5252 | 0.6136 | 0.5945 | 0.5382 | 0.6136 | 0.6136 | 0.6136 | | 0.4837 | 20.0 | 1160 | 1.8658 | 0.5336 | 0.5985 | 0.5863 | 0.5502 | 0.5985 | 0.5866 | 0.5342 | 0.5985 | 0.5985 | 0.5985 | | 0.1531 | 21.0 | 1218 | 2.0415 | 0.4703 | 0.5606 | 0.5384 | 0.4917 | 0.5606 | 0.5489 | 0.4762 | 0.5606 | 0.5606 | 0.5606 | | 0.0142 | 22.0 | 1276 | 2.0812 | 0.4969 | 0.5303 | 0.5260 | 0.5067 | 0.5303 | 0.5364 | 0.5008 | 0.5303 | 0.5303 | 0.5303 | | 0.0036 | 23.0 | 1334 | 2.0662 | 0.5315 | 0.5758 | 0.5781 | 0.5480 | 0.5758 | 0.5925 | 0.5316 | 0.5758 | 0.5758 | 0.5758 | | 0.0065 | 24.0 | 1392 | 2.1023 | 0.5090 | 0.5606 | 0.5516 | 0.5140 | 0.5606 | 0.5550 | 0.5154 | 0.5606 | 0.5606 | 0.5606 | | 0.1359 | 25.0 | 1450 | 2.0555 | 0.4994 | 0.5455 | 0.5440 | 0.5018 | 0.5455 | 0.5474 | 0.5021 | 0.5455 | 0.5455 | 0.5455 | | 0.0037 | 26.0 | 1508 | 2.1745 | 0.5206 | 0.5758 | 0.5691 | 0.5289 | 0.5758 | 0.5695 | 0.5204 | 0.5758 | 0.5758 | 0.5758 | | 0.0391 | 27.0 | 1566 | 2.2087 | 0.5204 | 0.5758 | 0.5676 | 0.5335 | 0.5758 | 0.5745 | 0.5228 | 0.5758 | 0.5758 | 0.5758 | | 0.0017 | 28.0 | 1624 | 2.1219 | 0.5178 | 0.5682 | 0.5633 | 0.5218 | 0.5682 | 0.5649 | 0.5212 | 0.5682 | 0.5682 | 0.5682 | | 0.0015 | 29.0 | 1682 | 2.1455 | 0.5198 | 0.5682 | 0.5618 | 0.5342 | 0.5682 | 0.5641 | 0.5190 | 0.5682 | 0.5682 | 0.5682 | | 0.0015 | 30.0 | 1740 | 2.1308 | 0.5192 | 0.5682 | 0.5617 | 0.5315 | 0.5682 | 0.5621 | 0.5190 | 0.5682 | 0.5682 | 0.5682 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_second_vote_full_pic_75
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_second_vote_full_pic_75 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4672 - F1 Macro: 0.3451 - F1 Micro: 0.4545 - F1 Weighted: 0.4226 - Precision Macro: 0.3766 - Precision Micro: 0.4545 - Precision Weighted: 0.4564 - Recall Macro: 0.3721 - Recall Micro: 0.4545 - Recall Weighted: 0.4545 - Accuracy: 0.4545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.7944 | 1.0 | 58 | 1.8560 | 0.1268 | 0.2424 | 0.1788 | 0.1005 | 0.2424 | 0.1425 | 0.1734 | 0.2424 | 0.2424 | 0.2424 | | 1.8983 | 2.0 | 116 | 1.8865 | 0.1533 | 0.2197 | 0.1705 | 0.1708 | 0.2197 | 0.1813 | 0.1858 | 0.2197 | 0.2197 | 0.2197 | | 1.4428 | 3.0 | 174 | 1.8035 | 0.2270 | 0.3030 | 0.2733 | 0.2740 | 0.3030 | 0.3193 | 0.2427 | 0.3030 | 0.3030 | 0.3030 | | 1.5854 | 4.0 | 232 | 1.7041 | 0.2293 | 0.3333 | 0.2630 | 0.2292 | 0.3333 | 0.2516 | 0.2763 | 0.3333 | 0.3333 | 0.3333 | | 1.644 | 5.0 | 290 | 1.6834 | 0.2759 | 0.4242 | 0.3362 | 0.2453 | 0.4242 | 0.3082 | 0.3601 | 0.4242 | 0.4242 | 0.4242 | | 1.7673 | 6.0 | 348 | 1.7508 | 0.2257 | 0.3333 | 0.2896 | 0.2153 | 0.3333 | 0.2776 | 0.2611 | 0.3333 | 0.3333 | 0.3333 | | 1.8791 | 7.0 | 406 | 1.6072 | 0.3200 | 0.4167 | 0.3759 | 0.3171 | 0.4167 | 0.3666 | 0.3494 | 0.4167 | 0.4167 | 0.4167 | | 1.3323 | 8.0 | 464 | 1.6554 | 0.3265 | 0.4015 | 0.3794 | 0.3576 | 0.4015 | 0.4045 | 0.3323 | 0.4015 | 0.4015 | 0.4015 | | 1.7047 | 9.0 | 522 | 1.7295 | 0.3186 | 0.3864 | 0.3716 | 0.3364 | 0.3864 | 0.3884 | 0.3275 | 0.3864 | 0.3864 | 0.3864 | | 1.1897 | 10.0 | 580 | 1.7238 | 0.2782 | 0.3561 | 0.3224 | 0.2981 | 0.3561 | 0.3401 | 0.3110 | 0.3561 | 0.3561 | 0.3561 | | 0.8908 | 11.0 | 638 | 2.1481 | 0.2774 | 0.3333 | 0.3266 | 0.3840 | 0.3333 | 0.4285 | 0.2900 | 0.3333 | 0.3333 | 0.3333 | | 0.4492 | 12.0 | 696 | 1.9300 | 0.2862 | 0.3636 | 0.3330 | 0.3623 | 0.3636 | 0.3947 | 0.2959 | 0.3636 | 0.3636 | 0.3636 | | 0.6555 | 13.0 | 754 | 1.8931 | 0.3053 | 0.3788 | 0.3670 | 0.3329 | 0.3788 | 0.3854 | 0.3084 | 0.3788 | 0.3788 | 0.3788 | | 0.3586 | 14.0 | 812 | 2.0316 | 0.3475 | 0.4242 | 0.4103 | 0.3722 | 0.4242 | 0.4360 | 0.3616 | 0.4242 | 0.4242 | 0.4242 | | 0.6805 | 15.0 | 870 | 2.0638 | 0.3389 | 0.4091 | 0.3917 | 0.3613 | 0.4091 | 0.4150 | 0.3572 | 0.4091 | 0.4091 | 0.4091 | | 0.8902 | 16.0 | 928 | 2.2817 | 0.2992 | 0.3636 | 0.3466 | 0.3388 
| 0.3636 | 0.4092 | 0.3156 | 0.3636 | 0.3636 | 0.3636 | | 0.3393 | 17.0 | 986 | 2.4104 | 0.3031 | 0.3485 | 0.3527 | 0.3116 | 0.3485 | 0.3658 | 0.3015 | 0.3485 | 0.3485 | 0.3485 | | 0.2469 | 18.0 | 1044 | 2.4341 | 0.3373 | 0.3939 | 0.3980 | 0.3472 | 0.3939 | 0.4130 | 0.3381 | 0.3939 | 0.3939 | 0.3939 | | 0.0485 | 19.0 | 1102 | 2.5798 | 0.3454 | 0.4015 | 0.3908 | 0.3639 | 0.4015 | 0.4075 | 0.3545 | 0.4015 | 0.4015 | 0.4015 | | 0.0693 | 20.0 | 1160 | 2.4961 | 0.3781 | 0.4470 | 0.4350 | 0.4173 | 0.4470 | 0.4664 | 0.3837 | 0.4470 | 0.4470 | 0.4470 | | 0.0095 | 21.0 | 1218 | 2.7183 | 0.3840 | 0.4621 | 0.4510 | 0.4268 | 0.4621 | 0.4916 | 0.3868 | 0.4621 | 0.4621 | 0.4621 | | 0.0078 | 22.0 | 1276 | 2.7620 | 0.3520 | 0.4091 | 0.4091 | 0.3617 | 0.4091 | 0.4199 | 0.3528 | 0.4091 | 0.4091 | 0.4091 | | 0.0083 | 23.0 | 1334 | 2.8349 | 0.3507 | 0.4167 | 0.4058 | 0.3826 | 0.4167 | 0.4324 | 0.3560 | 0.4167 | 0.4167 | 0.4167 | | 0.0024 | 24.0 | 1392 | 2.7839 | 0.3630 | 0.4167 | 0.4169 | 0.4041 | 0.4167 | 0.4515 | 0.3594 | 0.4167 | 0.4167 | 0.4167 | | 0.0107 | 25.0 | 1450 | 2.8616 | 0.3600 | 0.4242 | 0.4212 | 0.3597 | 0.4242 | 0.4236 | 0.3653 | 0.4242 | 0.4242 | 0.4242 | | 0.0014 | 26.0 | 1508 | 2.9104 | 0.3790 | 0.4394 | 0.4343 | 0.3870 | 0.4394 | 0.4386 | 0.3810 | 0.4394 | 0.4394 | 0.4394 | | 0.0021 | 27.0 | 1566 | 2.9763 | 0.3778 | 0.4394 | 0.4342 | 0.3989 | 0.4394 | 0.4515 | 0.3785 | 0.4394 | 0.4394 | 0.4394 | | 0.0106 | 28.0 | 1624 | 2.9525 | 0.3866 | 0.4545 | 0.4479 | 0.3957 | 0.4545 | 0.4533 | 0.3892 | 0.4545 | 0.4545 | 0.4545 | | 0.0044 | 29.0 | 1682 | 2.9417 | 0.3862 | 0.4545 | 0.4438 | 0.3911 | 0.4545 | 0.4447 | 0.3930 | 0.4545 | 0.4545 | 0.4545 | | 0.0019 | 30.0 | 1740 | 2.9399 | 0.3876 | 0.4545 | 0.4461 | 0.3937 | 0.4545 | 0.4485 | 0.3928 | 0.4545 | 0.4545 | 0.4545 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_second_vote_full_pic_50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_second_vote_full_pic_50 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6568 - F1 Macro: 0.2803 - F1 Micro: 0.3939 - F1 Weighted: 0.3344 - Precision Macro: 0.3642 - Precision Micro: 0.3939 - Precision Weighted: 0.4123 - Recall Macro: 0.3362 - Recall Micro: 0.3939 - Recall Weighted: 0.3939 - Accuracy: 0.3939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.8125 | 1.0 | 58 | 1.8594 | 0.1226 | 0.2121 | 0.1692 | 0.1180 | 0.2121 | 0.1586 | 0.1501 | 0.2121 | 0.2121 | 0.2121 | | 1.8401 | 2.0 | 116 | 1.9425 | 0.0860 | 0.1742 | 0.1036 | 0.0668 | 0.1742 | 0.0824 | 0.1580 | 0.1742 | 0.1742 | 0.1742 | | 1.7455 | 3.0 | 174 | 1.8949 | 0.1450 | 0.2424 | 0.1731 | 0.2634 | 0.2424 | 0.3029 | 0.1819 | 0.2424 | 0.2424 | 0.2424 | | 1.8283 | 4.0 | 232 | 1.8868 | 0.0989 | 0.2121 | 0.1383 | 0.0794 | 0.2121 | 0.1089 | 0.1482 | 0.2121 | 0.2121 | 0.2121 | | 1.729 | 5.0 | 290 | 1.8830 | 0.1271 | 0.1894 | 0.1496 | 0.1438 | 0.1894 | 0.1799 | 0.1663 | 0.1894 | 0.1894 | 0.1894 | | 1.6643 | 6.0 | 348 | 1.8247 | 0.1450 | 0.2424 | 0.1852 | 0.1921 | 0.2424 | 0.2200 | 0.1749 | 0.2424 | 0.2424 | 0.2424 | | 1.9317 | 7.0 | 406 | 1.8338 | 0.1470 | 0.1894 | 0.1785 | 0.1535 | 0.1894 | 0.1869 | 0.1574 | 0.1894 | 0.1894 | 0.1894 | | 1.4753 | 8.0 | 464 | 1.7873 | 0.1617 | 0.2652 | 0.2071 | 0.1458 | 0.2652 | 0.1843 | 0.2046 | 0.2652 | 0.2652 | 0.2652 | | 2.0844 | 9.0 | 522 | 1.8694 | 0.2562 | 0.3106 | 0.3029 | 0.2622 | 0.3106 | 0.3076 | 0.2610 | 0.3106 | 0.3106 | 0.3106 | | 1.558 | 10.0 | 580 | 1.8684 | 0.2203 | 0.2803 | 0.2542 | 0.2140 | 0.2803 | 0.2502 | 0.2442 | 0.2803 | 0.2803 | 0.2803 | | 1.6059 | 11.0 | 638 | 1.9295 | 0.2746 | 0.3182 | 0.3103 | 0.3107 | 0.3182 | 0.3453 | 0.2849 | 0.3182 | 0.3182 | 0.3182 | | 1.0749 | 12.0 | 696 | 2.0512 | 0.2284 | 0.3182 | 0.2797 | 0.2882 | 0.3182 | 0.3204 | 0.2409 | 0.3182 | 0.3182 | 0.3182 | | 1.5171 | 13.0 | 754 | 2.1976 | 0.2193 | 0.2955 | 0.2645 | 0.2698 | 0.2955 | 0.3064 | 0.2359 | 0.2955 | 0.2955 | 0.2955 | | 0.6995 | 14.0 | 812 | 2.3271 | 0.2159 | 0.3030 | 0.2658 | 0.2928 | 0.3030 | 0.3244 | 0.2312 | 0.3030 | 0.3030 | 0.3030 | | 1.2603 | 15.0 | 870 | 2.6123 | 0.2353 | 0.2727 | 0.2714 | 0.2778 | 0.2727 | 0.3214 | 0.2418 | 0.2727 | 0.2727 | 0.2727 | | 0.6293 | 16.0 | 928 | 2.5967 | 0.1990 | 0.2576 | 0.2312 | 0.2149 
| 0.2576 | 0.2568 | 0.2202 | 0.2576 | 0.2576 | 0.2576 | | 0.3242 | 17.0 | 986 | 2.7596 | 0.2242 | 0.2727 | 0.2580 | 0.2423 | 0.2727 | 0.2818 | 0.2348 | 0.2727 | 0.2727 | 0.2727 | | 0.6081 | 18.0 | 1044 | 2.8475 | 0.2060 | 0.25 | 0.2401 | 0.2329 | 0.25 | 0.2604 | 0.2054 | 0.25 | 0.25 | 0.25 | | 0.3241 | 19.0 | 1102 | 3.1226 | 0.1989 | 0.25 | 0.2334 | 0.2199 | 0.25 | 0.2494 | 0.2033 | 0.25 | 0.25 | 0.25 | | 0.1119 | 20.0 | 1160 | 3.1286 | 0.2302 | 0.2803 | 0.2653 | 0.2654 | 0.2803 | 0.2992 | 0.2332 | 0.2803 | 0.2803 | 0.2803 | | 0.0946 | 21.0 | 1218 | 3.2789 | 0.2265 | 0.2955 | 0.2698 | 0.2472 | 0.2955 | 0.2835 | 0.2359 | 0.2955 | 0.2955 | 0.2955 | | 0.0434 | 22.0 | 1276 | 3.2405 | 0.2357 | 0.2652 | 0.2666 | 0.2398 | 0.2652 | 0.2744 | 0.2360 | 0.2652 | 0.2652 | 0.2652 | | 0.0926 | 23.0 | 1334 | 3.3668 | 0.2435 | 0.2955 | 0.2829 | 0.2650 | 0.2955 | 0.2973 | 0.2461 | 0.2955 | 0.2955 | 0.2955 | | 0.1002 | 24.0 | 1392 | 3.4633 | 0.2105 | 0.2727 | 0.2544 | 0.2310 | 0.2727 | 0.2643 | 0.2149 | 0.2727 | 0.2727 | 0.2727 | | 0.0602 | 25.0 | 1450 | 3.4614 | 0.2575 | 0.3030 | 0.2990 | 0.2662 | 0.3030 | 0.3027 | 0.2555 | 0.3030 | 0.3030 | 0.3030 | | 0.0079 | 26.0 | 1508 | 3.7489 | 0.2416 | 0.2879 | 0.2764 | 0.2489 | 0.2879 | 0.2847 | 0.2503 | 0.2879 | 0.2879 | 0.2879 | | 0.1364 | 27.0 | 1566 | 3.8018 | 0.2234 | 0.2727 | 0.2626 | 0.2312 | 0.2727 | 0.2655 | 0.2253 | 0.2727 | 0.2727 | 0.2727 | | 0.0141 | 28.0 | 1624 | 3.7614 | 0.2435 | 0.2879 | 0.2816 | 0.2527 | 0.2879 | 0.2858 | 0.2437 | 0.2879 | 0.2879 | 0.2879 | | 0.1638 | 29.0 | 1682 | 3.7921 | 0.2341 | 0.2803 | 0.2745 | 0.2423 | 0.2803 | 0.2795 | 0.2345 | 0.2803 | 0.2803 | 0.2803 | | 0.0049 | 30.0 | 1740 | 3.7955 | 0.2345 | 0.2803 | 0.2743 | 0.2431 | 0.2803 | 0.2792 | 0.2345 | 0.2803 | 0.2803 | 0.2803 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_first_vote_full_pic_50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_first_vote_full_pic_50 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8395 - F1 Macro: 0.2931 - F1 Micro: 0.3182 - F1 Weighted: 0.3126 - Precision Macro: 0.3510 - Precision Micro: 0.3182 - Precision Weighted: 0.3702 - Recall Macro: 0.2978 - Recall Micro: 0.3182 - Recall Weighted: 0.3182 - Accuracy: 0.3182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9357 | 1.0 | 58 | 1.9251 | 0.1142 | 0.2045 | 0.1326 | 0.2916 | 0.2045 | 0.3719 | 0.1805 | 0.2045 | 0.2045 | 0.2045 | | 1.8428 | 2.0 | 116 | 1.9975 | 0.0864 | 0.1894 | 0.0941 | 0.0690 | 0.1894 | 0.0756 | 0.1762 | 0.1894 | 0.1894 | 0.1894 | | 1.8324 | 3.0 | 174 | 1.8298 | 0.1725 | 0.2727 | 0.2132 | 0.1532 | 0.2727 | 0.1894 | 0.2191 | 0.2727 | 0.2727 | 0.2727 | | 1.5574 | 4.0 | 232 | 1.7557 | 0.1128 | 0.2652 | 0.1548 | 0.0798 | 0.2652 | 0.1095 | 0.1928 | 0.2652 | 0.2652 | 0.2652 | | 1.9434 | 5.0 | 290 | 1.7244 | 0.3019 | 0.3939 | 0.3572 | 0.2839 | 0.3939 | 0.3404 | 0.3375 | 0.3939 | 0.3939 | 0.3939 | | 1.9357 | 6.0 | 348 | 1.6611 | 0.3208 | 0.3712 | 0.3669 | 0.3285 | 0.3712 | 0.3749 | 0.3253 | 0.3712 | 0.3712 | 0.3712 | | 1.8454 | 7.0 | 406 | 1.6835 | 0.3043 | 0.3939 | 0.3397 | 0.3472 | 0.3939 | 0.3939 | 0.3515 | 0.3939 | 0.3939 | 0.3939 | | 1.7616 | 8.0 | 464 | 1.8893 | 0.2312 | 0.2803 | 0.2544 | 0.3179 | 0.2803 | 0.3702 | 0.2728 | 0.2803 | 0.2803 | 0.2803 | | 1.5512 | 9.0 | 522 | 1.7856 | 0.2366 | 0.3182 | 0.2696 | 0.2788 | 0.3182 | 0.3172 | 0.2820 | 0.3182 | 0.3182 | 0.3182 | | 1.777 | 10.0 | 580 | 1.9182 | 0.3136 | 0.3864 | 0.3465 | 0.3176 | 0.3864 | 0.3434 | 0.3525 | 0.3864 | 0.3864 | 0.3864 | | 1.3075 | 11.0 | 638 | 1.7205 | 0.3324 | 0.3939 | 0.3795 | 0.3461 | 0.3939 | 0.3893 | 0.3407 | 0.3939 | 0.3939 | 0.3939 | | 0.8476 | 12.0 | 696 | 1.8083 | 0.3203 | 0.3788 | 0.3495 | 0.3297 | 0.3788 | 0.3672 | 0.3581 | 0.3788 | 0.3788 | 0.3788 | | 1.0324 | 13.0 | 754 | 1.9825 | 0.3046 | 0.3485 | 0.3341 | 0.3316 | 0.3485 | 0.3807 | 0.3315 | 0.3485 | 0.3485 | 0.3485 | | 1.154 | 14.0 | 812 | 2.0418 | 0.2869 | 0.3333 | 0.3151 | 0.2847 | 0.3333 | 0.3140 | 0.3064 | 0.3333 | 0.3333 | 0.3333 | | 0.5406 | 15.0 | 870 | 2.1651 | 0.3242 | 0.3561 | 0.3453 | 0.3366 | 0.3561 | 0.3561 | 0.3313 | 0.3561 | 0.3561 | 0.3561 | | 1.5052 | 16.0 | 928 | 2.3796 | 0.2814 | 0.3561 | 0.3228 | 0.3189 | 
0.3561 | 0.3611 | 0.3127 | 0.3561 | 0.3561 | 0.3561 | | 0.1641 | 17.0 | 986 | 2.2210 | 0.3286 | 0.3864 | 0.3741 | 0.3346 | 0.3864 | 0.3768 | 0.3361 | 0.3864 | 0.3864 | 0.3864 | | 0.1201 | 18.0 | 1044 | 2.2744 | 0.3384 | 0.3939 | 0.3852 | 0.3331 | 0.3939 | 0.3811 | 0.3474 | 0.3939 | 0.3939 | 0.3939 | | 0.1059 | 19.0 | 1102 | 2.4881 | 0.3198 | 0.3712 | 0.3485 | 0.3702 | 0.3712 | 0.3640 | 0.3244 | 0.3712 | 0.3712 | 0.3712 | | 0.0828 | 20.0 | 1160 | 2.6911 | 0.3369 | 0.4091 | 0.3897 | 0.3378 | 0.4091 | 0.3826 | 0.3473 | 0.4091 | 0.4091 | 0.4091 | | 0.0903 | 21.0 | 1218 | 2.9249 | 0.3351 | 0.3561 | 0.3564 | 0.3430 | 0.3561 | 0.3614 | 0.3341 | 0.3561 | 0.3561 | 0.3561 | | 0.0455 | 22.0 | 1276 | 3.1538 | 0.2830 | 0.3409 | 0.3261 | 0.2951 | 0.3409 | 0.3330 | 0.2889 | 0.3409 | 0.3409 | 0.3409 | | 0.0137 | 23.0 | 1334 | 3.0196 | 0.3147 | 0.3712 | 0.3598 | 0.3095 | 0.3712 | 0.3530 | 0.3246 | 0.3712 | 0.3712 | 0.3712 | | 0.0088 | 24.0 | 1392 | 3.0033 | 0.3512 | 0.4015 | 0.3958 | 0.3562 | 0.4015 | 0.4024 | 0.3586 | 0.4015 | 0.4015 | 0.4015 | | 0.205 | 25.0 | 1450 | 3.1499 | 0.3854 | 0.4091 | 0.3978 | 0.3923 | 0.4091 | 0.4032 | 0.3939 | 0.4091 | 0.4091 | 0.4091 | | 0.0072 | 26.0 | 1508 | 3.2906 | 0.3440 | 0.3712 | 0.3651 | 0.3438 | 0.3712 | 0.3663 | 0.3516 | 0.3712 | 0.3712 | 0.3712 | | 0.0019 | 27.0 | 1566 | 3.3223 | 0.3542 | 0.3712 | 0.3663 | 0.3524 | 0.3712 | 0.3673 | 0.3627 | 0.3712 | 0.3712 | 0.3712 | | 0.0043 | 28.0 | 1624 | 3.2986 | 0.3729 | 0.3864 | 0.3840 | 0.3726 | 0.3864 | 0.3838 | 0.3753 | 0.3864 | 0.3864 | 0.3864 | | 0.0016 | 29.0 | 1682 | 3.3453 | 0.3469 | 0.3788 | 0.3741 | 0.3504 | 0.3788 | 0.3744 | 0.3483 | 0.3788 | 0.3788 | 0.3788 | | 0.0031 | 30.0 | 1740 | 3.3308 | 0.3465 | 0.3788 | 0.3753 | 0.3514 | 0.3788 | 0.3760 | 0.3456 | 0.3788 | 0.3788 | 0.3788 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_first_vote_full_pic_50_age_gender
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_first_vote_full_pic_50_age_gender This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6395 - F1 Macro: 0.2469 - F1 Micro: 0.3485 - F1 Weighted: 0.2801 - Precision Macro: 0.2428 - Precision Micro: 0.3485 - Precision Weighted: 0.2847 - Recall Macro: 0.3062 - Recall Micro: 0.3485 - Recall Weighted: 0.3485 - Accuracy: 0.3485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.952 | 1.0 | 58 | 1.9331 | 0.1015 | 0.1894 | 0.1193 | 0.0782 | 0.1894 | 0.0909 | 0.1571 | 0.1894 | 0.1894 | 0.1894 | | 1.8507 | 2.0 | 116 | 1.9609 | 0.0434 | 0.1364 | 0.0478 | 0.0314 | 0.1364 | 0.0344 | 0.1228 | 0.1364 | 0.1364 | 0.1364 | | 1.8752 | 3.0 | 174 | 1.8530 | 0.1515 | 0.2197 | 0.1864 | 0.1396 | 0.2197 | 0.1747 | 0.1834 | 0.2197 | 0.2197 | 0.2197 | | 1.6369 | 4.0 | 232 | 1.8198 | 0.1627 | 0.2652 | 0.2010 | 0.2272 | 0.2652 | 0.2553 | 0.2011 | 0.2652 | 0.2652 | 0.2652 | | 1.9706 | 5.0 | 290 | 1.7950 | 0.2160 | 0.2955 | 0.2460 | 0.2685 | 0.2955 | 0.3249 | 0.2686 | 0.2955 | 0.2955 | 0.2955 | | 1.738 | 6.0 | 348 | 1.6951 | 0.3343 | 0.4091 | 0.3811 | 0.4054 | 0.4091 | 0.4381 | 0.3537 | 0.4091 | 0.4091 | 0.4091 | | 1.924 | 7.0 | 406 | 1.6492 | 0.3123 | 0.3636 | 0.3150 | 0.5154 | 0.3636 | 0.4299 | 0.3365 | 0.3636 | 0.3636 | 0.3636 | | 1.8234 | 8.0 | 464 | 1.7347 | 0.3183 | 0.3712 | 0.3515 | 0.3698 | 0.3712 | 0.4277 | 0.3554 | 0.3712 | 0.3712 | 0.3712 | | 1.5916 | 9.0 | 522 | 1.8146 | 0.2720 | 0.3712 | 0.3008 | 0.3597 | 0.3712 | 0.3850 | 0.3281 | 0.3712 | 0.3712 | 0.3712 | | 1.578 | 10.0 | 580 | 1.7509 | 0.2703 | 0.3561 | 0.3033 | 0.3571 | 0.3561 | 0.4226 | 0.3217 | 0.3561 | 0.3561 | 0.3561 | | 1.5857 | 11.0 | 638 | 1.8672 | 0.3019 | 0.3561 | 0.3520 | 0.3399 | 0.3561 | 0.4037 | 0.3121 | 0.3561 | 0.3561 | 0.3561 | | 1.1684 | 12.0 | 696 | 1.8947 | 0.2724 | 0.3258 | 0.3168 | 0.2707 | 0.3258 | 0.3138 | 0.2793 | 0.3258 | 0.3258 | 0.3258 | | 1.3507 | 13.0 | 754 | 1.9123 | 0.3161 | 0.3636 | 0.3651 | 0.3271 | 0.3636 | 0.3832 | 0.3189 | 0.3636 | 0.3636 | 0.3636 | | 1.1413 | 14.0 | 812 | 1.9088 | 0.3210 | 0.3788 | 0.3659 | 0.3221 | 0.3788 | 0.3622 | 0.3287 | 0.3788 | 0.3788 | 0.3788 | | 1.2466 | 15.0 | 870 | 2.0905 | 0.3428 | 0.4015 | 0.3831 | 0.3574 | 0.4015 | 0.4108 | 0.3649 | 0.4015 | 0.4015 | 0.4015 | | 1.2063 | 16.0 | 928 | 2.2063 | 0.3331 | 0.3864 | 0.3746 | 
0.3376 | 0.3864 | 0.3775 | 0.3418 | 0.3864 | 0.3864 | 0.3864 | | 0.25 | 17.0 | 986 | 2.1276 | 0.3909 | 0.4242 | 0.4287 | 0.4100 | 0.4242 | 0.4593 | 0.3942 | 0.4242 | 0.4242 | 0.4242 | | 0.3857 | 18.0 | 1044 | 2.3733 | 0.3631 | 0.4242 | 0.4119 | 0.3618 | 0.4242 | 0.4050 | 0.3701 | 0.4242 | 0.4242 | 0.4242 | | 0.0546 | 19.0 | 1102 | 2.4860 | 0.3277 | 0.3864 | 0.3769 | 0.3432 | 0.3864 | 0.3902 | 0.3345 | 0.3864 | 0.3864 | 0.3864 | | 0.0621 | 20.0 | 1160 | 2.5209 | 0.3879 | 0.4242 | 0.4119 | 0.4393 | 0.4242 | 0.4335 | 0.3850 | 0.4242 | 0.4242 | 0.4242 | | 0.1491 | 21.0 | 1218 | 2.7192 | 0.3713 | 0.4242 | 0.4142 | 0.3740 | 0.4242 | 0.4124 | 0.3777 | 0.4242 | 0.4242 | 0.4242 | | 0.4118 | 22.0 | 1276 | 2.9182 | 0.3327 | 0.3864 | 0.3752 | 0.3317 | 0.3864 | 0.3734 | 0.3427 | 0.3864 | 0.3864 | 0.3864 | | 0.1833 | 23.0 | 1334 | 2.9567 | 0.3204 | 0.3636 | 0.3580 | 0.3325 | 0.3636 | 0.3805 | 0.3318 | 0.3636 | 0.3636 | 0.3636 | | 0.0022 | 24.0 | 1392 | 3.0022 | 0.3432 | 0.4015 | 0.3824 | 0.3452 | 0.4015 | 0.3877 | 0.3643 | 0.4015 | 0.4015 | 0.4015 | | 0.2174 | 25.0 | 1450 | 3.0656 | 0.3537 | 0.3864 | 0.3803 | 0.3574 | 0.3864 | 0.3796 | 0.3568 | 0.3864 | 0.3864 | 0.3864 | | 0.0191 | 26.0 | 1508 | 3.1698 | 0.3451 | 0.4091 | 0.3920 | 0.3502 | 0.4091 | 0.3992 | 0.3627 | 0.4091 | 0.4091 | 0.4091 | | 0.0051 | 27.0 | 1566 | 3.3015 | 0.3389 | 0.4015 | 0.3816 | 0.3394 | 0.4015 | 0.3844 | 0.3566 | 0.4015 | 0.4015 | 0.4015 | | 0.0205 | 28.0 | 1624 | 3.2677 | 0.3457 | 0.4091 | 0.3893 | 0.3422 | 0.4091 | 0.3864 | 0.3634 | 0.4091 | 0.4091 | 0.4091 | | 0.0016 | 29.0 | 1682 | 3.1995 | 0.3385 | 0.3939 | 0.3810 | 0.3324 | 0.3939 | 0.3723 | 0.3479 | 0.3939 | 0.3939 | 0.3939 | | 0.0011 | 30.0 | 1740 | 3.2359 | 0.3495 | 0.4091 | 0.3927 | 0.3409 | 0.4091 | 0.3823 | 0.3634 | 0.4091 | 0.4091 | 0.4091 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_second_vote_full_pic_50_age_gender
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_second_vote_full_pic_50_age_gender This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6845 - F1 Macro: 0.2886 - F1 Micro: 0.3182 - F1 Weighted: 0.3108 - Precision Macro: 0.3139 - Precision Micro: 0.3182 - Precision Weighted: 0.3485 - Recall Macro: 0.3032 - Recall Micro: 0.3182 - Recall Weighted: 0.3182 - Accuracy: 0.3182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.8934 | 1.0 | 58 | 1.8939 | 0.0919 | 0.1970 | 0.1321 | 0.0799 | 0.1970 | 0.1097 | 0.1331 | 0.1970 | 0.1970 | 0.1970 | | 1.851 | 2.0 | 116 | 2.0049 | 0.1066 | 0.1818 | 0.1280 | 0.1010 | 0.1818 | 0.1322 | 0.1720 | 0.1818 | 0.1818 | 0.1818 | | 1.6292 | 3.0 | 174 | 1.8533 | 0.0849 | 0.2197 | 0.1086 | 0.0673 | 0.2197 | 0.0808 | 0.1568 | 0.2197 | 0.2197 | 0.2197 | | 1.7267 | 4.0 | 232 | 1.8215 | 0.1715 | 0.2803 | 0.2141 | 0.3136 | 0.2803 | 0.3357 | 0.2133 | 0.2803 | 0.2803 | 0.2803 | | 1.7363 | 5.0 | 290 | 1.8649 | 0.1065 | 0.1894 | 0.1268 | 0.1423 | 0.1894 | 0.1728 | 0.1730 | 0.1894 | 0.1894 | 0.1894 | | 1.5974 | 6.0 | 348 | 1.8716 | 0.0916 | 0.25 | 0.1212 | 0.0721 | 0.25 | 0.0900 | 0.1746 | 0.25 | 0.25 | 0.25 | | 1.8537 | 7.0 | 406 | 1.7017 | 0.2792 | 0.3333 | 0.3191 | 0.3181 | 0.3333 | 0.3633 | 0.2977 | 0.3333 | 0.3333 | 0.3333 | | 1.4206 | 8.0 | 464 | 1.7872 | 0.1898 | 0.2727 | 0.2361 | 0.2092 | 0.2727 | 0.2442 | 0.2089 | 0.2727 | 0.2727 | 0.2727 | | 2.1512 | 9.0 | 522 | 1.7402 | 0.2806 | 0.3333 | 0.3177 | 0.3019 | 0.3333 | 0.3379 | 0.2985 | 0.3333 | 0.3333 | 0.3333 | | 1.6426 | 10.0 | 580 | 1.7943 | 0.2511 | 0.3106 | 0.2867 | 0.2731 | 0.3106 | 0.3042 | 0.2713 | 0.3106 | 0.3106 | 0.3106 | | 1.5341 | 11.0 | 638 | 1.8591 | 0.2551 | 0.3030 | 0.2945 | 0.2564 | 0.3030 | 0.2941 | 0.2623 | 0.3030 | 0.3030 | 0.3030 | | 1.0766 | 12.0 | 696 | 1.9545 | 0.2281 | 0.2955 | 0.2800 | 0.2455 | 0.2955 | 0.2902 | 0.2341 | 0.2955 | 0.2955 | 0.2955 | | 0.8697 | 13.0 | 754 | 2.3504 | 0.1614 | 0.2727 | 0.2089 | 0.1739 | 0.2727 | 0.2207 | 0.2027 | 0.2727 | 0.2727 | 0.2727 | | 0.7089 | 14.0 | 812 | 1.9392 | 0.2557 | 0.3409 | 0.3160 | 0.2531 | 0.3409 | 0.3054 | 0.2705 | 0.3409 | 0.3409 | 0.3409 | | 0.9405 | 15.0 | 870 | 2.1086 | 0.2788 | 0.3485 | 0.3362 | 0.2945 | 0.3485 | 0.3480 | 0.2866 | 0.3485 | 0.3485 | 0.3485 | | 0.768 | 16.0 | 928 | 2.1161 | 0.2990 | 0.3636 | 0.3599 | 0.3112 
| 0.3636 | 0.3779 | 0.3073 | 0.3636 | 0.3636 | 0.3636 | | 0.5405 | 17.0 | 986 | 2.2513 | 0.2757 | 0.3258 | 0.3197 | 0.2851 | 0.3258 | 0.3370 | 0.2867 | 0.3258 | 0.3258 | 0.3258 | | 0.6639 | 18.0 | 1044 | 2.4633 | 0.2542 | 0.3106 | 0.3033 | 0.2616 | 0.3106 | 0.3156 | 0.2668 | 0.3106 | 0.3106 | 0.3106 | | 0.1962 | 19.0 | 1102 | 2.5737 | 0.2463 | 0.3258 | 0.2995 | 0.2654 | 0.3258 | 0.3085 | 0.2575 | 0.3258 | 0.3258 | 0.3258 | | 0.2221 | 20.0 | 1160 | 2.7099 | 0.2449 | 0.2803 | 0.2825 | 0.2577 | 0.2803 | 0.3097 | 0.2521 | 0.2803 | 0.2803 | 0.2803 | | 0.3077 | 21.0 | 1218 | 2.7888 | 0.2527 | 0.3106 | 0.2980 | 0.2673 | 0.3106 | 0.3100 | 0.2626 | 0.3106 | 0.3106 | 0.3106 | | 0.1271 | 22.0 | 1276 | 2.9443 | 0.2291 | 0.2652 | 0.2661 | 0.2315 | 0.2652 | 0.2727 | 0.2319 | 0.2652 | 0.2652 | 0.2652 | | 0.1309 | 23.0 | 1334 | 3.0628 | 0.2714 | 0.3409 | 0.3305 | 0.3025 | 0.3409 | 0.3511 | 0.2700 | 0.3409 | 0.3409 | 0.3409 | | 0.2454 | 24.0 | 1392 | 3.1552 | 0.2497 | 0.3182 | 0.3009 | 0.2573 | 0.3182 | 0.3021 | 0.2578 | 0.3182 | 0.3182 | 0.3182 | | 0.0606 | 25.0 | 1450 | 3.2449 | 0.2319 | 0.2879 | 0.2810 | 0.2294 | 0.2879 | 0.2790 | 0.2379 | 0.2879 | 0.2879 | 0.2879 | | 0.0862 | 26.0 | 1508 | 3.2262 | 0.2554 | 0.3182 | 0.3119 | 0.2557 | 0.3182 | 0.3114 | 0.2608 | 0.3182 | 0.3182 | 0.3182 | | 0.0062 | 27.0 | 1566 | 3.2928 | 0.2711 | 0.3258 | 0.3251 | 0.2740 | 0.3258 | 0.3278 | 0.2719 | 0.3258 | 0.3258 | 0.3258 | | 0.0427 | 28.0 | 1624 | 3.3795 | 0.2599 | 0.3182 | 0.3121 | 0.2595 | 0.3182 | 0.3101 | 0.2644 | 0.3182 | 0.3182 | 0.3182 | | 0.007 | 29.0 | 1682 | 3.3703 | 0.2412 | 0.2955 | 0.2917 | 0.2459 | 0.2955 | 0.2940 | 0.2427 | 0.2955 | 0.2955 | 0.2955 | | 0.0051 | 30.0 | 1740 | 3.4040 | 0.2484 | 0.3030 | 0.3003 | 0.2530 | 0.3030 | 0.3037 | 0.2500 | 0.3030 | 0.3030 | 0.3030 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]