model_id: string (length 7 to 105)
model_card: string (length 1 to 130k)
model_labels: list (length 2 to 80k)
corranm/square_run_second_vote_full_pic_75_age_gender
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_second_vote_full_pic_75_age_gender This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3432 - F1 Macro: 0.3328 - F1 Micro: 0.4697 - F1 Weighted: 0.4098 - Precision Macro: 0.3270 - Precision Micro: 0.4697 - Precision Weighted: 0.4161 - Recall Macro: 0.3849 - Recall Micro: 0.4697 - Recall Weighted: 0.4697 - Accuracy: 0.4697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.874 | 1.0 | 58 | 1.8367 | 0.1624 | 0.2576 | 0.2032 | 0.1524 | 0.2576 | 0.1870 | 0.2024 | 0.2576 | 0.2576 | 0.2576 | | 1.9776 | 2.0 | 116 | 1.9333 | 0.0771 | 0.1439 | 0.0881 | 0.0743 | 0.1439 | 0.0940 | 0.1402 | 0.1439 | 0.1439 | 0.1439 | | 1.645 | 3.0 | 174 | 1.8058 | 0.1674 | 0.2803 | 0.1932 | 0.1766 | 0.2803 | 0.1880 | 0.2217 | 0.2803 | 0.2803 | 0.2803 | | 1.6353 | 4.0 | 232 | 1.6974 | 0.2498 | 0.3636 | 0.2906 | 0.2124 | 0.3636 | 0.2464 | 0.3095 | 0.3636 | 0.3636 | 0.3636 | | 1.6165 | 5.0 | 290 | 1.6144 | 0.2440 | 0.3409 | 0.2804 | 0.2940 | 0.3409 | 0.3290 | 0.2857 | 0.3409 | 0.3409 | 0.3409 | | 1.7496 | 6.0 | 348 | 1.7440 | 0.2610 | 0.3636 | 0.3308 | 0.2845 | 0.3636 | 0.3528 | 0.2888 | 0.3636 | 0.3636 | 0.3636 | | 1.8783 | 7.0 | 406 | 1.5126 | 0.3535 | 0.4167 | 0.3994 | 0.3637 | 0.4167 | 0.4048 | 0.3646 | 0.4167 | 0.4167 | 0.4167 | | 1.2903 | 8.0 | 464 | 1.5240 | 0.3589 | 0.4167 | 0.4036 | 0.3702 | 0.4167 | 0.4107 | 0.3644 | 0.4167 | 0.4167 | 0.4167 | | 1.8885 | 9.0 | 522 | 1.5423 | 0.3727 | 0.4470 | 0.4283 | 0.3800 | 0.4470 | 0.4329 | 0.3856 | 0.4470 | 0.4470 | 0.4470 | | 1.0726 | 10.0 | 580 | 1.8002 | 0.3168 | 0.4015 | 0.3651 | 0.3364 | 0.4015 | 0.3788 | 0.3456 | 0.4015 | 0.4015 | 0.4015 | | 1.2297 | 11.0 | 638 | 1.9532 | 0.3087 | 0.3788 | 0.3653 | 0.3752 | 0.3788 | 0.4335 | 0.3300 | 0.3788 | 0.3788 | 0.3788 | | 0.7152 | 12.0 | 696 | 1.8452 | 0.3120 | 0.3864 | 0.3677 | 0.3922 | 0.3864 | 0.4313 | 0.3156 | 0.3864 | 0.3864 | 0.3864 | | 0.7479 | 13.0 | 754 | 1.7619 | 0.3686 | 0.4394 | 0.4348 | 0.3981 | 0.4394 | 0.4546 | 0.3645 | 0.4394 | 0.4394 | 0.4394 | | 0.2766 | 14.0 | 812 | 1.8000 | 0.3931 | 0.4924 | 0.4657 | 0.4146 | 0.4924 | 0.4792 | 0.4080 | 0.4924 | 0.4924 | 0.4924 | | 0.4092 | 15.0 | 870 | 2.0428 | 0.3611 | 0.4318 | 0.4252 | 0.3772 | 0.4318 | 0.4421 | 0.3673 | 0.4318 | 0.4318 | 0.4318 | | 0.1272 | 16.0 | 928 | 2.1450 | 0.3493 | 0.4242 | 
0.4203 | 0.3651 | 0.4242 | 0.4419 | 0.3598 | 0.4242 | 0.4242 | 0.4242 | | 0.2751 | 17.0 | 986 | 2.3002 | 0.3782 | 0.4394 | 0.4357 | 0.4548 | 0.4394 | 0.5101 | 0.3712 | 0.4394 | 0.4394 | 0.4394 | | 0.3277 | 18.0 | 1044 | 2.2109 | 0.3832 | 0.4470 | 0.4450 | 0.4073 | 0.4470 | 0.4770 | 0.3856 | 0.4470 | 0.4470 | 0.4470 | | 0.0134 | 19.0 | 1102 | 2.4450 | 0.3585 | 0.4470 | 0.4219 | 0.3987 | 0.4470 | 0.4533 | 0.3729 | 0.4470 | 0.4470 | 0.4470 | | 0.0737 | 20.0 | 1160 | 2.5434 | 0.3468 | 0.4091 | 0.4054 | 0.3581 | 0.4091 | 0.4161 | 0.3508 | 0.4091 | 0.4091 | 0.4091 | | 0.0203 | 21.0 | 1218 | 2.8118 | 0.3895 | 0.4773 | 0.4493 | 0.4176 | 0.4773 | 0.4699 | 0.4098 | 0.4773 | 0.4773 | 0.4773 | | 0.0072 | 22.0 | 1276 | 2.7996 | 0.3620 | 0.4242 | 0.4165 | 0.3783 | 0.4242 | 0.4359 | 0.3729 | 0.4242 | 0.4242 | 0.4242 | | 0.1251 | 23.0 | 1334 | 2.9001 | 0.4009 | 0.4394 | 0.4291 | 0.4500 | 0.4394 | 0.4703 | 0.4067 | 0.4394 | 0.4394 | 0.4394 | | 0.0054 | 24.0 | 1392 | 2.8660 | 0.4011 | 0.4470 | 0.4327 | 0.4245 | 0.4470 | 0.4535 | 0.4147 | 0.4470 | 0.4470 | 0.4470 | | 0.0091 | 25.0 | 1450 | 2.8868 | 0.3852 | 0.4167 | 0.4086 | 0.3965 | 0.4167 | 0.4115 | 0.3858 | 0.4167 | 0.4167 | 0.4167 | | 0.002 | 26.0 | 1508 | 2.9311 | 0.3952 | 0.4394 | 0.4272 | 0.4054 | 0.4394 | 0.4343 | 0.4043 | 0.4394 | 0.4394 | 0.4394 | | 0.0008 | 27.0 | 1566 | 2.9526 | 0.4052 | 0.4470 | 0.4388 | 0.4173 | 0.4470 | 0.4483 | 0.4118 | 0.4470 | 0.4470 | 0.4470 | | 0.002 | 28.0 | 1624 | 3.0159 | 0.4074 | 0.4470 | 0.4389 | 0.4227 | 0.4470 | 0.4489 | 0.4116 | 0.4470 | 0.4470 | 0.4470 | | 0.0017 | 29.0 | 1682 | 2.9797 | 0.4121 | 0.4545 | 0.4431 | 0.4192 | 0.4545 | 0.4464 | 0.4193 | 0.4545 | 0.4545 | 0.4545 | | 0.0016 | 30.0 | 1740 | 2.9981 | 0.3677 | 0.4394 | 0.4256 | 0.3741 | 0.4394 | 0.4271 | 0.3761 | 0.4394 | 0.4394 | 0.4394 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_first_vote_full_pic_75_age_gender
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_first_vote_full_pic_75_age_gender This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7554 - F1 Macro: 0.3377 - F1 Micro: 0.4091 - F1 Weighted: 0.3702 - Precision Macro: 0.3432 - Precision Micro: 0.4091 - Precision Weighted: 0.3813 - Recall Macro: 0.3764 - Recall Micro: 0.4091 - Recall Weighted: 0.4091 - Accuracy: 0.4091 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9621 | 1.0 | 58 | 1.9294 | 0.1188 | 0.2121 | 0.1307 | 0.0924 | 0.2121 | 0.1040 | 0.2009 | 0.2121 | 0.2121 | 0.2121 | | 1.6897 | 2.0 | 116 | 1.8039 | 0.2212 | 0.3258 | 0.2698 | 0.2607 | 0.3258 | 0.3321 | 0.2817 | 0.3258 | 0.3258 | 0.3258 | | 1.9018 | 3.0 | 174 | 1.8333 | 0.1488 | 0.2727 | 0.1810 | 0.1261 | 0.2727 | 0.1499 | 0.2142 | 0.2727 | 0.2727 | 0.2727 | | 1.3206 | 4.0 | 232 | 1.8180 | 0.1941 | 0.2955 | 0.2237 | 0.2405 | 0.2955 | 0.2639 | 0.2493 | 0.2955 | 0.2955 | 0.2955 | | 1.7794 | 5.0 | 290 | 1.4852 | 0.3258 | 0.4167 | 0.3866 | 0.3195 | 0.4167 | 0.3831 | 0.3576 | 0.4167 | 0.4167 | 0.4167 | | 1.7676 | 6.0 | 348 | 1.4509 | 0.4341 | 0.4924 | 0.4845 | 0.4298 | 0.4924 | 0.4816 | 0.4437 | 0.4924 | 0.4924 | 0.4924 | | 1.6212 | 7.0 | 406 | 1.5437 | 0.4281 | 0.4773 | 0.4424 | 0.4381 | 0.4773 | 0.4653 | 0.4668 | 0.4773 | 0.4773 | 0.4773 | | 1.5547 | 8.0 | 464 | 1.4854 | 0.3596 | 0.4242 | 0.4034 | 0.3810 | 0.4242 | 0.4362 | 0.3863 | 0.4242 | 0.4242 | 0.4242 | | 1.5353 | 9.0 | 522 | 1.3877 | 0.4190 | 0.5076 | 0.4726 | 0.4442 | 0.5076 | 0.5046 | 0.4533 | 0.5076 | 0.5076 | 0.5076 | | 0.7126 | 10.0 | 580 | 1.4099 | 0.4492 | 0.5227 | 0.5096 | 0.4685 | 0.5227 | 0.5359 | 0.4685 | 0.5227 | 0.5227 | 0.5227 | | 1.0466 | 11.0 | 638 | 1.5217 | 0.4805 | 0.5682 | 0.5464 | 0.4933 | 0.5682 | 0.5566 | 0.4972 | 0.5682 | 0.5682 | 0.5682 | | 0.648 | 12.0 | 696 | 1.5465 | 0.4666 | 0.5303 | 0.5284 | 0.4870 | 0.5303 | 0.5655 | 0.4872 | 0.5303 | 0.5303 | 0.5303 | | 0.6292 | 13.0 | 754 | 1.5671 | 0.4553 | 0.5152 | 0.5116 | 0.5059 | 0.5152 | 0.5740 | 0.4652 | 0.5152 | 0.5152 | 0.5152 | | 0.3081 | 14.0 | 812 | 1.5835 | 0.5087 | 0.5909 | 0.5700 | 0.5276 | 0.5909 | 0.5752 | 0.5130 | 0.5909 | 0.5909 | 0.5909 | | 0.349 | 15.0 | 870 | 1.7548 | 0.4364 | 0.5076 | 0.4959 | 0.4563 | 0.5076 | 0.5064 | 0.4397 | 0.5076 | 0.5076 | 0.5076 | | 0.2594 | 16.0 | 928 | 1.9070 | 0.4717 | 0.5455 | 0.5287 
| 0.4803 | 0.5455 | 0.5289 | 0.4780 | 0.5455 | 0.5455 | 0.5455 | | 0.2384 | 17.0 | 986 | 1.8439 | 0.5212 | 0.5909 | 0.5797 | 0.5462 | 0.5909 | 0.5900 | 0.5211 | 0.5909 | 0.5909 | 0.5909 | | 0.1308 | 18.0 | 1044 | 2.0280 | 0.5064 | 0.5758 | 0.5659 | 0.5375 | 0.5758 | 0.5885 | 0.5074 | 0.5758 | 0.5758 | 0.5758 | | 0.042 | 19.0 | 1102 | 1.9038 | 0.5345 | 0.6136 | 0.6004 | 0.5672 | 0.6136 | 0.6168 | 0.5360 | 0.6136 | 0.6136 | 0.6136 | | 0.0605 | 20.0 | 1160 | 2.1862 | 0.5215 | 0.5909 | 0.5855 | 0.5231 | 0.5909 | 0.5867 | 0.5264 | 0.5909 | 0.5909 | 0.5909 | | 0.0153 | 21.0 | 1218 | 2.1651 | 0.5037 | 0.5682 | 0.5637 | 0.5110 | 0.5682 | 0.5718 | 0.5059 | 0.5682 | 0.5682 | 0.5682 | | 0.0069 | 22.0 | 1276 | 2.1574 | 0.4779 | 0.5455 | 0.5377 | 0.4819 | 0.5455 | 0.5367 | 0.4793 | 0.5455 | 0.5455 | 0.5455 | | 0.0138 | 23.0 | 1334 | 2.3532 | 0.4938 | 0.5606 | 0.5544 | 0.4944 | 0.5606 | 0.5692 | 0.5119 | 0.5606 | 0.5606 | 0.5606 | | 0.0016 | 24.0 | 1392 | 2.3575 | 0.4795 | 0.5606 | 0.5442 | 0.4724 | 0.5606 | 0.5342 | 0.4922 | 0.5606 | 0.5606 | 0.5606 | | 0.2167 | 25.0 | 1450 | 2.4082 | 0.5045 | 0.5758 | 0.5691 | 0.5056 | 0.5758 | 0.5715 | 0.5127 | 0.5758 | 0.5758 | 0.5758 | | 0.0024 | 26.0 | 1508 | 2.4224 | 0.5014 | 0.5682 | 0.5612 | 0.5003 | 0.5682 | 0.5623 | 0.5102 | 0.5682 | 0.5682 | 0.5682 | | 0.0042 | 27.0 | 1566 | 2.3936 | 0.5133 | 0.5833 | 0.5745 | 0.5123 | 0.5833 | 0.5716 | 0.5189 | 0.5833 | 0.5833 | 0.5833 | | 0.0008 | 28.0 | 1624 | 2.3829 | 0.5239 | 0.5985 | 0.5885 | 0.5309 | 0.5985 | 0.5892 | 0.5290 | 0.5985 | 0.5985 | 0.5985 | | 0.0009 | 29.0 | 1682 | 2.4101 | 0.4891 | 0.5606 | 0.5532 | 0.4833 | 0.5606 | 0.5484 | 0.4976 | 0.5606 | 0.5606 | 0.5606 | | 0.0009 | 30.0 | 1740 | 2.4118 | 0.5015 | 0.5758 | 0.5656 | 0.4970 | 0.5758 | 0.5602 | 0.5108 | 0.5758 | 0.5758 | 0.5758 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
thenewsupercell/Emotion_DF_Image_VIT_V1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Emotion_DF_Image_VIT_V1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3507 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.3386 | 1.0 | 75 | 2.1762 | 0.9642 | | 1.6256 | 2.0 | 150 | 1.5539 | 1.0 | | 1.3628 | 3.0 | 225 | 1.3507 | 1.0 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "ses02f_impro01", "ses02f_impro02", "ses02f_impro03", "ses02f_impro04", "ses02f_impro05", "ses02f_impro06", "ses02f_impro07", "ses02f_impro08", "ses02f_script01_1", "ses02f_script01_2", "ses02f_script01_3", "ses02f_script02_1", "ses02f_script02_2", "ses02f_script03_1", "ses02f_script03_2", "ses02m_impro01", "ses02m_impro02", "ses02m_impro03", "ses02m_impro04", "ses02m_impro05", "ses02m_impro06", "ses02m_impro07", "ses02m_impro08", "ses02m_script01_1", "ses02m_script01_2", "ses02m_script01_3", "ses02m_script02_1", "ses02m_script02_2", "ses02m_script03_1", "ses02m_script03_2" ]
nxtn/convnext-tiny-finetuned-phanloaiMTTn
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "healthy", "rubbish", "unhealthy" ]
Fructoze/vit-small-patch16-224-finetuned
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-small-patch16-224-finetuned This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5747 - Accuracy: 0.7623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.8345 | 0.9620 | 19 | 0.5747 | 0.7623 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "healthy", "rubbish", "unhealthy" ]
siyah1/SWin-ViT-Xray
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
scalet98/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2080 - Accuracy: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4037 | 1.0 | 370 | 0.2725 | 0.9418 | | 0.2098 | 2.0 | 740 | 0.2033 | 0.9445 | | 0.1603 | 3.0 | 1110 | 0.1811 | 0.9486 | | 0.151 | 4.0 | 1480 | 0.1749 | 0.9499 | | 0.1304 | 5.0 | 1850 | 0.1737 | 0.9499 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.3.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
prithivMLmods/Deepfake-Detection-Exp-02-21
![3.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/PxdwhpADOqXaQ5hXXCPss.png) # **Deepfake-Detection-Exp-02-21** Deepfake-Detection-Exp-02-21 is a minimalist, high-quality dataset trained on a ViT-based model for image classification, distinguishing between deepfake and real images. The model is based on Google's **`google/vit-base-patch16-224-in21k`**. ```bitex Mapping of IDs to Labels: {0: 'Deepfake', 1: 'Real'} Mapping of Labels to IDs: {'Deepfake': 0, 'Real': 1} ``` ```py Classification report: precision recall f1-score support Deepfake 0.9962 0.9806 0.9883 1600 Real 0.9809 0.9962 0.9885 1600 accuracy 0.9884 3200 macro avg 0.9886 0.9884 0.9884 3200 weighted avg 0.9886 0.9884 0.9884 3200 ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/6720824b15b6282a2464fc58/0ISoyjxLs-zpqt9Gv4YRo.png) # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-Detection-Exp-02-21", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-21") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-21") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations** 1. **Generalization Issues** – The model may not perform well on deepfake images generated by unseen or novel deepfake techniques. 2. **Dataset Bias** – The training data might not cover all variations of real and fake images, leading to biased predictions. 3. **Resolution Constraints** – Since the model is based on `vit-base-patch16-224-in21k`, it is optimized for 224x224 image resolution, which may limit its effectiveness on high-resolution images. 4. **Adversarial Vulnerabilities** – The model may be susceptible to adversarial attacks designed to fool vision transformers. 5. **False Positives & False Negatives** – The model may occasionally misclassify real images as deepfake and vice versa, requiring human validation in critical applications. # **Intended Use** 1. **Deepfake Detection** – Designed for identifying deepfake images in media, social platforms, and forensic analysis. 2. **Research & Development** – Useful for researchers studying deepfake detection and improving ViT-based classification models. 3. **Content Moderation** – Can be integrated into platforms to detect and flag manipulated images. 4. **Security & Forensics** – Assists in cybersecurity applications where verifying the authenticity of images is crucial. 5. **Educational Purposes** – Can be used in training AI practitioners and students in the field of computer vision and deepfake detection.
[ "deepfake", "real" ]
testtset/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1080 - Accuracy: 0.961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.5234 | 1.0 | 352 | 0.1946 | 0.9296 | | 0.3763 | 2.0 | 704 | 0.1318 | 0.9524 | | 0.3502 | 2.9922 | 1053 | 0.1080 | 0.961 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
Studentrnt/convnext-tiny-224-finetuned-eurosat-albumentations
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-finetuned-eurosat-albumentations This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0741 - Accuracy: 0.9804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1455 | 1.0 | 190 | 0.1553 | 0.9637 | | 0.0634 | 2.0 | 380 | 0.0850 | 0.9778 | | 0.0575 | 3.0 | 570 | 0.0741 | 0.9804 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
minseok-ds/resnet-18_first_hf_tutorial
Hello, Hugging Face! # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "tench, tinca tinca", "goldfish, carassius auratus", "great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias", "tiger shark, galeocerdo cuvieri", "hammerhead, hammerhead shark", "electric ray, crampfish, numbfish, torpedo", "stingray", "cock", "hen", "ostrich, struthio camelus", "brambling, fringilla montifringilla", "goldfinch, carduelis carduelis", "house finch, linnet, carpodacus mexicanus", "junco, snowbird", "indigo bunting, indigo finch, indigo bird, passerina cyanea", "robin, american robin, turdus migratorius", "bulbul", "jay", "magpie", "chickadee", "water ouzel, dipper", "kite", "bald eagle, american eagle, haliaeetus leucocephalus", "vulture", "great grey owl, great gray owl, strix nebulosa", "european fire salamander, salamandra salamandra", "common newt, triturus vulgaris", "eft", "spotted salamander, ambystoma maculatum", "axolotl, mud puppy, ambystoma mexicanum", "bullfrog, rana catesbeiana", "tree frog, tree-frog", "tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui", "loggerhead, loggerhead turtle, caretta caretta", "leatherback turtle, leatherback, leathery turtle, dermochelys coriacea", "mud turtle", "terrapin", "box turtle, box tortoise", "banded gecko", "common iguana, iguana, iguana iguana", "american chameleon, anole, anolis carolinensis", "whiptail, whiptail lizard", "agama", "frilled lizard, chlamydosaurus kingi", "alligator lizard", "gila monster, heloderma suspectum", "green lizard, lacerta viridis", "african chameleon, chamaeleo chamaeleon", "komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis", "african crocodile, nile crocodile, crocodylus niloticus", "american alligator, alligator mississipiensis", "triceratops", "thunder snake, worm snake, carphophis amoenus", "ringneck snake, ring-necked snake, ring snake", "hognose snake, puff adder, sand viper", "green snake, grass snake", "king snake, kingsnake", "garter snake, grass snake", "water snake", "vine snake", "night snake, hypsiglena torquata", "boa constrictor, constrictor constrictor", "rock python, rock snake, python sebae", "indian cobra, naja naja", "green mamba", "sea snake", "horned viper, cerastes, sand viper, horned asp, cerastes cornutus", "diamondback, diamondback rattlesnake, crotalus adamanteus", "sidewinder, horned rattlesnake, crotalus cerastes", "trilobite", "harvestman, daddy longlegs, phalangium opilio", "scorpion", "black and gold garden spider, argiope aurantia", "barn spider, araneus cavaticus", "garden spider, aranea diademata", "black widow, latrodectus mactans", "tarantula", "wolf spider, hunting spider", "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse, partridge, bonasa umbellus", "prairie chicken, prairie grouse, prairie fowl", "peacock", "quail", "partridge", "african grey, african gray, psittacus erithacus", "macaw", "sulphur-crested cockatoo, kakatoe galerita, cacatua galerita", "lorikeet", "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "drake", "red-breasted merganser, mergus serrator", "goose", "black swan, cygnus atratus", "tusker", "echidna, spiny anteater, anteater", "platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus", "wallaby, brush kangaroo", "koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus", "wombat", "jellyfish", "sea anemone, anemone", "brain coral", "flatworm, platyhelminth", "nematode, nematode worm, roundworm", "conch", "snail", "slug", "sea slug, nudibranch", "chiton, coat-of-mail shell, sea 
cradle, polyplacophore", "chambered nautilus, pearly nautilus, nautilus", "dungeness crab, cancer magister", "rock crab, cancer irroratus", "fiddler crab", "king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica", "american lobster, northern lobster, maine lobster, homarus americanus", "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "crayfish, crawfish, crawdad, crawdaddy", "hermit crab", "isopod", "white stork, ciconia ciconia", "black stork, ciconia nigra", "spoonbill", "flamingo", "little blue heron, egretta caerulea", "american egret, great white heron, egretta albus", "bittern", "crane", "limpkin, aramus pictus", "european gallinule, porphyrio porphyrio", "american coot, marsh hen, mud hen, water hen, fulica americana", "bustard", "ruddy turnstone, arenaria interpres", "red-backed sandpiper, dunlin, erolia alpina", "redshank, tringa totanus", "dowitcher", "oystercatcher, oyster catcher", "pelican", "king penguin, aptenodytes patagonica", "albatross, mollymawk", "grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus", "killer whale, killer, orca, grampus, sea wolf, orcinus orca", "dugong, dugong dugon", "sea lion", "chihuahua", "japanese spaniel", "maltese dog, maltese terrier, maltese", "pekinese, pekingese, peke", "shih-tzu", "blenheim spaniel", "papillon", "toy terrier", "rhodesian ridgeback", "afghan hound, afghan", "basset, basset hound", "beagle", "bloodhound, sleuthhound", "bluetick", "black-and-tan coonhound", "walker hound, walker foxhound", "english foxhound", "redbone", "borzoi, russian wolfhound", "irish wolfhound", "italian greyhound", "whippet", "ibizan hound, ibizan podenco", "norwegian elkhound, elkhound", "otterhound, otter hound", "saluki, gazelle hound", "scottish deerhound, deerhound", "weimaraner", "staffordshire bullterrier, staffordshire bull terrier", "american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier", "bedlington terrier", "border terrier", "kerry blue terrier", "irish terrier", "norfolk terrier", "norwich terrier", "yorkshire terrier", "wire-haired fox terrier", "lakeland terrier", "sealyham terrier, sealyham", "airedale, airedale terrier", "cairn, cairn terrier", "australian terrier", "dandie dinmont, dandie dinmont terrier", "boston bull, boston terrier", "miniature schnauzer", "giant schnauzer", "standard schnauzer", "scotch terrier, scottish terrier, scottie", "tibetan terrier, chrysanthemum dog", "silky terrier, sydney silky", "soft-coated wheaten terrier", "west highland white terrier", "lhasa, lhasa apso", "flat-coated retriever", "curly-coated retriever", "golden retriever", "labrador retriever", "chesapeake bay retriever", "german short-haired pointer", "vizsla, hungarian pointer", "english setter", "irish setter, red setter", "gordon setter", "brittany spaniel", "clumber, clumber spaniel", "english springer, english springer spaniel", "welsh springer spaniel", "cocker spaniel, english cocker spaniel, cocker", "sussex spaniel", "irish water spaniel", "kuvasz", "schipperke", "groenendael", "malinois", "briard", "kelpie", "komondor", "old english sheepdog, bobtail", "shetland sheepdog, shetland sheep dog, shetland", "collie", "border collie", "bouvier des flandres, bouviers des flandres", "rottweiler", "german shepherd, german shepherd dog, german police dog, alsatian", "doberman, doberman pinscher", "miniature pinscher", "greater swiss mountain dog", "bernese mountain dog", "appenzeller", "entlebucher", "boxer", "bull 
mastiff", "tibetan mastiff", "french bulldog", "great dane", "saint bernard, st bernard", "eskimo dog, husky", "malamute, malemute, alaskan malamute", "siberian husky", "dalmatian, coach dog, carriage dog", "affenpinscher, monkey pinscher, monkey dog", "basenji", "pug, pug-dog", "leonberg", "newfoundland, newfoundland dog", "great pyrenees", "samoyed, samoyede", "pomeranian", "chow, chow chow", "keeshond", "brabancon griffon", "pembroke, pembroke welsh corgi", "cardigan, cardigan welsh corgi", "toy poodle", "miniature poodle", "standard poodle", "mexican hairless", "timber wolf, grey wolf, gray wolf, canis lupus", "white wolf, arctic wolf, canis lupus tundrarum", "red wolf, maned wolf, canis rufus, canis niger", "coyote, prairie wolf, brush wolf, canis latrans", "dingo, warrigal, warragal, canis dingo", "dhole, cuon alpinus", "african hunting dog, hyena dog, cape hunting dog, lycaon pictus", "hyena, hyaena", "red fox, vulpes vulpes", "kit fox, vulpes macrotis", "arctic fox, white fox, alopex lagopus", "grey fox, gray fox, urocyon cinereoargenteus", "tabby, tabby cat", "tiger cat", "persian cat", "siamese cat, siamese", "egyptian cat", "cougar, puma, catamount, mountain lion, painter, panther, felis concolor", "lynx, catamount", "leopard, panthera pardus", "snow leopard, ounce, panthera uncia", "jaguar, panther, panthera onca, felis onca", "lion, king of beasts, panthera leo", "tiger, panthera tigris", "cheetah, chetah, acinonyx jubatus", "brown bear, bruin, ursus arctos", "american black bear, black bear, ursus americanus, euarctos americanus", "ice bear, polar bear, ursus maritimus, thalarctos maritimus", "sloth bear, melursus ursinus, ursus ursinus", "mongoose", "meerkat, mierkat", "tiger beetle", "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "ground beetle, carabid beetle", "long-horned beetle, longicorn, longicorn beetle", "leaf beetle, chrysomelid", "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant, emmet, pismire", "grasshopper, hopper", "cricket", "walking stick, walkingstick, stick insect", "cockroach, roach", "mantis, mantid", "cicada, cicala", "leafhopper", "lacewing, lacewing fly", "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "damselfly", "admiral", "ringlet, ringlet butterfly", "monarch, monarch butterfly, milkweed butterfly, danaus plexippus", "cabbage butterfly", "sulphur butterfly, sulfur butterfly", "lycaenid, lycaenid butterfly", "starfish, sea star", "sea urchin", "sea cucumber, holothurian", "wood rabbit, cottontail, cottontail rabbit", "hare", "angora, angora rabbit", "hamster", "porcupine, hedgehog", "fox squirrel, eastern fox squirrel, sciurus niger", "marmot", "beaver", "guinea pig, cavia cobaya", "sorrel", "zebra", "hog, pig, grunter, squealer, sus scrofa", "wild boar, boar, sus scrofa", "warthog", "hippopotamus, hippo, river horse, hippopotamus amphibius", "ox", "water buffalo, water ox, asiatic buffalo, bubalus bubalis", "bison", "ram, tup", "bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis", "ibex, capra ibex", "hartebeest", "impala, aepyceros melampus", "gazelle", "arabian camel, dromedary, camelus dromedarius", "llama", "weasel", "mink", "polecat, fitch, foulmart, foumart, mustela putorius", "black-footed ferret, ferret, mustela nigripes", "otter", "skunk, polecat, wood pussy", "badger", "armadillo", "three-toed sloth, ai, bradypus tridactylus", "orangutan, orang, orangutang, pongo pygmaeus", "gorilla, 
gorilla gorilla", "chimpanzee, chimp, pan troglodytes", "gibbon, hylobates lar", "siamang, hylobates syndactylus, symphalangus syndactylus", "guenon, guenon monkey", "patas, hussar monkey, erythrocebus patas", "baboon", "macaque", "langur", "colobus, colobus monkey", "proboscis monkey, nasalis larvatus", "marmoset", "capuchin, ringtail, cebus capucinus", "howler monkey, howler", "titi, titi monkey", "spider monkey, ateles geoffroyi", "squirrel monkey, saimiri sciureus", "madagascar cat, ring-tailed lemur, lemur catta", "indri, indris, indri indri, indri brevicaudatus", "indian elephant, elephas maximus", "african elephant, loxodonta africana", "lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens", "giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca", "barracouta, snoek", "eel", "coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch", "rock beauty, holocanthus tricolor", "anemone fish", "sturgeon", "gar, garfish, garpike, billfish, lepisosteus osseus", "lionfish", "puffer, pufferfish, blowfish, globefish", "abacus", "abaya", "academic gown, academic robe, judge's robe", "accordion, piano accordion, squeeze box", "acoustic guitar", "aircraft carrier, carrier, flattop, attack aircraft carrier", "airliner", "airship, dirigible", "altar", "ambulance", "amphibian, amphibious vehicle", "analog clock", "apiary, bee house", "apron", "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "assault rifle, assault gun", "backpack, back pack, knapsack, packsack, rucksack, haversack", "bakery, bakeshop, bakehouse", "balance beam, beam", "balloon", "ballpoint, ballpoint pen, ballpen, biro", "band aid", "banjo", "bannister, banister, balustrade, balusters, handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel, cask", "barrow, garden cart, lawn cart, wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "bathing cap, swimming cap", "bath towel", "bathtub, bathing tub, bath, tub", "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "beacon, lighthouse, beacon light, pharos", "beaker", "bearskin, busby, shako", "beer bottle", "beer glass", "bell cote, bell cot", "bib", "bicycle-built-for-two, tandem bicycle, tandem", "bikini, two-piece", "binder, ring-binder", "binoculars, field glasses, opera glasses", "birdhouse", "boathouse", "bobsled, bobsleigh, bob", "bolo tie, bolo, bola tie, bola", "bonnet, poke bonnet", "bookcase", "bookshop, bookstore, bookstall", "bottlecap", "bow", "bow tie, bow-tie, bowtie", "brass, memorial tablet, plaque", "brassiere, bra, bandeau", "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "breastplate, aegis, egis", "broom", "bucket, pail", "buckle", "bulletproof vest", "bullet train, bullet", "butcher shop, meat market", "cab, hack, taxi, taxicab", "caldron, cauldron", "candle, taper, wax light", "cannon", "canoe", "can opener, tin opener", "cardigan", "car mirror", "carousel, carrousel, merry-go-round, roundabout, whirligig", "carpenter's kit, tool kit", "carton", "car wheel", "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm", "cassette", "cassette player", "castle", "catamaran", "cd player", "cello, violoncello", "cellular telephone, cellular phone, cellphone, cell, mobile phone", "chain", "chainlink fence", "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "chain saw, chainsaw", "chest", "chiffonier, 
commode", "chime, bell, gong", "china cabinet, china closet", "christmas stocking", "church, church building", "cinema, movie theater, movie theatre, movie house, picture palace", "cleaver, meat cleaver, chopper", "cliff dwelling", "cloak", "clog, geta, patten, sabot", "cocktail shaker", "coffee mug", "coffeepot", "coil, spiral, volute, whorl, helix", "combination lock", "computer keyboard, keypad", "confectionery, confectionary, candy store", "container ship, containership, container vessel", "convertible", "corkscrew, bottle screw", "cornet, horn, trumpet, trump", "cowboy boot", "cowboy hat, ten-gallon hat", "cradle", "crane", "crash helmet", "crate", "crib, cot", "crock pot", "croquet ball", "crutch", "cuirass", "dam, dike, dyke", "desk", "desktop computer", "dial telephone, dial phone", "diaper, nappy, napkin", "digital clock", "digital watch", "dining table, board", "dishrag, dishcloth", "dishwasher, dish washer, dishwashing machine", "disk brake, disc brake", "dock, dockage, docking facility", "dogsled, dog sled, dog sleigh", "dome", "doormat, welcome mat", "drilling platform, offshore rig", "drum, membranophone, tympan", "drumstick", "dumbbell", "dutch oven", "electric fan, blower", "electric guitar", "electric locomotive", "entertainment center", "envelope", "espresso maker", "face powder", "feather boa, boa", "file, file cabinet, filing cabinet", "fireboat", "fire engine, fire truck", "fire screen, fireguard", "flagpole, flagstaff", "flute, transverse flute", "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster", "freight car", "french horn, horn", "frying pan, frypan, skillet", "fur coat", "garbage truck, dustcart", "gasmask, respirator, gas helmet", "gas pump, gasoline pump, petrol pump, island dispenser", "goblet", "go-kart", "golf ball", "golfcart, golf cart", "gondola", "gong, tam-tam", "gown", "grand piano, grand", "greenhouse, nursery, glasshouse", "grille, radiator grille", "grocery store, grocery, food market, market", "guillotine", "hair slide", "hair spray", "half track", "hammer", "hamper", "hand blower, blow dryer, blow drier, hair dryer, hair drier", "hand-held computer, hand-held microcomputer", "handkerchief, hankie, hanky, hankey", "hard disc, hard disk, fixed disk", "harmonica, mouth organ, harp, mouth harp", "harp", "harvester, reaper", "hatchet", "holster", "home theater, home theatre", "honeycomb", "hook, claw", "hoopskirt, crinoline", "horizontal bar, high bar", "horse cart, horse-cart", "hourglass", "ipod", "iron, smoothing iron", "jack-o'-lantern", "jean, blue jean, denim", "jeep, landrover", "jersey, t-shirt, tee shirt", "jigsaw puzzle", "jinrikisha, ricksha, rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat, laboratory coat", "ladle", "lampshade, lamp shade", "laptop, laptop computer", "lawn mower, mower", "lens cap, lens cover", "letter opener, paper knife, paperknife", "library", "lifeboat", "lighter, light, igniter, ignitor", "limousine, limo", "liner, ocean liner", "lipstick, lip rouge", "loafer", "lotion", "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "loupe, jeweler's loupe", "lumbermill, sawmill", "magnetic compass", "mailbag, postbag", "mailbox, letter box", "maillot", "maillot, tank suit", "manhole cover", "maraca", "marimba, xylophone", "mask", "matchstick", "maypole", "maze, labyrinth", "measuring cup", "medicine chest, medicine cabinet", "megalith, megalithic structure", "microphone, mike", "microwave, microwave oven", "military uniform", "milk can", "minibus", 
"miniskirt, mini", "minivan", "missile", "mitten", "mixing bowl", "mobile home, manufactured home", "model t", "modem", "monastery", "monitor", "moped", "mortar", "mortarboard", "mosque", "mosquito net", "motor scooter, scooter", "mountain bike, all-terrain bike, off-roader", "mountain tent", "mouse, computer mouse", "mousetrap", "moving van", "muzzle", "nail", "neck brace", "necklace", "nipple", "notebook, notebook computer", "obelisk", "oboe, hautboy, hautbois", "ocarina, sweet potato", "odometer, hodometer, mileometer, milometer", "oil filter", "organ, pipe organ", "oscilloscope, scope, cathode-ray oscilloscope, cro", "overskirt", "oxcart", "oxygen mask", "packet", "paddle, boat paddle", "paddlewheel, paddle wheel", "padlock", "paintbrush", "pajama, pyjama, pj's, jammies", "palace", "panpipe, pandean pipe, syrinx", "paper towel", "parachute, chute", "parallel bars, bars", "park bench", "parking meter", "passenger car, coach, carriage", "patio, terrace", "pay-phone, pay-station", "pedestal, plinth, footstall", "pencil box, pencil case", "pencil sharpener", "perfume, essence", "petri dish", "photocopier", "pick, plectrum, plectron", "pickelhaube", "picket fence, paling", "pickup, pickup truck", "pier", "piggy bank, penny bank", "pill bottle", "pillow", "ping-pong ball", "pinwheel", "pirate, pirate ship", "pitcher, ewer", "plane, carpenter's plane, woodworking plane", "planetarium", "plastic bag", "plate rack", "plow, plough", "plunger, plumber's helper", "polaroid camera, polaroid land camera", "pole", "police van, police wagon, paddy wagon, patrol wagon, wagon, black maria", "poncho", "pool table, billiard table, snooker table", "pop bottle, soda bottle", "pot, flowerpot", "potter's wheel", "power drill", "prayer rug, prayer mat", "printer", "prison, prison house", "projectile, missile", "projector", "puck, hockey puck", "punching bag, punch bag, punching ball, punchball", "purse", "quill, quill pen", "quilt, comforter, comfort, puff", "racer, race car, racing car", "racket, racquet", "radiator", "radio, wireless", "radio telescope, radio reflector", "rain barrel", "recreational vehicle, rv, r.v.", "reel", "reflex camera", "refrigerator, icebox", "remote control, remote", "restaurant, eating house, eating place, eatery", "revolver, six-gun, six-shooter", "rifle", "rocking chair, rocker", "rotisserie", "rubber eraser, rubber, pencil eraser", "rugby ball", "rule, ruler", "running shoe", "safe", "safety pin", "saltshaker, salt shaker", "sandal", "sarong", "sax, saxophone", "scabbard", "scale, weighing machine", "school bus", "schooner", "scoreboard", "screen, crt screen", "screw", "screwdriver", "seat belt, seatbelt", "sewing machine", "shield, buckler", "shoe shop, shoe-shop, shoe store", "shoji", "shopping basket", "shopping cart", "shovel", "shower cap", "shower curtain", "ski", "ski mask", "sleeping bag", "slide rule, slipstick", "sliding door", "slot, one-armed bandit", "snorkel", "snowmobile", "snowplow, snowplough", "soap dispenser", "soccer ball", "sock", "solar dish, solar collector, solar furnace", "sombrero", "soup bowl", "space bar", "space heater", "space shuttle", "spatula", "speedboat", "spider web, spider's web", "spindle", "sports car, sport car", "spotlight, spot", "stage", "steam locomotive", "steel arch bridge", "steel drum", "stethoscope", "stole", "stone wall", "stopwatch, stop watch", "stove", "strainer", "streetcar, tram, tramcar, trolley, trolley car", "stretcher", "studio couch, day bed", "stupa, tope", "submarine, pigboat, sub, u-boat", "suit, suit of clothes", 
"sundial", "sunglass", "sunglasses, dark glasses, shades", "sunscreen, sunblock, sun blocker", "suspension bridge", "swab, swob, mop", "sweatshirt", "swimming trunks, bathing trunks", "swing", "switch, electric switch, electrical switch", "syringe", "table lamp", "tank, army tank, armored combat vehicle, armoured combat vehicle", "tape player", "teapot", "teddy, teddy bear", "television, television system", "tennis ball", "thatch, thatched roof", "theater curtain, theatre curtain", "thimble", "thresher, thrasher, threshing machine", "throne", "tile roof", "toaster", "tobacco shop, tobacconist shop, tobacconist", "toilet seat", "torch", "totem pole", "tow truck, tow car, wrecker", "toyshop", "tractor", "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "tray", "trench coat", "tricycle, trike, velocipede", "trimaran", "tripod", "triumphal arch", "trolleybus, trolley coach, trackless trolley", "trombone", "tub, vat", "turnstile", "typewriter keyboard", "umbrella", "unicycle, monocycle", "upright, upright piano", "vacuum, vacuum cleaner", "vase", "vault", "velvet", "vending machine", "vestment", "viaduct", "violin, fiddle", "volleyball", "waffle iron", "wall clock", "wallet, billfold, notecase, pocketbook", "wardrobe, closet, press", "warplane, military plane", "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "washer, automatic washer, washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", "wig", "window screen", "window shade", "windsor tie", "wine bottle", "wing", "wok", "wooden spoon", "wool, woolen, woollen", "worm fence, snake fence, snake-rail fence, virginia fence", "wreck", "yawl", "yurt", "web site, website, internet site, site", "comic book", "crossword puzzle, crossword", "street sign", "traffic light, traffic signal, stoplight", "book jacket, dust cover, dust jacket, dust wrapper", "menu", "plate", "guacamole", "consomme", "hot pot, hotpot", "trifle", "ice cream, icecream", "ice lolly, lolly, lollipop, popsicle", "french loaf", "bagel, beigel", "pretzel", "cheeseburger", "hotdog, hot dog, red hot", "mashed potato", "head cabbage", "broccoli", "cauliflower", "zucchini, courgette", "spaghetti squash", "acorn squash", "butternut squash", "cucumber, cuke", "artichoke, globe artichoke", "bell pepper", "cardoon", "mushroom", "granny smith", "strawberry", "orange", "lemon", "fig", "pineapple, ananas", "banana", "jackfruit, jak, jack", "custard apple", "pomegranate", "hay", "carbonara", "chocolate sauce, chocolate syrup", "dough", "meat loaf, meatloaf", "pizza, pizza pie", "potpie", "burrito", "red wine", "espresso", "cup", "eggnog", "alp", "bubble", "cliff, drop, drop-off", "coral reef", "geyser", "lakeside, lakeshore", "promontory, headland, head, foreland", "sandbar, sand bar", "seashore, coast, seacoast, sea-coast", "valley, vale", "volcano", "ballplayer, baseball player", "groom, bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum", "corn", "acorn", "hip, rose hip, rosehip", "buckeye, horse chestnut, conker", "coral fungus", "agaric", "gyromitra", "stinkhorn, carrion fungus", "earthstar", "hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa", "bolete", "ear, spike, capitulum", "toilet tissue, toilet paper, bathroom tissue" ]
LaurianeMD/vit-skin-disease
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "acne", "actinic keratosis", "benign tumors", "bullous", "candidiasis", "drug eruption", "eczema", "infestations bites", "lichen", "lupus", "moles", "psoriasis", "rosacea", "seborrh keratoses", "skin cancer", "sun sunlight damage", "tinea", "unknown normal", "vascular tumors", "vasculitis", "vitiligo", "warts" ]
Andrew-Finch/vit-base-rocks
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-rocks This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the rocks dataset. It achieves the following results on the evaluation set: - Loss: 0.7099 - Accuracy: 0.7778 ## Model description This model is a fine-tuned version of Google's vit-base-patch16-224-in21k designed to identify geological hand samples. ## Intended uses & limitations Currently, the ViT is fine-tuned on 10 classes: ['Andesite', 'Basalt', 'Chalk', 'Dolomite', 'Flint', 'Gneiss', 'Granite', 'Limestone', 'Sandstone', 'Slate'] Future iterations of the model will feature an expanded breadth of rock categories. ## Training and evaluation data The model performs relatively well on 10 classes of rocks, with some confusion between limestone and other carbonates. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67b218c9f745d44676c938cb/zZXcIybZLvUtEKpb8Lk8u.png) ## Training procedure 495 images of geological hand samples were selected with an 80:20 train/validation split. Classes were roughly equally represented across the 495 samples. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 2.0408 | 1.4286 | 10 | 1.7371 | 0.6111 | | 1.4489 | 2.8571 | 20 | 1.3254 | 0.7407 | | 0.9469 | 4.2857 | 30 | 1.0768 | 0.7407 | | 0.586 | 5.7143 | 40 | 0.9118 | 0.7778 | | 0.3757 | 7.1429 | 50 | 0.9902 | 0.6852 | | 0.2798 | 8.5714 | 60 | 0.8498 | 0.7778 | | 0.2087 | 10.0 | 70 | 0.7939 | 0.7407 | | 0.176 | 11.4286 | 80 | 0.8220 | 0.7222 | | 0.1613 | 12.8571 | 90 | 0.7288 | 0.8148 | | 0.1337 | 14.2857 | 100 | 0.7178 | 0.7963 | | 0.1326 | 15.7143 | 110 | 0.7403 | 0.7778 | | 0.119 | 17.1429 | 120 | 0.7099 | 0.7778 | | 0.1193 | 18.5714 | 130 | 0.7626 | 0.7778 | | 0.1227 | 20.0 | 140 | 0.7125 | 0.7963 | | 0.1102 | 21.4286 | 150 | 0.7493 | 0.7963 | | 0.1134 | 22.8571 | 160 | 0.7396 | 0.7963 | | 0.1173 | 24.2857 | 170 | 0.7187 | 0.7963 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0 - Datasets 3.3.0 - Tokenizers 0.21.0
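## Example inference (sketch)

The card above does not include a usage snippet, so here is a minimal sketch using the Hugging Face `pipeline` API. The repository id matches this card; the image path is a placeholder to replace with a real photo of a hand sample.

```python
from transformers import pipeline

# Load the fine-tuned rock classifier from the Hub
pipe = pipeline("image-classification", model="Andrew-Finch/vit-base-rocks")

# Classify a photo of a geological hand sample (placeholder path)
predictions = pipe("path_to_hand_sample.jpg", top_k=3)
for prediction in predictions:
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```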
[ "andesite", "basalt", "chalk", "dolomite", "flint", "gneiss", "granite", "limestone", "sandstone", "slate" ]
prithivMLmods/Deepfake-Detection-Exp-02-22
![2.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/UJytx7u0VTx_SAz49640L.png) # **Deepfake-Detection-Exp-02-22** Deepfake-Detection-Exp-02-22 is a ViT-based image classification model trained on a minimalist, high-quality dataset to distinguish between deepfake and real images. The model is based on Google's **`google/vit-base-patch32-224-in21k`**. ```python Mapping of IDs to Labels: {0: 'Deepfake', 1: 'Real'} Mapping of Labels to IDs: {'Deepfake': 0, 'Real': 1} ``` ```python Classification report: precision recall f1-score support Deepfake 0.9833 0.9187 0.9499 1600 Real 0.9238 0.9844 0.9531 1600 accuracy 0.9516 3200 macro avg 0.9535 0.9516 0.9515 3200 weighted avg 0.9535 0.9516 0.9515 3200 ``` ![download (1).png](https://cdn-uploads.huggingface.co/production/uploads/6720824b15b6282a2464fc58/-25Oh3wureg_MI4nvjh7w.png) # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-Detection-Exp-02-22", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-22") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-22") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations** 1. **Generalization Issues** – The model may not perform well on deepfake images generated by unseen or novel deepfake techniques. 2. **Dataset Bias** – The training data might not cover all variations of real and fake images, leading to biased predictions. 3. **Resolution Constraints** – Since the model is based on `vit-base-patch32-224-in21k`, it is optimized for 224x224 image resolution, which may limit its effectiveness on high-resolution images. 4. **Adversarial Vulnerabilities** – The model may be susceptible to adversarial attacks designed to fool vision transformers. 5. **False Positives & False Negatives** – The model may occasionally misclassify real images as deepfake and vice versa, requiring human validation in critical applications. # **Intended Use** 1. **Deepfake Detection** – Designed for identifying deepfake images in media, social platforms, and forensic analysis. 2. **Research & Development** – Useful for researchers studying deepfake detection and improving ViT-based classification models. 3. **Content Moderation** – Can be integrated into platforms to detect and flag manipulated images. 4. **Security & Forensics** – Assists in cybersecurity applications where verifying the authenticity of images is crucial. 5. **Educational Purposes** – Can be used in training AI practitioners and students in the field of computer vision and deepfake detection.
[ "deepfake", "real" ]
prithivMLmods/Deepfake-QualityAssess-85M
![5.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/4XTyNBg6dGiKXQuFlPoXn.png) # **Deepfake-QualityAssess-85M** Deepfake-QualityAssess-85M is an image classification model for quality assessment of good and bad quality deepfakes. It is based on Google's ViT model (`google/vit-base-patch16-224-in21k`). A reasonable number of training samples were used to achieve good efficiency in the final training process and its efficiency metrics. Since this task involves classifying deepfake images with varying quality levels, the model was trained accordingly. Future improvements will be made based on the complexity of the task. ```python id2label: { "0": "Issue In Deepfake", "1": "High Quality Deepfake" } ``` ```python Classification report: precision recall f1-score support Issue In Deepfake 0.7962 0.8067 0.8014 1500 High Quality Deepfake 0.7877 0.7767 0.7822 1500 accuracy 0.7940 3000 macro avg 0.7920 0.7917 0.7918 3000 weighted avg 0.7920 0.7917 0.7918 3000 ``` # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-QualityAssess-85M", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess-85M") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess-85M") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations of Deepfake-QualityAssess-85M** 1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts. 2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance. 3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification. 4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training. 5. **Potential False Positives/Negatives** – The model may sometimes misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications. 6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made. 7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake. # **Intended Use of Deepfake-QualityAssess-85M** - **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality. 
- **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models. - **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis. - **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions. - **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
[ "issue in deepfake", "high quality deepfake" ]
prithivMLmods/Deepfake-QualityAssess-88M
![6.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/2WT2C1XXTzUJRU2kFDLUG.png) # **Deepfake-QualityAssess-88M** Deepfake-QualityAssess-88M is an image classification model for quality assessment of good and bad quality deepfakes. It is based on Google's ViT model (`google/vit-base-patch32-224-in21k`). A reasonable number of training samples were used to achieve good efficiency in the final training process and its efficiency metrics. Since this task involves classifying deepfake images with varying quality levels, the model was trained accordingly. Future improvements will be made based on the complexity of the task. ```python id2label: { "0": "Issue In Deepfake", "1": "High Quality Deepfake" } ``` ```python Classification report: precision recall f1-score support Issue In Deepfake 0.7560 0.7467 0.7513 1500 High Quality Deepfake 0.7365 0.7473 0.7418 1500 accuracy 0.7467 3000 macro avg 0.7463 0.7470 0.7465 3000 weighted avg 0.7463 0.7470 0.7465 3000 ``` # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-QualityAssess-88M", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess-88M") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess-88M") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations of Deepfake-QualityAssess-88M** 1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts. 2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance. 3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification. 4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training. 5. **Potential False Positives/Negatives** – The model may sometimes misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications. 6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made. 7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake. # **Intended Use of Deepfake-QualityAssess-88M** - **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality. 
- **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models. - **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis. - **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions. - **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
[ "issue in deepfake", "high quality deepfake" ]
Mievst/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1838 - Accuracy: 0.9445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3735 | 1.0 | 370 | 0.3234 | 0.9161 | | 0.2278 | 2.0 | 740 | 0.2487 | 0.9296 | | 0.1418 | 3.0 | 1110 | 0.2302 | 0.9350 | | 0.1461 | 4.0 | 1480 | 0.2209 | 0.9350 | | 0.1381 | 5.0 | 1850 | 0.2174 | 0.9391 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
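## Example inference (sketch)

The usage sections above are empty, so the following is a minimal inference sketch with the `transformers` auto classes. The repository id matches this card; the image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load the fine-tuned pet-breed classifier and its image processor
processor = AutoImageProcessor.from_pretrained("Mievst/vit-base-oxford-iiit-pets")
model = AutoModelForImageClassification.from_pretrained("Mievst/vit-base-oxford-iiit-pets")

# Preprocess a pet photo (placeholder path) and run a forward pass
image = Image.open("path_to_pet_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the most likely breed and its softmax probability
probabilities = logits.softmax(dim=-1)[0]
predicted_id = int(probabilities.argmax())
print(model.config.id2label[predicted_id], float(probabilities[predicted_id]))
```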
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Anupam251272/finetuned-indian-food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-indian-food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset. It achieves the following results on the evaluation set: - Loss: 0.2180 - Accuracy: 0.9490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.0379 | 0.3003 | 100 | 0.9497 | 0.8533 | | 0.8471 | 0.6006 | 200 | 0.6507 | 0.8597 | | 0.5657 | 0.9009 | 300 | 0.5872 | 0.8512 | | 0.5011 | 1.2012 | 400 | 0.4549 | 0.8842 | | 0.3625 | 1.5015 | 500 | 0.4718 | 0.8725 | | 0.5228 | 1.8018 | 600 | 0.3749 | 0.8990 | | 0.2337 | 2.1021 | 700 | 0.3502 | 0.9107 | | 0.234 | 2.4024 | 800 | 0.3021 | 0.9267 | | 0.241 | 2.7027 | 900 | 0.2905 | 0.9245 | | 0.1572 | 3.0030 | 1000 | 0.2573 | 0.9426 | | 0.1522 | 3.3033 | 1100 | 0.2363 | 0.9384 | | 0.1375 | 3.6036 | 1200 | 0.2256 | 0.9479 | | 0.1089 | 3.9039 | 1300 | 0.2180 | 0.9490 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
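## Example inference (sketch)

As with the card above, usage details are not provided, so here is a minimal sketch using the `pipeline` API; the repository id matches this card and the image paths are placeholders.

```python
from transformers import pipeline

# Load the fine-tuned Indian-food classifier
pipe = pipeline("image-classification", model="Anupam251272/finetuned-indian-food")

# The pipeline accepts a single image or a list of images (placeholder paths)
results = pipe(["dish_1.jpg", "dish_2.jpg"])
for image_results in results:
    # Each entry is a list of {label, score} dicts sorted by score
    print(image_results[0]["label"], round(image_results[0]["score"], 3))
```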
[ "burger", "butter_naan", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi" ]
tanvibalsara18/DiaCareChatbot
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3" ]
prithivMLmods/Deepfake-QualityAssess-88M-ONNX
![6.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/2WT2C1XXTzUJRU2kFDLUG.png) # **Deepfake-QualityAssess-88M-ONNX** Deepfake-QualityAssess-88M is an image classification model for quality assessment of good and bad quality deepfakes. It is based on Google's ViT model (`google/vit-base-patch32-224-in21k`). A reasonable number of training samples were used to achieve good efficiency in the final training process and its efficiency metrics. Since this task involves classifying deepfake images with varying quality levels, the model was trained accordingly. Future improvements will be made based on the complexity of the task. ```python id2label: { "0": "Issue In Deepfake", "1": "High Quality Deepfake" } ``` ```python Classification report: precision recall f1-score support Issue In Deepfake 0.7560 0.7467 0.7513 1500 High Quality Deepfake 0.7365 0.7473 0.7418 1500 accuracy 0.7467 3000 macro avg 0.7463 0.7470 0.7465 3000 weighted avg 0.7463 0.7470 0.7465 3000 ``` # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-QualityAssess-88M", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess-88M") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess-88M") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations of Deepfake-QualityAssess-88M** 1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts. 2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance. 3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification. 4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training. 5. **Potential False Positives/Negatives** – The model may sometimes misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications. 6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made. 7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake. # **Intended Use of Deepfake-QualityAssess-88M** - **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality. 
- **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models. - **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis. - **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions. - **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
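# **Inference with ONNX Runtime via Optimum (sketch)**

The snippets above run the PyTorch checkpoint; the following is a hedged sketch of ONNX-based inference using 🤗 Optimum. It assumes the `optimum[onnxruntime]` package is installed and uses `export=True` to convert the PyTorch weights on the fly, rather than relying on any particular ONNX file layout in this repository.

```python
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import AutoImageProcessor
from PIL import Image

# Export the PyTorch checkpoint to ONNX on the fly and run it with ONNX Runtime
model = ORTModelForImageClassification.from_pretrained(
    "prithivMLmods/Deepfake-QualityAssess-88M", export=True
)
processor = AutoImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess-88M")

# Preprocess an image (placeholder path) and classify it
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```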
[ "issue in deepfake", "high quality deepfake" ]
prithivMLmods/Deepfake-QualityAssess-85M-ONNX
![5.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/4XTyNBg6dGiKXQuFlPoXn.png) # **Deepfake-QualityAssess-85M-ONNX** Deepfake-QualityAssess-85M is an image classification model for quality assessment of good and bad quality deepfakes. It is based on Google's ViT model (`google/vit-base-patch16-224-in21k`). A reasonable number of training samples were used to achieve good efficiency in the final training process and its efficiency metrics. Since this task involves classifying deepfake images with varying quality levels, the model was trained accordingly. Future improvements will be made based on the complexity of the task. ```python id2label: { "0": "Issue In Deepfake", "1": "High Quality Deepfake" } ``` ```python Classification report: precision recall f1-score support Issue In Deepfake 0.7962 0.8067 0.8014 1500 High Quality Deepfake 0.7877 0.7767 0.7822 1500 accuracy 0.7940 3000 macro avg 0.7920 0.7917 0.7918 3000 weighted avg 0.7920 0.7917 0.7918 3000 ``` # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-QualityAssess-85M", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess-85M") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess-85M") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations of Deepfake-QualityAssess-85M** 1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts. 2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance. 3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification. 4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training. 5. **Potential False Positives/Negatives** – The model may sometimes misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications. 6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made. 7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake. # **Intended Use of Deepfake-QualityAssess-85M** - **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality. 
- **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models. - **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis. - **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions. - **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
[ "issue in deepfake", "high quality deepfake" ]
edm-research/CV-Model
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "benign", "malignant", "normal" ]
prithivMLmods/Deepfake-Detection-Exp-02-22-ONNX
![2.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/UJytx7u0VTx_SAz49640L.png) # **Deepfake-Detection-Exp-02-22-ONNX** Deepfake-Detection-Exp-02-22 is a ViT-based image classification model trained on a minimalist, high-quality dataset to distinguish between deepfake and real images. The model is based on Google's **`google/vit-base-patch32-224-in21k`**. ```python Mapping of IDs to Labels: {0: 'Deepfake', 1: 'Real'} Mapping of Labels to IDs: {'Deepfake': 0, 'Real': 1} ``` ```python Classification report: precision recall f1-score support Deepfake 0.9833 0.9187 0.9499 1600 Real 0.9238 0.9844 0.9531 1600 accuracy 0.9516 3200 macro avg 0.9535 0.9516 0.9515 3200 weighted avg 0.9535 0.9516 0.9515 3200 ``` ![download (1).png](https://cdn-uploads.huggingface.co/production/uploads/6720824b15b6282a2464fc58/-25Oh3wureg_MI4nvjh7w.png) # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-Detection-Exp-02-22", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-22") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-22") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations** 1. **Generalization Issues** – The model may not perform well on deepfake images generated by unseen or novel deepfake techniques. 2. **Dataset Bias** – The training data might not cover all variations of real and fake images, leading to biased predictions. 3. **Resolution Constraints** – Since the model is based on `vit-base-patch32-224-in21k`, it is optimized for 224x224 image resolution, which may limit its effectiveness on high-resolution images. 4. **Adversarial Vulnerabilities** – The model may be susceptible to adversarial attacks designed to fool vision transformers. 5. **False Positives & False Negatives** – The model may occasionally misclassify real images as deepfake and vice versa, requiring human validation in critical applications. # **Intended Use** 1. **Deepfake Detection** – Designed for identifying deepfake images in media, social platforms, and forensic analysis. 2. **Research & Development** – Useful for researchers studying deepfake detection and improving ViT-based classification models. 3. **Content Moderation** – Can be integrated into platforms to detect and flag manipulated images. 4. **Security & Forensics** – Assists in cybersecurity applications where verifying the authenticity of images is crucial. 5. **Educational Purposes** – Can be used in training AI practitioners and students in the field of computer vision and deepfake detection.
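# **Inference with ONNX Runtime (sketch)**

The snippets above use the PyTorch checkpoint; below is a hedged sketch of running an ONNX export directly with `onnxruntime`. The local filename `model.onnx` is an assumption; point it at an actual ONNX export of this model (for example, one produced with 🤗 Optimum).

```python
import numpy as np
import onnxruntime as ort
from PIL import Image
from transformers import ViTImageProcessor

# Preprocess with the same processor the PyTorch model uses
processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-22")
image = Image.open("path_to_image.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="np")["pixel_values"].astype(np.float32)

# Run the ONNX export (the filename below is an assumed local path)
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: pixel_values})
logits = outputs[0]

# Mapping taken from this card
id2label = {0: "Deepfake", 1: "Real"}
print(id2label[int(np.argmax(logits, axis=-1)[0])])
```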
[ "deepfake", "real" ]
prithivMLmods/Deepfake-Detection-Exp-02-21-ONNX
![3.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/PxdwhpADOqXaQ5hXXCPss.png) # **Deepfake-Detection-Exp-02-21-ONNX** Deepfake-Detection-Exp-02-21 is a ViT-based image classification model trained on a minimalist, high-quality dataset to distinguish between deepfake and real images. The model is based on Google's **`google/vit-base-patch16-224-in21k`**. ```python Mapping of IDs to Labels: {0: 'Deepfake', 1: 'Real'} Mapping of Labels to IDs: {'Deepfake': 0, 'Real': 1} ``` ```py Classification report: precision recall f1-score support Deepfake 0.9962 0.9806 0.9883 1600 Real 0.9809 0.9962 0.9885 1600 accuracy 0.9884 3200 macro avg 0.9886 0.9884 0.9884 3200 weighted avg 0.9886 0.9884 0.9884 3200 ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/6720824b15b6282a2464fc58/0ISoyjxLs-zpqt9Gv4YRo.png) # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-Detection-Exp-02-21", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-21") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-21") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations** 1. **Generalization Issues** – The model may not perform well on deepfake images generated by unseen or novel deepfake techniques. 2. **Dataset Bias** – The training data might not cover all variations of real and fake images, leading to biased predictions. 3. **Resolution Constraints** – Since the model is based on `vit-base-patch16-224-in21k`, it is optimized for 224x224 image resolution, which may limit its effectiveness on high-resolution images. 4. **Adversarial Vulnerabilities** – The model may be susceptible to adversarial attacks designed to fool vision transformers. 5. **False Positives & False Negatives** – The model may occasionally misclassify real images as deepfake and vice versa, requiring human validation in critical applications. # **Intended Use** 1. **Deepfake Detection** – Designed for identifying deepfake images in media, social platforms, and forensic analysis. 2. **Research & Development** – Useful for researchers studying deepfake detection and improving ViT-based classification models. 3. **Content Moderation** – Can be integrated into platforms to detect and flag manipulated images. 4. **Security & Forensics** – Assists in cybersecurity applications where verifying the authenticity of images is crucial. 5. **Educational Purposes** – Can be used in training AI practitioners and students in the field of computer vision and deepfake detection.
[ "deepfake", "real" ]
cdstelly/vit-xray-pneumonia-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-xray-pneumonia-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1222 - Accuracy: 0.9614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.073 | 0.9882 | 63 | 0.1030 | 0.9639 | | 0.0719 | 1.9882 | 126 | 0.1485 | 0.9519 | | 0.0813 | 2.9882 | 189 | 0.1420 | 0.9494 | | 0.0602 | 3.9882 | 252 | 0.0957 | 0.9674 | | 0.0688 | 4.9882 | 315 | 0.1031 | 0.9665 | | 0.0664 | 5.9882 | 378 | 0.1075 | 0.9657 | | 0.0525 | 6.9882 | 441 | 0.1222 | 0.9614 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
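## Example inference (sketch)

The card above omits usage instructions, so here is a minimal sketch with the `pipeline` API; the repository id matches this card and the image path is a placeholder for a chest X-ray image.

```python
from transformers import pipeline

# Load the fine-tuned chest X-ray classifier
pipe = pipeline("image-classification", model="cdstelly/vit-xray-pneumonia-classification")

# Score a single chest X-ray (placeholder path); top_k=2 returns both class scores
for result in pipe("chest_xray.jpg", top_k=2):
    print(f"{result['label']}: {result['score']:.4f}")
```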
[ "normal", "pneumonia" ]
akridge/noaa-esd-coral-bleaching-vit-classifier-v1
# NOAA ESD Coral Bleaching ViT Based Patch Image Classifier ## 📝 Model Overview This model was trained to **classify coral bleaching conditions** using the Vision Transformer (ViT) architecture on imagery from **NOAA-PIFSC Ecosystem Sciences Division (ESD)** Coral Bleaching Classifier dataset. The dataset includes **human-annotated points** indicating healthy and bleached coral, enabling classification for marine ecosystem monitoring. - **Model Architecture**: Google/ViT Base Patch16 224 - **Task**: Coral Bleaching Image Classification - **Classes**: - `CORAL`: Healthy coral - `CORAL_BL`: Bleached coral ![results](./00_example.png) ![results](./01_example.png) ## 📊 Model Weights - Download the **TorchScript model** [here](./noaa-esd-coral-bleaching-vit-classifier-v1.pt) - Download the **ONNX model** [here](./noaa-esd-coral-bleaching-vit-classifier-v1.onnx) - Access the **base model folder** [here](./model.safetensors) ## 📅 Dataset & Annotations - **Dataset**: [NOAA ESD Coral Bleaching Classifier Dataset](https://huggingface.co/datasets/akridge/NOAA-ESD-CORAL-Bleaching-Dataset) - **Annotation Method**: - Points annotated by human experts using both **randomly generated** and **targeted sampling methods**. | Split | Images | Description | |------------|---------|------------------------------------| | Training | 7,292 | Used for model training. | | Validation | 1,562 | Used for model hyperparameter tuning and early stopping. | | Test | 1,565 | Used for final model evaluation. | ## 📚 Training Configuration - **Dataset**: NOAA ESD Coral Bleaching Classifier Dataset - **Training/Validation Split**: 70% training, 15% validation, 15% testing - **Epochs**: 100 - **Batch Size**: 16 - **Learning Rate**: 3e-4 - **Image Size**: 224x224 (consistent with ViT input requirements) ## 📈 Results and Metrics The model was evaluated using a withheld test set. The predictions were compared against human-labeled points for validation. 
📄 Classification Report: | |precision |recall|f1-score| |-|-|-|-| |CORAL| 0.86| 0.91| 0.88| |CORAL_BL| 0.84| 0.75| 0.79| | | | | | | |accuracy| | | 0.85| |macro avg| 0.85| 0.83| 0.84| |weighted avg| 0.85| 0.85| 0.85| ![results](./02_example.png) ## 🚀 How to Use the Model ### 🔗 Load with Transformers ``` # Load model directly from transformers import AutoImageProcessor, AutoModelForImageClassification processor = AutoImageProcessor.from_pretrained("akridge/noaa-esd-coral-bleaching-vit-classifier-v1") model = AutoModelForImageClassification.from_pretrained("akridge/noaa-esd-coral-bleaching-vit-classifier-v1") ``` ``` import torch from transformers import ViTForImageClassification, AutoImageProcessor from PIL import Image # ✅ Load the model and processor model = ViTForImageClassification.from_pretrained("akridge/noaa-esd-coral-bleaching-vit-classifier-v1") processor = AutoImageProcessor.from_pretrained("akridge/noaa-esd-coral-bleaching-vit-classifier-v1") # ✅ Load and process image image = Image.open("your_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # ✅ Perform inference with torch.no_grad(): outputs = model(**inputs) prediction = outputs.logits.argmax(-1).item() id2label = model.config.id2label print(f"Prediction: {prediction} ({id2label[prediction]})") ``` ### 🔗 Use TorchScript ``` import torch # ✅ Load the TorchScript model scripted_model = torch.jit.load("noaa-esd-coral-bleaching-vit-classifier-v1.pt") scripted_model.eval() # ✅ Inference with TorchScript model with torch.no_grad(): scripted_output = scripted_model(inputs["pixel_values"]) scripted_prediction = scripted_output.argmax(-1).item() print(f"TorchScript Prediction: {id2label[scripted_prediction]}") ``` ### 🔗 Use ONNX ``` import onnxruntime as ort # ✅ Load the ONNX model onnx_model = "noaa-esd-coral-bleaching-vit-classifier-v1.onnx" ort_session = ort.InferenceSession(onnx_model) # ✅ Prepare ONNX input onnx_inputs = {"input": inputs["pixel_values"].numpy()} # ✅ Run inference with ONNX onnx_outputs = ort_session.run(None, onnx_inputs) onnx_prediction = onnx_outputs[0].argmax(axis=1)[0] print(f"ONNX Prediction: {id2label[onnx_prediction]}") ``` ### Intended Use - **Monitoring coral reef health** through automated image classification. - **Scientific research** in marine biology and ecosystem science. ### Limitations - The model was trained on the NOAA ESD dataset; it may not generalize to different regions or unrepresented coral species. - Images with **low resolution** or **poor lighting** may lead to incorrect predictions. - **Vertical or flipped images** should be processed with appropriate orientation adjustments. ### Ethical Considerations - Predictions should not replace expert human validation in critical conservation decisions. ## Metadata / Citation **Citation:** Pacific Islands Fisheries Science Center (2025). Ecosystem Sciences Division (ESD); **Related Metadata:** - [Benthic Cover Derived from Analysis of Benthic Images (2019)](https://www.fisheries.noaa.gov/inport/item/59195) - [NOAA ESD Coral Bleaching Classifier Annotations Data Dictionary](https://www.fisheries.noaa.gov/inport/item/68138) - [Developing a semi-automated CoralNet Bleaching Classifier: annotations and imagery from survey sites across the Hawaiian Archipelago between 2014 and 2019](https://www.fisheries.noaa.gov/inport/item/67962) #### Disclaimer This repository is a scientific product and is not official communication of the National Oceanic and Atmospheric Administration, or the United States Department of Commerce. 
All NOAA project content is provided on an ‘as is’ basis and the user assumes responsibility for its use. Any claims against the Department of Commerce or Department of Commerce bureaus stemming from the use of this project will be governed by all applicable Federal law. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by the Department of Commerce. The Department of Commerce seal and logo, or the seal and logo of a DOC bureau, shall not be used in any manner to imply endorsement of any commercial product or activity by DOC or the United States Government.
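
#### Handling image orientation
The Limitations section above notes that vertical or flipped images should be re-oriented before inference. The snippet below is a minimal sketch of one way to do this: it assumes Pillow's `ImageOps.exif_transpose` is available (Pillow ≥ 6.0) and uses `your_image.jpg` as a placeholder path, following the earlier examples.

```
import torch
from PIL import Image, ImageOps
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "akridge/noaa-esd-coral-bleaching-vit-classifier-v1"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Apply the EXIF orientation tag so vertical or flipped photos are rotated
# to their natural orientation before preprocessing.
image = Image.open("your_image.jpg")
image = ImageOps.exif_transpose(image).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

prediction = logits.argmax(-1).item()
print(f"Prediction: {model.config.id2label[prediction]}")
```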
[ "coral", "coral_bl" ]
csprrrrk/resnet-18
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "tench, tinca tinca", "goldfish, carassius auratus", "great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias", "tiger shark, galeocerdo cuvieri", "hammerhead, hammerhead shark", "electric ray, crampfish, numbfish, torpedo", "stingray", "cock", "hen", "ostrich, struthio camelus", "brambling, fringilla montifringilla", "goldfinch, carduelis carduelis", "house finch, linnet, carpodacus mexicanus", "junco, snowbird", "indigo bunting, indigo finch, indigo bird, passerina cyanea", "robin, american robin, turdus migratorius", "bulbul", "jay", "magpie", "chickadee", "water ouzel, dipper", "kite", "bald eagle, american eagle, haliaeetus leucocephalus", "vulture", "great grey owl, great gray owl, strix nebulosa", "european fire salamander, salamandra salamandra", "common newt, triturus vulgaris", "eft", "spotted salamander, ambystoma maculatum", "axolotl, mud puppy, ambystoma mexicanum", "bullfrog, rana catesbeiana", "tree frog, tree-frog", "tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui", "loggerhead, loggerhead turtle, caretta caretta", "leatherback turtle, leatherback, leathery turtle, dermochelys coriacea", "mud turtle", "terrapin", "box turtle, box tortoise", "banded gecko", "common iguana, iguana, iguana iguana", "american chameleon, anole, anolis carolinensis", "whiptail, whiptail lizard", "agama", "frilled lizard, chlamydosaurus kingi", "alligator lizard", "gila monster, heloderma suspectum", "green lizard, lacerta viridis", "african chameleon, chamaeleo chamaeleon", "komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis", "african crocodile, nile crocodile, crocodylus niloticus", "american alligator, alligator mississipiensis", "triceratops", "thunder snake, worm snake, carphophis amoenus", "ringneck snake, ring-necked snake, ring snake", "hognose snake, puff adder, sand viper", "green snake, grass snake", "king snake, kingsnake", "garter snake, grass snake", "water snake", "vine snake", "night snake, hypsiglena torquata", "boa constrictor, constrictor constrictor", "rock python, rock snake, python sebae", "indian cobra, naja naja", "green mamba", "sea snake", "horned viper, cerastes, sand viper, horned asp, cerastes cornutus", "diamondback, diamondback rattlesnake, crotalus adamanteus", "sidewinder, horned rattlesnake, crotalus cerastes", "trilobite", "harvestman, daddy longlegs, phalangium opilio", "scorpion", "black and gold garden spider, argiope aurantia", "barn spider, araneus cavaticus", "garden spider, aranea diademata", "black widow, latrodectus mactans", "tarantula", "wolf spider, hunting spider", "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse, partridge, bonasa umbellus", "prairie chicken, prairie grouse, prairie fowl", "peacock", "quail", "partridge", "african grey, african gray, psittacus erithacus", "macaw", "sulphur-crested cockatoo, kakatoe galerita, cacatua galerita", "lorikeet", "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "drake", "red-breasted merganser, mergus serrator", "goose", "black swan, cygnus atratus", "tusker", "echidna, spiny anteater, anteater", "platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus", "wallaby, brush kangaroo", "koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus", "wombat", "jellyfish", "sea anemone, anemone", "brain coral", "flatworm, platyhelminth", "nematode, nematode worm, roundworm", "conch", "snail", "slug", "sea slug, nudibranch", "chiton, coat-of-mail shell, sea 
cradle, polyplacophore", "chambered nautilus, pearly nautilus, nautilus", "dungeness crab, cancer magister", "rock crab, cancer irroratus", "fiddler crab", "king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica", "american lobster, northern lobster, maine lobster, homarus americanus", "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "crayfish, crawfish, crawdad, crawdaddy", "hermit crab", "isopod", "white stork, ciconia ciconia", "black stork, ciconia nigra", "spoonbill", "flamingo", "little blue heron, egretta caerulea", "american egret, great white heron, egretta albus", "bittern", "crane", "limpkin, aramus pictus", "european gallinule, porphyrio porphyrio", "american coot, marsh hen, mud hen, water hen, fulica americana", "bustard", "ruddy turnstone, arenaria interpres", "red-backed sandpiper, dunlin, erolia alpina", "redshank, tringa totanus", "dowitcher", "oystercatcher, oyster catcher", "pelican", "king penguin, aptenodytes patagonica", "albatross, mollymawk", "grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus", "killer whale, killer, orca, grampus, sea wolf, orcinus orca", "dugong, dugong dugon", "sea lion", "chihuahua", "japanese spaniel", "maltese dog, maltese terrier, maltese", "pekinese, pekingese, peke", "shih-tzu", "blenheim spaniel", "papillon", "toy terrier", "rhodesian ridgeback", "afghan hound, afghan", "basset, basset hound", "beagle", "bloodhound, sleuthhound", "bluetick", "black-and-tan coonhound", "walker hound, walker foxhound", "english foxhound", "redbone", "borzoi, russian wolfhound", "irish wolfhound", "italian greyhound", "whippet", "ibizan hound, ibizan podenco", "norwegian elkhound, elkhound", "otterhound, otter hound", "saluki, gazelle hound", "scottish deerhound, deerhound", "weimaraner", "staffordshire bullterrier, staffordshire bull terrier", "american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier", "bedlington terrier", "border terrier", "kerry blue terrier", "irish terrier", "norfolk terrier", "norwich terrier", "yorkshire terrier", "wire-haired fox terrier", "lakeland terrier", "sealyham terrier, sealyham", "airedale, airedale terrier", "cairn, cairn terrier", "australian terrier", "dandie dinmont, dandie dinmont terrier", "boston bull, boston terrier", "miniature schnauzer", "giant schnauzer", "standard schnauzer", "scotch terrier, scottish terrier, scottie", "tibetan terrier, chrysanthemum dog", "silky terrier, sydney silky", "soft-coated wheaten terrier", "west highland white terrier", "lhasa, lhasa apso", "flat-coated retriever", "curly-coated retriever", "golden retriever", "labrador retriever", "chesapeake bay retriever", "german short-haired pointer", "vizsla, hungarian pointer", "english setter", "irish setter, red setter", "gordon setter", "brittany spaniel", "clumber, clumber spaniel", "english springer, english springer spaniel", "welsh springer spaniel", "cocker spaniel, english cocker spaniel, cocker", "sussex spaniel", "irish water spaniel", "kuvasz", "schipperke", "groenendael", "malinois", "briard", "kelpie", "komondor", "old english sheepdog, bobtail", "shetland sheepdog, shetland sheep dog, shetland", "collie", "border collie", "bouvier des flandres, bouviers des flandres", "rottweiler", "german shepherd, german shepherd dog, german police dog, alsatian", "doberman, doberman pinscher", "miniature pinscher", "greater swiss mountain dog", "bernese mountain dog", "appenzeller", "entlebucher", "boxer", "bull 
mastiff", "tibetan mastiff", "french bulldog", "great dane", "saint bernard, st bernard", "eskimo dog, husky", "malamute, malemute, alaskan malamute", "siberian husky", "dalmatian, coach dog, carriage dog", "affenpinscher, monkey pinscher, monkey dog", "basenji", "pug, pug-dog", "leonberg", "newfoundland, newfoundland dog", "great pyrenees", "samoyed, samoyede", "pomeranian", "chow, chow chow", "keeshond", "brabancon griffon", "pembroke, pembroke welsh corgi", "cardigan, cardigan welsh corgi", "toy poodle", "miniature poodle", "standard poodle", "mexican hairless", "timber wolf, grey wolf, gray wolf, canis lupus", "white wolf, arctic wolf, canis lupus tundrarum", "red wolf, maned wolf, canis rufus, canis niger", "coyote, prairie wolf, brush wolf, canis latrans", "dingo, warrigal, warragal, canis dingo", "dhole, cuon alpinus", "african hunting dog, hyena dog, cape hunting dog, lycaon pictus", "hyena, hyaena", "red fox, vulpes vulpes", "kit fox, vulpes macrotis", "arctic fox, white fox, alopex lagopus", "grey fox, gray fox, urocyon cinereoargenteus", "tabby, tabby cat", "tiger cat", "persian cat", "siamese cat, siamese", "egyptian cat", "cougar, puma, catamount, mountain lion, painter, panther, felis concolor", "lynx, catamount", "leopard, panthera pardus", "snow leopard, ounce, panthera uncia", "jaguar, panther, panthera onca, felis onca", "lion, king of beasts, panthera leo", "tiger, panthera tigris", "cheetah, chetah, acinonyx jubatus", "brown bear, bruin, ursus arctos", "american black bear, black bear, ursus americanus, euarctos americanus", "ice bear, polar bear, ursus maritimus, thalarctos maritimus", "sloth bear, melursus ursinus, ursus ursinus", "mongoose", "meerkat, mierkat", "tiger beetle", "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "ground beetle, carabid beetle", "long-horned beetle, longicorn, longicorn beetle", "leaf beetle, chrysomelid", "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant, emmet, pismire", "grasshopper, hopper", "cricket", "walking stick, walkingstick, stick insect", "cockroach, roach", "mantis, mantid", "cicada, cicala", "leafhopper", "lacewing, lacewing fly", "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "damselfly", "admiral", "ringlet, ringlet butterfly", "monarch, monarch butterfly, milkweed butterfly, danaus plexippus", "cabbage butterfly", "sulphur butterfly, sulfur butterfly", "lycaenid, lycaenid butterfly", "starfish, sea star", "sea urchin", "sea cucumber, holothurian", "wood rabbit, cottontail, cottontail rabbit", "hare", "angora, angora rabbit", "hamster", "porcupine, hedgehog", "fox squirrel, eastern fox squirrel, sciurus niger", "marmot", "beaver", "guinea pig, cavia cobaya", "sorrel", "zebra", "hog, pig, grunter, squealer, sus scrofa", "wild boar, boar, sus scrofa", "warthog", "hippopotamus, hippo, river horse, hippopotamus amphibius", "ox", "water buffalo, water ox, asiatic buffalo, bubalus bubalis", "bison", "ram, tup", "bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis", "ibex, capra ibex", "hartebeest", "impala, aepyceros melampus", "gazelle", "arabian camel, dromedary, camelus dromedarius", "llama", "weasel", "mink", "polecat, fitch, foulmart, foumart, mustela putorius", "black-footed ferret, ferret, mustela nigripes", "otter", "skunk, polecat, wood pussy", "badger", "armadillo", "three-toed sloth, ai, bradypus tridactylus", "orangutan, orang, orangutang, pongo pygmaeus", "gorilla, 
gorilla gorilla", "chimpanzee, chimp, pan troglodytes", "gibbon, hylobates lar", "siamang, hylobates syndactylus, symphalangus syndactylus", "guenon, guenon monkey", "patas, hussar monkey, erythrocebus patas", "baboon", "macaque", "langur", "colobus, colobus monkey", "proboscis monkey, nasalis larvatus", "marmoset", "capuchin, ringtail, cebus capucinus", "howler monkey, howler", "titi, titi monkey", "spider monkey, ateles geoffroyi", "squirrel monkey, saimiri sciureus", "madagascar cat, ring-tailed lemur, lemur catta", "indri, indris, indri indri, indri brevicaudatus", "indian elephant, elephas maximus", "african elephant, loxodonta africana", "lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens", "giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca", "barracouta, snoek", "eel", "coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch", "rock beauty, holocanthus tricolor", "anemone fish", "sturgeon", "gar, garfish, garpike, billfish, lepisosteus osseus", "lionfish", "puffer, pufferfish, blowfish, globefish", "abacus", "abaya", "academic gown, academic robe, judge's robe", "accordion, piano accordion, squeeze box", "acoustic guitar", "aircraft carrier, carrier, flattop, attack aircraft carrier", "airliner", "airship, dirigible", "altar", "ambulance", "amphibian, amphibious vehicle", "analog clock", "apiary, bee house", "apron", "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "assault rifle, assault gun", "backpack, back pack, knapsack, packsack, rucksack, haversack", "bakery, bakeshop, bakehouse", "balance beam, beam", "balloon", "ballpoint, ballpoint pen, ballpen, biro", "band aid", "banjo", "bannister, banister, balustrade, balusters, handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel, cask", "barrow, garden cart, lawn cart, wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "bathing cap, swimming cap", "bath towel", "bathtub, bathing tub, bath, tub", "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "beacon, lighthouse, beacon light, pharos", "beaker", "bearskin, busby, shako", "beer bottle", "beer glass", "bell cote, bell cot", "bib", "bicycle-built-for-two, tandem bicycle, tandem", "bikini, two-piece", "binder, ring-binder", "binoculars, field glasses, opera glasses", "birdhouse", "boathouse", "bobsled, bobsleigh, bob", "bolo tie, bolo, bola tie, bola", "bonnet, poke bonnet", "bookcase", "bookshop, bookstore, bookstall", "bottlecap", "bow", "bow tie, bow-tie, bowtie", "brass, memorial tablet, plaque", "brassiere, bra, bandeau", "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "breastplate, aegis, egis", "broom", "bucket, pail", "buckle", "bulletproof vest", "bullet train, bullet", "butcher shop, meat market", "cab, hack, taxi, taxicab", "caldron, cauldron", "candle, taper, wax light", "cannon", "canoe", "can opener, tin opener", "cardigan", "car mirror", "carousel, carrousel, merry-go-round, roundabout, whirligig", "carpenter's kit, tool kit", "carton", "car wheel", "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm", "cassette", "cassette player", "castle", "catamaran", "cd player", "cello, violoncello", "cellular telephone, cellular phone, cellphone, cell, mobile phone", "chain", "chainlink fence", "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "chain saw, chainsaw", "chest", "chiffonier, 
commode", "chime, bell, gong", "china cabinet, china closet", "christmas stocking", "church, church building", "cinema, movie theater, movie theatre, movie house, picture palace", "cleaver, meat cleaver, chopper", "cliff dwelling", "cloak", "clog, geta, patten, sabot", "cocktail shaker", "coffee mug", "coffeepot", "coil, spiral, volute, whorl, helix", "combination lock", "computer keyboard, keypad", "confectionery, confectionary, candy store", "container ship, containership, container vessel", "convertible", "corkscrew, bottle screw", "cornet, horn, trumpet, trump", "cowboy boot", "cowboy hat, ten-gallon hat", "cradle", "crane", "crash helmet", "crate", "crib, cot", "crock pot", "croquet ball", "crutch", "cuirass", "dam, dike, dyke", "desk", "desktop computer", "dial telephone, dial phone", "diaper, nappy, napkin", "digital clock", "digital watch", "dining table, board", "dishrag, dishcloth", "dishwasher, dish washer, dishwashing machine", "disk brake, disc brake", "dock, dockage, docking facility", "dogsled, dog sled, dog sleigh", "dome", "doormat, welcome mat", "drilling platform, offshore rig", "drum, membranophone, tympan", "drumstick", "dumbbell", "dutch oven", "electric fan, blower", "electric guitar", "electric locomotive", "entertainment center", "envelope", "espresso maker", "face powder", "feather boa, boa", "file, file cabinet, filing cabinet", "fireboat", "fire engine, fire truck", "fire screen, fireguard", "flagpole, flagstaff", "flute, transverse flute", "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster", "freight car", "french horn, horn", "frying pan, frypan, skillet", "fur coat", "garbage truck, dustcart", "gasmask, respirator, gas helmet", "gas pump, gasoline pump, petrol pump, island dispenser", "goblet", "go-kart", "golf ball", "golfcart, golf cart", "gondola", "gong, tam-tam", "gown", "grand piano, grand", "greenhouse, nursery, glasshouse", "grille, radiator grille", "grocery store, grocery, food market, market", "guillotine", "hair slide", "hair spray", "half track", "hammer", "hamper", "hand blower, blow dryer, blow drier, hair dryer, hair drier", "hand-held computer, hand-held microcomputer", "handkerchief, hankie, hanky, hankey", "hard disc, hard disk, fixed disk", "harmonica, mouth organ, harp, mouth harp", "harp", "harvester, reaper", "hatchet", "holster", "home theater, home theatre", "honeycomb", "hook, claw", "hoopskirt, crinoline", "horizontal bar, high bar", "horse cart, horse-cart", "hourglass", "ipod", "iron, smoothing iron", "jack-o'-lantern", "jean, blue jean, denim", "jeep, landrover", "jersey, t-shirt, tee shirt", "jigsaw puzzle", "jinrikisha, ricksha, rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat, laboratory coat", "ladle", "lampshade, lamp shade", "laptop, laptop computer", "lawn mower, mower", "lens cap, lens cover", "letter opener, paper knife, paperknife", "library", "lifeboat", "lighter, light, igniter, ignitor", "limousine, limo", "liner, ocean liner", "lipstick, lip rouge", "loafer", "lotion", "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "loupe, jeweler's loupe", "lumbermill, sawmill", "magnetic compass", "mailbag, postbag", "mailbox, letter box", "maillot", "maillot, tank suit", "manhole cover", "maraca", "marimba, xylophone", "mask", "matchstick", "maypole", "maze, labyrinth", "measuring cup", "medicine chest, medicine cabinet", "megalith, megalithic structure", "microphone, mike", "microwave, microwave oven", "military uniform", "milk can", "minibus", 
"miniskirt, mini", "minivan", "missile", "mitten", "mixing bowl", "mobile home, manufactured home", "model t", "modem", "monastery", "monitor", "moped", "mortar", "mortarboard", "mosque", "mosquito net", "motor scooter, scooter", "mountain bike, all-terrain bike, off-roader", "mountain tent", "mouse, computer mouse", "mousetrap", "moving van", "muzzle", "nail", "neck brace", "necklace", "nipple", "notebook, notebook computer", "obelisk", "oboe, hautboy, hautbois", "ocarina, sweet potato", "odometer, hodometer, mileometer, milometer", "oil filter", "organ, pipe organ", "oscilloscope, scope, cathode-ray oscilloscope, cro", "overskirt", "oxcart", "oxygen mask", "packet", "paddle, boat paddle", "paddlewheel, paddle wheel", "padlock", "paintbrush", "pajama, pyjama, pj's, jammies", "palace", "panpipe, pandean pipe, syrinx", "paper towel", "parachute, chute", "parallel bars, bars", "park bench", "parking meter", "passenger car, coach, carriage", "patio, terrace", "pay-phone, pay-station", "pedestal, plinth, footstall", "pencil box, pencil case", "pencil sharpener", "perfume, essence", "petri dish", "photocopier", "pick, plectrum, plectron", "pickelhaube", "picket fence, paling", "pickup, pickup truck", "pier", "piggy bank, penny bank", "pill bottle", "pillow", "ping-pong ball", "pinwheel", "pirate, pirate ship", "pitcher, ewer", "plane, carpenter's plane, woodworking plane", "planetarium", "plastic bag", "plate rack", "plow, plough", "plunger, plumber's helper", "polaroid camera, polaroid land camera", "pole", "police van, police wagon, paddy wagon, patrol wagon, wagon, black maria", "poncho", "pool table, billiard table, snooker table", "pop bottle, soda bottle", "pot, flowerpot", "potter's wheel", "power drill", "prayer rug, prayer mat", "printer", "prison, prison house", "projectile, missile", "projector", "puck, hockey puck", "punching bag, punch bag, punching ball, punchball", "purse", "quill, quill pen", "quilt, comforter, comfort, puff", "racer, race car, racing car", "racket, racquet", "radiator", "radio, wireless", "radio telescope, radio reflector", "rain barrel", "recreational vehicle, rv, r.v.", "reel", "reflex camera", "refrigerator, icebox", "remote control, remote", "restaurant, eating house, eating place, eatery", "revolver, six-gun, six-shooter", "rifle", "rocking chair, rocker", "rotisserie", "rubber eraser, rubber, pencil eraser", "rugby ball", "rule, ruler", "running shoe", "safe", "safety pin", "saltshaker, salt shaker", "sandal", "sarong", "sax, saxophone", "scabbard", "scale, weighing machine", "school bus", "schooner", "scoreboard", "screen, crt screen", "screw", "screwdriver", "seat belt, seatbelt", "sewing machine", "shield, buckler", "shoe shop, shoe-shop, shoe store", "shoji", "shopping basket", "shopping cart", "shovel", "shower cap", "shower curtain", "ski", "ski mask", "sleeping bag", "slide rule, slipstick", "sliding door", "slot, one-armed bandit", "snorkel", "snowmobile", "snowplow, snowplough", "soap dispenser", "soccer ball", "sock", "solar dish, solar collector, solar furnace", "sombrero", "soup bowl", "space bar", "space heater", "space shuttle", "spatula", "speedboat", "spider web, spider's web", "spindle", "sports car, sport car", "spotlight, spot", "stage", "steam locomotive", "steel arch bridge", "steel drum", "stethoscope", "stole", "stone wall", "stopwatch, stop watch", "stove", "strainer", "streetcar, tram, tramcar, trolley, trolley car", "stretcher", "studio couch, day bed", "stupa, tope", "submarine, pigboat, sub, u-boat", "suit, suit of clothes", 
"sundial", "sunglass", "sunglasses, dark glasses, shades", "sunscreen, sunblock, sun blocker", "suspension bridge", "swab, swob, mop", "sweatshirt", "swimming trunks, bathing trunks", "swing", "switch, electric switch, electrical switch", "syringe", "table lamp", "tank, army tank, armored combat vehicle, armoured combat vehicle", "tape player", "teapot", "teddy, teddy bear", "television, television system", "tennis ball", "thatch, thatched roof", "theater curtain, theatre curtain", "thimble", "thresher, thrasher, threshing machine", "throne", "tile roof", "toaster", "tobacco shop, tobacconist shop, tobacconist", "toilet seat", "torch", "totem pole", "tow truck, tow car, wrecker", "toyshop", "tractor", "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "tray", "trench coat", "tricycle, trike, velocipede", "trimaran", "tripod", "triumphal arch", "trolleybus, trolley coach, trackless trolley", "trombone", "tub, vat", "turnstile", "typewriter keyboard", "umbrella", "unicycle, monocycle", "upright, upright piano", "vacuum, vacuum cleaner", "vase", "vault", "velvet", "vending machine", "vestment", "viaduct", "violin, fiddle", "volleyball", "waffle iron", "wall clock", "wallet, billfold, notecase, pocketbook", "wardrobe, closet, press", "warplane, military plane", "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "washer, automatic washer, washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", "wig", "window screen", "window shade", "windsor tie", "wine bottle", "wing", "wok", "wooden spoon", "wool, woolen, woollen", "worm fence, snake fence, snake-rail fence, virginia fence", "wreck", "yawl", "yurt", "web site, website, internet site, site", "comic book", "crossword puzzle, crossword", "street sign", "traffic light, traffic signal, stoplight", "book jacket, dust cover, dust jacket, dust wrapper", "menu", "plate", "guacamole", "consomme", "hot pot, hotpot", "trifle", "ice cream, icecream", "ice lolly, lolly, lollipop, popsicle", "french loaf", "bagel, beigel", "pretzel", "cheeseburger", "hotdog, hot dog, red hot", "mashed potato", "head cabbage", "broccoli", "cauliflower", "zucchini, courgette", "spaghetti squash", "acorn squash", "butternut squash", "cucumber, cuke", "artichoke, globe artichoke", "bell pepper", "cardoon", "mushroom", "granny smith", "strawberry", "orange", "lemon", "fig", "pineapple, ananas", "banana", "jackfruit, jak, jack", "custard apple", "pomegranate", "hay", "carbonara", "chocolate sauce, chocolate syrup", "dough", "meat loaf, meatloaf", "pizza, pizza pie", "potpie", "burrito", "red wine", "espresso", "cup", "eggnog", "alp", "bubble", "cliff, drop, drop-off", "coral reef", "geyser", "lakeside, lakeshore", "promontory, headland, head, foreland", "sandbar, sand bar", "seashore, coast, seacoast, sea-coast", "valley, vale", "volcano", "ballplayer, baseball player", "groom, bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum", "corn", "acorn", "hip, rose hip, rosehip", "buckeye, horse chestnut, conker", "coral fungus", "agaric", "gyromitra", "stinkhorn, carrion fungus", "earthstar", "hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa", "bolete", "ear, spike, capitulum", "toilet tissue, toilet paper, bathroom tissue" ]
Chaimabr/dino-vits16-finetuned-BUI
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# dino-vits16-finetuned-BUI

This model is a fine-tuned version of [facebook/dino-vits16](https://huggingface.co/facebook/dino-vits16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2273
- Accuracy: 0.6667

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.8   | 2    | 0.7731          | 0.6032   |
| 0.496         | 2.0   | 5    | 0.8259          | 0.6190   |
| 0.496         | 2.8   | 7    | 0.8272          | 0.4286   |
| 0.5238        | 4.0   | 10   | 1.3584          | 0.3810   |
| 0.5238        | 4.8   | 12   | 1.3436          | 0.6508   |
| 0.7243        | 6.0   | 15   | 1.3677          | 0.3492   |
| 0.7243        | 6.8   | 17   | 0.9869          | 0.3651   |
| 0.986         | 8.0   | 20   | 0.6984          | 0.6190   |
| 0.986         | 8.8   | 22   | 0.9421          | 0.3968   |
| 0.6498        | 10.0  | 25   | 1.0109          | 0.6508   |
| 0.6498        | 10.8  | 27   | 0.7512          | 0.5238   |
| 0.5898        | 12.0  | 30   | 0.7748          | 0.6508   |
| 0.5898        | 12.8  | 32   | 0.7725          | 0.6190   |
| 0.5113        | 14.0  | 35   | 0.8037          | 0.5873   |
| 0.5113        | 14.8  | 37   | 0.8376          | 0.6190   |
| 0.4379        | 16.0  | 40   | 1.1071          | 0.6508   |
| 0.4379        | 16.8  | 42   | 1.0065          | 0.5397   |
| 0.4803        | 18.0  | 45   | 1.3034          | 0.6667   |
| 0.4803        | 18.8  | 47   | 1.0748          | 0.6349   |
| 0.6217        | 20.0  | 50   | 1.0113          | 0.4603   |
| 0.6217        | 20.8  | 52   | 1.2488          | 0.6508   |
| 0.3975        | 22.0  | 55   | 1.0682          | 0.5556   |
| 0.3975        | 22.8  | 57   | 1.5174          | 0.6349   |
| 0.3629        | 24.0  | 60   | 1.0892          | 0.4921   |
| 0.3629        | 24.8  | 62   | 1.0608          | 0.5556   |
| 0.3678        | 26.0  | 65   | 1.4989          | 0.6508   |
| 0.3678        | 26.8  | 67   | 1.0048          | 0.5714   |
| 0.362         | 28.0  | 70   | 0.9922          | 0.6032   |
| 0.362         | 28.8  | 72   | 1.0652          | 0.6190   |
| 0.2969        | 30.0  | 75   | 1.0573          | 0.5397   |
| 0.2969        | 30.8  | 77   | 1.1591          | 0.6190   |
| 0.288         | 32.0  | 80   | 1.1467          | 0.6349   |
| 0.288         | 32.8  | 82   | 1.1856          | 0.5556   |
| 0.2285        | 34.0  | 85   | 1.1599          | 0.6190   |
| 0.2285        | 34.8  | 87   | 1.2128          | 0.6190   |
| 0.239         | 36.0  | 90   | 1.1406          | 0.6032   |
| 0.239         | 36.8  | 92   | 1.1442          | 0.6349   |
| 0.2024        | 38.0  | 95   | 1.1724          | 0.6508   |
| 0.2024        | 38.8  | 97   | 1.2025          | 0.6667   |
| 0.1864        | 40.0  | 100  | 1.2273          | 0.6667   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.6.0+cu124
- Datasets 3.3.1
- Tokenizers 0.19.1
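
### Reproducing the training configuration

The hyperparameters listed above map directly onto the 🤗 `TrainingArguments` API. The sketch below is a reconstruction for reference only, assuming the standard `Trainer` workflow; the `output_dir` value is a placeholder, and the original training script is not available.

```
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed in this card (output_dir is a placeholder).
training_args = TrainingArguments(
    output_dir="dino-vits16-finetuned-BUI",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # effective (total) train batch size: 128
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)
```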
[ "0", "1" ]
KMH158/neurofusion_classification
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
udaantech/vit-food-recognition-v1
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10", "label_11", "label_12", "label_13", "label_14", "label_15", "label_16", "label_17", "label_18", "label_19", "label_20", "label_21", "label_22", "label_23", "label_24", "label_25", "label_26", "label_27", "label_28", "label_29", "label_30", "label_31", "label_32", "label_33", "label_34", "label_35", "label_36", "label_37", "label_38", "label_39", "label_40", "label_41", "label_42", "label_43", "label_44", "label_45", "label_46", "label_47", "label_48", "label_49", "label_50", "label_51", "label_52", "label_53", "label_54", "label_55", "label_56", "label_57", "label_58", "label_59", "label_60", "label_61", "label_62", "label_63", "label_64", "label_65", "label_66", "label_67", "label_68", "label_69", "label_70", "label_71", "label_72", "label_73", "label_74", "label_75", "label_76", "label_77", "label_78", "label_79", "label_80", "label_81", "label_82", "label_83", "label_84", "label_85", "label_86", "label_87", "label_88", "label_89", "label_90", "label_91", "label_92", "label_93", "label_94", "label_95", "label_96", "label_97", "label_98", "label_99", "label_100" ]
KMH158/vit_base_neurofusion_classification
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
KMH158/vit_base_neurofusion_classification_123
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
KMH158/vit_large_neurofusion_classification_123
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
KMH158/swin_tiny_neurofusion_classification_123
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
KMH158/resnet_50_neurofusion_classification_123
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
KMH158/resnet_152_neurofusion_classification_123
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
KMH158/resnet_152_neurofusion_classification
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "left", "right", "parietal", "temporal", "frontal", "occipital", "basal_ganglia", "necrotic", "cystic", "glioblastoma_multiforme", "astrocytoma", "glioma", "metastasis", "abscess", "dysembryoplastic_neuroepithelial_tumor" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_1_0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_1_0 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0943 - Accuracy: 0.9658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3384 | 0.9886 | 65 | 0.3535 | 0.875 | | 0.2 | 1.9924 | 131 | 0.1561 | 0.945 | | 0.1527 | 2.9962 | 197 | 0.1315 | 0.9483 | | 0.1093 | 4.0 | 263 | 0.1082 | 0.9533 | | 0.1128 | 4.9430 | 325 | 0.0943 | 0.9658 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_1_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_1_1 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0814 - Accuracy: 0.9733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3073 | 0.9886 | 65 | 0.2232 | 0.9133 | | 0.1859 | 1.9924 | 131 | 0.1449 | 0.9508 | | 0.1198 | 2.9962 | 197 | 0.1120 | 0.9642 | | 0.1207 | 4.0 | 263 | 0.0959 | 0.96 | | 0.12 | 4.9430 | 325 | 0.0814 | 0.9733 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_1_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_1_2 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1168 - Accuracy: 0.9592 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3786 | 0.9886 | 65 | 0.3010 | 0.8942 | | 0.1822 | 1.9924 | 131 | 0.1648 | 0.9383 | | 0.1725 | 2.9962 | 197 | 0.1339 | 0.9533 | | 0.1198 | 4.0 | 263 | 0.0962 | 0.97 | | 0.1082 | 4.9430 | 325 | 0.1168 | 0.9592 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_1_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_1_3 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0929 - Accuracy: 0.9708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3174 | 0.9886 | 65 | 0.2778 | 0.9075 | | 0.1773 | 1.9924 | 131 | 0.1377 | 0.95 | | 0.1313 | 2.9962 | 197 | 0.1448 | 0.9467 | | 0.1046 | 4.0 | 263 | 0.1068 | 0.9625 | | 0.1115 | 4.9430 | 325 | 0.0929 | 0.9708 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_1_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_1_4 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1009 - Accuracy: 0.9617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3539 | 0.9886 | 65 | 0.2581 | 0.9092 | | 0.1926 | 1.9924 | 131 | 0.1362 | 0.9575 | | 0.1659 | 2.9962 | 197 | 0.1290 | 0.955 | | 0.1195 | 4.0 | 263 | 0.1325 | 0.9533 | | 0.095 | 4.9430 | 325 | 0.1009 | 0.9617 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_2_0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_2_0 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2493 - Accuracy: 0.91 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.6887 | 0.96 | 18 | 0.7373 | 0.7719 | | 0.5295 | 1.9733 | 37 | 0.3386 | 0.8783 | | 0.3065 | 2.9867 | 56 | 0.2924 | 0.9015 | | 0.2515 | 4.0 | 75 | 0.2440 | 0.9131 | | 0.2029 | 4.8 | 90 | 0.2493 | 0.91 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_2_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_2_1 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2947 - Accuracy: 0.8914 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.7569 | 0.96 | 18 | 0.9553 | 0.6733 | | 0.6777 | 1.9733 | 37 | 0.4582 | 0.8308 | | 0.4062 | 2.9867 | 56 | 0.3590 | 0.8690 | | 0.288 | 4.0 | 75 | 0.2981 | 0.8936 | | 0.2815 | 4.8 | 90 | 0.2947 | 0.8914 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_2_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_2_2 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2578 - Accuracy: 0.9032 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.6674 | 0.96 | 18 | 0.7670 | 0.7386 | | 0.5325 | 1.9733 | 37 | 0.4181 | 0.8424 | | 0.3262 | 2.9867 | 56 | 0.2970 | 0.8904 | | 0.2715 | 4.0 | 75 | 0.2768 | 0.8956 | | 0.2333 | 4.8 | 90 | 0.2578 | 0.9032 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_2_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_2_3 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2856 - Accuracy: 0.8967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.6911 | 0.96 | 18 | 0.7916 | 0.7306 | | 0.5972 | 1.9733 | 37 | 0.4421 | 0.8332 | | 0.42 | 2.9867 | 56 | 0.3346 | 0.8801 | | 0.34 | 4.0 | 75 | 0.2811 | 0.8993 | | 0.2616 | 4.8 | 90 | 0.2856 | 0.8967 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_MICROSOFT_2_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_MICROSOFT_2_4 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2777 - Accuracy: 0.9031 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 4 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.6398 | 0.96 | 18 | 0.8568 | 0.6985 | | 0.5817 | 1.9733 | 37 | 0.4171 | 0.8517 | | 0.3432 | 2.9867 | 56 | 0.3288 | 0.8858 | | 0.2637 | 4.0 | 75 | 0.2601 | 0.9083 | | 0.231 | 4.8 | 90 | 0.2777 | 0.9031 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_FACEBOOK_1_0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_FACEBOOK_1_0 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0624 - Accuracy: 0.975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2193 | 0.9886 | 65 | 0.1686 | 0.9442 | | 0.1208 | 1.9924 | 131 | 0.1095 | 0.9608 | | 0.1029 | 2.9962 | 197 | 0.0895 | 0.9658 | | 0.0609 | 4.0 | 263 | 0.0764 | 0.9733 | | 0.0772 | 4.9430 | 325 | 0.0624 | 0.975 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_FACEBOOK_1_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_FACEBOOK_1_1 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0683 - Accuracy: 0.9725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2051 | 0.9886 | 65 | 0.1590 | 0.945 | | 0.106 | 1.9924 | 131 | 0.1330 | 0.9533 | | 0.0758 | 2.9962 | 197 | 0.0767 | 0.9708 | | 0.0608 | 4.0 | 263 | 0.0731 | 0.9733 | | 0.0814 | 4.9430 | 325 | 0.0683 | 0.9725 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_FACEBOOK_1_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_FACEBOOK_1_2 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0721 - Accuracy: 0.9783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2206 | 0.9886 | 65 | 0.1739 | 0.9308 | | 0.1149 | 1.9924 | 131 | 0.1008 | 0.965 | | 0.102 | 2.9962 | 197 | 0.0969 | 0.9633 | | 0.0679 | 4.0 | 263 | 0.0738 | 0.9742 | | 0.0662 | 4.9430 | 325 | 0.0721 | 0.9783 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LaLegumbreArtificial/SPIE_MULTICLASS_FACEBOOK_1_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SPIE_MULTICLASS_FACEBOOK_1_3 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0626 - Accuracy: 0.9783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2081 | 0.9886 | 65 | 0.2005 | 0.93 | | 0.1544 | 1.9924 | 131 | 0.1325 | 0.9517 | | 0.0879 | 2.9962 | 197 | 0.0898 | 0.9767 | | 0.0842 | 4.0 | 263 | 0.0790 | 0.97 | | 0.0999 | 4.9430 | 325 | 0.0626 | 0.9783 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "burn through", "contamination", "good weld", "lack of fusion", "lack of penetration", "misalignment" ]
LuisRegis/swin-finetuned-food101
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-finetuned-food101 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5309 - Accuracy: 0.9202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.7319 | 1.0 | 12 | 1.4270 | 0.6649 | | 1.2966 | 2.0 | 24 | 0.7318 | 0.8777 | | 0.7916 | 2.7660 | 33 | 0.5309 | 0.9202 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "ron", "ron01ml", "ron02ml", "ron03ml", "ron04ml", "ron05ml" ]
LuisRegis/swin-tiny-patch4-window7-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4857 - Accuracy: 0.8351 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.478 | 1.2553 | 30 | 1.3662 | 0.5160 | | 0.8021 | 2.5106 | 60 | 0.6806 | 0.75 | | 0.3854 | 3.7660 | 90 | 0.4857 | 0.8351 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "ron00ml", "ron01ml", "ron02ml", "ron03ml", "ron04ml", "ron05ml" ]
bmedeiros/vit-msn-small-lipid-invalidation-nobg
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-lipid-invalidation-nobg This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1866 - Accuracy: 0.9235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 0.5868 | 0.7059 | | 0.583 | 2.0 | 12 | 0.2850 | 0.8706 | | 0.583 | 3.0 | 18 | 0.3167 | 0.8647 | | 0.4996 | 4.0 | 24 | 0.3738 | 0.8294 | | 0.3097 | 5.0 | 30 | 0.2105 | 0.9176 | | 0.3097 | 6.0 | 36 | 0.2695 | 0.8824 | | 0.2893 | 7.0 | 42 | 0.2304 | 0.8941 | | 0.2893 | 8.0 | 48 | 0.2978 | 0.8529 | | 0.3305 | 9.0 | 54 | 0.2952 | 0.8882 | | 0.2775 | 10.0 | 60 | 0.2661 | 0.8882 | | 0.2775 | 11.0 | 66 | 0.2533 | 0.8824 | | 0.223 | 12.0 | 72 | 0.2586 | 0.8882 | | 0.223 | 13.0 | 78 | 0.2034 | 0.9176 | | 0.216 | 14.0 | 84 | 0.1866 | 0.9235 | | 0.1858 | 15.0 | 90 | 0.2631 | 0.8882 | | 0.1858 | 16.0 | 96 | 0.2440 | 0.8882 | | 0.1866 | 17.0 | 102 | 0.2480 | 0.8941 | | 0.1866 | 18.0 | 108 | 0.3897 | 0.8647 | | 0.1701 | 19.0 | 114 | 0.3999 | 0.8647 | | 0.1717 | 20.0 | 120 | 0.3307 | 0.8706 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.3.2 - Tokenizers 0.19.1
[ "invalid", "valid" ]
Ivanrs/vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_MIX
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_MIX This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4725 - Accuracy: 0.87 - Precision: 0.8748 - Recall: 0.87 - F1: 0.8708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.3292 | 0.3333 | 100 | 0.5792 | 0.8154 | 0.8765 | 0.8154 | 0.8093 | | 0.0884 | 0.6667 | 200 | 0.4725 | 0.87 | 0.8748 | 0.87 | 0.8708 | | 0.0752 | 1.0 | 300 | 0.4837 | 0.8688 | 0.8749 | 0.8688 | 0.8681 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "mix-subtype_iva", "mix-subtype_iva2", "mix-subtype_ivc", "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_MIX
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_MIX This model was trained from scratch on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4183 - Accuracy: 0.8721 - Precision: 0.8771 - Recall: 0.8721 - F1: 0.8709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1141 | 0.3333 | 100 | 0.4384 | 0.8675 | 0.8841 | 0.8675 | 0.8659 | | 0.0513 | 0.6667 | 200 | 0.4183 | 0.8721 | 0.8771 | 0.8721 | 0.8709 | | 0.028 | 1.0 | 300 | 0.4418 | 0.8812 | 0.8842 | 0.8812 | 0.8810 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "mix-subtype_iiia", "mix-subtype_iia", "mix-subtype_ivc", "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SEC
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SEC This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3029 - Accuracy: 0.9108 - Precision: 0.9233 - Recall: 0.9108 - F1: 0.9106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1214 | 0.6667 | 100 | 0.3029 | 0.9108 | 0.9233 | 0.9108 | 0.9106 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sec-subtype_iva", "sec-subtype_iva2", "sec-subtype_ivc", "sec-subtype_ivd", "sec-subtype_ia", "sec-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SEC
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SEC This model was trained from scratch on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1649 - Accuracy: 0.9533 - Precision: 0.9601 - Recall: 0.9533 - F1: 0.9534 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.0293 | 0.6667 | 100 | 0.1649 | 0.9533 | 0.9601 | 0.9533 | 0.9534 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sec-subtype_iiia", "sec-subtype_iia", "sec-subtype_ivc", "sec-subtype_ivd", "sec-subtype_ia", "sec-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SUR
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SUR This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7842 - Accuracy: 0.7621 - Precision: 0.7781 - Recall: 0.7621 - F1: 0.7574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.0757 | 0.6667 | 100 | 0.7842 | 0.7621 | 0.7781 | 0.7621 | 0.7574 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sur-subtype_iva", "sur-subtype_iva2", "sur-subtype_ivc", "sur-subtype_ivd", "sur-subtype_ia", "sur-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR This model was trained from scratch on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3726 - Accuracy: 0.8917 - Precision: 0.8912 - Recall: 0.8917 - F1: 0.8897 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.0863 | 0.6667 | 100 | 0.3726 | 0.8917 | 0.8912 | 0.8917 | 0.8897 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sur-subtype_iiia", "sur-subtype_iia", "sur-subtype_ivc", "sur-subtype_ivd", "sur-subtype_ia", "sur-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_MIX-pretrain
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_MIX-pretrain This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4746 - Accuracy: 0.8496 - Precision: 0.8598 - Recall: 0.8496 - F1: 0.8525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2473 | 0.3333 | 100 | 0.4746 | 0.8496 | 0.8598 | 0.8496 | 0.8525 | | 0.2861 | 0.6667 | 200 | 0.8501 | 0.7712 | 0.8390 | 0.7712 | 0.7669 | | 0.1879 | 1.0 | 300 | 0.5770 | 0.8087 | 0.8161 | 0.8087 | 0.8050 | | 0.0231 | 1.3333 | 400 | 0.6048 | 0.8413 | 0.8497 | 0.8413 | 0.8397 | | 0.095 | 1.6667 | 500 | 0.6374 | 0.8454 | 0.8771 | 0.8454 | 0.8458 | | 0.0454 | 2.0 | 600 | 0.6772 | 0.8204 | 0.8424 | 0.8204 | 0.8275 | | 0.0668 | 2.3333 | 700 | 0.7371 | 0.8321 | 0.8458 | 0.8321 | 0.8313 | | 0.0145 | 2.6667 | 800 | 0.8734 | 0.8363 | 0.8700 | 0.8363 | 0.8369 | | 0.0288 | 3.0 | 900 | 0.9109 | 0.8279 | 0.8649 | 0.8279 | 0.8276 | | 0.0216 | 3.3333 | 1000 | 1.0871 | 0.7983 | 0.8372 | 0.7983 | 0.7925 | | 0.0874 | 3.6667 | 1100 | 1.1486 | 0.7975 | 0.8589 | 0.7975 | 0.7993 | | 0.0036 | 4.0 | 1200 | 0.8451 | 0.8308 | 0.8581 | 0.8308 | 0.8326 | | 0.0059 | 4.3333 | 1300 | 0.6169 | 0.8667 | 0.8932 | 0.8667 | 0.8679 | | 0.0476 | 4.6667 | 1400 | 0.7147 | 0.8579 | 0.8615 | 0.8579 | 0.8532 | | 0.1213 | 5.0 | 1500 | 1.0007 | 0.8233 | 0.8589 | 0.8233 | 0.8199 | | 0.0267 | 5.3333 | 1600 | 0.7032 | 0.8508 | 0.8587 | 0.8508 | 0.8510 | | 0.0024 | 5.6667 | 1700 | 0.5666 | 0.8908 | 0.9006 | 0.8908 | 0.8931 | | 0.0149 | 6.0 | 1800 | 0.5346 | 0.9062 | 0.9122 | 0.9062 | 0.9063 | | 0.0011 | 6.3333 | 1900 | 0.9493 | 0.8304 | 0.8595 | 0.8304 | 0.8162 | | 0.1168 | 6.6667 | 2000 | 0.7843 | 0.8642 | 0.8732 | 0.8642 | 0.8673 | | 0.0015 | 7.0 | 2100 | 0.7234 | 0.8638 | 0.8777 | 0.8638 | 0.8563 | | 0.0007 | 7.3333 | 2200 | 0.7182 | 0.8721 | 0.8875 | 0.8721 | 0.8680 | | 0.052 | 7.6667 | 2300 | 0.7523 | 0.8692 | 0.8869 | 0.8692 | 0.8628 | | 0.0013 | 8.0 | 2400 | 0.9651 | 0.8104 | 0.8386 | 0.8104 | 0.8117 | | 0.0006 | 8.3333 | 2500 | 0.8654 | 0.8496 | 0.8497 | 0.8496 | 0.8452 | | 0.0006 | 8.6667 | 2600 | 0.9136 | 0.8438 | 0.8532 | 0.8438 | 0.8414 | | 0.0005 | 9.0 | 2700 | 0.8312 | 0.8525 | 0.8640 | 0.8525 | 0.8477 | | 0.0005 | 9.3333 | 2800 | 0.7532 | 0.8675 | 0.8719 | 0.8675 | 0.8640 | | 0.0005 | 9.6667 | 2900 | 0.9026 | 0.8421 | 0.8648 | 0.8421 | 0.8409 | | 0.0004 | 10.0 | 3000 | 0.8117 | 0.8538 | 0.8702 | 0.8538 | 0.8539 | | 0.0003 | 10.3333 | 3100 | 0.8112 | 0.8546 | 0.8697 | 0.8546 | 0.8544 | | 0.0003 | 10.6667 | 3200 | 0.8165 | 0.8546 | 0.8697 
| 0.8546 | 0.8544 | | 0.0003 | 11.0 | 3300 | 0.8219 | 0.855 | 0.8698 | 0.855 | 0.8549 | | 0.0003 | 11.3333 | 3400 | 0.8266 | 0.8546 | 0.8694 | 0.8546 | 0.8545 | | 0.0003 | 11.6667 | 3500 | 0.8307 | 0.8546 | 0.8694 | 0.8546 | 0.8545 | | 0.0003 | 12.0 | 3600 | 0.8349 | 0.8546 | 0.8694 | 0.8546 | 0.8544 | | 0.0003 | 12.3333 | 3700 | 0.8381 | 0.855 | 0.8699 | 0.855 | 0.8548 | | 0.0003 | 12.6667 | 3800 | 0.8411 | 0.8558 | 0.8707 | 0.8558 | 0.8557 | | 0.0002 | 13.0 | 3900 | 0.8439 | 0.8554 | 0.8704 | 0.8554 | 0.8553 | | 0.0002 | 13.3333 | 4000 | 0.8459 | 0.8562 | 0.8712 | 0.8562 | 0.8561 | | 0.0002 | 13.6667 | 4100 | 0.8479 | 0.8562 | 0.8713 | 0.8562 | 0.8561 | | 0.0002 | 14.0 | 4200 | 0.8496 | 0.8558 | 0.8710 | 0.8558 | 0.8556 | | 0.0002 | 14.3333 | 4300 | 0.8508 | 0.8558 | 0.8710 | 0.8558 | 0.8556 | | 0.0002 | 14.6667 | 4400 | 0.8515 | 0.855 | 0.8702 | 0.855 | 0.8548 | | 0.0002 | 15.0 | 4500 | 0.8517 | 0.8554 | 0.8707 | 0.8554 | 0.8552 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "mix-subtype_iva", "mix-subtype_iva2", "mix-subtype_ivc", "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_MIX-finetune
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_MIX-finetune This model was trained from scratch on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3739 - Accuracy: 0.9025 - Precision: 0.9065 - Recall: 0.9025 - F1: 0.9011 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1672 | 0.3333 | 100 | 0.3739 | 0.9025 | 0.9065 | 0.9025 | 0.9011 | | 0.1364 | 0.6667 | 200 | 0.7118 | 0.7879 | 0.8371 | 0.7879 | 0.7837 | | 0.0603 | 1.0 | 300 | 0.6678 | 0.8275 | 0.8502 | 0.8275 | 0.8257 | | 0.0532 | 1.3333 | 400 | 0.6051 | 0.8596 | 0.8785 | 0.8596 | 0.8578 | | 0.0195 | 1.6667 | 500 | 0.6989 | 0.8263 | 0.8493 | 0.8263 | 0.8278 | | 0.0284 | 2.0 | 600 | 0.7349 | 0.8342 | 0.8608 | 0.8342 | 0.8366 | | 0.0145 | 2.3333 | 700 | 0.7102 | 0.8662 | 0.8741 | 0.8662 | 0.8636 | | 0.0142 | 2.6667 | 800 | 0.7562 | 0.8583 | 0.8652 | 0.8583 | 0.8554 | | 0.0327 | 3.0 | 900 | 0.6251 | 0.87 | 0.8830 | 0.87 | 0.8697 | | 0.0014 | 3.3333 | 1000 | 0.6991 | 0.8571 | 0.8772 | 0.8571 | 0.8535 | | 0.0015 | 3.6667 | 1100 | 0.4318 | 0.9075 | 0.9117 | 0.9075 | 0.9077 | | 0.0022 | 4.0 | 1200 | 0.7833 | 0.8592 | 0.8752 | 0.8592 | 0.8583 | | 0.0049 | 4.3333 | 1300 | 0.4950 | 0.9054 | 0.9088 | 0.9054 | 0.9049 | | 0.0125 | 4.6667 | 1400 | 0.5476 | 0.8879 | 0.8898 | 0.8879 | 0.8873 | | 0.0163 | 5.0 | 1500 | 0.4917 | 0.9096 | 0.9099 | 0.9096 | 0.9087 | | 0.003 | 5.3333 | 1600 | 0.8279 | 0.8612 | 0.8665 | 0.8612 | 0.8586 | | 0.0027 | 5.6667 | 1700 | 0.9960 | 0.8242 | 0.8615 | 0.8242 | 0.8141 | | 0.0015 | 6.0 | 1800 | 0.7634 | 0.8621 | 0.8865 | 0.8621 | 0.8611 | | 0.0006 | 6.3333 | 1900 | 0.5313 | 0.9 | 0.9068 | 0.9 | 0.8991 | | 0.0005 | 6.6667 | 2000 | 0.4222 | 0.9225 | 0.9243 | 0.9225 | 0.9222 | | 0.0322 | 7.0 | 2100 | 0.5260 | 0.9067 | 0.9115 | 0.9067 | 0.9063 | | 0.0106 | 7.3333 | 2200 | 0.5679 | 0.8817 | 0.8903 | 0.8817 | 0.8819 | | 0.0006 | 7.6667 | 2300 | 0.7876 | 0.8517 | 0.8828 | 0.8517 | 0.8532 | | 0.0004 | 8.0 | 2400 | 0.5605 | 0.8992 | 0.9061 | 0.8992 | 0.8987 | | 0.0003 | 8.3333 | 2500 | 0.5620 | 0.9021 | 0.9084 | 0.9021 | 0.9016 | | 0.0003 | 8.6667 | 2600 | 0.5725 | 0.9004 | 0.9071 | 0.9004 | 0.9001 | | 0.0002 | 9.0 | 2700 | 0.5745 | 0.9008 | 0.9074 | 0.9008 | 0.9006 | | 0.0002 | 9.3333 | 2800 | 0.5751 | 0.9012 | 0.9074 | 0.9012 | 0.9009 | | 0.0002 | 9.6667 | 2900 | 0.5769 | 0.9017 | 0.9078 | 0.9017 | 0.9013 | | 0.0002 | 10.0 | 3000 | 0.5792 | 0.9012 | 0.9075 | 0.9012 | 0.9009 | | 0.0002 | 10.3333 | 3100 | 0.5812 | 0.9017 | 0.9078 | 0.9017 | 0.9014 | | 0.0002 | 10.6667 | 3200 | 0.5832 | 0.9017 | 0.9078 | 0.9017 | 0.9014 | | 0.0002 | 11.0 | 3300 | 0.5849 | 0.9017 | 0.9078 | 0.9017 | 0.9014 | | 0.0002 | 
11.3333 | 3400 | 0.5864 | 0.9021 | 0.9080 | 0.9021 | 0.9018 | | 0.0002 | 11.6667 | 3500 | 0.5881 | 0.9021 | 0.9080 | 0.9021 | 0.9018 | | 0.0001 | 12.0 | 3600 | 0.5898 | 0.9029 | 0.9086 | 0.9029 | 0.9026 | | 0.0002 | 12.3333 | 3700 | 0.5913 | 0.9033 | 0.9089 | 0.9033 | 0.9030 | | 0.0001 | 12.6667 | 3800 | 0.5925 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | | 0.0001 | 13.0 | 3900 | 0.5936 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | | 0.0001 | 13.3333 | 4000 | 0.5945 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | | 0.0001 | 13.6667 | 4100 | 0.5953 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | | 0.0001 | 14.0 | 4200 | 0.5961 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | | 0.0001 | 14.3333 | 4300 | 0.5966 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | | 0.0001 | 14.6667 | 4400 | 0.5970 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | | 0.0001 | 15.0 | 4500 | 0.5971 | 0.9038 | 0.9093 | 0.9038 | 0.9034 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "mix-subtype_iiia", "mix-subtype_iia", "mix-subtype_ivc", "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SEC-pretrain
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SEC-pretrain This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3455 - Accuracy: 0.9108 - Precision: 0.9190 - Recall: 0.9108 - F1: 0.9103 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1494 | 0.6667 | 100 | 0.6088 | 0.8442 | 0.8766 | 0.8442 | 0.8390 | | 0.0665 | 1.3333 | 200 | 0.5533 | 0.8492 | 0.8810 | 0.8492 | 0.8542 | | 0.0215 | 2.0 | 300 | 0.3721 | 0.9017 | 0.9082 | 0.9017 | 0.8985 | | 0.0101 | 2.6667 | 400 | 0.5347 | 0.8942 | 0.9061 | 0.8942 | 0.8920 | | 0.043 | 3.3333 | 500 | 0.7850 | 0.8425 | 0.8592 | 0.8425 | 0.8427 | | 0.0641 | 4.0 | 600 | 0.7735 | 0.8583 | 0.8770 | 0.8583 | 0.8574 | | 0.0036 | 4.6667 | 700 | 0.7351 | 0.8367 | 0.8623 | 0.8367 | 0.8250 | | 0.0039 | 5.3333 | 800 | 0.3455 | 0.9108 | 0.9190 | 0.9108 | 0.9103 | | 0.0021 | 6.0 | 900 | 0.5940 | 0.8758 | 0.8985 | 0.8758 | 0.8730 | | 0.054 | 6.6667 | 1000 | 0.7463 | 0.8733 | 0.9068 | 0.8733 | 0.8714 | | 0.0015 | 7.3333 | 1100 | 0.8915 | 0.8392 | 0.8722 | 0.8392 | 0.8243 | | 0.0013 | 8.0 | 1200 | 0.5725 | 0.8917 | 0.8943 | 0.8917 | 0.8909 | | 0.0011 | 8.6667 | 1300 | 0.5772 | 0.8933 | 0.8960 | 0.8933 | 0.8926 | | 0.001 | 9.3333 | 1400 | 0.5820 | 0.8933 | 0.8956 | 0.8933 | 0.8926 | | 0.0009 | 10.0 | 1500 | 0.5859 | 0.8933 | 0.8954 | 0.8933 | 0.8925 | | 0.0008 | 10.6667 | 1600 | 0.5901 | 0.8933 | 0.8955 | 0.8933 | 0.8926 | | 0.0008 | 11.3333 | 1700 | 0.5938 | 0.8933 | 0.8955 | 0.8933 | 0.8926 | | 0.0007 | 12.0 | 1800 | 0.5971 | 0.8933 | 0.8953 | 0.8933 | 0.8925 | | 0.0007 | 12.6667 | 1900 | 0.5998 | 0.8933 | 0.8952 | 0.8933 | 0.8926 | | 0.0007 | 13.3333 | 2000 | 0.6016 | 0.8933 | 0.8952 | 0.8933 | 0.8926 | | 0.0006 | 14.0 | 2100 | 0.6032 | 0.8933 | 0.8952 | 0.8933 | 0.8926 | | 0.0006 | 14.6667 | 2200 | 0.6039 | 0.8933 | 0.8952 | 0.8933 | 0.8926 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sec-subtype_iva", "sec-subtype_iva2", "sec-subtype_ivc", "sec-subtype_ivd", "sec-subtype_ia", "sec-subtype_va" ]
akw0088/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0498 - Accuracy: 0.9819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2209 | 1.0 | 190 | 0.0911 | 0.9693 | | 0.161 | 2.0 | 380 | 0.0671 | 0.9789 | | 0.1283 | 3.0 | 570 | 0.0498 | 0.9819 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
Ivanrs/vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SEC-finetune
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SEC-finetune This model was trained from scratch on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3310 - Accuracy: 0.9083 - Precision: 0.9122 - Recall: 0.9083 - F1: 0.9062 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.058 | 0.6667 | 100 | 0.3310 | 0.9083 | 0.9122 | 0.9083 | 0.9062 | | 0.0028 | 1.3333 | 200 | 1.0903 | 0.7817 | 0.8859 | 0.7817 | 0.7660 | | 0.016 | 2.0 | 300 | 0.8386 | 0.8167 | 0.8599 | 0.8167 | 0.8163 | | 0.0032 | 2.6667 | 400 | 0.7872 | 0.8592 | 0.8953 | 0.8592 | 0.8567 | | 0.0029 | 3.3333 | 500 | 1.1179 | 0.8058 | 0.8379 | 0.8058 | 0.8004 | | 0.001 | 4.0 | 600 | 0.7550 | 0.8617 | 0.8971 | 0.8617 | 0.8628 | | 0.0006 | 4.6667 | 700 | 0.6433 | 0.8833 | 0.9051 | 0.8833 | 0.8850 | | 0.0004 | 5.3333 | 800 | 0.6051 | 0.8883 | 0.9094 | 0.8883 | 0.8903 | | 0.0004 | 6.0 | 900 | 0.6016 | 0.8925 | 0.9128 | 0.8925 | 0.8946 | | 0.0003 | 6.6667 | 1000 | 0.6000 | 0.8933 | 0.9138 | 0.8933 | 0.8956 | | 0.0003 | 7.3333 | 1100 | 0.6001 | 0.8925 | 0.9130 | 0.8925 | 0.8947 | | 0.0003 | 8.0 | 1200 | 0.6025 | 0.8933 | 0.9134 | 0.8933 | 0.8955 | | 0.0002 | 8.6667 | 1300 | 0.6047 | 0.8958 | 0.9151 | 0.8958 | 0.8980 | | 0.0002 | 9.3333 | 1400 | 0.6045 | 0.8958 | 0.9151 | 0.8958 | 0.8980 | | 0.0002 | 10.0 | 1500 | 0.6056 | 0.8958 | 0.9147 | 0.8958 | 0.8979 | | 0.0002 | 10.6667 | 1600 | 0.6063 | 0.8958 | 0.9147 | 0.8958 | 0.8979 | | 0.0002 | 11.3333 | 1700 | 0.6082 | 0.8967 | 0.9152 | 0.8967 | 0.8987 | | 0.0002 | 12.0 | 1800 | 0.6092 | 0.8967 | 0.9152 | 0.8967 | 0.8987 | | 0.0002 | 12.6667 | 1900 | 0.6091 | 0.8967 | 0.9152 | 0.8967 | 0.8987 | | 0.0002 | 13.3333 | 2000 | 0.6104 | 0.8967 | 0.9152 | 0.8967 | 0.8987 | | 0.0002 | 14.0 | 2100 | 0.6111 | 0.8967 | 0.9152 | 0.8967 | 0.8987 | | 0.0002 | 14.6667 | 2200 | 0.6114 | 0.8967 | 0.9152 | 0.8967 | 0.8987 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sec-subtype_iiia", "sec-subtype_iia", "sec-subtype_ivc", "sec-subtype_ivd", "sec-subtype_ia", "sec-subtype_va" ]
thenewsupercell/my_Emotion_DF_Image_ViT_V1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Louis_Emotion_DF_Image_VIT_V1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8621 - Accuracy: 0.7035 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9613 | 1.0 | 1795 | 0.9020 | 0.6693 | | 0.5198 | 2.0 | 3590 | 0.8173 | 0.7072 | | 0.4838 | 3.0 | 5385 | 0.8501 | 0.7127 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angry", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
Ivanrs/vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SUR-pretrain
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Michel_Daudon_-w256_1k_v1-_SUR-pretrain This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8241 - Accuracy: 0.7318 - Precision: 0.7397 - Recall: 0.7318 - F1: 0.7202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.189 | 0.6667 | 100 | 0.8241 | 0.7318 | 0.7397 | 0.7318 | 0.7202 | | 0.0343 | 1.3333 | 200 | 1.1125 | 0.7269 | 0.8038 | 0.7269 | 0.7294 | | 0.0214 | 2.0 | 300 | 0.9077 | 0.7645 | 0.7745 | 0.7645 | 0.7681 | | 0.0684 | 2.6667 | 400 | 1.3120 | 0.7498 | 0.7677 | 0.7498 | 0.7542 | | 0.0543 | 3.3333 | 500 | 1.4106 | 0.7212 | 0.7429 | 0.7212 | 0.7291 | | 0.0367 | 4.0 | 600 | 0.9240 | 0.7850 | 0.8052 | 0.7850 | 0.7868 | | 0.0028 | 4.6667 | 700 | 0.9933 | 0.8013 | 0.8130 | 0.8013 | 0.8037 | | 0.0023 | 5.3333 | 800 | 1.1196 | 0.7964 | 0.8140 | 0.7964 | 0.8023 | | 0.0279 | 6.0 | 900 | 1.1338 | 0.7825 | 0.8063 | 0.7825 | 0.7742 | | 0.0351 | 6.6667 | 1000 | 1.2453 | 0.8046 | 0.8289 | 0.8046 | 0.7990 | | 0.0015 | 7.3333 | 1100 | 1.4902 | 0.7833 | 0.8110 | 0.7833 | 0.7821 | | 0.0012 | 8.0 | 1200 | 1.5158 | 0.7817 | 0.8050 | 0.7817 | 0.7801 | | 0.001 | 8.6667 | 1300 | 1.5461 | 0.7776 | 0.7989 | 0.7776 | 0.7765 | | 0.0009 | 9.3333 | 1400 | 1.5691 | 0.7735 | 0.7930 | 0.7735 | 0.7728 | | 0.0009 | 10.0 | 1500 | 1.5899 | 0.7743 | 0.7935 | 0.7743 | 0.7735 | | 0.0008 | 10.6667 | 1600 | 1.6074 | 0.7735 | 0.7927 | 0.7735 | 0.7731 | | 0.0007 | 11.3333 | 1700 | 1.6235 | 0.7735 | 0.7927 | 0.7735 | 0.7731 | | 0.0007 | 12.0 | 1800 | 1.6367 | 0.7727 | 0.7914 | 0.7727 | 0.7723 | | 0.0007 | 12.6667 | 1900 | 1.6468 | 0.7735 | 0.7919 | 0.7735 | 0.7730 | | 0.0006 | 13.3333 | 2000 | 1.6551 | 0.7735 | 0.7909 | 0.7735 | 0.7729 | | 0.0006 | 14.0 | 2100 | 1.6609 | 0.7727 | 0.7896 | 0.7727 | 0.7721 | | 0.0006 | 14.6667 | 2200 | 1.6637 | 0.7727 | 0.7896 | 0.7727 | 0.7721 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sur-subtype_iva", "sur-subtype_iva2", "sur-subtype_ivc", "sur-subtype_ivd", "sur-subtype_ia", "sur-subtype_va" ]
Ivanrs/vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR-finetune
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetune-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR-finetune This model was trained from scratch on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5530 - Accuracy: 0.8733 - Precision: 0.8738 - Recall: 0.8733 - F1: 0.8688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1177 | 0.6667 | 100 | 0.5870 | 0.8358 | 0.8505 | 0.8358 | 0.8374 | | 0.0716 | 1.3333 | 200 | 0.5973 | 0.84 | 0.8532 | 0.84 | 0.8370 | | 0.0493 | 2.0 | 300 | 0.8088 | 0.815 | 0.8349 | 0.815 | 0.8146 | | 0.0037 | 2.6667 | 400 | 0.9701 | 0.8183 | 0.8454 | 0.8183 | 0.8159 | | 0.0835 | 3.3333 | 500 | 0.5530 | 0.8733 | 0.8738 | 0.8733 | 0.8688 | | 0.0169 | 4.0 | 600 | 0.9139 | 0.8008 | 0.8338 | 0.8008 | 0.7979 | | 0.0019 | 4.6667 | 700 | 0.8676 | 0.84 | 0.8546 | 0.84 | 0.8397 | | 0.0013 | 5.3333 | 800 | 0.7638 | 0.8525 | 0.8594 | 0.8525 | 0.8506 | | 0.0011 | 6.0 | 900 | 0.7257 | 0.8675 | 0.8711 | 0.8675 | 0.8658 | | 0.0009 | 6.6667 | 1000 | 0.7446 | 0.8717 | 0.8746 | 0.8717 | 0.8695 | | 0.0008 | 7.3333 | 1100 | 0.7601 | 0.8725 | 0.8759 | 0.8725 | 0.8702 | | 0.0007 | 8.0 | 1200 | 0.7734 | 0.8725 | 0.8759 | 0.8725 | 0.8702 | | 0.0006 | 8.6667 | 1300 | 0.7845 | 0.8725 | 0.8762 | 0.8725 | 0.8702 | | 0.0006 | 9.3333 | 1400 | 0.7941 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0005 | 10.0 | 1500 | 0.8019 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0005 | 10.6667 | 1600 | 0.8085 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0005 | 11.3333 | 1700 | 0.8148 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0004 | 12.0 | 1800 | 0.8197 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0004 | 12.6667 | 1900 | 0.8236 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0004 | 13.3333 | 2000 | 0.8268 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0004 | 14.0 | 2100 | 0.8289 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | | 0.0004 | 14.6667 | 2200 | 0.8300 | 0.8717 | 0.8753 | 0.8717 | 0.8692 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "sur-subtype_iiia", "sur-subtype_iia", "sur-subtype_ivc", "sur-subtype_ivd", "sur-subtype_ia", "sur-subtype_va" ]
shavirazh/my_first_emotion_classification_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3718 - Accuracy: 0.45 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.8675 | 1.0 | 40 | 1.7931 | 0.3125 | | 1.568 | 2.0 | 80 | 1.5873 | 0.3688 | | 1.3605 | 3.0 | 120 | 1.5087 | 0.4375 | | 1.0784 | 4.0 | 160 | 1.4299 | 0.45 | | 0.8568 | 5.0 | 200 | 1.4141 | 0.475 | | 0.649 | 6.0 | 240 | 1.4242 | 0.4562 | | 0.4787 | 7.0 | 280 | 1.3718 | 0.45 | | 0.359 | 8.0 | 320 | 1.3828 | 0.45 | | 0.3032 | 9.0 | 360 | 1.3888 | 0.4688 | | 0.2782 | 10.0 | 400 | 1.3995 | 0.4437 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7" ]
shawnmichael/vit-fire-smoke-detection
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fire-smoke-detection This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "none", "fire", "smoke", "smoke and fire" ]
shawnmichael/vit-fire-smoke-detection-v2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fire-smoke-detection-v2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "none", "fire", "smoke", "smoke and fire" ]
shawnmichael/vit-fire-smoke-detection-v3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fire-smoke-detection-v3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "none", "fire", "smoke", "smoke and fire" ]
shawnmichael/swin-fire-smoke-detection-v1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-fire-smoke-detection-v1 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "none", "fire", "smoke", "smoke and fire" ]
shawnmichael/convnext-tiny-fire-smoke-detection-v1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-fire-smoke-detection-v1 This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "none", "fire", "smoke", "smoke and fire" ]
teguhteja/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5240 - Accuracy: 0.4813 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 80 | 2.0769 | 0.1562 | | No log | 2.0 | 160 | 2.0542 | 0.2125 | | No log | 3.0 | 240 | 1.9931 | 0.3125 | | No log | 4.0 | 320 | 1.8756 | 0.2938 | | No log | 5.0 | 400 | 1.6917 | 0.3875 | | No log | 6.0 | 480 | 1.5471 | 0.4188 | | 1.7305 | 7.0 | 560 | 1.4615 | 0.4562 | | 1.7305 | 8.0 | 640 | 1.4356 | 0.4688 | | 1.7305 | 9.0 | 720 | 1.3676 | 0.4875 | | 1.7305 | 10.0 | 800 | 1.4125 | 0.5062 | | 1.7305 | 11.0 | 880 | 1.5065 | 0.4688 | | 1.7305 | 12.0 | 960 | 1.5047 | 0.4938 | | 0.3363 | 13.0 | 1040 | 1.5180 | 0.4875 | | 0.3363 | 14.0 | 1120 | 1.5228 | 0.4813 | | 0.3363 | 15.0 | 1200 | 1.5240 | 0.4813 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
felixchiuman/vit-emotion
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-emotion This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6412 - Accuracy: 0.45 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 80 | 2.0356 | 0.2 | | 2.0342 | 2.0 | 160 | 1.8868 | 0.3312 | | 1.7429 | 3.0 | 240 | 1.7304 | 0.4188 | | 1.4173 | 4.0 | 320 | 1.6726 | 0.4125 | | 1.1255 | 5.0 | 400 | 1.6412 | 0.45 | | 1.1255 | 6.0 | 480 | 1.6340 | 0.4375 | | 0.8705 | 7.0 | 560 | 1.6473 | 0.4188 | | 0.7143 | 8.0 | 640 | 1.6618 | 0.425 | | 0.6206 | 9.0 | 720 | 1.6705 | 0.4313 | | 0.5788 | 10.0 | 800 | 1.6769 | 0.4313 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7" ]
emmahuan28/iemocap-video
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8" ]
cvmil/vit-base-patch16-224_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6919 - Accuracy: 0.7679 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1171 | 1.0 | 63 | 1.8775 | 0.2946 | | 1.6139 | 2.0 | 126 | 1.3619 | 0.5476 | | 1.1727 | 3.0 | 189 | 1.1003 | 0.6577 | | 0.9586 | 4.0 | 252 | 0.9665 | 0.7232 | | 0.8409 | 5.0 | 315 | 0.8663 | 0.7440 | | 0.7632 | 6.0 | 378 | 0.8322 | 0.7381 | | 0.7093 | 7.0 | 441 | 0.8039 | 0.7470 | | 0.6667 | 8.0 | 504 | 0.7722 | 0.75 | | 0.6353 | 9.0 | 567 | 0.7477 | 0.7560 | | 0.6101 | 10.0 | 630 | 0.7304 | 0.7589 | | 0.5894 | 11.0 | 693 | 0.7229 | 0.7649 | | 0.5737 | 12.0 | 756 | 0.7130 | 0.7619 | | 0.5627 | 13.0 | 819 | 0.7033 | 0.7649 | | 0.5524 | 14.0 | 882 | 0.7009 | 0.7649 | | 0.5439 | 15.0 | 945 | 0.6945 | 0.7679 | | 0.5397 | 16.0 | 1008 | 0.6937 | 0.7649 | | 0.5357 | 17.0 | 1071 | 0.6933 | 0.7679 | | 0.5337 | 18.0 | 1134 | 0.6919 | 0.7679 | | 0.5322 | 19.0 | 1197 | 0.6921 | 0.7679 | | 0.5325 | 20.0 | 1260 | 0.6919 | 0.7679 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/resnet-50_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3083 - Accuracy: 0.5952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0633 | 1.0 | 63 | 2.0143 | 0.3452 | | 1.9625 | 2.0 | 126 | 1.8719 | 0.5060 | | 1.8119 | 3.0 | 189 | 1.7332 | 0.5 | | 1.6826 | 4.0 | 252 | 1.6271 | 0.5268 | | 1.5879 | 5.0 | 315 | 1.5436 | 0.5595 | | 1.516 | 6.0 | 378 | 1.4871 | 0.5536 | | 1.4572 | 7.0 | 441 | 1.4566 | 0.5655 | | 1.4104 | 8.0 | 504 | 1.4224 | 0.5685 | | 1.3734 | 9.0 | 567 | 1.4033 | 0.5685 | | 1.3414 | 10.0 | 630 | 1.3735 | 0.5952 | | 1.3186 | 11.0 | 693 | 1.3579 | 0.5714 | | 1.2972 | 12.0 | 756 | 1.3402 | 0.5923 | | 1.2862 | 13.0 | 819 | 1.3342 | 0.5893 | | 1.2716 | 14.0 | 882 | 1.3271 | 0.5863 | | 1.2632 | 15.0 | 945 | 1.3210 | 0.6042 | | 1.2546 | 16.0 | 1008 | 1.3146 | 0.5923 | | 1.2485 | 17.0 | 1071 | 1.3061 | 0.6012 | | 1.25 | 18.0 | 1134 | 1.3090 | 0.5923 | | 1.2457 | 19.0 | 1197 | 1.3106 | 0.6042 | | 1.2466 | 20.0 | 1260 | 1.3083 | 0.5952 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/swin-base-patch4-window7-224_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6677 - Accuracy: 0.7857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9001 | 1.0 | 63 | 1.6795 | 0.4613 | | 1.429 | 2.0 | 126 | 1.2172 | 0.6310 | | 1.0492 | 3.0 | 189 | 1.0030 | 0.6905 | | 0.8729 | 4.0 | 252 | 0.8982 | 0.7381 | | 0.7743 | 5.0 | 315 | 0.8246 | 0.7321 | | 0.7084 | 6.0 | 378 | 0.7977 | 0.7440 | | 0.6631 | 7.0 | 441 | 0.7650 | 0.7649 | | 0.6279 | 8.0 | 504 | 0.7327 | 0.7619 | | 0.6004 | 9.0 | 567 | 0.7189 | 0.7768 | | 0.577 | 10.0 | 630 | 0.7078 | 0.7798 | | 0.5625 | 11.0 | 693 | 0.6952 | 0.7738 | | 0.5449 | 12.0 | 756 | 0.6857 | 0.7857 | | 0.537 | 13.0 | 819 | 0.6802 | 0.7827 | | 0.5301 | 14.0 | 882 | 0.6746 | 0.7857 | | 0.5224 | 15.0 | 945 | 0.6715 | 0.7857 | | 0.5188 | 16.0 | 1008 | 0.6704 | 0.7857 | | 0.5153 | 17.0 | 1071 | 0.6685 | 0.7857 | | 0.5112 | 18.0 | 1134 | 0.6676 | 0.7857 | | 0.5119 | 19.0 | 1197 | 0.6678 | 0.7857 | | 0.5112 | 20.0 | 1260 | 0.6677 | 0.7857 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/deit-base-patch16-224_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-patch16-224_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8257 - Accuracy: 0.7232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0445 | 1.0 | 63 | 1.8996 | 0.3631 | | 1.6611 | 2.0 | 126 | 1.4965 | 0.5536 | | 1.2741 | 3.0 | 189 | 1.2567 | 0.6190 | | 1.0663 | 4.0 | 252 | 1.1184 | 0.6458 | | 0.9481 | 5.0 | 315 | 1.0362 | 0.6696 | | 0.8707 | 6.0 | 378 | 0.9921 | 0.6726 | | 0.8161 | 7.0 | 441 | 0.9520 | 0.6875 | | 0.7731 | 8.0 | 504 | 0.9180 | 0.6994 | | 0.7417 | 9.0 | 567 | 0.8972 | 0.7054 | | 0.7163 | 10.0 | 630 | 0.8765 | 0.7054 | | 0.6954 | 11.0 | 693 | 0.8683 | 0.7054 | | 0.6794 | 12.0 | 756 | 0.8527 | 0.7143 | | 0.668 | 13.0 | 819 | 0.8441 | 0.7202 | | 0.6575 | 14.0 | 882 | 0.8366 | 0.7202 | | 0.65 | 15.0 | 945 | 0.8314 | 0.7173 | | 0.645 | 16.0 | 1008 | 0.8284 | 0.7202 | | 0.6417 | 17.0 | 1071 | 0.8270 | 0.7232 | | 0.6402 | 18.0 | 1134 | 0.8262 | 0.7232 | | 0.639 | 19.0 | 1197 | 0.8258 | 0.7232 | | 0.6381 | 20.0 | 1260 | 0.8257 | 0.7232 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/beit-base-patch16-224_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4947 - Accuracy: 0.8512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0747 | 1.0 | 63 | 1.7043 | 0.4435 | | 1.3282 | 2.0 | 126 | 1.0444 | 0.6845 | | 0.8626 | 3.0 | 189 | 0.7962 | 0.7470 | | 0.6929 | 4.0 | 252 | 0.6883 | 0.8125 | | 0.5935 | 5.0 | 315 | 0.6247 | 0.8214 | | 0.5427 | 6.0 | 378 | 0.5926 | 0.8244 | | 0.5002 | 7.0 | 441 | 0.5735 | 0.8452 | | 0.4704 | 8.0 | 504 | 0.5520 | 0.8482 | | 0.4521 | 9.0 | 567 | 0.5330 | 0.8363 | | 0.4311 | 10.0 | 630 | 0.5249 | 0.8512 | | 0.4096 | 11.0 | 693 | 0.5185 | 0.8512 | | 0.3999 | 12.0 | 756 | 0.5112 | 0.8542 | | 0.3918 | 13.0 | 819 | 0.5042 | 0.8512 | | 0.3862 | 14.0 | 882 | 0.4984 | 0.8542 | | 0.3784 | 15.0 | 945 | 0.4985 | 0.8512 | | 0.3733 | 16.0 | 1008 | 0.4967 | 0.8512 | | 0.3763 | 17.0 | 1071 | 0.4947 | 0.8512 | | 0.3736 | 18.0 | 1134 | 0.4949 | 0.8512 | | 0.3718 | 19.0 | 1197 | 0.4948 | 0.8512 | | 0.3722 | 20.0 | 1260 | 0.4947 | 0.8512 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/dinov2-base_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-base_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5013 - Accuracy: 0.8512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0771 | 1.0 | 63 | 1.5173 | 0.4881 | | 1.0706 | 2.0 | 126 | 0.9022 | 0.7173 | | 0.6568 | 3.0 | 189 | 0.7203 | 0.7619 | | 0.5123 | 4.0 | 252 | 0.6513 | 0.7827 | | 0.4311 | 5.0 | 315 | 0.5929 | 0.8125 | | 0.3812 | 6.0 | 378 | 0.5802 | 0.8274 | | 0.3426 | 7.0 | 441 | 0.5669 | 0.8274 | | 0.3145 | 8.0 | 504 | 0.5452 | 0.8333 | | 0.2919 | 9.0 | 567 | 0.5532 | 0.8155 | | 0.276 | 10.0 | 630 | 0.5275 | 0.8423 | | 0.263 | 11.0 | 693 | 0.5189 | 0.8512 | | 0.2502 | 12.0 | 756 | 0.5181 | 0.8512 | | 0.2416 | 13.0 | 819 | 0.5058 | 0.8482 | | 0.2343 | 14.0 | 882 | 0.5050 | 0.8542 | | 0.2292 | 15.0 | 945 | 0.5009 | 0.8482 | | 0.2245 | 16.0 | 1008 | 0.5057 | 0.8482 | | 0.2224 | 17.0 | 1071 | 0.5040 | 0.8512 | | 0.2205 | 18.0 | 1134 | 0.5021 | 0.8482 | | 0.2195 | 19.0 | 1197 | 0.5017 | 0.8512 | | 0.2188 | 20.0 | 1260 | 0.5013 | 0.8512 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/convnext-base-224_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-base-224_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9910 - Accuracy: 0.6935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0641 | 1.0 | 63 | 1.9738 | 0.3780 | | 1.833 | 2.0 | 126 | 1.7100 | 0.5 | | 1.5486 | 3.0 | 189 | 1.4962 | 0.5595 | | 1.3602 | 4.0 | 252 | 1.3563 | 0.5982 | | 1.2385 | 5.0 | 315 | 1.2619 | 0.625 | | 1.1522 | 6.0 | 378 | 1.1995 | 0.6458 | | 1.0888 | 7.0 | 441 | 1.1555 | 0.6607 | | 1.0383 | 8.0 | 504 | 1.1159 | 0.6667 | | 1.0003 | 9.0 | 567 | 1.0857 | 0.6726 | | 0.971 | 10.0 | 630 | 1.0579 | 0.6845 | | 0.9447 | 11.0 | 693 | 1.0446 | 0.6815 | | 0.9265 | 12.0 | 756 | 1.0262 | 0.6905 | | 0.9123 | 13.0 | 819 | 1.0158 | 0.6845 | | 0.8985 | 14.0 | 882 | 1.0050 | 0.6935 | | 0.8902 | 15.0 | 945 | 0.9993 | 0.6964 | | 0.8846 | 16.0 | 1008 | 0.9950 | 0.6935 | | 0.8803 | 17.0 | 1071 | 0.9926 | 0.6935 | | 0.8774 | 18.0 | 1134 | 0.9916 | 0.6935 | | 0.8766 | 19.0 | 1197 | 0.9911 | 0.6935 | | 0.8772 | 20.0 | 1260 | 0.9910 | 0.6935 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
daniakartika/emotion-classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion-classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6448 - Accuracy: 0.3812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 20 | 1.7814 | 0.3375 | | No log | 2.0 | 40 | 1.7125 | 0.3563 | | No log | 3.0 | 60 | 1.6787 | 0.3688 | | No log | 4.0 | 80 | 1.6547 | 0.3625 | | No log | 5.0 | 100 | 1.6448 | 0.3812 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
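A minimal inference sketch for this checkpoint, assuming the repo id above; the image path is a placeholder. The pipeline returns scores over the eight emotion labels listed below.

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier (image path below is a placeholder)
classifier = pipeline("image-classification", model="daniakartika/emotion-classifier")

predictions = classifier("path_to_face_image.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```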
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
cvmil/vit-hybrid-base-bit-384_augmented-v2_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-hybrid-base-bit-384_rice-leaf-disease-augmented-v2_tl This model is a fine-tuned version of [google/vit-hybrid-base-bit-384](https://huggingface.co/google/vit-hybrid-base-bit-384) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5725 - Accuracy: 0.8185 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0186 | 1.0 | 63 | 1.7057 | 0.4256 | | 1.3213 | 2.0 | 126 | 1.0837 | 0.6756 | | 0.8384 | 3.0 | 189 | 0.8425 | 0.7411 | | 0.6553 | 4.0 | 252 | 0.7551 | 0.7738 | | 0.5601 | 5.0 | 315 | 0.6954 | 0.7768 | | 0.5006 | 6.0 | 378 | 0.6683 | 0.7798 | | 0.4592 | 7.0 | 441 | 0.6419 | 0.7976 | | 0.4272 | 8.0 | 504 | 0.6246 | 0.8095 | | 0.4029 | 9.0 | 567 | 0.6096 | 0.8036 | | 0.3844 | 10.0 | 630 | 0.6018 | 0.7976 | | 0.3685 | 11.0 | 693 | 0.5945 | 0.8125 | | 0.3566 | 12.0 | 756 | 0.5869 | 0.8095 | | 0.3486 | 13.0 | 819 | 0.5806 | 0.8155 | | 0.3405 | 14.0 | 882 | 0.5764 | 0.8214 | | 0.3351 | 15.0 | 945 | 0.5758 | 0.8185 | | 0.3314 | 16.0 | 1008 | 0.5751 | 0.8185 | | 0.3296 | 17.0 | 1071 | 0.5738 | 0.8185 | | 0.327 | 18.0 | 1134 | 0.5726 | 0.8185 | | 0.3265 | 19.0 | 1197 | 0.5726 | 0.8185 | | 0.326 | 20.0 | 1260 | 0.5725 | 0.8185 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/resnet-50_augmented-v2_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50_rice-leaf-disease-augmented-v2_fft This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1313 - Accuracy: 0.6726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0639 | 1.0 | 125 | 2.0235 | 0.3393 | | 1.9838 | 2.0 | 250 | 1.9041 | 0.4911 | | 1.8621 | 3.0 | 375 | 1.7795 | 0.5238 | | 1.7579 | 4.0 | 500 | 1.6965 | 0.5446 | | 1.6945 | 5.0 | 625 | 1.6616 | 0.5625 | | 1.6741 | 6.0 | 750 | 1.6497 | 0.5565 | | 1.6042 | 7.0 | 875 | 1.5223 | 0.5685 | | 1.4807 | 8.0 | 1000 | 1.4272 | 0.5893 | | 1.3988 | 9.0 | 1125 | 1.3771 | 0.6101 | | 1.3575 | 10.0 | 1250 | 1.3642 | 0.6071 | | 1.3377 | 11.0 | 1375 | 1.3011 | 0.6220 | | 1.2331 | 12.0 | 1500 | 1.2030 | 0.6548 | | 1.1439 | 13.0 | 1625 | 1.1507 | 0.6577 | | 1.0902 | 14.0 | 1750 | 1.1259 | 0.6548 | | 1.0735 | 15.0 | 1875 | 1.1313 | 0.6726 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
prithivMLmods/Deepfake-QualityAssess2.0-85M
![10.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/SYd9v58DVUvB6QV6i_mhl.png) # **Deepfake-QualityAssess2.0-85M** Deepfake-QualityAssess2.0-85M is an image classification model for quality assessment of good and bad quality deepfakes. It is based on Google's ViT model (`google/vit-base-patch32-224-in21k`). A reasonable number of training samples were used to achieve good efficiency in the final training process and its efficiency metrics. Since this task involves classifying deepfake images with varying quality levels, the model was trained accordingly. Future improvements will be made based on the complexity of the task. ```python id2label: { "0": "Issue In Deepfake", "1": "High Quality Deepfake" } ``` ```python Classification report: precision recall f1-score support Issue In Deepfake 0.8051 0.7380 0.7703 2000 High Quality Deepfake 0.7665 0.8350 0.7994 2000 accuracy 0.8105 4000 macro avg 0.7858 0.7865 0.7849 4000 weighted avg 0.7858 0.7865 0.7849 4000 ``` # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-QualityAssess2.0-85M", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess2.0-85M") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess2.0-85M") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations of Deepfake-QualityAssess2.0-85M** 1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts. 2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance. 3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification. 4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training. 5. **Potential False Positives/Negatives** – The model may sometimes misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications. 6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made. 7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake. # **Intended Use of Deepfake-QualityAssess2.0-85M** - **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality. 
- **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models. - **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis. - **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions. - **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
[ "issue in deepfake", "high quality deepfake" ]
prithivMLmods/Deepfake-QualityAssess2.1-85M
![9.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/CJCTKBIv92WwFYdnmGrFR.png) # **Deepfake-QualityAssess2.1-85M** Deepfake-QualityAssess2.1-85M is an image classification model for quality assessment of good and bad quality deepfakes. It is based on Google's ViT model (`google/vit-base-patch32-224-in21k`). A reasonable number of training samples were used to achieve good efficiency in the final training process and its efficiency metrics. Since this task involves classifying deepfake images with varying quality levels, the model was trained accordingly. Future improvements will be made based on the complexity of the task. ```python id2label: { "0": "Issue In Deepfake", "1": "High Quality Deepfake" } ``` ```python Classification report: precision recall f1-score support Issue In Deepfake 0.7851 0.7380 0.7610 2000 High Quality Deepfake 0.7765 0.8250 0.8000 2000 accuracy 0.7815 4000 macro avg 0.7808 0.7815 0.7805 4000 weighted avg 0.7808 0.7815 0.7805 4000 ``` # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-QualityAssess2.1-85M", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess2.1-85M") processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess2.1-85M") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations of Deepfake-QualityAssess2.1-85M** 1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts. 2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance. 3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification. 4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training. 5. **Potential False Positives/Negatives** – The model may sometimes misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications. 6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made. 7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake. # **Intended Use of Deepfake-QualityAssess2.1-85M** - **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality. 
- **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models. - **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis. - **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions. - **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
[ "issue in deepfake", "high quality deepfake" ]
prithivMLmods/AI-vs-Deepfake-vs-Real
![kkk.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/bXfKBT3LQkbeLzPCBHTGT.png) # **AI-vs-Deepfake-vs-Real** AI-vs-Deepfake-vs-Real is an image classification model for differentiating between artificial, deepfake, and real images. It is based on Google's ViT model (`google/vit-base-patch32-224-in21k`). A reasonable number of training samples were used to achieve good efficiency in the final training process and its efficiency metrics. Since this task involves classifying images into three categories (artificial, deepfake, and real), the model was trained accordingly. Future improvements will be made based on the complexity of the task. ```python id2label: { "0": "Artificial", "1": "Deepfake", "2": "Real" } ``` ```python Classification report: precision recall f1-score support Artificial 0.9897 0.9347 0.9614 1333 Deepfake 0.9409 0.9910 0.9653 1333 Real 0.9970 0.9993 0.9981 1334 accuracy 0.9750 4000 macro avg 0.9759 0.9750 0.9749 4000 weighted avg 0.9759 0.9750 0.9750 4000 ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/FhUNkuzKxgs9SmwcvR4yP.png) # **Inference with Hugging Face Pipeline** ```python from transformers import pipeline # Load the model pipe = pipeline('image-classification', model="prithivMLmods/AI-vs-Deepfake-vs-Real", device=0) # Predict on an image result = pipe("path_to_image.jpg") print(result) ``` # **Inference with PyTorch** ```python from transformers import ViTForImageClassification, ViTImageProcessor from PIL import Image import torch # Load the model and processor model = ViTForImageClassification.from_pretrained("prithivMLmods/AI-vs-Deepfake-vs-Real") processor = ViTImageProcessor.from_pretrained("prithivMLmods/AI-vs-Deepfake-vs-Real") # Load and preprocess the image image = Image.open("path_to_image.jpg").convert("RGB") inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class = torch.argmax(logits, dim=1).item() # Map class index to label label = model.config.id2label[predicted_class] print(f"Predicted Label: {label}") ``` # **Limitations of AI-vs-Deepfake-vs-Real** 1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts. 2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance. 3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data. Biases in the dataset could lead to misclassification. 4. **Resolution Sensitivity** – Performance may degrade when analyzing extremely high- or low-resolution images not seen during training. 5. **Potential False Positives/Negatives** – The model may sometimes misclassify artificial, deepfake, or real images, limiting its reliability in critical applications. 6. **Lack of Explainability** – Being based on a ViT (Vision Transformer), the decision-making process is less interpretable than traditional models, making it harder to analyze why certain classifications are made. 7. **Not a Deepfake Detector** – This model categorizes images but does not specifically determine whether an image is fake; rather, it differentiates between artificial, deepfake, and real images. 
# **Intended Use of AI-vs-Deepfake-vs-Real** - **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality. - **Dataset Filtering** – Helps filter out low-quality deepfake samples in datasets for better training of deepfake detection models. - **Forensic Analysis** – Supports forensic teams in evaluating image authenticity and prioritizing high-quality deepfakes for deeper analysis. - **Content Moderation** – Assists social media platforms and content moderation teams in assessing image authenticity before deciding on further actions. - **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality and authenticity.
[ "artificial", "deepfake", "real" ]
strangerguardhf/vit_deepfake_detection
![cCWXMAl30EqsKqFXPIfC9.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/3cnuydT5poHDy8AUI7nl4.png)

# **Deepfake-Detection-Exp-02-22**

`Deepfake-Detection-Exp-02-22 / vit_deepfake_detection` is a ViT-based image classification model, trained on a minimalist, high-quality dataset, that distinguishes between deepfake and real images. The model is based on Google's **`google/vit-base-patch32-224-in21k`**.

```text
Mapping of IDs to Labels: {0: 'Deepfake', 1: 'Real'}
Mapping of Labels to IDs: {'Deepfake': 0, 'Real': 1}
```

```python
Classification report:

              precision    recall  f1-score   support

    Deepfake     0.9833    0.9187    0.9499      1600
        Real     0.9238    0.9844    0.9531      1600

    accuracy                         0.9516      3200
   macro avg     0.9535    0.9516    0.9515      3200
weighted avg     0.9535    0.9516    0.9515      3200
```

![download (1).png](https://cdn-uploads.huggingface.co/production/uploads/6720824b15b6282a2464fc58/-25Oh3wureg_MI4nvjh7w.png)

# **Inference with Hugging Face Pipeline**

```python
from transformers import pipeline

# Load the model
pipe = pipeline('image-classification', model="strangerguardhf/vit_deepfake_detection", device=0)

# Predict on an image
result = pipe("path_to_image.jpg")
print(result)
```

# **Inference with PyTorch**

```python
from transformers import ViTForImageClassification, ViTImageProcessor
from PIL import Image
import torch

# Load the model and processor
model = ViTForImageClassification.from_pretrained("strangerguardhf/vit_deepfake_detection")
processor = ViTImageProcessor.from_pretrained("strangerguardhf/vit_deepfake_detection")

# Load and preprocess the image
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()

# Map class index to label
label = model.config.id2label[predicted_class]
print(f"Predicted Label: {label}")
```

# **Limitations**

1. **Generalization Issues** – The model may not perform well on deepfake images generated by unseen or novel deepfake techniques.
2. **Dataset Bias** – The training data might not cover all variations of real and fake images, leading to biased predictions.
3. **Resolution Constraints** – Since the model is based on `vit-base-patch32-224-in21k`, it is optimized for 224x224 image resolution, which may limit its effectiveness on high-resolution images.
4. **Adversarial Vulnerabilities** – The model may be susceptible to adversarial attacks designed to fool vision transformers.
5. **False Positives & False Negatives** – The model may occasionally misclassify real images as deepfake and vice versa, requiring human validation in critical applications.

# **Intended Use**

1. **Deepfake Detection** – Designed for identifying deepfake images in media, social platforms, and forensic analysis.
2. **Research & Development** – Useful for researchers studying deepfake detection and improving ViT-based classification models.
3. **Content Moderation** – Can be integrated into platforms to detect and flag manipulated images.
4. **Security & Forensics** – Assists in cybersecurity applications where verifying the authenticity of images is crucial.
5. **Educational Purposes** – Can be used in training AI practitioners and students in the field of computer vision and deepfake detection.
[ "deepfake", "real" ]
poprap/vit16L-FT-cellclassification
# Fine-tuned ViT image classifier

This repository provides a fine-tuned Vision Transformer model for classifying leukemia patient peripheral blood mononuclear cells.

## Model Overview

- **Base Model**: google/vit-large-patch16-224-in21k
- **Task**: 5-class classification of leukemia cells
- **Input**: 224x224 pixel dual-channel fluorescence microscopy images (R: ch1, G: ch6)
- **Output**: Probability distribution over 5 classes

## Performance

- **Architecture**: ViT-Large/16 (patch size 16x16)
- **Parameters**: ~307M
- **Accuracy**: 94.67% (evaluation dataset)

## Data Preparation

### Prerequisites for Data Processing

```bash
# Required libraries for image processing
pip install numpy pillow tifffile
```

### Data Processing Tool

`tools/prepare_data.py` is a lightweight script for preprocessing dual-channel (ch1, ch6) cell images. Implemented primarily using standard libraries, it performs the following operations:

1. Detects ch1 and ch6 image pairs
2. Normalizes each channel (0-255 scaling)
3. Converts to RGB format (R: ch1, G: ch6, B: empty channel)
4. Saves to specified output directory

```bash
# Basic usage
python prepare_data.py input_dir output_dir

# Example with options
python prepare_data.py \
    /path/to/raw_images \
    /path/to/processed_images \
    --workers 8 \
    --recursive
```

#### Options

- `--workers`: Number of parallel workers (default: 4)
- `--recursive`: Process subdirectories recursively

#### Input Directory Structure

```
input_dir/
├── class1/
│   ├── ch1_1.tif
│   ├── ch6_1.tif
│   ├── ch1_2.tif
│   └── ch6_2.tif
└── class2/
    ├── ch1_1.tif
    ├── ch6_1.tif
    ...
```

#### Output Directory Structure

```
output_dir/
├── class1/
│   ├── merged_1.tif
│   └── merged_2.tif
└── class2/
    ├── merged_1.tif
    ...
```

## Model Usage

### Prerequisites for Model

```bash
# Required libraries for model inference
pip install torch torchvision transformers
```

### Usage Example

#### Single Image Inference

```python
from transformers import ViTForImageClassification, ViTImageProcessor
import torch
from PIL import Image

# Load model and processor
model = ViTForImageClassification.from_pretrained("poprap/vit16L-FT-cellclassification")
processor = ViTImageProcessor.from_pretrained("poprap/vit16L-FT-cellclassification")

# Preprocess image
image = Image.open("cell_image.tif")
inputs = processor(images=image, return_tensors="pt")

# Inference
outputs = model(**inputs)
probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(probabilities, dim=-1).item()
```

#### Batch Processing and Evaluation

For batch processing and comprehensive evaluation metrics calculation:

```python
import torch
import numpy as np
import time
from pathlib import Path
from tqdm import tqdm
from torchvision import transforms, datasets
from torch.utils.data import DataLoader
from transformers import ViTForImageClassification, ViTImageProcessor
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import (
    confusion_matrix, accuracy_score, recall_score,
    precision_score, f1_score, roc_auc_score, classification_report
)
from sklearn.preprocessing import label_binarize

# --- 1. Dataset preparation helpers ---
def transform_function(feature_extractor, img):
    resized = transforms.Resize((224, 224))(img)
    encoded = feature_extractor(images=resized, return_tensors="pt")
    return encoded["pixel_values"][0]

def collate_fn(batch):
    pixel_values = torch.stack([item[0] for item in batch])
    labels = torch.tensor([item[1] for item in batch])
    return {"pixel_values": pixel_values, "labels": labels}

# --- 2. Prepare the model and dataset ---
# Prepare the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ViTForImageClassification.from_pretrained("poprap/vit16L-FT-cellclassification")
feature_extractor = ViTImageProcessor.from_pretrained("poprap/vit16L-FT-cellclassification")
model.to(device)

# Prepare the dataset and dataloader
eval_dir = Path("path/to/eval/data")  # Path to the evaluation data
dataset = datasets.ImageFolder(
    root=str(eval_dir),
    transform=lambda img: transform_function(feature_extractor, img)
)
dataloader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=False,
    collate_fn=collate_fn
)

# --- 3. Run batch inference ---
model.eval()
all_preds = []
all_labels = []
all_probs = []

start_time = time.time()
with torch.no_grad():
    for batch in tqdm(dataloader, desc="Evaluating"):
        inputs = batch["pixel_values"].to(device)
        labels = batch["labels"].to(device)

        outputs = model(inputs)
        logits = outputs.logits
        probs = torch.softmax(logits, dim=1)
        preds = torch.argmax(probs, dim=1)

        all_preds.extend(preds.cpu().numpy())
        all_labels.extend(labels.cpu().numpy())
        all_probs.extend(probs.cpu().numpy())
end_time = time.time()

# --- 4. Compute performance metrics ---
# Processing time
total_images = len(all_labels)
total_time = end_time - start_time
time_per_image = total_time / total_images

# Basic metrics
cm = confusion_matrix(all_labels, all_preds)
accuracy = accuracy_score(all_labels, all_preds)
recall_weighted = recall_score(all_labels, all_preds, average="weighted")
precision_weighted = precision_score(all_labels, all_preds, average="weighted")
f1_weighted = f1_score(all_labels, all_preds, average="weighted")

# Per-class AUC
num_classes = len(dataset.classes)
all_labels_onehot = label_binarize(all_labels, classes=range(num_classes))
all_probs = np.array(all_probs)
auc_scores = {}
for class_idx in range(num_classes):
    try:
        auc = roc_auc_score(all_labels_onehot[:, class_idx], all_probs[:, class_idx])
        auc_scores[dataset.classes[class_idx]] = auc
    except ValueError:
        auc_scores[dataset.classes[class_idx]] = None

# --- 5. Visualize the results ---
# Plot the confusion matrix
plt.figure(figsize=(10, 8))
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues",
            xticklabels=dataset.classes, yticklabels=dataset.classes)
plt.xlabel("Predicted Label")
plt.ylabel("True Label")
plt.title("Confusion Matrix")
plt.tight_layout()
plt.show()

# Print the results
print(f"\nEvaluation Results:")
print(f"Accuracy: {accuracy:.4f}")
print(f"Weighted Recall: {recall_weighted:.4f}")
print(f"Weighted Precision: {precision_weighted:.4f}")
print(f"Weighted F1: {f1_weighted:.4f}")

print(f"\nAUC Scores per Class:")
for class_name, auc in auc_scores.items():
    print(f"{class_name}: {auc:.4f}" if auc is not None else f"{class_name}: N/A")

print(f"\nDetailed Classification Report:")
print(classification_report(all_labels, all_preds, target_names=dataset.classes))

print(f"\nPerformance Metrics:")
print(f"Total images evaluated: {total_images}")
print(f"Total time: {total_time:.2f} seconds")
print(f"Average time per image: {time_per_image:.4f} seconds")
```

This example demonstrates how to:

1. Process multiple images in batches
2. Calculate comprehensive evaluation metrics
3. Generate confusion matrix visualization
4. Measure inference time performance
Key metrics calculated:

- Accuracy, Precision, Recall, F1-score
- Class-wise AUC scores
- Confusion matrix
- Detailed classification report
- Processing time statistics

## Training Configuration

The model was fine-tuned with the following settings:

### Hyperparameters

- Batch size: 56
- Learning rate: 1e-5
- Number of epochs: 20
- Mixed precision training (FP16)
- Label smoothing: 0.1
- Cosine scheduling with warmup (warmup steps: 100)

### Data Augmentation

- RandomResizedCrop (224x224, scale=(0.8, 1.0))
- RandomHorizontalFlip
- RandomRotation (±10 degrees)
- ColorJitter (brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1)

### Implementation Details

- Utilized HuggingFace Transformers' `Trainer` class
- Checkpoint saving: every 100 steps
- Evaluation: every 100 steps
- Logging: every 10 steps

## Data Source

This project uses data from the following research paper: Phillip Eulenberg, Niklas Köhler, Thomas Blasi, Andrew Filby, Anne E. Carpenter, Paul Rees, Fabian J. Theis & F. Alexander Wolf. "Reconstructing cell cycle and disease progression using deep learning." Nature Communications volume 8, Article number: 463 (2017).

## License

This project is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0), inheriting the same license as the base Google Vision Transformer model.

## Citations

```bibtex
@misc{dosovitskiy2021vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Alexey Dosovitskiy and others},
  year={2021},
  eprint={2010.11929},
  archivePrefix={arXiv}
}
```
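For reference, the hyperparameters and augmentations listed under "Training Configuration" above could be expressed roughly as follows. The original training script is not published in this repository, so the `output_dir`, the exact transform composition, and the normalization step are assumptions, not the author's actual code.

```python
from torchvision import transforms
from transformers import TrainingArguments

# Illustrative reconstruction of the augmentation pipeline described above
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
    transforms.ToTensor(),
])

# Trainer settings matching the listed hyperparameters (output_dir is assumed)
training_args = TrainingArguments(
    output_dir="vit16L-FT-cellclassification",
    per_device_train_batch_size=56,
    learning_rate=1e-5,
    num_train_epochs=20,
    fp16=True,                      # mixed precision (FP16)
    label_smoothing_factor=0.1,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    eval_strategy="steps",
    eval_steps=100,
    save_steps=100,
    logging_steps=10,
)
```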
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
umaidzaffar/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6588 - Accuracy: 0.893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7674 | 1.0 | 63 | 2.5693 | 0.834 | | 1.895 | 2.0 | 126 | 1.8141 | 0.877 | | 1.651 | 2.96 | 186 | 1.6588 | 0.893 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
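As a quick sketch of the batch-size arithmetic implied above: a per-device batch of 16 accumulated over 4 steps gives the reported total train batch size of 64. The `output_dir` below is an assumption.

```python
from transformers import TrainingArguments

# Hedged sketch of the accumulation settings listed above
args = TrainingArguments(
    output_dir="my_awesome_food_model",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,   # 16 * 4 = 64 effective train batch size
    learning_rate=5e-5,
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```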
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
cvmil/deit-base-patch16-224_augmented-v2_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-patch16-224_rice-leaf-disease-augmented-v2_fft This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4335 - Accuracy: 0.8810 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7917 | 1.0 | 125 | 1.1504 | 0.6577 | | 0.6305 | 2.0 | 250 | 0.4746 | 0.8363 | | 0.1802 | 3.0 | 375 | 0.3663 | 0.8631 | | 0.0508 | 4.0 | 500 | 0.3550 | 0.8690 | | 0.0152 | 5.0 | 625 | 0.3373 | 0.8839 | | 0.0092 | 6.0 | 750 | 0.3433 | 0.8839 | | 0.0067 | 7.0 | 875 | 0.3768 | 0.8839 | | 0.002 | 8.0 | 1000 | 0.3861 | 0.875 | | 0.001 | 9.0 | 1125 | 0.3976 | 0.8810 | | 0.0009 | 10.0 | 1250 | 0.3989 | 0.8839 | | 0.0008 | 11.0 | 1375 | 0.4085 | 0.8839 | | 0.0006 | 12.0 | 1500 | 0.4185 | 0.8810 | | 0.0004 | 13.0 | 1625 | 0.4294 | 0.8780 | | 0.0004 | 14.0 | 1750 | 0.4326 | 0.8780 | | 0.0004 | 15.0 | 1875 | 0.4335 | 0.8810 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
Syizuril/emotion_classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2399 - Accuracy: 0.6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9118 | 1.0 | 40 | 1.8032 | 0.4 | | 1.5331 | 2.0 | 80 | 1.5209 | 0.4813 | | 1.2503 | 3.0 | 120 | 1.4198 | 0.5 | | 0.9242 | 4.0 | 160 | 1.3261 | 0.5625 | | 0.6821 | 5.0 | 200 | 1.2891 | 0.5625 | | 0.4062 | 6.0 | 240 | 1.2399 | 0.6 | | 0.2304 | 7.0 | 280 | 1.2819 | 0.5563 | | 0.1572 | 8.0 | 320 | 1.2891 | 0.5625 | | 0.1205 | 9.0 | 360 | 1.3398 | 0.5563 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
cvmil/swin-base-patch4-window7-224_augmented-v2_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224_rice-leaf-disease-augmented-v2_fft This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4238 - Accuracy: 0.9345 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7682 | 1.0 | 125 | 1.0104 | 0.7083 | | 0.5426 | 2.0 | 250 | 0.4423 | 0.8571 | | 0.1598 | 3.0 | 375 | 0.3282 | 0.8810 | | 0.0529 | 4.0 | 500 | 0.3018 | 0.9137 | | 0.0216 | 5.0 | 625 | 0.2928 | 0.9226 | | 0.0135 | 6.0 | 750 | 0.2874 | 0.9286 | | 0.0135 | 7.0 | 875 | 0.3382 | 0.9137 | | 0.0082 | 8.0 | 1000 | 0.3456 | 0.9226 | | 0.0039 | 9.0 | 1125 | 0.3589 | 0.9256 | | 0.0025 | 10.0 | 1250 | 0.3539 | 0.9315 | | 0.0038 | 11.0 | 1375 | 0.4166 | 0.9196 | | 0.004 | 12.0 | 1500 | 0.4284 | 0.9286 | | 0.0022 | 13.0 | 1625 | 0.4279 | 0.9345 | | 0.001 | 14.0 | 1750 | 0.4176 | 0.9345 | | 0.0014 | 15.0 | 1875 | 0.4238 | 0.9345 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
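A hedged usage sketch for this checkpoint, using the repo id listed above; the image path is a placeholder. It prints the three most likely of the eight rice leaf disease classes.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load the fine-tuned Swin checkpoint
repo_id = "cvmil/swin-base-patch4-window7-224_augmented-v2_fft"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

# Preprocess a rice leaf image (placeholder path)
image = Image.open("rice_leaf.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Report the top-3 predicted disease classes
probs = logits.softmax(dim=-1)[0]
top = probs.topk(3)
for score, idx in zip(top.values, top.indices):
    print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```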
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
hieulhwork24/vit-butterflies-google-final
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-butterflies-google-final

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the butterflies dataset described below. It achieves the following results on the evaluation set:
- Loss: 0.0294
- Accuracy: 0.992

**Notice:** This is the second fine-tuned version, built on a previous version that had already been trained on the dataset once. The two versions differ in the image augmentation step applied before training, which gives this final version its better performance. Source code for training and prediction is available on [Github](https://github.com/hieulhaiwork/butterflies-classification).

## Datasets

This is an open dataset available at [butterfly-image-classification](https://www.kaggle.com/datasets/phucthaiv02/butterfly-image-classification). The dataset features 75 different classes of butterflies and contains over 1,000 labelled images, including the validation images. Each image belongs to only one butterfly category.

## Model description

The model architecture is kept the same as Google's original ViT.

## How to use

This is how to use the model in PyTorch:

```python
from transformers import AutoFeatureExtractor, ViTForImageClassification
from PIL import Image
import torch

model_name = "hieulhwork24/vit-butterflies-google-final"
processor = AutoFeatureExtractor.from_pretrained(model_name)
model = ViTForImageClassification.from_pretrained(model_name)

# Load a butterfly image and preprocess it
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Classify the image
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1228        | 0.8   | 100  | 0.0360          | 0.989    |
| 0.0845        | 1.6   | 200  | 0.0828          | 0.983    |
| 0.046         | 2.4   | 300  | 0.0315          | 0.993    |
| 0.0223        | 3.2   | 400  | 0.0449          | 0.985    |
| 0.0221        | 4.0   | 500  | 0.0309          | 0.99     |
| 0.0092        | 4.8   | 600  | 0.0294          | 0.992    |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
[ "adonis", "african giant swallowtail", "blue morpho", "blue spotted crow", "brown siproeta", "cabbage white", "cairns birdwing", "checquered skipper", "chestnut", "cleopatra", "clodius parnassian", "clouded sulphur", "american snoot", "common banded awl", "common wood-nymph", "copper tail", "crecent", "crimson patch", "danaid eggfly", "eastern coma", "eastern dapple white", "eastern pine elfin", "elbowed pierrot", "an 88", "gold banded", "great eggfly", "great jay", "green celled cattleheart", "grey hairstreak", "indra swallow", "iphiclus sister", "julia", "large marble", "malachite", "appollo", "mangrove skipper", "mestra", "metalmark", "milberts tortoiseshell", "monarch", "mourning cloak", "orange oakleaf", "orange tip", "orchard swallow", "painted lady", "atala", "paper kite", "peacock", "pine white", "pipevine swallow", "popinjay", "purple hairstreak", "purplish copper", "question mark", "red admiral", "red cracker", "banded orange heliconian", "red postman", "red spotted purple", "scarce swallow", "silver spot skipper", "sleepy orange", "sootywing", "southern dogface", "straited queen", "tropical leafwing", "two barred flasher", "banded peacock", "ulyses", "viceroy", "wood satyr", "yellow swallow tail", "zebra long wing", "beckers white", "black hairstreak" ]
cvmil/beit-base-patch16-224_augmented-v2_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224_rice-leaf-disease-augmented-v2_fft This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5324 - Accuracy: 0.9137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6553 | 1.0 | 125 | 0.7916 | 0.7530 | | 0.481 | 2.0 | 250 | 0.3628 | 0.8810 | | 0.1631 | 3.0 | 375 | 0.2895 | 0.8988 | | 0.0609 | 4.0 | 500 | 0.3242 | 0.9167 | | 0.0293 | 5.0 | 625 | 0.3503 | 0.9196 | | 0.0223 | 6.0 | 750 | 0.3411 | 0.9226 | | 0.0302 | 7.0 | 875 | 0.3786 | 0.9226 | | 0.014 | 8.0 | 1000 | 0.4169 | 0.9256 | | 0.0069 | 9.0 | 1125 | 0.4648 | 0.9137 | | 0.006 | 10.0 | 1250 | 0.4697 | 0.9137 | | 0.0053 | 11.0 | 1375 | 0.5192 | 0.8958 | | 0.0093 | 12.0 | 1500 | 0.5058 | 0.9048 | | 0.0069 | 13.0 | 1625 | 0.5486 | 0.9077 | | 0.005 | 14.0 | 1750 | 0.5252 | 0.9167 | | 0.004 | 15.0 | 1875 | 0.5324 | 0.9137 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/convnext-base-224_augmented-v2_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-base-224_rice-leaf-disease-augmented-v2_fft This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3334 - Accuracy: 0.8929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9755 | 1.0 | 125 | 1.7906 | 0.5119 | | 1.3671 | 2.0 | 250 | 1.0186 | 0.75 | | 0.7289 | 3.0 | 375 | 0.6316 | 0.8185 | | 0.4637 | 4.0 | 500 | 0.5039 | 0.8482 | | 0.3583 | 5.0 | 625 | 0.4573 | 0.8571 | | 0.3262 | 6.0 | 750 | 0.4528 | 0.8631 | | 0.2609 | 7.0 | 875 | 0.3666 | 0.8780 | | 0.1505 | 8.0 | 1000 | 0.3273 | 0.8899 | | 0.0925 | 9.0 | 1125 | 0.3147 | 0.8869 | | 0.0696 | 10.0 | 1250 | 0.3147 | 0.8839 | | 0.0616 | 11.0 | 1375 | 0.3084 | 0.8929 | | 0.0335 | 12.0 | 1500 | 0.3218 | 0.8929 | | 0.0151 | 13.0 | 1625 | 0.3257 | 0.8929 | | 0.0096 | 14.0 | 1750 | 0.3379 | 0.8929 | | 0.0085 | 15.0 | 1875 | 0.3334 | 0.8929 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/vit-hybrid-base-bit-384_augmented-v2_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-hybrid-base-bit-384_rice-leaf-disease-augmented-v2_fft This model is a fine-tuned version of [google/vit-hybrid-base-bit-384](https://huggingface.co/google/vit-hybrid-base-bit-384) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3756 - Accuracy: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.203 | 1.0 | 250 | 0.4459 | 0.8512 | | 0.1159 | 2.0 | 500 | 0.3121 | 0.9077 | | 0.0136 | 3.0 | 750 | 0.3433 | 0.9226 | | 0.001 | 4.0 | 1000 | 0.3377 | 0.9226 | | 0.0003 | 5.0 | 1250 | 0.3365 | 0.9226 | | 0.0002 | 6.0 | 1500 | 0.3366 | 0.9286 | | 0.0002 | 7.0 | 1750 | 0.3432 | 0.9286 | | 0.0001 | 8.0 | 2000 | 0.3478 | 0.9286 | | 0.0001 | 9.0 | 2250 | 0.3530 | 0.9286 | | 0.0001 | 10.0 | 2500 | 0.3543 | 0.9286 | | 0.0001 | 11.0 | 2750 | 0.3592 | 0.9286 | | 0.0 | 12.0 | 3000 | 0.3698 | 0.9286 | | 0.0 | 13.0 | 3250 | 0.3730 | 0.9286 | | 0.0 | 14.0 | 3500 | 0.3750 | 0.9286 | | 0.0 | 15.0 | 3750 | 0.3756 | 0.9286 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]