model_id — string, lengths 7–105
model_card — string, lengths 1–130k
model_labels — list, lengths 2–80k
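The three columns summarized above can be iterated row by row with the `datasets` library. A minimal sketch, assuming a placeholder repo id (the dataset's actual Hub name is not given here):

```python
# Sketch: iterate over (model_id, model_card, model_labels) rows.
# "user/model-card-corpus" is a placeholder repo id, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("user/model-card-corpus", split="train")

row = ds[0]
print(row["model_id"])          # Hub repo name of the model
print(row["model_card"][:200])  # first 200 characters of the README text
print(row["model_labels"])      # list of class-label strings
```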
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV21
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV21 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8631 - Accuracy: 0.7308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9302 | 10 | 1.5510 | 0.3462 | | 6.5929 | 1.9302 | 20 | 1.4802 | 0.2692 | | 5.6252 | 2.9302 | 30 | 1.1115 | 0.4038 | | 3.874 | 3.9302 | 40 | 0.9996 | 0.5577 | | 2.7168 | 4.9302 | 50 | 0.8436 | 0.6538 | | 2.2435 | 5.9302 | 60 | 0.9320 | 0.6154 | | 2.2435 | 6.9302 | 70 | 0.8412 | 0.6346 | | 1.9334 | 7.9302 | 80 | 0.8622 | 0.6731 | | 1.6303 | 8.9302 | 90 | 0.9152 | 0.7115 | | 1.2748 | 9.9302 | 100 | 0.9721 | 0.6731 | | 1.0945 | 10.9302 | 110 | 1.0827 | 0.6538 | | 0.8395 | 11.9302 | 120 | 0.9153 | 0.7115 | | 0.8395 | 12.9302 | 130 | 0.8631 | 0.7308 | | 0.8587 | 13.9302 | 140 | 1.1039 | 0.6538 | | 0.8574 | 14.9302 | 150 | 1.0463 | 0.6923 | | 0.7096 | 15.9302 | 160 | 0.9991 | 0.7115 | | 0.6606 | 16.9302 | 170 | 1.0519 | 0.6731 | | 0.5513 | 17.9302 | 180 | 1.0865 | 0.7115 | | 0.5513 | 18.9302 | 190 | 1.1140 | 0.6731 | | 0.61 | 19.9302 | 200 | 1.0290 | 0.6731 | | 0.5278 | 20.9302 | 210 | 1.1003 | 0.6923 | | 0.4639 | 21.9302 | 220 | 1.2472 | 0.6538 | | 0.4719 | 22.9302 | 230 | 1.1546 | 0.6923 | | 0.4212 | 23.9302 | 240 | 1.1084 | 0.7308 | | 0.4212 | 24.9302 | 250 | 1.2953 | 0.6731 | | 0.4109 | 25.9302 | 260 | 1.1868 | 0.7308 | | 0.4236 | 26.9302 | 270 | 1.2560 | 0.6346 | | 0.3638 | 27.9302 | 280 | 1.2161 | 0.7115 | | 0.3944 | 28.9302 | 290 | 1.1582 | 0.7308 | | 0.3621 | 29.9302 | 300 | 1.2993 | 0.6923 | | 0.3621 | 30.9302 | 310 | 1.1401 | 0.7115 | | 0.3203 | 31.9302 | 320 | 1.3228 | 0.7115 | | 0.3014 | 32.9302 | 330 | 1.2813 | 0.6923 | | 0.3464 | 33.9302 | 340 | 1.4768 | 0.6538 | | 0.2891 | 34.9302 | 350 | 1.2304 | 0.7308 | | 0.3153 | 35.9302 | 360 | 1.3096 | 0.6923 | | 0.3153 | 36.9302 | 370 | 1.3565 | 0.7115 | | 0.2762 | 37.9302 | 380 | 1.2931 | 0.6923 | | 0.3191 | 38.9302 | 390 | 1.2441 | 0.7308 | | 0.3009 | 39.9302 | 400 | 1.2110 | 0.7308 | | 0.2645 | 40.9302 | 410 | 1.2433 | 0.7115 | | 0.2497 | 41.9302 | 420 | 1.2461 | 0.6923 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
alyzbane/2025-01-21-15-57-43-swin-base-patch4-window7-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-01-21-15-57-43-swin-base-patch4-window7-224 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0384 - Precision: 0.9928 - Recall: 0.9926 - F1: 0.9926 - Accuracy: 0.992 - Top1 Accuracy: 0.9926 - Error Rate: 0.0080 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 0.732 | 1.0 | 34 | 0.3980 | 0.9165 | 0.8741 | 0.8590 | 0.8649 | 0.8741 | 0.1351 | | 0.2462 | 2.0 | 68 | 0.1051 | 0.9538 | 0.9481 | 0.9484 | 0.9499 | 0.9481 | 0.0501 | | 0.1991 | 3.0 | 102 | 0.0384 | 0.9928 | 0.9926 | 0.9926 | 0.992 | 0.9926 | 0.0080 | | 0.1559 | 4.0 | 136 | 0.0890 | 0.9802 | 0.9778 | 0.9780 | 0.9777 | 0.9778 | 0.0223 | | 0.1024 | 5.0 | 170 | 0.1092 | 0.9863 | 0.9852 | 0.9852 | 0.9846 | 0.9852 | 0.0154 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "ilang-ilang", "mango", "narra", "royal palm", "tabebuia" ]
AadeshMndr/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AadeshMndr/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3532 - Validation Loss: 0.2855 - Train Accuracy: 0.937 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.3532 | 0.2855 | 0.937 | 0 | ### Framework versions - Transformers 4.47.1 - TensorFlow 2.17.1 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
alyzbane/2025-01-21-16-13-04-vit-base-patch16-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-01-21-16-13-04-vit-base-patch16-224 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0188 - Precision: 0.9929 - Recall: 0.9926 - F1: 0.9926 - Accuracy: 0.9931 - Top1 Accuracy: 0.9926 - Error Rate: 0.0069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 0.7465 | 1.0 | 34 | 0.1092 | 0.9536 | 0.9407 | 0.9400 | 0.9366 | 0.9407 | 0.0634 | | 0.212 | 2.0 | 68 | 0.2754 | 0.9338 | 0.9111 | 0.9061 | 0.9049 | 0.9111 | 0.0951 | | 0.115 | 3.0 | 102 | 0.0534 | 0.9854 | 0.9852 | 0.9852 | 0.9851 | 0.9852 | 0.0149 | | 0.0723 | 4.0 | 136 | 0.0188 | 0.9929 | 0.9926 | 0.9926 | 0.9931 | 0.9926 | 0.0069 | | 0.0716 | 5.0 | 170 | 0.0195 | 0.9928 | 0.9926 | 0.9926 | 0.992 | 0.9926 | 0.0080 | | 0.0161 | 6.0 | 204 | 0.0389 | 0.9791 | 0.9778 | 0.9778 | 0.9775 | 0.9778 | 0.0225 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "ilang-ilang", "mango", "narra", "royal palm", "tabebuia" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV22
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV22 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0386 - Accuracy: 0.6346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.3 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 3.1723 | 1.0 | 22 | 1.6164 | 0.1923 | | 2.8734 | 2.0 | 44 | 1.4347 | 0.3846 | | 2.1553 | 3.0 | 66 | 1.1520 | 0.4808 | | 1.6674 | 4.0 | 88 | 1.0921 | 0.4808 | | 1.1204 | 5.0 | 110 | 0.9091 | 0.5962 | | 1.0373 | 6.0 | 132 | 0.8185 | 0.6923 | | 0.9181 | 7.0 | 154 | 0.9377 | 0.6731 | | 0.7475 | 8.0 | 176 | 0.8407 | 0.6731 | | 0.6679 | 9.0 | 198 | 0.9488 | 0.6923 | | 0.4914 | 10.0 | 220 | 0.8699 | 0.7115 | | 0.4421 | 11.0 | 242 | 1.1132 | 0.6538 | | 0.3759 | 12.0 | 264 | 0.9250 | 0.7115 | | 0.4317 | 13.0 | 286 | 0.9220 | 0.6731 | | 0.4137 | 14.0 | 308 | 1.0225 | 0.7115 | | 0.3451 | 15.0 | 330 | 1.0872 | 0.6538 | | 0.3482 | 16.0 | 352 | 1.0129 | 0.6731 | | 0.3346 | 17.0 | 374 | 1.0314 | 0.6538 | | 0.3105 | 18.0 | 396 | 1.0348 | 0.6538 | | 0.252 | 19.0 | 418 | 1.0386 | 0.6346 | | 0.317 | 19.0930 | 420 | 1.0386 | 0.6346 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV23
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV23 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6901 - Accuracy: 0.8118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 6.4493 | 1.0 | 17 | 1.5281 | 0.2941 | | 5.7922 | 2.0 | 34 | 1.3176 | 0.3882 | | 4.2502 | 3.0 | 51 | 1.2015 | 0.4353 | | 3.2402 | 4.0 | 68 | 0.8902 | 0.7176 | | 2.5386 | 5.0 | 85 | 0.6509 | 0.7765 | | 2.0351 | 6.0 | 102 | 0.6759 | 0.7647 | | 1.8225 | 7.0 | 119 | 0.6607 | 0.7765 | | 1.4778 | 8.0 | 136 | 0.7162 | 0.7529 | | 1.4076 | 9.0 | 153 | 0.9084 | 0.7294 | | 1.2056 | 10.0 | 170 | 0.6901 | 0.8118 | | 0.9552 | 11.0 | 187 | 0.9153 | 0.7765 | | 0.9859 | 12.0 | 204 | 0.8694 | 0.7529 | | 0.8309 | 13.0 | 221 | 0.7666 | 0.8 | | 0.7722 | 14.0 | 238 | 0.9118 | 0.7529 | | 0.7632 | 15.0 | 255 | 0.8953 | 0.7529 | | 0.5868 | 16.0 | 272 | 0.9678 | 0.7529 | | 0.6577 | 17.0 | 289 | 1.0503 | 0.7765 | | 0.5816 | 18.0 | 306 | 1.0602 | 0.7294 | | 0.6222 | 19.0 | 323 | 1.1543 | 0.7765 | | 0.4861 | 20.0 | 340 | 0.9739 | 0.8118 | | 0.4422 | 21.0 | 357 | 1.0354 | 0.8 | | 0.506 | 22.0 | 374 | 1.1097 | 0.8118 | | 0.3833 | 23.0 | 391 | 1.2009 | 0.7765 | | 0.4574 | 24.0 | 408 | 1.1366 | 0.7765 | | 0.4467 | 25.0 | 425 | 1.0601 | 0.8118 | | 0.4451 | 26.0 | 442 | 1.0935 | 0.7765 | | 0.4384 | 27.0 | 459 | 1.1617 | 0.7647 | | 0.4321 | 28.0 | 476 | 1.1012 | 0.7765 | | 0.4398 | 29.0 | 493 | 1.0825 | 0.7882 | | 0.361 | 30.0 | 510 | 1.1127 | 0.7647 | | 0.4428 | 31.0 | 527 | 1.2024 | 0.7529 | | 0.451 | 32.0 | 544 | 1.1550 | 0.7647 | | 0.403 | 33.0 | 561 | 1.1646 | 0.7765 | | 0.3059 | 34.0 | 578 | 1.2442 | 0.7765 | | 0.3022 | 35.0 | 595 | 1.1976 | 0.7765 | | 0.319 | 36.0 | 612 | 1.1564 | 0.7765 | | 0.3737 | 37.0 | 629 | 1.1857 | 0.7765 | | 0.3063 | 37.6667 | 640 | 1.1930 | 0.7765 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
joshx7/vit-base-oxford-iiit-pets
# vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2139 - Accuracy: 0.9350 ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. ## Intended uses & limitations ### Intended Uses This model is intended for image classification tasks, particularly those aligned with the ImageNet dataset's domain. It can also serve as a feature extractor for transfer learning on smaller, domain-specific datasets. ### Limitations This model may not generalize well to datasets that differ significantly from ImageNet. It is computationally intensive and may be unsuitable for use cases requiring low-latency predictions. ## Training and evaluation data ### Training Data Pretraining Data: ImageNet-21k (14M images, 21k classes). Fine-tuning Data: ImageNet ILSVRC2012 (1M images, 1k classes). ### Evaluation Data Dataset: ImageNet ILSVRC2012 validation set. Size: 50,000 images across 1,000 classes. Metrics: Loss (0.2031), Accuracy (94.59%). ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3953 | 1.0 | 370 | 0.2863 | 0.9310 | | 0.1918 | 2.0 | 740 | 0.2139 | 0.9391 | | 0.165 | 3.0 | 1110 | 0.2008 | 0.9418 | | 0.1476 | 4.0 | 1480 | 0.1912 | 0.9432 | | 0.1359 | 5.0 | 1850 | 0.1872 | 0.9445 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
mwoelki/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6294 - Accuracy: 0.896 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7384 | 0.992 | 62 | 2.5526 | 0.831 | | 1.8599 | 2.0 | 125 | 1.8006 | 0.88 | | 1.6127 | 2.976 | 186 | 1.6294 | 0.896 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1 - Datasets 3.0.0 - Tokenizers 0.19.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
codewithdark/vit-chest-xray
# Chest X-ray Image Classifier This repository contains a fine-tuned **Vision Transformer (ViT)** model for classifying chest X-ray images, utilizing the **CheXpert** dataset. The model is fine-tuned on the task of classifying various lung diseases from chest radiographs, achieving impressive accuracy in distinguishing between different conditions. ## Model Overview The fine-tuned model is based on the **Vision Transformer (ViT)** architecture, which excels in handling image-based tasks by leveraging attention mechanisms for efficient feature extraction. The model was trained on the **CheXpert dataset**, which consists of labeled chest X-ray images for detecting diseases such as pneumonia, cardiomegaly, and others. ## Performance - **Final Validation Accuracy**: 98.46% - **Final Training Loss**: 0.1069 - **Final Validation Loss**: 0.0980 The model achieved a significant accuracy improvement during training, demonstrating its ability to generalize well to unseen chest X-ray images. ## Dataset The dataset used for fine-tuning the model is the **CheXpert** dataset, which includes chest X-ray images from various patients with multi-label annotations. The data includes frontal and lateral views of the chest for each patient, annotated with labels for various lung diseases. For more details on the dataset, visit the [CheXpert official website](https://stanfordmlgroup.github.io/chexpert/). ## Training Details The model was fine-tuned using the following settings: - **Optimizer**: AdamW - **Learning Rate**: 3e-5 - **Batch Size**: 32 - **Epochs**: 10 - **Loss Function**: Binary Cross-Entropy with Logits - **Precision**: Mixed precision (via `torch.amp`) ## Usage ### Inference To use the fine-tuned model for inference, simply load the model from Hugging Face's Model Hub and input a chest X-ray image: ```python from PIL import Image import torch from transformers import AutoImageProcessor, AutoModelForImageClassification # Load model and processor processor = AutoImageProcessor.from_pretrained("codewithdark/vit-chest-xray") model = AutoModelForImageClassification.from_pretrained("codewithdark/vit-chest-xray") # Define label columns (class names) label_columns = ['Cardiomegaly', 'Edema', 'Consolidation', 'Pneumonia', 'No Finding'] # Step 1: Load and preprocess the image image_path = "/content/images.jpeg" # Replace with your image path # Open the image image = Image.open(image_path) # Ensure the image is in RGB mode (required by most image classification models) if image.mode != 'RGB': image = image.convert('RGB') print("Image converted to RGB.") # Step 2: Preprocess the image using the processor inputs = processor(images=image, return_tensors="pt") # Step 3: Make a prediction (using the model) with torch.no_grad(): # Disable gradient computation during inference outputs = model(**inputs) # Step 4: Extract logits and get the predicted class index logits = outputs.logits # Raw logits from the model predicted_class_idx = torch.argmax(logits, dim=-1).item() # Get the class index # Step 5: Map the predicted index to a class label # You can also use `model.config.id2label`, but we'll use `label_columns` for this task predicted_class_label = label_columns[predicted_class_idx] # Output the results print(f"Predicted Class Index: {predicted_class_idx}") print(f"Predicted Class Label: {predicted_class_label}") ''' Output : Predicted Class Index: 4 Predicted Class Label: No Finding ''' ``` ### Fine-Tuning To fine-tune the model on your own dataset, you can follow the instructions in this repo to adapt the code 
to your dataset and training configuration. ## Contributing We welcome contributions! If you have suggestions, improvements, or bug fixes, feel free to fork the repository and open a pull request. ## License This model is available under the MIT License. See [LICENSE](LICENSE) for more details. ## Acknowledgements - [CheXpert Dataset](https://stanfordmlgroup.github.io/chexpert/) - Hugging Face for providing the `transformers` library and Model Hub. --- Happy coding! 🚀
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
nguyenkhoa/dinov2_Liveness_detection_v2.2.3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/441mzmc1) # dinov2_Liveness_detection_v2.2.3 This model is a fine-tuned version of [nguyenkhoa/dinov2_Liveness_detection_v2.2.2](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.2.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0224 - Accuracy: 0.9932 - F1: 0.9932 - Recall: 0.9932 - Precision: 0.9933 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 768 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.0562 | 0.5872 | 64 | 0.0385 | 0.9860 | 0.9860 | 0.9860 | 0.9860 | | 0.0328 | 1.1743 | 128 | 0.0350 | 0.9884 | 0.9884 | 0.9884 | 0.9885 | | 0.0251 | 1.7615 | 192 | 0.0311 | 0.9879 | 0.9879 | 0.9879 | 0.9879 | | 0.0185 | 2.3486 | 256 | 0.0296 | 0.9895 | 0.9895 | 0.9895 | 0.9895 | | 0.0166 | 2.9358 | 320 | 0.0328 | 0.9897 | 0.9897 | 0.9897 | 0.9898 | | 0.0109 | 3.5229 | 384 | 0.0336 | 0.9906 | 0.9906 | 0.9906 | 0.9907 | | 0.0098 | 4.1101 | 448 | 0.0249 | 0.9917 | 0.9917 | 0.9917 | 0.9917 | | 0.0069 | 4.6972 | 512 | 0.0224 | 0.9932 | 0.9932 | 0.9932 | 0.9933 | ### Evaluate results - APCER: 0.1827 - BPCER: 0.0089 - ACER: 0.0958 - Accuracy: 0.8700 - F1: 0.8975 - Recall: 0.9911 - Precision: 0.7026 ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "live", "spoof" ]
codewithdark/face-emotion-detect
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6" ]
Melo1512/vit-msn-small-beta-fia-equally-enhanced_test_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-beta-fia-equally-enhanced_test_1 This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6061 - Accuracy: 0.8732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 100 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.5714 | 1 | 1.5578 | 0.0704 | | No log | 1.7143 | 3 | 1.4950 | 0.0634 | | No log | 2.8571 | 5 | 1.3574 | 0.0634 | | No log | 4.0 | 7 | 1.1698 | 0.1268 | | No log | 4.5714 | 8 | 1.0682 | 0.3169 | | 1.5036 | 5.7143 | 10 | 0.8754 | 0.7958 | | 1.5036 | 6.8571 | 12 | 0.7359 | 0.8239 | | 1.5036 | 8.0 | 14 | 0.6782 | 0.8169 | | 1.5036 | 8.5714 | 15 | 0.6718 | 0.8169 | | 1.5036 | 9.7143 | 17 | 0.6821 | 0.8099 | | 1.5036 | 10.8571 | 19 | 0.7157 | 0.8028 | | 0.7486 | 12.0 | 21 | 0.7173 | 0.8099 | | 0.7486 | 12.5714 | 22 | 0.6967 | 0.8169 | | 0.7486 | 13.7143 | 24 | 0.6847 | 0.8169 | | 0.7486 | 14.8571 | 26 | 0.6827 | 0.8239 | | 0.7486 | 16.0 | 28 | 0.6959 | 0.8380 | | 0.7486 | 16.5714 | 29 | 0.6826 | 0.8521 | | 0.6547 | 17.7143 | 31 | 0.6360 | 0.8310 | | 0.6547 | 18.8571 | 33 | 0.6257 | 0.8521 | | 0.6547 | 20.0 | 35 | 0.6594 | 0.8732 | | 0.6547 | 20.5714 | 36 | 0.6784 | 0.8380 | | 0.6547 | 21.7143 | 38 | 0.6578 | 0.8521 | | 0.5817 | 22.8571 | 40 | 0.6146 | 0.8592 | | 0.5817 | 24.0 | 42 | 0.6212 | 0.8732 | | 0.5817 | 24.5714 | 43 | 0.6395 | 0.8732 | | 0.5817 | 25.7143 | 45 | 0.6452 | 0.8732 | | 0.5817 | 26.8571 | 47 | 0.6317 | 0.8803 | | 0.5817 | 28.0 | 49 | 0.6332 | 0.8803 | | 0.5632 | 28.5714 | 50 | 0.6418 | 0.8732 | | 0.5632 | 29.7143 | 52 | 0.6383 | 0.8803 | | 0.5632 | 30.8571 | 54 | 0.6367 | 0.8592 | | 0.5632 | 32.0 | 56 | 0.6253 | 0.8732 | | 0.5632 | 32.5714 | 57 | 0.6268 | 0.8592 | | 0.5632 | 33.7143 | 59 | 0.6234 | 0.8662 | | 0.5328 | 34.8571 | 61 | 0.6368 | 0.8521 | | 0.5328 | 36.0 | 63 | 0.6251 | 0.8592 | | 0.5328 | 36.5714 | 64 | 0.6184 | 0.8732 | | 0.5328 | 37.7143 | 66 | 0.6067 | 0.8732 | | 0.5328 | 38.8571 | 68 | 0.6182 | 0.8662 | | 0.5272 | 40.0 | 70 | 0.6398 | 0.8451 | | 0.5272 | 40.5714 | 71 | 0.6440 | 0.8310 | | 0.5272 | 41.7143 | 73 | 0.6318 | 0.8451 | | 0.5272 | 42.8571 | 75 | 0.6111 | 0.8732 | | 0.5272 | 44.0 | 77 | 0.6061 | 0.8732 | | 0.5272 | 44.5714 | 78 | 0.6116 | 0.8732 | | 0.5255 | 45.7143 | 80 | 0.6320 | 0.8451 | | 0.5255 | 46.8571 | 82 | 0.6394 | 0.8310 | | 0.5255 | 48.0 | 84 | 0.6379 | 0.8310 | | 0.5255 | 48.5714 | 85 | 0.6363 | 0.8310 | | 0.5255 | 49.7143 | 87 | 0.6282 | 0.8521 | | 0.5255 | 50.8571 | 89 | 0.6214 | 0.8592 | | 0.52 | 52.0 | 91 | 0.6195 | 0.8592 | | 0.52 | 52.5714 | 92 | 0.6170 | 0.8662 | | 0.52 | 53.7143 | 94 | 0.6169 | 0.8592 | | 0.52 | 54.8571 | 96 | 0.6174 | 0.8592 | | 0.52 | 
56.0 | 98 | 0.6187 | 0.8592 | | 0.52 | 56.5714 | 99 | 0.6193 | 0.8592 | | 0.504 | 57.1429 | 100 | 0.6194 | 0.8592 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "extra", "incomplete", "normal" ]
Melo1512/vit-msn-small-beta-fia-individually-enhanced_test_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-beta-fia-individually-enhanced_test_1 This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7134 - Accuracy: 0.8028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 50 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.5714 | 1 | 1.8493 | 0.1268 | | No log | 1.7143 | 3 | 1.7331 | 0.1268 | | No log | 2.8571 | 5 | 1.4744 | 0.1268 | | No log | 4.0 | 7 | 1.1333 | 0.2958 | | No log | 4.5714 | 8 | 0.9675 | 0.6690 | | 1.2028 | 5.7143 | 10 | 0.7486 | 0.8028 | | 1.2028 | 6.8571 | 12 | 0.7134 | 0.8028 | | 1.2028 | 8.0 | 14 | 0.7447 | 0.7958 | | 1.2028 | 8.5714 | 15 | 0.7686 | 0.8028 | | 1.2028 | 9.7143 | 17 | 0.7991 | 0.8028 | | 1.2028 | 10.8571 | 19 | 0.7821 | 0.8028 | | 0.7435 | 12.0 | 21 | 0.7589 | 0.8028 | | 0.7435 | 12.5714 | 22 | 0.7497 | 0.8028 | | 0.7435 | 13.7143 | 24 | 0.7412 | 0.8028 | | 0.7435 | 14.8571 | 26 | 0.7626 | 0.8028 | | 0.7435 | 16.0 | 28 | 0.7868 | 0.8028 | | 0.7435 | 16.5714 | 29 | 0.7903 | 0.7958 | | 0.6624 | 17.7143 | 31 | 0.7725 | 0.7958 | | 0.6624 | 18.8571 | 33 | 0.7456 | 0.7887 | | 0.6624 | 20.0 | 35 | 0.7403 | 0.8099 | | 0.6624 | 20.5714 | 36 | 0.7444 | 0.8239 | | 0.6624 | 21.7143 | 38 | 0.7515 | 0.8099 | | 0.6099 | 22.8571 | 40 | 0.7579 | 0.8099 | | 0.6099 | 24.0 | 42 | 0.7629 | 0.8099 | | 0.6099 | 24.5714 | 43 | 0.7650 | 0.8169 | | 0.6099 | 25.7143 | 45 | 0.7621 | 0.8169 | | 0.6099 | 26.8571 | 47 | 0.7571 | 0.8099 | | 0.6099 | 28.0 | 49 | 0.7551 | 0.8099 | | 0.5882 | 28.5714 | 50 | 0.7551 | 0.8099 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "extra", "incomplete", "normal" ]
bsvaz/landmark-classification-vit
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "haleakala national park", "mount rainier national park", "ljubljana castle", "dead sea", "wroclaws dwarves", "london olympic stadium", "niagara falls", "stonehenge", "grand canyon", "golden gate bridge", "edinburgh castle", "mount rushmore national memorial", "kantanagar temple", "yellowstone national park", "terminal tower", "central park", "eiffel tower", "changdeokgung", "delicate arch", "vienna city hall", "matterhorn", "taj mahal", "moscow raceway", "externsteine", "soreq cave", "banff national park", "pont du gard", "seattle japanese garden", "sydney harbour bridge", "petronas towers", "brooklyn bridge", "washington monument", "hanging temple", "sydney opera house", "great barrier reef", "monumento a la revolucion", "badlands national park", "atomium", "forth bridge", "gateway of india", "stockholm city hall", "machu picchu", "death valley national park", "gullfoss falls", "trevi fountain", "temple of heaven", "great wall of china", "prague astronomical clock", "whitby abbey", "temple of olympian zeus" ]
Mickaelass/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0080 - eval_accuracy: 0.9975 - eval_runtime: 135.4458 - eval_samples_per_second: 147.823 - eval_steps_per_second: 18.48 - epoch: 0.7191 - step: 1800 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "negative", "positive" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV24
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV24 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8851 - Accuracy: 0.7059 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 6.5614 | 1.0 | 17 | 1.6096 | 0.3176 | | 6.1279 | 2.0 | 34 | 1.5651 | 0.3176 | | 5.5089 | 3.0 | 51 | 1.3188 | 0.5529 | | 4.453 | 4.0 | 68 | 1.0195 | 0.6353 | | 3.3808 | 5.0 | 85 | 0.9741 | 0.5882 | | 2.7707 | 6.0 | 102 | 0.8365 | 0.6353 | | 2.3091 | 7.0 | 119 | 0.7725 | 0.6588 | | 1.9831 | 8.0 | 136 | 0.8312 | 0.6588 | | 1.8284 | 9.0 | 153 | 0.8473 | 0.7059 | | 1.511 | 10.0 | 170 | 0.7539 | 0.7176 | | 1.2827 | 11.0 | 187 | 0.8067 | 0.7176 | | 1.2072 | 12.0 | 204 | 0.7927 | 0.7176 | | 1.2069 | 13.0 | 221 | 0.8184 | 0.6824 | | 0.9242 | 14.0 | 238 | 0.8548 | 0.7059 | | 0.9772 | 15.0 | 255 | 0.8374 | 0.7294 | | 0.8412 | 16.0 | 272 | 0.8340 | 0.7176 | | 0.8921 | 17.0 | 289 | 0.8729 | 0.6941 | | 0.7975 | 18.0 | 306 | 0.9115 | 0.7059 | | 0.8107 | 19.0 | 323 | 0.8830 | 0.6941 | | 0.7131 | 20.0 | 340 | 0.9049 | 0.6941 | | 0.6777 | 21.0 | 357 | 0.8895 | 0.7059 | | 0.6557 | 22.0 | 374 | 0.8831 | 0.7059 | | 0.6555 | 23.0 | 391 | 0.8846 | 0.7059 | | 0.7766 | 23.5455 | 400 | 0.8851 | 0.7059 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
Khushi870/resnet50_finetuned
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "tench, tinca tinca", "goldfish, carassius auratus", "great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias", "tiger shark, galeocerdo cuvieri", "hammerhead, hammerhead shark", "electric ray, crampfish, numbfish, torpedo", "stingray", "cock", "hen", "ostrich, struthio camelus", "brambling, fringilla montifringilla", "goldfinch, carduelis carduelis", "house finch, linnet, carpodacus mexicanus", "junco, snowbird", "indigo bunting, indigo finch, indigo bird, passerina cyanea", "robin, american robin, turdus migratorius", "bulbul", "jay", "magpie", "chickadee", "water ouzel, dipper", "kite", "bald eagle, american eagle, haliaeetus leucocephalus", "vulture", "great grey owl, great gray owl, strix nebulosa", "european fire salamander, salamandra salamandra", "common newt, triturus vulgaris", "eft", "spotted salamander, ambystoma maculatum", "axolotl, mud puppy, ambystoma mexicanum", "bullfrog, rana catesbeiana", "tree frog, tree-frog", "tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui", "loggerhead, loggerhead turtle, caretta caretta", "leatherback turtle, leatherback, leathery turtle, dermochelys coriacea", "mud turtle", "terrapin", "box turtle, box tortoise", "banded gecko", "common iguana, iguana, iguana iguana", "american chameleon, anole, anolis carolinensis", "whiptail, whiptail lizard", "agama", "frilled lizard, chlamydosaurus kingi", "alligator lizard", "gila monster, heloderma suspectum", "green lizard, lacerta viridis", "african chameleon, chamaeleo chamaeleon", "komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis", "african crocodile, nile crocodile, crocodylus niloticus", "american alligator, alligator mississipiensis", "triceratops", "thunder snake, worm snake, carphophis amoenus", "ringneck snake, ring-necked snake, ring snake", "hognose snake, puff adder, sand viper", "green snake, grass snake", "king snake, kingsnake", "garter snake, grass snake", "water snake", "vine snake", "night snake, hypsiglena torquata", "boa constrictor, constrictor constrictor", "rock python, rock snake, python sebae", "indian cobra, naja naja", "green mamba", "sea snake", "horned viper, cerastes, sand viper, horned asp, cerastes cornutus", "diamondback, diamondback rattlesnake, crotalus adamanteus", "sidewinder, horned rattlesnake, crotalus cerastes", "trilobite", "harvestman, daddy longlegs, phalangium opilio", "scorpion", "black and gold garden spider, argiope aurantia", "barn spider, araneus cavaticus", "garden spider, aranea diademata", "black widow, latrodectus mactans", "tarantula", "wolf spider, hunting spider", "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse, partridge, bonasa umbellus", "prairie chicken, prairie grouse, prairie fowl", "peacock", "quail", "partridge", "african grey, african gray, psittacus erithacus", "macaw", "sulphur-crested cockatoo, kakatoe galerita, cacatua galerita", "lorikeet", "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "drake", "red-breasted merganser, mergus serrator", "goose", "black swan, cygnus atratus", "tusker", "echidna, spiny anteater, anteater", "platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus", "wallaby, brush kangaroo", "koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus", "wombat", "jellyfish", "sea anemone, anemone", "brain coral", "flatworm, platyhelminth", "nematode, nematode worm, roundworm", "conch", "snail", "slug", "sea slug, nudibranch", "chiton, coat-of-mail shell, sea 
cradle, polyplacophore", "chambered nautilus, pearly nautilus, nautilus", "dungeness crab, cancer magister", "rock crab, cancer irroratus", "fiddler crab", "king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica", "american lobster, northern lobster, maine lobster, homarus americanus", "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "crayfish, crawfish, crawdad, crawdaddy", "hermit crab", "isopod", "white stork, ciconia ciconia", "black stork, ciconia nigra", "spoonbill", "flamingo", "little blue heron, egretta caerulea", "american egret, great white heron, egretta albus", "bittern", "crane", "limpkin, aramus pictus", "european gallinule, porphyrio porphyrio", "american coot, marsh hen, mud hen, water hen, fulica americana", "bustard", "ruddy turnstone, arenaria interpres", "red-backed sandpiper, dunlin, erolia alpina", "redshank, tringa totanus", "dowitcher", "oystercatcher, oyster catcher", "pelican", "king penguin, aptenodytes patagonica", "albatross, mollymawk", "grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus", "killer whale, killer, orca, grampus, sea wolf, orcinus orca", "dugong, dugong dugon", "sea lion", "chihuahua", "japanese spaniel", "maltese dog, maltese terrier, maltese", "pekinese, pekingese, peke", "shih-tzu", "blenheim spaniel", "papillon", "toy terrier", "rhodesian ridgeback", "afghan hound, afghan", "basset, basset hound", "beagle", "bloodhound, sleuthhound", "bluetick", "black-and-tan coonhound", "walker hound, walker foxhound", "english foxhound", "redbone", "borzoi, russian wolfhound", "irish wolfhound", "italian greyhound", "whippet", "ibizan hound, ibizan podenco", "norwegian elkhound, elkhound", "otterhound, otter hound", "saluki, gazelle hound", "scottish deerhound, deerhound", "weimaraner", "staffordshire bullterrier, staffordshire bull terrier", "american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier", "bedlington terrier", "border terrier", "kerry blue terrier", "irish terrier", "norfolk terrier", "norwich terrier", "yorkshire terrier", "wire-haired fox terrier", "lakeland terrier", "sealyham terrier, sealyham", "airedale, airedale terrier", "cairn, cairn terrier", "australian terrier", "dandie dinmont, dandie dinmont terrier", "boston bull, boston terrier", "miniature schnauzer", "giant schnauzer", "standard schnauzer", "scotch terrier, scottish terrier, scottie", "tibetan terrier, chrysanthemum dog", "silky terrier, sydney silky", "soft-coated wheaten terrier", "west highland white terrier", "lhasa, lhasa apso", "flat-coated retriever", "curly-coated retriever", "golden retriever", "labrador retriever", "chesapeake bay retriever", "german short-haired pointer", "vizsla, hungarian pointer", "english setter", "irish setter, red setter", "gordon setter", "brittany spaniel", "clumber, clumber spaniel", "english springer, english springer spaniel", "welsh springer spaniel", "cocker spaniel, english cocker spaniel, cocker", "sussex spaniel", "irish water spaniel", "kuvasz", "schipperke", "groenendael", "malinois", "briard", "kelpie", "komondor", "old english sheepdog, bobtail", "shetland sheepdog, shetland sheep dog, shetland", "collie", "border collie", "bouvier des flandres, bouviers des flandres", "rottweiler", "german shepherd, german shepherd dog, german police dog, alsatian", "doberman, doberman pinscher", "miniature pinscher", "greater swiss mountain dog", "bernese mountain dog", "appenzeller", "entlebucher", "boxer", "bull 
mastiff", "tibetan mastiff", "french bulldog", "great dane", "saint bernard, st bernard", "eskimo dog, husky", "malamute, malemute, alaskan malamute", "siberian husky", "dalmatian, coach dog, carriage dog", "affenpinscher, monkey pinscher, monkey dog", "basenji", "pug, pug-dog", "leonberg", "newfoundland, newfoundland dog", "great pyrenees", "samoyed, samoyede", "pomeranian", "chow, chow chow", "keeshond", "brabancon griffon", "pembroke, pembroke welsh corgi", "cardigan, cardigan welsh corgi", "toy poodle", "miniature poodle", "standard poodle", "mexican hairless", "timber wolf, grey wolf, gray wolf, canis lupus", "white wolf, arctic wolf, canis lupus tundrarum", "red wolf, maned wolf, canis rufus, canis niger", "coyote, prairie wolf, brush wolf, canis latrans", "dingo, warrigal, warragal, canis dingo", "dhole, cuon alpinus", "african hunting dog, hyena dog, cape hunting dog, lycaon pictus", "hyena, hyaena", "red fox, vulpes vulpes", "kit fox, vulpes macrotis", "arctic fox, white fox, alopex lagopus", "grey fox, gray fox, urocyon cinereoargenteus", "tabby, tabby cat", "tiger cat", "persian cat", "siamese cat, siamese", "egyptian cat", "cougar, puma, catamount, mountain lion, painter, panther, felis concolor", "lynx, catamount", "leopard, panthera pardus", "snow leopard, ounce, panthera uncia", "jaguar, panther, panthera onca, felis onca", "lion, king of beasts, panthera leo", "tiger, panthera tigris", "cheetah, chetah, acinonyx jubatus", "brown bear, bruin, ursus arctos", "american black bear, black bear, ursus americanus, euarctos americanus", "ice bear, polar bear, ursus maritimus, thalarctos maritimus", "sloth bear, melursus ursinus, ursus ursinus", "mongoose", "meerkat, mierkat", "tiger beetle", "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "ground beetle, carabid beetle", "long-horned beetle, longicorn, longicorn beetle", "leaf beetle, chrysomelid", "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant, emmet, pismire", "grasshopper, hopper", "cricket", "walking stick, walkingstick, stick insect", "cockroach, roach", "mantis, mantid", "cicada, cicala", "leafhopper", "lacewing, lacewing fly", "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "damselfly", "admiral", "ringlet, ringlet butterfly", "monarch, monarch butterfly, milkweed butterfly, danaus plexippus", "cabbage butterfly", "sulphur butterfly, sulfur butterfly", "lycaenid, lycaenid butterfly", "starfish, sea star", "sea urchin", "sea cucumber, holothurian", "wood rabbit, cottontail, cottontail rabbit", "hare", "angora, angora rabbit", "hamster", "porcupine, hedgehog", "fox squirrel, eastern fox squirrel, sciurus niger", "marmot", "beaver", "guinea pig, cavia cobaya", "sorrel", "zebra", "hog, pig, grunter, squealer, sus scrofa", "wild boar, boar, sus scrofa", "warthog", "hippopotamus, hippo, river horse, hippopotamus amphibius", "ox", "water buffalo, water ox, asiatic buffalo, bubalus bubalis", "bison", "ram, tup", "bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis", "ibex, capra ibex", "hartebeest", "impala, aepyceros melampus", "gazelle", "arabian camel, dromedary, camelus dromedarius", "llama", "weasel", "mink", "polecat, fitch, foulmart, foumart, mustela putorius", "black-footed ferret, ferret, mustela nigripes", "otter", "skunk, polecat, wood pussy", "badger", "armadillo", "three-toed sloth, ai, bradypus tridactylus", "orangutan, orang, orangutang, pongo pygmaeus", "gorilla, 
gorilla gorilla", "chimpanzee, chimp, pan troglodytes", "gibbon, hylobates lar", "siamang, hylobates syndactylus, symphalangus syndactylus", "guenon, guenon monkey", "patas, hussar monkey, erythrocebus patas", "baboon", "macaque", "langur", "colobus, colobus monkey", "proboscis monkey, nasalis larvatus", "marmoset", "capuchin, ringtail, cebus capucinus", "howler monkey, howler", "titi, titi monkey", "spider monkey, ateles geoffroyi", "squirrel monkey, saimiri sciureus", "madagascar cat, ring-tailed lemur, lemur catta", "indri, indris, indri indri, indri brevicaudatus", "indian elephant, elephas maximus", "african elephant, loxodonta africana", "lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens", "giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca", "barracouta, snoek", "eel", "coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch", "rock beauty, holocanthus tricolor", "anemone fish", "sturgeon", "gar, garfish, garpike, billfish, lepisosteus osseus", "lionfish", "puffer, pufferfish, blowfish, globefish", "abacus", "abaya", "academic gown, academic robe, judge's robe", "accordion, piano accordion, squeeze box", "acoustic guitar", "aircraft carrier, carrier, flattop, attack aircraft carrier", "airliner", "airship, dirigible", "altar", "ambulance", "amphibian, amphibious vehicle", "analog clock", "apiary, bee house", "apron", "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "assault rifle, assault gun", "backpack, back pack, knapsack, packsack, rucksack, haversack", "bakery, bakeshop, bakehouse", "balance beam, beam", "balloon", "ballpoint, ballpoint pen, ballpen, biro", "band aid", "banjo", "bannister, banister, balustrade, balusters, handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel, cask", "barrow, garden cart, lawn cart, wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "bathing cap, swimming cap", "bath towel", "bathtub, bathing tub, bath, tub", "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "beacon, lighthouse, beacon light, pharos", "beaker", "bearskin, busby, shako", "beer bottle", "beer glass", "bell cote, bell cot", "bib", "bicycle-built-for-two, tandem bicycle, tandem", "bikini, two-piece", "binder, ring-binder", "binoculars, field glasses, opera glasses", "birdhouse", "boathouse", "bobsled, bobsleigh, bob", "bolo tie, bolo, bola tie, bola", "bonnet, poke bonnet", "bookcase", "bookshop, bookstore, bookstall", "bottlecap", "bow", "bow tie, bow-tie, bowtie", "brass, memorial tablet, plaque", "brassiere, bra, bandeau", "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "breastplate, aegis, egis", "broom", "bucket, pail", "buckle", "bulletproof vest", "bullet train, bullet", "butcher shop, meat market", "cab, hack, taxi, taxicab", "caldron, cauldron", "candle, taper, wax light", "cannon", "canoe", "can opener, tin opener", "cardigan", "car mirror", "carousel, carrousel, merry-go-round, roundabout, whirligig", "carpenter's kit, tool kit", "carton", "car wheel", "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm", "cassette", "cassette player", "castle", "catamaran", "cd player", "cello, violoncello", "cellular telephone, cellular phone, cellphone, cell, mobile phone", "chain", "chainlink fence", "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "chain saw, chainsaw", "chest", "chiffonier, 
commode", "chime, bell, gong", "china cabinet, china closet", "christmas stocking", "church, church building", "cinema, movie theater, movie theatre, movie house, picture palace", "cleaver, meat cleaver, chopper", "cliff dwelling", "cloak", "clog, geta, patten, sabot", "cocktail shaker", "coffee mug", "coffeepot", "coil, spiral, volute, whorl, helix", "combination lock", "computer keyboard, keypad", "confectionery, confectionary, candy store", "container ship, containership, container vessel", "convertible", "corkscrew, bottle screw", "cornet, horn, trumpet, trump", "cowboy boot", "cowboy hat, ten-gallon hat", "cradle", "crane", "crash helmet", "crate", "crib, cot", "crock pot", "croquet ball", "crutch", "cuirass", "dam, dike, dyke", "desk", "desktop computer", "dial telephone, dial phone", "diaper, nappy, napkin", "digital clock", "digital watch", "dining table, board", "dishrag, dishcloth", "dishwasher, dish washer, dishwashing machine", "disk brake, disc brake", "dock, dockage, docking facility", "dogsled, dog sled, dog sleigh", "dome", "doormat, welcome mat", "drilling platform, offshore rig", "drum, membranophone, tympan", "drumstick", "dumbbell", "dutch oven", "electric fan, blower", "electric guitar", "electric locomotive", "entertainment center", "envelope", "espresso maker", "face powder", "feather boa, boa", "file, file cabinet, filing cabinet", "fireboat", "fire engine, fire truck", "fire screen, fireguard", "flagpole, flagstaff", "flute, transverse flute", "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster", "freight car", "french horn, horn", "frying pan, frypan, skillet", "fur coat", "garbage truck, dustcart", "gasmask, respirator, gas helmet", "gas pump, gasoline pump, petrol pump, island dispenser", "goblet", "go-kart", "golf ball", "golfcart, golf cart", "gondola", "gong, tam-tam", "gown", "grand piano, grand", "greenhouse, nursery, glasshouse", "grille, radiator grille", "grocery store, grocery, food market, market", "guillotine", "hair slide", "hair spray", "half track", "hammer", "hamper", "hand blower, blow dryer, blow drier, hair dryer, hair drier", "hand-held computer, hand-held microcomputer", "handkerchief, hankie, hanky, hankey", "hard disc, hard disk, fixed disk", "harmonica, mouth organ, harp, mouth harp", "harp", "harvester, reaper", "hatchet", "holster", "home theater, home theatre", "honeycomb", "hook, claw", "hoopskirt, crinoline", "horizontal bar, high bar", "horse cart, horse-cart", "hourglass", "ipod", "iron, smoothing iron", "jack-o'-lantern", "jean, blue jean, denim", "jeep, landrover", "jersey, t-shirt, tee shirt", "jigsaw puzzle", "jinrikisha, ricksha, rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat, laboratory coat", "ladle", "lampshade, lamp shade", "laptop, laptop computer", "lawn mower, mower", "lens cap, lens cover", "letter opener, paper knife, paperknife", "library", "lifeboat", "lighter, light, igniter, ignitor", "limousine, limo", "liner, ocean liner", "lipstick, lip rouge", "loafer", "lotion", "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "loupe, jeweler's loupe", "lumbermill, sawmill", "magnetic compass", "mailbag, postbag", "mailbox, letter box", "maillot", "maillot, tank suit", "manhole cover", "maraca", "marimba, xylophone", "mask", "matchstick", "maypole", "maze, labyrinth", "measuring cup", "medicine chest, medicine cabinet", "megalith, megalithic structure", "microphone, mike", "microwave, microwave oven", "military uniform", "milk can", "minibus", 
"miniskirt, mini", "minivan", "missile", "mitten", "mixing bowl", "mobile home, manufactured home", "model t", "modem", "monastery", "monitor", "moped", "mortar", "mortarboard", "mosque", "mosquito net", "motor scooter, scooter", "mountain bike, all-terrain bike, off-roader", "mountain tent", "mouse, computer mouse", "mousetrap", "moving van", "muzzle", "nail", "neck brace", "necklace", "nipple", "notebook, notebook computer", "obelisk", "oboe, hautboy, hautbois", "ocarina, sweet potato", "odometer, hodometer, mileometer, milometer", "oil filter", "organ, pipe organ", "oscilloscope, scope, cathode-ray oscilloscope, cro", "overskirt", "oxcart", "oxygen mask", "packet", "paddle, boat paddle", "paddlewheel, paddle wheel", "padlock", "paintbrush", "pajama, pyjama, pj's, jammies", "palace", "panpipe, pandean pipe, syrinx", "paper towel", "parachute, chute", "parallel bars, bars", "park bench", "parking meter", "passenger car, coach, carriage", "patio, terrace", "pay-phone, pay-station", "pedestal, plinth, footstall", "pencil box, pencil case", "pencil sharpener", "perfume, essence", "petri dish", "photocopier", "pick, plectrum, plectron", "pickelhaube", "picket fence, paling", "pickup, pickup truck", "pier", "piggy bank, penny bank", "pill bottle", "pillow", "ping-pong ball", "pinwheel", "pirate, pirate ship", "pitcher, ewer", "plane, carpenter's plane, woodworking plane", "planetarium", "plastic bag", "plate rack", "plow, plough", "plunger, plumber's helper", "polaroid camera, polaroid land camera", "pole", "police van, police wagon, paddy wagon, patrol wagon, wagon, black maria", "poncho", "pool table, billiard table, snooker table", "pop bottle, soda bottle", "pot, flowerpot", "potter's wheel", "power drill", "prayer rug, prayer mat", "printer", "prison, prison house", "projectile, missile", "projector", "puck, hockey puck", "punching bag, punch bag, punching ball, punchball", "purse", "quill, quill pen", "quilt, comforter, comfort, puff", "racer, race car, racing car", "racket, racquet", "radiator", "radio, wireless", "radio telescope, radio reflector", "rain barrel", "recreational vehicle, rv, r.v.", "reel", "reflex camera", "refrigerator, icebox", "remote control, remote", "restaurant, eating house, eating place, eatery", "revolver, six-gun, six-shooter", "rifle", "rocking chair, rocker", "rotisserie", "rubber eraser, rubber, pencil eraser", "rugby ball", "rule, ruler", "running shoe", "safe", "safety pin", "saltshaker, salt shaker", "sandal", "sarong", "sax, saxophone", "scabbard", "scale, weighing machine", "school bus", "schooner", "scoreboard", "screen, crt screen", "screw", "screwdriver", "seat belt, seatbelt", "sewing machine", "shield, buckler", "shoe shop, shoe-shop, shoe store", "shoji", "shopping basket", "shopping cart", "shovel", "shower cap", "shower curtain", "ski", "ski mask", "sleeping bag", "slide rule, slipstick", "sliding door", "slot, one-armed bandit", "snorkel", "snowmobile", "snowplow, snowplough", "soap dispenser", "soccer ball", "sock", "solar dish, solar collector, solar furnace", "sombrero", "soup bowl", "space bar", "space heater", "space shuttle", "spatula", "speedboat", "spider web, spider's web", "spindle", "sports car, sport car", "spotlight, spot", "stage", "steam locomotive", "steel arch bridge", "steel drum", "stethoscope", "stole", "stone wall", "stopwatch, stop watch", "stove", "strainer", "streetcar, tram, tramcar, trolley, trolley car", "stretcher", "studio couch, day bed", "stupa, tope", "submarine, pigboat, sub, u-boat", "suit, suit of clothes", 
"sundial", "sunglass", "sunglasses, dark glasses, shades", "sunscreen, sunblock, sun blocker", "suspension bridge", "swab, swob, mop", "sweatshirt", "swimming trunks, bathing trunks", "swing", "switch, electric switch, electrical switch", "syringe", "table lamp", "tank, army tank, armored combat vehicle, armoured combat vehicle", "tape player", "teapot", "teddy, teddy bear", "television, television system", "tennis ball", "thatch, thatched roof", "theater curtain, theatre curtain", "thimble", "thresher, thrasher, threshing machine", "throne", "tile roof", "toaster", "tobacco shop, tobacconist shop, tobacconist", "toilet seat", "torch", "totem pole", "tow truck, tow car, wrecker", "toyshop", "tractor", "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "tray", "trench coat", "tricycle, trike, velocipede", "trimaran", "tripod", "triumphal arch", "trolleybus, trolley coach, trackless trolley", "trombone", "tub, vat", "turnstile", "typewriter keyboard", "umbrella", "unicycle, monocycle", "upright, upright piano", "vacuum, vacuum cleaner", "vase", "vault", "velvet", "vending machine", "vestment", "viaduct", "violin, fiddle", "volleyball", "waffle iron", "wall clock", "wallet, billfold, notecase, pocketbook", "wardrobe, closet, press", "warplane, military plane", "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "washer, automatic washer, washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", "wig", "window screen", "window shade", "windsor tie", "wine bottle", "wing", "wok", "wooden spoon", "wool, woolen, woollen", "worm fence, snake fence, snake-rail fence, virginia fence", "wreck", "yawl", "yurt", "web site, website, internet site, site", "comic book", "crossword puzzle, crossword", "street sign", "traffic light, traffic signal, stoplight", "book jacket, dust cover, dust jacket, dust wrapper", "menu", "plate", "guacamole", "consomme", "hot pot, hotpot", "trifle", "ice cream, icecream", "ice lolly, lolly, lollipop, popsicle", "french loaf", "bagel, beigel", "pretzel", "cheeseburger", "hotdog, hot dog, red hot", "mashed potato", "head cabbage", "broccoli", "cauliflower", "zucchini, courgette", "spaghetti squash", "acorn squash", "butternut squash", "cucumber, cuke", "artichoke, globe artichoke", "bell pepper", "cardoon", "mushroom", "granny smith", "strawberry", "orange", "lemon", "fig", "pineapple, ananas", "banana", "jackfruit, jak, jack", "custard apple", "pomegranate", "hay", "carbonara", "chocolate sauce, chocolate syrup", "dough", "meat loaf, meatloaf", "pizza, pizza pie", "potpie", "burrito", "red wine", "espresso", "cup", "eggnog", "alp", "bubble", "cliff, drop, drop-off", "coral reef", "geyser", "lakeside, lakeshore", "promontory, headland, head, foreland", "sandbar, sand bar", "seashore, coast, seacoast, sea-coast", "valley, vale", "volcano", "ballplayer, baseball player", "groom, bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum", "corn", "acorn", "hip, rose hip, rosehip", "buckeye, horse chestnut, conker", "coral fungus", "agaric", "gyromitra", "stinkhorn, carrion fungus", "earthstar", "hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa", "bolete", "ear, spike, capitulum", "toilet tissue, toilet paper, bathroom tissue" ]
Kibalama/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1774 - Accuracy: 0.9526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3899 | 1.0 | 370 | 0.2813 | 0.9310 | | 0.2419 | 2.0 | 740 | 0.2110 | 0.9418 | | 0.1803 | 3.0 | 1110 | 0.1912 | 0.9418 | | 0.1324 | 4.0 | 1480 | 0.1832 | 0.9432 | | 0.1485 | 5.0 | 1850 | 0.1833 | 0.9405 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
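The Oxford-IIIT Pets card above gives the fine-tuning recipe but no usage example. Below is a minimal inference sketch, assuming the checkpoint follows the standard `transformers` image-classification setup and was saved together with its image processor; the file name `my_pet.jpg` is a hypothetical placeholder.

```python
from transformers import pipeline

# Assumes the fine-tuned checkpoint was saved with its image processor,
# so the default ViT preprocessing is picked up automatically.
classifier = pipeline("image-classification", model="Kibalama/vit-base-oxford-iiit-pets")

# "my_pet.jpg" is a hypothetical placeholder; a URL or a PIL.Image also works.
for prediction in classifier("my_pet.jpg", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```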
Melo1512/vit-msn-small-beta-fia-manually-enhanced_test_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-beta-fia-manually-enhanced_test_1 This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7014 - Accuracy: 0.7465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 50 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.5714 | 1 | 0.9554 | 0.3732 | | No log | 1.7143 | 3 | 0.9304 | 0.5211 | | No log | 2.8571 | 5 | 0.8979 | 0.6338 | | No log | 4.0 | 7 | 0.8770 | 0.6479 | | No log | 4.5714 | 8 | 0.8595 | 0.6408 | | 0.877 | 5.7143 | 10 | 0.8456 | 0.5634 | | 0.877 | 6.8571 | 12 | 0.8825 | 0.4859 | | 0.877 | 8.0 | 14 | 0.8294 | 0.5563 | | 0.877 | 8.5714 | 15 | 0.7883 | 0.6197 | | 0.877 | 9.7143 | 17 | 0.7541 | 0.6549 | | 0.877 | 10.8571 | 19 | 0.7689 | 0.6690 | | 0.7053 | 12.0 | 21 | 0.7652 | 0.6620 | | 0.7053 | 12.5714 | 22 | 0.7496 | 0.6690 | | 0.7053 | 13.7143 | 24 | 0.7139 | 0.7183 | | 0.7053 | 14.8571 | 26 | 0.7014 | 0.7465 | | 0.7053 | 16.0 | 28 | 0.7290 | 0.7183 | | 0.7053 | 16.5714 | 29 | 0.7431 | 0.6901 | | 0.6176 | 17.7143 | 31 | 0.7498 | 0.6690 | | 0.6176 | 18.8571 | 33 | 0.7439 | 0.6761 | | 0.6176 | 20.0 | 35 | 0.7347 | 0.6972 | | 0.6176 | 20.5714 | 36 | 0.7377 | 0.6901 | | 0.6176 | 21.7143 | 38 | 0.7227 | 0.6901 | | 0.6053 | 22.8571 | 40 | 0.7228 | 0.7042 | | 0.6053 | 24.0 | 42 | 0.7282 | 0.6901 | | 0.6053 | 24.5714 | 43 | 0.7363 | 0.6761 | | 0.6053 | 25.7143 | 45 | 0.7431 | 0.6831 | | 0.6053 | 26.8571 | 47 | 0.7450 | 0.6831 | | 0.6053 | 28.0 | 49 | 0.7455 | 0.6761 | | 0.5926 | 28.5714 | 50 | 0.7447 | 0.6761 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "extra", "incomplete", "normal" ]
Melo1512/vit-msn-small-beta-fia-manually-enhanced_test_2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-beta-fia-manually-enhanced_test_2 This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5203 - Accuracy: 0.7746 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:--------:|:----:|:---------------:|:--------:| | No log | 0.5714 | 1 | 0.6037 | 0.7465 | | No log | 1.7143 | 3 | 0.6071 | 0.7324 | | No log | 2.8571 | 5 | 0.6120 | 0.7183 | | No log | 4.0 | 7 | 0.6188 | 0.7183 | | No log | 4.5714 | 8 | 0.6206 | 0.7183 | | 0.4866 | 5.7143 | 10 | 0.6272 | 0.6972 | | 0.4866 | 6.8571 | 12 | 0.6355 | 0.6901 | | 0.4866 | 8.0 | 14 | 0.6399 | 0.6901 | | 0.4866 | 8.5714 | 15 | 0.6364 | 0.6831 | | 0.4866 | 9.7143 | 17 | 0.6295 | 0.6831 | | 0.4866 | 10.8571 | 19 | 0.6288 | 0.6901 | | 0.4519 | 12.0 | 21 | 0.6185 | 0.6901 | | 0.4519 | 12.5714 | 22 | 0.6159 | 0.6901 | | 0.4519 | 13.7143 | 24 | 0.6113 | 0.6972 | | 0.4519 | 14.8571 | 26 | 0.5987 | 0.6901 | | 0.4519 | 16.0 | 28 | 0.6017 | 0.6972 | | 0.4519 | 16.5714 | 29 | 0.6067 | 0.6972 | | 0.437 | 17.7143 | 31 | 0.6062 | 0.6620 | | 0.437 | 18.8571 | 33 | 0.5966 | 0.6901 | | 0.437 | 20.0 | 35 | 0.5858 | 0.7113 | | 0.437 | 20.5714 | 36 | 0.5889 | 0.7042 | | 0.437 | 21.7143 | 38 | 0.5768 | 0.7183 | | 0.4353 | 22.8571 | 40 | 0.5752 | 0.7183 | | 0.4353 | 24.0 | 42 | 0.5729 | 0.7183 | | 0.4353 | 24.5714 | 43 | 0.5909 | 0.6972 | | 0.4353 | 25.7143 | 45 | 0.6038 | 0.6761 | | 0.4353 | 26.8571 | 47 | 0.5904 | 0.6901 | | 0.4353 | 28.0 | 49 | 0.5847 | 0.6831 | | 0.4141 | 28.5714 | 50 | 0.5615 | 0.7113 | | 0.4141 | 29.7143 | 52 | 0.5544 | 0.7254 | | 0.4141 | 30.8571 | 54 | 0.5904 | 0.6690 | | 0.4141 | 32.0 | 56 | 0.5948 | 0.6831 | | 0.4141 | 32.5714 | 57 | 0.5800 | 0.6972 | | 0.4141 | 33.7143 | 59 | 0.5902 | 0.6972 | | 0.4066 | 34.8571 | 61 | 0.5950 | 0.6690 | | 0.4066 | 36.0 | 63 | 0.5500 | 0.7324 | | 0.4066 | 36.5714 | 64 | 0.5470 | 0.7324 | | 0.4066 | 37.7143 | 66 | 0.5859 | 0.6901 | | 0.4066 | 38.8571 | 68 | 0.5955 | 0.6831 | | 0.3827 | 40.0 | 70 | 0.5967 | 0.6761 | | 0.3827 | 40.5714 | 71 | 0.5809 | 0.6901 | | 0.3827 | 41.7143 | 73 | 0.5721 | 0.6972 | | 0.3827 | 42.8571 | 75 | 0.6019 | 0.6831 | | 0.3827 | 44.0 | 77 | 0.6071 | 0.6901 | | 0.3827 | 44.5714 | 78 | 0.5962 | 0.6972 | | 0.37 | 45.7143 | 80 | 0.6114 | 0.6831 | | 0.37 | 46.8571 | 82 | 0.5594 | 0.7183 | | 0.37 | 48.0 | 84 | 0.5493 | 0.7324 | | 0.37 | 48.5714 | 85 | 0.5744 | 0.7113 | | 0.37 | 49.7143 | 87 | 0.5443 | 0.7183 | | 0.37 | 50.8571 | 89 | 0.5469 | 0.7324 | | 0.3797 | 52.0 | 91 | 0.6003 | 0.6831 | | 0.3797 | 52.5714 | 92 | 0.6048 | 0.6901 | | 0.3797 | 53.7143 | 94 | 0.5203 | 0.7746 | | 0.3797 | 54.8571 | 96 | 0.5327 | 0.7535 | | 0.3797 | 56.0 | 98 | 0.6414 | 0.6338 | | 
0.3797 | 56.5714 | 99 | 0.6562 | 0.6197 | | 0.3715 | 57.7143 | 101 | 0.5754 | 0.7183 | | 0.3715 | 58.8571 | 103 | 0.5672 | 0.7254 | | 0.3715 | 60.0 | 105 | 0.6060 | 0.6901 | | 0.3715 | 60.5714 | 106 | 0.6536 | 0.6197 | | 0.3715 | 61.7143 | 108 | 0.6177 | 0.6479 | | 0.3483 | 62.8571 | 110 | 0.5385 | 0.7535 | | 0.3483 | 64.0 | 112 | 0.5630 | 0.7394 | | 0.3483 | 64.5714 | 113 | 0.5818 | 0.7254 | | 0.3483 | 65.7143 | 115 | 0.6055 | 0.6972 | | 0.3483 | 66.8571 | 117 | 0.5737 | 0.7324 | | 0.3483 | 68.0 | 119 | 0.5606 | 0.7394 | | 0.3667 | 68.5714 | 120 | 0.5829 | 0.7183 | | 0.3667 | 69.7143 | 122 | 0.5931 | 0.7113 | | 0.3667 | 70.8571 | 124 | 0.5375 | 0.7606 | | 0.3667 | 72.0 | 126 | 0.5797 | 0.7113 | | 0.3667 | 72.5714 | 127 | 0.6182 | 0.6690 | | 0.3667 | 73.7143 | 129 | 0.6497 | 0.6690 | | 0.3357 | 74.8571 | 131 | 0.6432 | 0.6831 | | 0.3357 | 76.0 | 133 | 0.6772 | 0.6620 | | 0.3357 | 76.5714 | 134 | 0.6395 | 0.6479 | | 0.3357 | 77.7143 | 136 | 0.5895 | 0.7042 | | 0.3357 | 78.8571 | 138 | 0.5921 | 0.6972 | | 0.3415 | 80.0 | 140 | 0.5618 | 0.7254 | | 0.3415 | 80.5714 | 141 | 0.5697 | 0.7183 | | 0.3415 | 81.7143 | 143 | 0.6535 | 0.6197 | | 0.3415 | 82.8571 | 145 | 0.6627 | 0.6338 | | 0.3415 | 84.0 | 147 | 0.6194 | 0.6761 | | 0.3415 | 84.5714 | 148 | 0.6301 | 0.6901 | | 0.3296 | 85.7143 | 150 | 0.6436 | 0.6690 | | 0.3296 | 86.8571 | 152 | 0.6348 | 0.6831 | | 0.3296 | 88.0 | 154 | 0.6704 | 0.6479 | | 0.3296 | 88.5714 | 155 | 0.7190 | 0.6338 | | 0.3296 | 89.7143 | 157 | 0.7064 | 0.6338 | | 0.3296 | 90.8571 | 159 | 0.6291 | 0.6549 | | 0.3296 | 92.0 | 161 | 0.6933 | 0.6197 | | 0.3296 | 92.5714 | 162 | 0.7115 | 0.6197 | | 0.3296 | 93.7143 | 164 | 0.6229 | 0.6690 | | 0.3296 | 94.8571 | 166 | 0.5727 | 0.7183 | | 0.3296 | 96.0 | 168 | 0.5965 | 0.6901 | | 0.3296 | 96.5714 | 169 | 0.6433 | 0.6690 | | 0.3174 | 97.7143 | 171 | 0.6634 | 0.6408 | | 0.3174 | 98.8571 | 173 | 0.6166 | 0.6549 | | 0.3174 | 100.0 | 175 | 0.5896 | 0.6972 | | 0.3174 | 100.5714 | 176 | 0.6092 | 0.6549 | | 0.3174 | 101.7143 | 178 | 0.6022 | 0.6549 | | 0.3309 | 102.8571 | 180 | 0.5928 | 0.6761 | | 0.3309 | 104.0 | 182 | 0.6327 | 0.6408 | | 0.3309 | 104.5714 | 183 | 0.6490 | 0.6338 | | 0.3309 | 105.7143 | 185 | 0.6155 | 0.6479 | | 0.3309 | 106.8571 | 187 | 0.6225 | 0.6620 | | 0.3309 | 108.0 | 189 | 0.6732 | 0.6408 | | 0.3124 | 108.5714 | 190 | 0.6808 | 0.6408 | | 0.3124 | 109.7143 | 192 | 0.6585 | 0.6479 | | 0.3124 | 110.8571 | 194 | 0.6122 | 0.6761 | | 0.3124 | 112.0 | 196 | 0.6510 | 0.6549 | | 0.3124 | 112.5714 | 197 | 0.7099 | 0.6408 | | 0.3124 | 113.7143 | 199 | 0.7192 | 0.6338 | | 0.3158 | 114.8571 | 201 | 0.6186 | 0.6901 | | 0.3158 | 116.0 | 203 | 0.6071 | 0.7042 | | 0.3158 | 116.5714 | 204 | 0.6419 | 0.6831 | | 0.3158 | 117.7143 | 206 | 0.6679 | 0.6549 | | 0.3158 | 118.8571 | 208 | 0.6825 | 0.6268 | | 0.3026 | 120.0 | 210 | 0.6091 | 0.6972 | | 0.3026 | 120.5714 | 211 | 0.5861 | 0.7394 | | 0.3026 | 121.7143 | 213 | 0.6037 | 0.7113 | | 0.3026 | 122.8571 | 215 | 0.6315 | 0.6761 | | 0.3026 | 124.0 | 217 | 0.6328 | 0.6690 | | 0.3026 | 124.5714 | 218 | 0.6187 | 0.6831 | | 0.2968 | 125.7143 | 220 | 0.5843 | 0.7394 | | 0.2968 | 126.8571 | 222 | 0.6126 | 0.7042 | | 0.2968 | 128.0 | 224 | 0.6785 | 0.6549 | | 0.2968 | 128.5714 | 225 | 0.6706 | 0.6479 | | 0.2968 | 129.7143 | 227 | 0.6070 | 0.7113 | | 0.2968 | 130.8571 | 229 | 0.5984 | 0.7254 | | 0.294 | 132.0 | 231 | 0.6533 | 0.6620 | | 0.294 | 132.5714 | 232 | 0.6802 | 0.6408 | | 0.294 | 133.7143 | 234 | 0.6804 | 0.6408 | | 0.294 | 134.8571 | 236 | 0.6228 | 0.7042 | | 0.294 | 136.0 | 238 | 
0.5849 | 0.7676 | | 0.294 | 136.5714 | 239 | 0.5874 | 0.7676 | | 0.3009 | 137.7143 | 241 | 0.6230 | 0.7042 | | 0.3009 | 138.8571 | 243 | 0.6641 | 0.6549 | | 0.3009 | 140.0 | 245 | 0.6435 | 0.6972 | | 0.3009 | 140.5714 | 246 | 0.6134 | 0.7254 | | 0.3009 | 141.7143 | 248 | 0.6063 | 0.7394 | | 0.2873 | 142.8571 | 250 | 0.6347 | 0.6972 | | 0.2873 | 144.0 | 252 | 0.6992 | 0.6690 | | 0.2873 | 144.5714 | 253 | 0.7137 | 0.6408 | | 0.2873 | 145.7143 | 255 | 0.6738 | 0.6690 | | 0.2873 | 146.8571 | 257 | 0.6321 | 0.7113 | | 0.2873 | 148.0 | 259 | 0.6135 | 0.7183 | | 0.2821 | 148.5714 | 260 | 0.6195 | 0.7113 | | 0.2821 | 149.7143 | 262 | 0.6544 | 0.6761 | | 0.2821 | 150.8571 | 264 | 0.6464 | 0.6831 | | 0.2821 | 152.0 | 266 | 0.6087 | 0.7324 | | 0.2821 | 152.5714 | 267 | 0.6000 | 0.7394 | | 0.2821 | 153.7143 | 269 | 0.6170 | 0.7113 | | 0.3017 | 154.8571 | 271 | 0.6674 | 0.6831 | | 0.3017 | 156.0 | 273 | 0.7137 | 0.6338 | | 0.3017 | 156.5714 | 274 | 0.7014 | 0.6479 | | 0.3017 | 157.7143 | 276 | 0.6091 | 0.7254 | | 0.3017 | 158.8571 | 278 | 0.5626 | 0.7676 | | 0.2857 | 160.0 | 280 | 0.5685 | 0.7606 | | 0.2857 | 160.5714 | 281 | 0.5941 | 0.7113 | | 0.2857 | 161.7143 | 283 | 0.6219 | 0.7113 | | 0.2857 | 162.8571 | 285 | 0.6283 | 0.7113 | | 0.2857 | 164.0 | 287 | 0.6314 | 0.7042 | | 0.2857 | 164.5714 | 288 | 0.6369 | 0.6972 | | 0.2819 | 165.7143 | 290 | 0.6446 | 0.6972 | | 0.2819 | 166.8571 | 292 | 0.6541 | 0.6901 | | 0.2819 | 168.0 | 294 | 0.6286 | 0.7183 | | 0.2819 | 168.5714 | 295 | 0.6064 | 0.7183 | | 0.2819 | 169.7143 | 297 | 0.5995 | 0.7254 | | 0.2819 | 170.8571 | 299 | 0.6431 | 0.7254 | | 0.2744 | 172.0 | 301 | 0.6797 | 0.6901 | | 0.2744 | 172.5714 | 302 | 0.6716 | 0.6972 | | 0.2744 | 173.7143 | 304 | 0.6510 | 0.7254 | | 0.2744 | 174.8571 | 306 | 0.6362 | 0.7465 | | 0.2744 | 176.0 | 308 | 0.6158 | 0.7606 | | 0.2744 | 176.5714 | 309 | 0.6099 | 0.7676 | | 0.2867 | 177.7143 | 311 | 0.6112 | 0.7535 | | 0.2867 | 178.8571 | 313 | 0.6035 | 0.7465 | | 0.2867 | 180.0 | 315 | 0.5816 | 0.7676 | | 0.2867 | 180.5714 | 316 | 0.5818 | 0.7676 | | 0.2867 | 181.7143 | 318 | 0.6078 | 0.7676 | | 0.2883 | 182.8571 | 320 | 0.6083 | 0.7535 | | 0.2883 | 184.0 | 322 | 0.5928 | 0.7465 | | 0.2883 | 184.5714 | 323 | 0.5862 | 0.7535 | | 0.2883 | 185.7143 | 325 | 0.5625 | 0.7676 | | 0.2883 | 186.8571 | 327 | 0.5580 | 0.7817 | | 0.2883 | 188.0 | 329 | 0.5945 | 0.7535 | | 0.2852 | 188.5714 | 330 | 0.6321 | 0.6972 | | 0.2852 | 189.7143 | 332 | 0.6650 | 0.6620 | | 0.2852 | 190.8571 | 334 | 0.6612 | 0.6690 | | 0.2852 | 192.0 | 336 | 0.6455 | 0.6761 | | 0.2852 | 192.5714 | 337 | 0.6290 | 0.7113 | | 0.2852 | 193.7143 | 339 | 0.6036 | 0.7394 | | 0.2941 | 194.8571 | 341 | 0.5879 | 0.7535 | | 0.2941 | 196.0 | 343 | 0.6135 | 0.7254 | | 0.2941 | 196.5714 | 344 | 0.6295 | 0.7113 | | 0.2941 | 197.7143 | 346 | 0.6445 | 0.6831 | | 0.2941 | 198.8571 | 348 | 0.6591 | 0.6690 | | 0.2692 | 200.0 | 350 | 0.6557 | 0.6831 | | 0.2692 | 200.5714 | 351 | 0.6485 | 0.7113 | | 0.2692 | 201.7143 | 353 | 0.6520 | 0.7183 | | 0.2692 | 202.8571 | 355 | 0.6673 | 0.7113 | | 0.2692 | 204.0 | 357 | 0.6814 | 0.7183 | | 0.2692 | 204.5714 | 358 | 0.6694 | 0.7113 | | 0.2666 | 205.7143 | 360 | 0.6350 | 0.7254 | | 0.2666 | 206.8571 | 362 | 0.6091 | 0.7465 | | 0.2666 | 208.0 | 364 | 0.6222 | 0.7394 | | 0.2666 | 208.5714 | 365 | 0.6363 | 0.7394 | | 0.2666 | 209.7143 | 367 | 0.6398 | 0.7394 | | 0.2666 | 210.8571 | 369 | 0.6555 | 0.7254 | | 0.2745 | 212.0 | 371 | 0.6555 | 0.7254 | | 0.2745 | 212.5714 | 372 | 0.6467 | 0.7394 | | 0.2745 | 213.7143 | 374 | 0.6216 | 0.7606 | | 
0.2745 | 214.8571 | 376 | 0.6066 | 0.7676 | | 0.2745 | 216.0 | 378 | 0.6083 | 0.7606 | | 0.2745 | 216.5714 | 379 | 0.6152 | 0.7535 | | 0.2578 | 217.7143 | 381 | 0.6162 | 0.7535 | | 0.2578 | 218.8571 | 383 | 0.6097 | 0.7535 | | 0.2578 | 220.0 | 385 | 0.6003 | 0.7465 | | 0.2578 | 220.5714 | 386 | 0.6064 | 0.7535 | | 0.2578 | 221.7143 | 388 | 0.6182 | 0.7535 | | 0.2637 | 222.8571 | 390 | 0.6465 | 0.7465 | | 0.2637 | 224.0 | 392 | 0.6461 | 0.7535 | | 0.2637 | 224.5714 | 393 | 0.6352 | 0.7535 | | 0.2637 | 225.7143 | 395 | 0.6018 | 0.7606 | | 0.2637 | 226.8571 | 397 | 0.5855 | 0.7746 | | 0.2637 | 228.0 | 399 | 0.5916 | 0.7606 | | 0.2696 | 228.5714 | 400 | 0.6031 | 0.7606 | | 0.2696 | 229.7143 | 402 | 0.6308 | 0.7606 | | 0.2696 | 230.8571 | 404 | 0.6435 | 0.7465 | | 0.2696 | 232.0 | 406 | 0.6325 | 0.7465 | | 0.2696 | 232.5714 | 407 | 0.6212 | 0.7535 | | 0.2696 | 233.7143 | 409 | 0.5986 | 0.7535 | | 0.2697 | 234.8571 | 411 | 0.5964 | 0.7465 | | 0.2697 | 236.0 | 413 | 0.5950 | 0.7465 | | 0.2697 | 236.5714 | 414 | 0.5986 | 0.7465 | | 0.2697 | 237.7143 | 416 | 0.6066 | 0.7535 | | 0.2697 | 238.8571 | 418 | 0.6035 | 0.7535 | | 0.2659 | 240.0 | 420 | 0.6039 | 0.7535 | | 0.2659 | 240.5714 | 421 | 0.6004 | 0.7535 | | 0.2659 | 241.7143 | 423 | 0.6001 | 0.7535 | | 0.2659 | 242.8571 | 425 | 0.5941 | 0.7465 | | 0.2659 | 244.0 | 427 | 0.5942 | 0.7394 | | 0.2659 | 244.5714 | 428 | 0.5972 | 0.7465 | | 0.2529 | 245.7143 | 430 | 0.6077 | 0.7535 | | 0.2529 | 246.8571 | 432 | 0.6173 | 0.7465 | | 0.2529 | 248.0 | 434 | 0.6129 | 0.7606 | | 0.2529 | 248.5714 | 435 | 0.6099 | 0.7606 | | 0.2529 | 249.7143 | 437 | 0.6005 | 0.7606 | | 0.2529 | 250.8571 | 439 | 0.5920 | 0.7606 | | 0.261 | 252.0 | 441 | 0.5946 | 0.7606 | | 0.261 | 252.5714 | 442 | 0.5992 | 0.7606 | | 0.261 | 253.7143 | 444 | 0.6142 | 0.7606 | | 0.261 | 254.8571 | 446 | 0.6289 | 0.7465 | | 0.261 | 256.0 | 448 | 0.6316 | 0.7465 | | 0.261 | 256.5714 | 449 | 0.6302 | 0.7535 | | 0.2675 | 257.7143 | 451 | 0.6241 | 0.7535 | | 0.2675 | 258.8571 | 453 | 0.6129 | 0.7535 | | 0.2675 | 260.0 | 455 | 0.6066 | 0.7465 | | 0.2675 | 260.5714 | 456 | 0.6061 | 0.7465 | | 0.2675 | 261.7143 | 458 | 0.6098 | 0.7535 | | 0.2737 | 262.8571 | 460 | 0.6172 | 0.7394 | | 0.2737 | 264.0 | 462 | 0.6274 | 0.7324 | | 0.2737 | 264.5714 | 463 | 0.6298 | 0.7324 | | 0.2737 | 265.7143 | 465 | 0.6296 | 0.7324 | | 0.2737 | 266.8571 | 467 | 0.6285 | 0.7324 | | 0.2737 | 268.0 | 469 | 0.6265 | 0.7324 | | 0.2504 | 268.5714 | 470 | 0.6274 | 0.7465 | | 0.2504 | 269.7143 | 472 | 0.6286 | 0.7394 | | 0.2504 | 270.8571 | 474 | 0.6236 | 0.7465 | | 0.2504 | 272.0 | 476 | 0.6178 | 0.7465 | | 0.2504 | 272.5714 | 477 | 0.6164 | 0.7465 | | 0.2504 | 273.7143 | 479 | 0.6161 | 0.7465 | | 0.2539 | 274.8571 | 481 | 0.6193 | 0.7465 | | 0.2539 | 276.0 | 483 | 0.6236 | 0.7394 | | 0.2539 | 276.5714 | 484 | 0.6258 | 0.7394 | | 0.2539 | 277.7143 | 486 | 0.6308 | 0.7394 | | 0.2539 | 278.8571 | 488 | 0.6349 | 0.7394 | | 0.2508 | 280.0 | 490 | 0.6352 | 0.7394 | | 0.2508 | 280.5714 | 491 | 0.6346 | 0.7394 | | 0.2508 | 281.7143 | 493 | 0.6336 | 0.7394 | | 0.2508 | 282.8571 | 495 | 0.6331 | 0.7394 | | 0.2508 | 284.0 | 497 | 0.6324 | 0.7394 | | 0.2508 | 284.5714 | 498 | 0.6319 | 0.7394 | | 0.2393 | 285.7143 | 500 | 0.6316 | 0.7394 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "extra", "incomplete", "normal" ]
sagar27kumar/sagarsahu_ECG-XRAY-ViT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagarsahu_ECG-XRAY-ViT This model is a fine-tuned version of [google/vit-large-patch32-384](https://huggingface.co/google/vit-large-patch32-384) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2773 | 1.0 | 116 | 0.1050 | | 0.0739 | 2.0 | 232 | 0.0919 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "abnormal heartbeat", "history of mi", "myocardial infarction", "normal person" ]
dromero86/vit-model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1297 | 3.8462 | 500 | 0.0239 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
dima806/ai_vs_human_generated_image_detection
Predicts with about 98% accuracy whether an input image is AI-generated or human-made. See https://www.kaggle.com/code/dima806/ai-vs-human-generated-images-prediction-vit for details.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/vaVofgP3NO3ybwG0GSbC3.png)

```
Classification report:
              precision    recall  f1-score   support

       human     0.9655    0.9930    0.9790      3998
AI-generated     0.9928    0.9645    0.9784      3997

    accuracy                         0.9787      7995
   macro avg     0.9791    0.9787    0.9787      7995
weighted avg     0.9791    0.9787    0.9787      7995
```
[ "human", "ai-generated" ]
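The classification report in the card above can in principle be recomputed on any labelled evaluation split. The sketch below is one way to do that, assuming a standard `transformers` image-classification pipeline and `scikit-learn`; the `eval_samples` argument (pairs of image and true label) is a hypothetical stand-in for the actual test set, which is not distributed with the card.

```python
from transformers import pipeline
from sklearn.metrics import classification_report

def evaluate(eval_samples, model_id="dima806/ai_vs_human_generated_image_detection"):
    """eval_samples: iterable of (image, true_label) pairs with labels in
    {"human", "ai-generated"} -- a hypothetical stand-in for the test split."""
    classifier = pipeline("image-classification", model=model_id)
    y_true, y_pred = [], []
    for image, label in eval_samples:
        y_true.append(label)
        y_pred.append(classifier(image, top_k=1)[0]["label"])  # top-1 class name
    return classification_report(y_true, y_pred, digits=4)
```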
crocutacrocuto/dinov2-base-MEG4-6
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-base-MEG4-6 This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4166 - Accuracy: 0.8107 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | 0.1517 | 1.0000 | 11596 | 1.0708 | 0.7685 | | 0.1411 | 2.0000 | 23192 | 0.9791 | 0.7985 | | 0.0951 | 2.9999 | 34788 | 1.2306 | 0.8006 | | 0.0602 | 4.0 | 46385 | 1.1623 | 0.7970 | | 0.0314 | 5.0000 | 57981 | 1.4054 | 0.8065 | | 0.0233 | 5.9999 | 69576 | 1.4166 | 0.8107 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
[ "aardvark", "badger", "bird", "black-and-white colobus", "blue duiker", "blue monkey", "buffalo", "bushbuck", "bushpig", "cattle", "chimpanzee", "civet_genet", "dog", "elephant", "galago_potto", "goat", "golden cat", "gorilla", "guineafowl", "leopard", "lhoests monkey", "mandrill", "mongoose", "monkey", "olive baboon", "pangolin", "porcupine", "red colobus_red-capped mangabey", "red duiker", "rodent", "serval", "spotted hyena", "squirrel", "water chevrotain", "yellow-backed duiker" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV31
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV31 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0993 - Accuracy: 0.5902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 5 | 1.5163 | 0.4590 | | No log | 2.0 | 10 | 1.3238 | 0.4590 | | 5.237 | 3.0 | 15 | 1.2645 | 0.5082 | | 5.237 | 4.0 | 20 | 1.1982 | 0.5082 | | 3.9571 | 5.0 | 25 | 1.1256 | 0.5246 | | 3.9571 | 6.0 | 30 | 1.2419 | 0.5902 | | 3.9571 | 7.0 | 35 | 1.1806 | 0.4918 | | 2.5954 | 8.0 | 40 | 1.3698 | 0.5738 | | 2.5954 | 9.0 | 45 | 1.3558 | 0.5574 | | 1.7082 | 10.0 | 50 | 1.5148 | 0.5902 | | 1.7082 | 11.0 | 55 | 1.6398 | 0.5574 | | 0.7716 | 12.0 | 60 | 1.6954 | 0.6066 | | 0.7716 | 13.0 | 65 | 1.8204 | 0.6066 | | 0.7716 | 14.0 | 70 | 1.8661 | 0.6066 | | 0.3952 | 15.0 | 75 | 1.9419 | 0.6066 | | 0.3952 | 16.0 | 80 | 2.1287 | 0.5902 | | 0.144 | 17.0 | 85 | 2.4729 | 0.5738 | | 0.144 | 18.0 | 90 | 2.8243 | 0.5410 | | 0.144 | 19.0 | 95 | 2.6554 | 0.5902 | | 0.0821 | 20.0 | 100 | 2.5922 | 0.5574 | | 0.0821 | 21.0 | 105 | 2.9639 | 0.4918 | | 0.0948 | 22.0 | 110 | 2.8189 | 0.5738 | | 0.0948 | 23.0 | 115 | 2.8026 | 0.5410 | | 0.0809 | 24.0 | 120 | 2.6857 | 0.6066 | | 0.0809 | 25.0 | 125 | 2.8069 | 0.5574 | | 0.0809 | 26.0 | 130 | 2.7649 | 0.5902 | | 0.0309 | 27.0 | 135 | 2.8005 | 0.5902 | | 0.0309 | 28.0 | 140 | 2.8427 | 0.6066 | | 0.0269 | 29.0 | 145 | 2.9175 | 0.6066 | | 0.0269 | 30.0 | 150 | 3.0034 | 0.5738 | | 0.0269 | 31.0 | 155 | 3.0874 | 0.5902 | | 0.0279 | 32.0 | 160 | 3.0993 | 0.5902 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "0", "1", "2", "3", "4" ]
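The hyperparameter lists in these cards map almost one-to-one onto `transformers.TrainingArguments`. The sketch below reconstructs the DAV31 run above under that assumption; the `output_dir` value and the single-device setting (which makes 16 × 4 gradient-accumulation steps equal the stated total batch size of 64) are assumptions, not statements from the card.

```python
from transformers import TrainingArguments

# One plausible reconstruction of the listed hyperparameters; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-dmae-humeda-DAV31",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 x 4 = effective batch size of 64 on one device
    num_train_epochs=40,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    optim="adamw_torch",             # AdamW with default betas=(0.9, 0.999), eps=1e-8
)
```

Passing these arguments to a `Trainer` together with the Swin V2 checkpoint and the (unnamed) dataset would complete the setup.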
jackzhouusa/my-food-model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-food-model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2726 - Accuracy: 0.941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4463 | 1.0 | 125 | 0.4452 | 0.928 | | 0.2166 | 2.0 | 250 | 0.2987 | 0.933 | | 0.1348 | 3.0 | 375 | 0.2726 | 0.941 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "beignets", "bruschetta", "chicken_wings", "hamburger", "pork_chop", "prime_rib", "ramen" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV34
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV34 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3276 - Accuracy: 0.6333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8889 | 6 | 1.4254 | 0.45 | | 5.9772 | 1.8889 | 12 | 1.3083 | 0.45 | | 5.9772 | 2.8889 | 18 | 1.3032 | 0.4667 | | 5.1141 | 3.8889 | 24 | 1.1521 | 0.5667 | | 5.1141 | 4.8889 | 30 | 1.2171 | 0.4667 | | 3.8959 | 5.8889 | 36 | 1.1281 | 0.4833 | | 3.8959 | 6.8889 | 42 | 0.9969 | 0.6167 | | 2.9663 | 7.8889 | 48 | 0.9814 | 0.65 | | 2.9663 | 8.8889 | 54 | 0.8797 | 0.6333 | | 2.416 | 9.8889 | 60 | 0.8956 | 0.6667 | | 2.416 | 10.8889 | 66 | 0.8364 | 0.7 | | 1.9336 | 11.8889 | 72 | 0.9800 | 0.65 | | 1.9336 | 12.8889 | 78 | 0.8707 | 0.6833 | | 1.5631 | 13.8889 | 84 | 0.9331 | 0.6333 | | 1.5631 | 14.8889 | 90 | 1.0113 | 0.6333 | | 1.1295 | 15.8889 | 96 | 1.0988 | 0.6167 | | 1.1295 | 16.8889 | 102 | 1.0197 | 0.6833 | | 1.1454 | 17.8889 | 108 | 1.2170 | 0.6167 | | 1.1454 | 18.8889 | 114 | 1.1182 | 0.6667 | | 0.8852 | 19.8889 | 120 | 1.1026 | 0.6833 | | 0.8852 | 20.8889 | 126 | 1.0868 | 0.6333 | | 0.7881 | 21.8889 | 132 | 1.1674 | 0.6333 | | 0.7881 | 22.8889 | 138 | 1.1763 | 0.6667 | | 0.7913 | 23.8889 | 144 | 1.3433 | 0.6333 | | 0.7913 | 24.8889 | 150 | 1.1228 | 0.7 | | 0.667 | 25.8889 | 156 | 1.2828 | 0.6667 | | 0.667 | 26.8889 | 162 | 1.2373 | 0.6833 | | 0.6299 | 27.8889 | 168 | 1.2951 | 0.6667 | | 0.6299 | 28.8889 | 174 | 1.3410 | 0.6333 | | 0.5409 | 29.8889 | 180 | 1.1852 | 0.7 | | 0.5409 | 30.8889 | 186 | 1.4286 | 0.6167 | | 0.6085 | 31.8889 | 192 | 1.2376 | 0.65 | | 0.6085 | 32.8889 | 198 | 1.2249 | 0.6667 | | 0.562 | 33.8889 | 204 | 1.3640 | 0.6333 | | 0.562 | 34.8889 | 210 | 1.4234 | 0.6333 | | 0.4543 | 35.8889 | 216 | 1.3489 | 0.6333 | | 0.4543 | 36.8889 | 222 | 1.3273 | 0.6333 | | 0.4708 | 37.8889 | 228 | 1.3215 | 0.6333 | | 0.4708 | 38.8889 | 234 | 1.3277 | 0.6333 | | 0.5217 | 39.8889 | 240 | 1.3276 | 0.6333 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
NEXTAltair/cache_aestheic-shadow-v2
# Aesthetic Shadow V2 Mirror [English](#english) | [日本語](#japanese) ## English This is a mirror of shadowlilac/aesthetic-shadow-v2. ### Model Description - Visual Transformer with 1.09B parameters - Evaluates 1024x1024 high-resolution images - Specialized in anime image quality assessment ### Score Interpretation - very aesthetic: ≥0.71 - aesthetic: 0.45-0.71 - displeasing: 0.27-0.45 - very displeasing: ≤0.27 ### Notice This model is redistributed under the CC-BY-NC-4.0 license with acknowledgment to the original author. ### Credit Original model: shadowlilac/aesthetic-shadow-v2 --- ## Japanese このモデルはshadowlilac/aesthetic-shadow-v2のミラーです。 ### モデルの説明 - 11億パラメータのビジュアルトランスフォーマー - 1024x1024の高解像度画像を評価 - アニメ画像の品質評価に特化 ### スコアの解釈 - very aesthetic: 0.71以上 - aesthetic: 0.45~0.71 - displeasing: 0.27~0.45 - very displeasing: 0.27以下 ### 注意事項 このモデルは元の作者の許可の下、CC-BY-NC-4.0ライセンスに従って再配布されています。 ### クレジット Original model: shadowlilac/aesthetic-shadow-v2
[ "hq", "lq" ]
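The score bands listed in the aesthetic-shadow-v2 card can be applied mechanically once a score is available. The sketch below assumes, based on the label list above, that the score is the probability the classifier assigns to the `hq` class via a standard `transformers` image-classification pipeline; that mapping is an assumption rather than something the card states, and `artwork.png` is a hypothetical input.

```python
from transformers import pipeline

# Assumption: the aesthetic score is the probability assigned to the "hq" label.
classifier = pipeline("image-classification", model="NEXTAltair/cache_aestheic-shadow-v2")

def aesthetic_score(image) -> float:
    predictions = classifier(image, top_k=2)  # scores for both labels, "hq" and "lq"
    return next(p["score"] for p in predictions if p["label"] == "hq")

def aesthetic_bucket(score: float) -> str:
    # Thresholds copied from the score-interpretation table above.
    if score >= 0.71:
        return "very aesthetic"
    if score >= 0.45:
        return "aesthetic"
    if score >= 0.27:
        return "displeasing"
    return "very displeasing"

# Example: aesthetic_bucket(aesthetic_score("artwork.png"))
```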
noani/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1680 - Accuracy: 0.9472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3975 | 1.0 | 370 | 0.2996 | 0.9323 | | 0.2088 | 2.0 | 740 | 0.2288 | 0.9459 | | 0.1779 | 3.0 | 1110 | 0.2057 | 0.9486 | | 0.1578 | 4.0 | 1480 | 0.1959 | 0.9526 | | 0.1324 | 5.0 | 1850 | 0.1929 | 0.9540 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
YYAE/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6903 - Accuracy: 0.895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 11.2832 | 1.0 | 63 | 2.6361 | 0.79 | | 7.6698 | 2.0 | 126 | 1.8547 | 0.877 | | 6.649 | 2.96 | 186 | 1.6903 | 0.895 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
corranm/model2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.9655 | 7 | 1.9020 | 0.1970 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-beta-fia-manually-enhanced-HSV_test_3 This model is a fine-tuned version of [Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_2](https://huggingface.co/Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_2) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5013 - Accuracy: 0.8803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - num_epochs: 50 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.5714 | 1 | 0.5123 | 0.8873 | | No log | 1.7143 | 3 | 0.5219 | 0.8873 | | No log | 2.8571 | 5 | 0.5431 | 0.8732 | | No log | 4.0 | 7 | 0.5444 | 0.8732 | | No log | 4.5714 | 8 | 0.5336 | 0.8803 | | 0.4252 | 5.7143 | 10 | 0.5235 | 0.8873 | | 0.4252 | 6.8571 | 12 | 0.5269 | 0.8803 | | 0.4252 | 8.0 | 14 | 0.5106 | 0.8873 | | 0.4252 | 8.5714 | 15 | 0.5048 | 0.8873 | | 0.4252 | 9.7143 | 17 | 0.5013 | 0.8803 | | 0.4252 | 10.8571 | 19 | 0.5105 | 0.8803 | | 0.4413 | 12.0 | 21 | 0.5256 | 0.8803 | | 0.4413 | 12.5714 | 22 | 0.5303 | 0.8732 | | 0.4413 | 13.7143 | 24 | 0.5218 | 0.8662 | | 0.4413 | 14.8571 | 26 | 0.5188 | 0.8592 | | 0.4413 | 16.0 | 28 | 0.5202 | 0.8592 | | 0.4413 | 16.5714 | 29 | 0.5252 | 0.8592 | | 0.437 | 17.7143 | 31 | 0.5385 | 0.8592 | | 0.437 | 18.8571 | 33 | 0.5456 | 0.8592 | | 0.437 | 20.0 | 35 | 0.5409 | 0.8732 | | 0.437 | 20.5714 | 36 | 0.5375 | 0.8662 | | 0.437 | 21.7143 | 38 | 0.5356 | 0.8662 | | 0.4343 | 22.8571 | 40 | 0.5328 | 0.8803 | | 0.4343 | 24.0 | 42 | 0.5318 | 0.8803 | | 0.4343 | 24.5714 | 43 | 0.5330 | 0.8803 | | 0.4343 | 25.7143 | 45 | 0.5334 | 0.8803 | | 0.4343 | 26.8571 | 47 | 0.5332 | 0.8732 | | 0.4343 | 28.0 | 49 | 0.5341 | 0.8732 | | 0.4271 | 28.5714 | 50 | 0.5343 | 0.8732 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "extra", "incomplete", "normal" ]
Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-beta-fia-manually-enhanced-HSV_test_4 This model is a fine-tuned version of [Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_3](https://huggingface.co/Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_3) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3252 - Accuracy: 0.8662 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.5714 | 1 | 0.3425 | 0.8803 | | No log | 1.7143 | 3 | 0.3532 | 0.8803 | | No log | 2.8571 | 5 | 0.3731 | 0.8732 | | No log | 4.0 | 7 | 0.3582 | 0.8662 | | No log | 4.5714 | 8 | 0.3560 | 0.8732 | | 0.2214 | 5.7143 | 10 | 0.4090 | 0.8451 | | 0.2214 | 6.8571 | 12 | 0.4253 | 0.8239 | | 0.2214 | 8.0 | 14 | 0.3826 | 0.8592 | | 0.2214 | 8.5714 | 15 | 0.3748 | 0.8592 | | 0.2214 | 9.7143 | 17 | 0.3411 | 0.8592 | | 0.2214 | 10.8571 | 19 | 0.3402 | 0.8521 | | 0.1927 | 12.0 | 21 | 0.3833 | 0.8592 | | 0.1927 | 12.5714 | 22 | 0.3901 | 0.8592 | | 0.1927 | 13.7143 | 24 | 0.3608 | 0.8451 | | 0.1927 | 14.8571 | 26 | 0.3565 | 0.8662 | | 0.1927 | 16.0 | 28 | 0.3677 | 0.8803 | | 0.1927 | 16.5714 | 29 | 0.3672 | 0.8732 | | 0.213 | 17.7143 | 31 | 0.3429 | 0.8592 | | 0.213 | 18.8571 | 33 | 0.3402 | 0.8803 | | 0.213 | 20.0 | 35 | 0.3508 | 0.8662 | | 0.213 | 20.5714 | 36 | 0.3578 | 0.8662 | | 0.213 | 21.7143 | 38 | 0.3310 | 0.8662 | | 0.1927 | 22.8571 | 40 | 0.3252 | 0.8662 | | 0.1927 | 24.0 | 42 | 0.3473 | 0.8592 | | 0.1927 | 24.5714 | 43 | 0.3671 | 0.8592 | | 0.1927 | 25.7143 | 45 | 0.3863 | 0.8592 | | 0.1927 | 26.8571 | 47 | 0.3622 | 0.8592 | | 0.1927 | 28.0 | 49 | 0.3521 | 0.8592 | | 0.1856 | 28.5714 | 50 | 0.3529 | 0.8592 | | 0.1856 | 29.7143 | 52 | 0.3596 | 0.8592 | | 0.1856 | 30.8571 | 54 | 0.3648 | 0.8592 | | 0.1856 | 32.0 | 56 | 0.3637 | 0.8662 | | 0.1856 | 32.5714 | 57 | 0.3686 | 0.8592 | | 0.1856 | 33.7143 | 59 | 0.3602 | 0.8521 | | 0.18 | 34.8571 | 61 | 0.3648 | 0.8662 | | 0.18 | 36.0 | 63 | 0.3529 | 0.8521 | | 0.18 | 36.5714 | 64 | 0.3595 | 0.8662 | | 0.18 | 37.7143 | 66 | 0.3939 | 0.8521 | | 0.18 | 38.8571 | 68 | 0.4526 | 0.8239 | | 0.2048 | 40.0 | 70 | 0.4505 | 0.8239 | | 0.2048 | 40.5714 | 71 | 0.4319 | 0.8451 | | 0.2048 | 41.7143 | 73 | 0.3779 | 0.8521 | | 0.2048 | 42.8571 | 75 | 0.3352 | 0.8592 | | 0.2048 | 44.0 | 77 | 0.3353 | 0.8662 | | 0.2048 | 44.5714 | 78 | 0.3450 | 0.8521 | | 0.1977 | 45.7143 | 80 | 0.3557 | 0.8521 | | 0.1977 | 46.8571 | 82 | 0.3715 | 0.8521 | | 0.1977 | 48.0 | 84 | 0.3852 | 0.8521 | | 0.1977 | 48.5714 | 85 | 0.3962 | 0.8521 | | 0.1977 | 49.7143 | 87 | 0.4063 | 0.8521 | | 0.1977 | 50.8571 | 89 | 0.4012 | 0.8521 | | 0.1952 | 52.0 | 91 | 0.3912 | 0.8521 | | 0.1952 | 52.5714 | 92 | 0.3874 | 0.8521 | | 0.1952 | 53.7143 | 94 | 0.3822 | 0.8521 | | 0.1952 | 
54.8571 | 96 | 0.3758 | 0.8592 | | 0.1952 | 56.0 | 98 | 0.3722 | 0.8592 | | 0.1952 | 56.5714 | 99 | 0.3720 | 0.8592 | | 0.1714 | 57.1429 | 100 | 0.3717 | 0.8592 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "extra", "incomplete", "normal" ]
Melo1512/vit-msn-small-beta-fia-manually-enhanced-HSV_test_5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-beta-fia-manually-enhanced-HSV_test_5 This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3267 - Accuracy: 0.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 320 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.7143 | 1 | 1.1106 | 0.2292 | | No log | 1.4286 | 2 | 1.0984 | 0.2569 | | No log | 2.8571 | 4 | 1.0400 | 0.4097 | | No log | 3.5714 | 5 | 0.9960 | 0.5486 | | No log | 5.0 | 7 | 0.8868 | 0.7292 | | No log | 5.7143 | 8 | 0.8263 | 0.7778 | | No log | 6.4286 | 9 | 0.7651 | 0.8056 | | 0.9808 | 7.8571 | 11 | 0.6521 | 0.8125 | | 0.9808 | 8.5714 | 12 | 0.6052 | 0.8125 | | 0.9808 | 10.0 | 14 | 0.5388 | 0.8125 | | 0.9808 | 10.7143 | 15 | 0.5174 | 0.8125 | | 0.9808 | 11.4286 | 16 | 0.5032 | 0.8125 | | 0.9808 | 12.8571 | 18 | 0.5022 | 0.8125 | | 0.9808 | 13.5714 | 19 | 0.5044 | 0.8194 | | 0.5431 | 15.0 | 21 | 0.4773 | 0.8264 | | 0.5431 | 15.7143 | 22 | 0.4439 | 0.8333 | | 0.5431 | 16.4286 | 23 | 0.4198 | 0.8403 | | 0.5431 | 17.8571 | 25 | 0.3873 | 0.8819 | | 0.5431 | 18.5714 | 26 | 0.3730 | 0.8889 | | 0.5431 | 20.0 | 28 | 0.3774 | 0.9028 | | 0.5431 | 20.7143 | 29 | 0.3705 | 0.9097 | | 0.4028 | 21.4286 | 30 | 0.3587 | 0.9097 | | 0.4028 | 22.8571 | 32 | 0.3662 | 0.8958 | | 0.4028 | 23.5714 | 33 | 0.3779 | 0.8681 | | 0.4028 | 25.0 | 35 | 0.4322 | 0.8264 | | 0.4028 | 25.7143 | 36 | 0.3944 | 0.8333 | | 0.4028 | 26.4286 | 37 | 0.3585 | 0.8889 | | 0.4028 | 27.8571 | 39 | 0.3608 | 0.8889 | | 0.3497 | 28.5714 | 40 | 0.3972 | 0.8472 | | 0.3497 | 30.0 | 42 | 0.3805 | 0.8611 | | 0.3497 | 30.7143 | 43 | 0.3611 | 0.8819 | | 0.3497 | 31.4286 | 44 | 0.3267 | 0.9167 | | 0.3497 | 32.8571 | 46 | 0.3403 | 0.9028 | | 0.3497 | 33.5714 | 47 | 0.3751 | 0.875 | | 0.3497 | 35.0 | 49 | 0.3801 | 0.8681 | | 0.3278 | 35.7143 | 50 | 0.3499 | 0.8958 | | 0.3278 | 36.4286 | 51 | 0.3384 | 0.8958 | | 0.3278 | 37.8571 | 53 | 0.3642 | 0.8542 | | 0.3278 | 38.5714 | 54 | 0.3997 | 0.8194 | | 0.3278 | 40.0 | 56 | 0.3843 | 0.8403 | | 0.3278 | 40.7143 | 57 | 0.3676 | 0.8681 | | 0.3278 | 41.4286 | 58 | 0.3464 | 0.9028 | | 0.3334 | 42.8571 | 60 | 0.3618 | 0.8819 | | 0.3334 | 43.5714 | 61 | 0.4006 | 0.8194 | | 0.3334 | 45.0 | 63 | 0.4931 | 0.7639 | | 0.3334 | 45.7143 | 64 | 0.4845 | 0.7708 | | 0.3334 | 46.4286 | 65 | 0.4485 | 0.7917 | | 0.3334 | 47.8571 | 67 | 0.3783 | 0.8472 | | 0.3334 | 48.5714 | 68 | 0.3723 | 0.8472 | | 0.3334 | 50.0 | 70 | 0.4077 | 0.8125 | | 0.3334 | 50.7143 | 71 | 0.4381 | 0.7986 | | 0.3334 | 51.4286 | 72 | 0.4627 | 0.7847 | | 0.3334 | 52.8571 | 74 | 0.4445 | 0.7986 | | 0.3334 | 53.5714 | 75 | 0.4141 | 0.8125 | | 0.3334 | 55.0 | 77 | 0.3489 | 0.8681 | | 0.3334 | 55.7143 | 
78 | 0.3371 | 0.8958 | | 0.3334 | 56.4286 | 79 | 0.3358 | 0.8889 | | 0.3105 | 57.8571 | 81 | 0.3539 | 0.8681 | | 0.3105 | 58.5714 | 82 | 0.3678 | 0.8542 | | 0.3105 | 60.0 | 84 | 0.3931 | 0.8264 | | 0.3105 | 60.7143 | 85 | 0.3938 | 0.8264 | | 0.3105 | 61.4286 | 86 | 0.3897 | 0.8472 | | 0.3105 | 62.8571 | 88 | 0.3638 | 0.8611 | | 0.3105 | 63.5714 | 89 | 0.3496 | 0.875 | | 0.3061 | 65.0 | 91 | 0.3305 | 0.8958 | | 0.3061 | 65.7143 | 92 | 0.3284 | 0.9028 | | 0.3061 | 66.4286 | 93 | 0.3284 | 0.8958 | | 0.3061 | 67.8571 | 95 | 0.3337 | 0.8958 | | 0.3061 | 68.5714 | 96 | 0.3374 | 0.8889 | | 0.3061 | 70.0 | 98 | 0.3442 | 0.875 | | 0.3061 | 70.7143 | 99 | 0.3452 | 0.875 | | 0.3137 | 71.4286 | 100 | 0.3460 | 0.875 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "extra", "incomplete", "normal" ]
guldasta/swin-tiny-patch4-window7-224-finetuned-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-beans This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3777 - Accuracy: 0.8764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 7 | 0.6531 | 0.8301 | | 3.2092 | 2.0 | 14 | 0.4175 | 0.8649 | | 3.2092 | 2.64 | 18 | 0.3777 | 0.8764 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
guqun1985/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0758 - Accuracy: 0.9770 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.251 | 1.0 | 190 | 0.1060 | 0.9659 | | 0.1726 | 2.0 | 380 | 0.1009 | 0.9681 | | 0.1438 | 3.0 | 570 | 0.0758 | 0.9770 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
ssale2/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [aretolabs/nsfw_detection](https://huggingface.co/aretolabs/nsfw_detection) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "label_0", "label_1", "label_2" ]
cvmil/vit-base-patch16-224_rice-leaf-disease-augmented_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224_rice-leaf-disease-augmented_fft_012825 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0311 - Accuracy: 0.9905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.723 | 1.0 | 250 | 0.6955 | 1.0661 | | 0.586 | 2.0 | 500 | 0.924 | 0.2848 | | 0.14 | 3.0 | 750 | 0.9755 | 0.0896 | | 0.0253 | 4.0 | 1000 | 0.984 | 0.0505 | | 0.0055 | 5.0 | 1250 | 0.9865 | 0.0376 | | 0.0028 | 6.0 | 1500 | 0.9875 | 0.0368 | | 0.0019 | 7.0 | 1750 | 0.9875 | 0.0401 | | 0.0011 | 8.0 | 2000 | 0.988 | 0.0322 | | 0.0007 | 9.0 | 2250 | 0.991 | 0.0315 | | 0.0009 | 10.0 | 2500 | 0.99 | 0.0304 | | 0.0006 | 11.0 | 2750 | 0.9895 | 0.0316 | | 0.0006 | 12.0 | 3000 | 0.99 | 0.0324 | | 0.0005 | 13.0 | 3250 | 0.9905 | 0.0317 | | 0.0005 | 14.0 | 3500 | 0.9905 | 0.0313 | | 0.0004 | 15.0 | 3750 | 0.9905 | 0.0311 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/deit-base-patch16-224_rice-leaf-disease-augmented_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-patch16-224_rice-leaf-disease-augmented_fft_012825 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0330 - Accuracy: 0.9915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.5925 | 1.0 | 250 | 0.7635 | 0.7925 | | 0.4178 | 2.0 | 500 | 0.9385 | 0.2078 | | 0.0954 | 3.0 | 750 | 0.9785 | 0.0678 | | 0.0159 | 4.0 | 1000 | 0.9795 | 0.0617 | | 0.0032 | 5.0 | 1250 | 0.9905 | 0.0254 | | 0.0012 | 6.0 | 1500 | 0.9915 | 0.0223 | | 0.0012 | 7.0 | 1750 | 0.9925 | 0.0209 | | 0.0009 | 8.0 | 2000 | 0.992 | 0.0201 | | 0.0004 | 9.0 | 2250 | 0.9935 | 0.0185 | | 0.0091 | 10.0 | 2500 | 0.986 | 0.0532 | | 0.0012 | 11.0 | 2750 | 0.99 | 0.0314 | | 0.0007 | 12.0 | 3000 | 0.9935 | 0.0278 | | 0.0004 | 13.0 | 3250 | 0.9855 | 0.0561 | | 0.0003 | 14.0 | 3500 | 0.987 | 0.0391 | | 0.0006 | 15.0 | 3750 | 0.9915 | 0.0330 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/beit-base-patch16-224_rice-leaf-disease-augmented_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224_rice-leaf-disease-augmented_fft_012825 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0274 - Accuracy: 0.994 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.541 | 1.0 | 250 | 0.8105 | 0.5771 | | 0.3645 | 2.0 | 500 | 0.9315 | 0.1973 | | 0.1129 | 3.0 | 750 | 0.9805 | 0.0676 | | 0.0362 | 4.0 | 1000 | 0.9775 | 0.0760 | | 0.0182 | 5.0 | 1250 | 0.989 | 0.0396 | | 0.0097 | 6.0 | 1500 | 0.992 | 0.0349 | | 0.006 | 7.0 | 1750 | 0.99 | 0.0423 | | 0.0057 | 8.0 | 2000 | 0.9875 | 0.0488 | | 0.0038 | 9.0 | 2250 | 0.991 | 0.0333 | | 0.0048 | 10.0 | 2500 | 0.991 | 0.0316 | | 0.0033 | 11.0 | 2750 | 0.993 | 0.0300 | | 0.0034 | 12.0 | 3000 | 0.991 | 0.0375 | | 0.0022 | 13.0 | 3250 | 0.9935 | 0.0304 | | 0.0018 | 14.0 | 3500 | 0.994 | 0.0282 | | 0.0029 | 15.0 | 3750 | 0.994 | 0.0274 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/swin-base-patch4-window7-224_rice-leaf-disease-augmented_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224_rice-leaf-disease-augmented_fft_012825 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0472 - Accuracy: 0.988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.5476 | 1.0 | 250 | 0.789 | 0.5917 | | 0.3353 | 2.0 | 500 | 0.9575 | 0.1408 | | 0.0827 | 3.0 | 750 | 0.985 | 0.0427 | | 0.027 | 4.0 | 1000 | 0.9885 | 0.0344 | | 0.0095 | 5.0 | 1250 | 0.9925 | 0.0188 | | 0.0076 | 6.0 | 1500 | 0.995 | 0.0119 | | 0.003 | 7.0 | 1750 | 0.9955 | 0.0090 | | 0.0023 | 8.0 | 2000 | 0.9955 | 0.0163 | | 0.0012 | 9.0 | 2250 | 0.992 | 0.0218 | | 0.0017 | 10.0 | 2500 | 0.996 | 0.0100 | | 0.0044 | 11.0 | 2750 | 0.9955 | 0.0236 | | 0.0048 | 12.0 | 3000 | 0.9925 | 0.0249 | | 0.0023 | 13.0 | 3250 | 0.9955 | 0.0182 | | 0.0032 | 14.0 | 3500 | 0.994 | 0.0198 | | 0.0017 | 15.0 | 3750 | 0.988 | 0.0472 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/dinov2-base_rice-leaf-disease-augmented_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-base_rice-leaf-disease-augmented_fft_012825 This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0125 - Accuracy: 0.9965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 0.8384 | 1.0 | 250 | 0.931 | 0.2042 | | 0.1037 | 2.0 | 500 | 0.96 | 0.1290 | | 0.0574 | 3.0 | 750 | 0.977 | 0.0657 | | 0.0255 | 4.0 | 1000 | 0.985 | 0.0527 | | 0.0096 | 5.0 | 1250 | 0.988 | 0.0374 | | 0.0052 | 6.0 | 1500 | 0.997 | 0.0128 | | 0.0018 | 7.0 | 1750 | 0.9945 | 0.0131 | | 0.0027 | 8.0 | 2000 | 0.997 | 0.0122 | | 0.0004 | 9.0 | 2250 | 0.997 | 0.0153 | | 0.0013 | 10.0 | 2500 | 0.997 | 0.0128 | | 0.0004 | 11.0 | 2750 | 0.9965 | 0.0147 | | 0.0004 | 12.0 | 3000 | 0.997 | 0.0155 | | 0.0003 | 13.0 | 3250 | 0.9955 | 0.0118 | | 0.0004 | 14.0 | 3500 | 0.9955 | 0.0123 | | 0.0003 | 15.0 | 3750 | 0.9965 | 0.0125 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
desarrolloasesoreslocales/cmmy
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cmmy This model is a fine-tuned version of [therealcyberlord/stanford-car-vit-patch16](https://huggingface.co/therealcyberlord/stanford-car-vit-patch16) on the imagefolder dataset. It achieves the following results on the evaluation set: - Accuracy: 0.5412 - Loss: 1.4404 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 1024 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:------:|:----:|:--------:|:---------------:| | 16.0903 | 0.9967 | 226 | 0.1969 | 3.4455 | | 11.5438 | 1.9967 | 452 | 0.2992 | 2.4033 | | 9.5963 | 2.9967 | 678 | 0.3492 | 2.0761 | | 8.3873 | 3.9967 | 904 | 0.3934 | 1.8921 | | 7.3127 | 4.9967 | 1130 | 0.4288 | 1.7535 | | 6.2178 | 5.9967 | 1356 | 0.4609 | 1.6533 | | 5.4619 | 6.9967 | 1582 | 0.4856 | 1.5859 | | 4.618 | 7.9967 | 1808 | 0.5129 | 1.5253 | | 3.9349 | 8.9967 | 2034 | 0.5315 | 1.4693 | | 3.4667 | 9.9967 | 2260 | 0.5412 | 1.4404 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
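The label list below follows a `make_model_year` pattern (the first underscore ends the make, the last one starts the year); a sketch that splits the top prediction back into those parts, with "car.jpg" as a placeholder image.

```python
from transformers import pipeline

# Split a "make_model_year" label such as "bmw_m3_2008" back into its parts.
classifier = pipeline("image-classification", model="desarrolloasesoreslocales/cmmy")
top = classifier("car.jpg", top_k=1)[0]  # placeholder image path

make, _, rest = top["label"].partition("_")
model_name, _, year = rest.rpartition("_")
print(f"make={make}  model={model_name}  year={year}  score={top['score']:.3f}")
```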
[ "acura_cl_1997", "acura_cl_1998", "acura_cl_1999", "acura_cl_2001", "acura_cl_2003", "acura_ilx_2013", "acura_ilx_2014", "acura_integra_1990", "acura_integra_1991", "acura_integra_1992", "acura_integra_1993", "acura_integra_1994", "acura_integra_1995", "acura_integra_1996", "acura_integra_1997", "acura_integra_1998", "acura_integra_1999", "acura_integra_2000", "acura_integra_2001", "acura_legend_1991", "acura_legend_1992", "acura_legend_1993", "acura_legend_1994", "acura_legend_1995", "acura_mdx_2001", "acura_mdx_2002", "acura_mdx_2003", "acura_mdx_2004", "acura_mdx_2005", "acura_mdx_2006", "acura_mdx_2007", "acura_mdx_2008", "acura_mdx_2009", "acura_mdx_2010", "acura_mdx_2011", "acura_mdx_2012", "acura_mdx_2016", "acura_rdx_2007", "acura_rdx_2008", "acura_rdx_2009", "acura_rdx_2012", "acura_rdx_2013", "acura_rdx_2014", "acura_rl_1997", "acura_rl_1998", "acura_rl_1999", "acura_rl_2000", "acura_rl_2002", "acura_rl_2004", "acura_rl_2005", "acura_rl_2006", "acura_rl_2007", "acura_rsx_2002", "acura_rsx_2003", "acura_rsx_2004", "acura_rsx_2005", "acura_rsx_2006", "acura_tl_1996", "acura_tl_1997", "acura_tl_1998", "acura_tl_1999", "acura_tl_2000", "acura_tl_2001", "acura_tl_2002", "acura_tl_2003", "acura_tl_2004", "acura_tl_2005", "acura_tl_2006", "acura_tl_2007", "acura_tl_2008", "acura_tl_2009", "acura_tl_2010", "acura_tl_2012", "acura_tl_2014", "acura_tsx_2004", "acura_tsx_2005", "acura_tsx_2006", "acura_tsx_2007", "acura_tsx_2008", "acura_tsx_2009", "acura_tsx_2010", "acura_tsx_2012", "acura_zdx_2010", "audi_a3_2006", "audi_a3_2007", "audi_a3_2008", "audi_a3_2010", "audi_a4_1996", "audi_a4_1997", "audi_a4_1998", "audi_a4_1999", "audi_a4_2000", "audi_a4_2001", "audi_a4_2002", "audi_a4_2003", "audi_a4_2004", "audi_a4_2005", "audi_a4_2006", "audi_a4_2007", "audi_a4_2008", "audi_a4_2009", "audi_a4_2010", "audi_a4_2011", "audi_a4_2012", "audi_a5_2010", "audi_a5_2011", "audi_a6_1998", "audi_a6_1999", "audi_a6_2000", "audi_a6_2001", "audi_a6_2002", "audi_a6_2003", "audi_a6_2004", "audi_a6_2005", "audi_a6_2006", "audi_a6_2008", "audi_a6_quattro_1999", "audi_a8_2001", "audi_a8_2004", "audi_a8_2005", "audi_a8_2006", "audi_a8_2007", "audi_allroad_2001", "audi_allroad_2003", "audi_q5_2010", "audi_q5_2011", "audi_q7_2007", "audi_q7_2008", "audi_s4_2000", "audi_s4_2001", "audi_s4_2004", "audi_s4_2005", "audi_s4_2006", "audi_s5_2008", "audi_s6_2008", "audi_s6_2013", "audi_tt_2000", "audi_tt_2001", "bmw_128i_2011", "bmw_135i_2011", "bmw_323i_1999", "bmw_323i_2000", "bmw_323i_2007", "bmw_325ci_2001", "bmw_325ci_2004", "bmw_325i_1987", "bmw_325i_1989", "bmw_325i_1992", "bmw_325i_1993", "bmw_325i_1994", "bmw_325i_1995", "bmw_325i_2001", "bmw_325i_2002", "bmw_325i_2003", "bmw_325i_2004", "bmw_325i_2005", "bmw_325i_2006", "bmw_325xi_2002", "bmw_328i_1996", "bmw_328i_1997", "bmw_328i_1998", "bmw_328i_1999", "bmw_328i_2000", "bmw_328i_2007", "bmw_328i_2008", "bmw_328i_2009", "bmw_328i_2010", "bmw_328i_2011", "bmw_328i_2012", "bmw_328i_2013", "bmw_328xi_2007", "bmw_328xi_2008", "bmw_330ci_2001", "bmw_330ci_2004", "bmw_330ci_2005", "bmw_330ci_2006", "bmw_330i_2001", "bmw_330i_2002", "bmw_330i_2003", "bmw_330i_2004", "bmw_330i_2006", "bmw_330xi_2002", "bmw_335i_2007", "bmw_335i_2008", "bmw_335i_2009", "bmw_335i_2011", "bmw_335i_2013", "bmw_335i_2014", "bmw_335i_2015", "bmw_428i_2015", "bmw_525i_1995", "bmw_525i_2001", "bmw_525i_2002", "bmw_525i_2003", "bmw_525i_2004", "bmw_525i_2005", "bmw_525i_2006", "bmw_525i_2007", "bmw_528i_1997", "bmw_528i_1998", "bmw_528i_1999", "bmw_528i_2000", "bmw_528i_2008", 
"bmw_528i_2009", "bmw_528i_2012", "bmw_528i_2013", "bmw_530i_2001", "bmw_530i_2002", "bmw_530i_2003", "bmw_530i_2004", "bmw_530i_2005", "bmw_530i_2006", "bmw_530i_2007", "bmw_535i_2008", "bmw_535i_2009", "bmw_535i_2011", "bmw_535i_2012", "bmw_535i_2013", "bmw_535xi_2008", "bmw_540i_1997", "bmw_540i_1998", "bmw_540i_2000", "bmw_540i_2001", "bmw_545i_2004", "bmw_545i_2005", "bmw_550i_2007", "bmw_550i_2008", "bmw_550i_2011", "bmw_640i_2014", "bmw_645ci_2004", "bmw_740i_1998", "bmw_740i_1999", "bmw_740i_2000", "bmw_740i_2001", "bmw_745i_2002", "bmw_745i_2003", "bmw_745i_2004", "bmw_745i_2005", "bmw_750i_2006", "bmw_750i_2007", "bmw_750i_2008", "bmw_750i_2009", "bmw_750i_2010", "bmw_750i_2012", "bmw_850i_1991", "bmw_m3_1995", "bmw_m3_1997", "bmw_m3_1998", "bmw_m3_1999", "bmw_m3_2001", "bmw_m3_2002", "bmw_m3_2003", "bmw_m3_2004", "bmw_m3_2008", "bmw_m3_2009", "bmw_m3_2012", "bmw_m3_convertible_2004", "bmw_m5_2000", "bmw_m5_2002", "bmw_m5_2006", "bmw_m6_2007", "bmw_m6_2008", "bmw_m_roadster_2000", "bmw_x1_2014", "bmw_x3_2004", "bmw_x3_2005", "bmw_x3_2006", "bmw_x3_2007", "bmw_x3_2008", "bmw_x5_2000", "bmw_x5_2001", "bmw_x5_2002", "bmw_x5_2003", "bmw_x5_2004", "bmw_x5_2005", "bmw_x5_2006", "bmw_x5_2007", "bmw_x5_2008", "bmw_x5_2009", "bmw_x5_2011", "bmw_x5_2012", "bmw_z3_1997", "bmw_z3_1998", "bmw_z3_1999", "bmw_z3_2000", "bmw_z3_2001", "bmw_z4_2003", "bmw_z4_2004", "bmw_z4_2005", "bmw_z4_2006", "bmw_z4_2007", "buick_century_1998", "buick_century_1999", "buick_century_2000", "buick_century_2001", "buick_century_2002", "buick_century_2003", "buick_century_2004", "buick_century_2005", "buick_enclave_2008", "buick_enclave_2009", "buick_enclave_2010", "buick_enclave_2011", "buick_lacrosse_2005", "buick_lacrosse_2006", "buick_lacrosse_2007", "buick_lacrosse_2008", "buick_lacrosse_2010", "buick_lacrosse_2011", "buick_lacrosse_2015", "buick_lesabre_1995", "buick_lesabre_1996", "buick_lesabre_1997", "buick_lesabre_1999", "buick_lesabre_2000", "buick_lesabre_2001", "buick_lesabre_2002", "buick_lesabre_2003", "buick_lesabre_2004", "buick_lesabre_2005", "buick_lucerne_2006", "buick_lucerne_2007", "buick_lucerne_2008", "buick_parkavenue_1996", "buick_parkavenue_1997", "buick_parkavenue_1998", "buick_parkavenue_1999", "buick_parkavenue_2000", "buick_parkavenue_2001", "buick_parkavenue_2002", "buick_rainier_2004", "buick_rainier_2006", "buick_regal_1979", "buick_regal_1991", "buick_regal_1995", "buick_regal_1996", "buick_regal_1998", "buick_regal_1999", "buick_regal_2000", "buick_regal_2001", "buick_regal_2002", "buick_regal_2011", "buick_regal_2012", "buick_rendezvous_2002", "buick_rendezvous_2003", "buick_rendezvous_2004", "buick_rendezvous_2005", "buick_rendezvous_2006", "buick_rendezvous_2007", "buick_riviera_1995", "buick_riviera_1996", "buick_roadmaster_1992", "buick_roadmaster_1993", "buick_roadmaster_1996", "buick_terraza_2005", "buick_terraza_2006", "cadillac_allante_1992", "cadillac_ats_2013", "cadillac_catera_1997", "cadillac_catera_1998", "cadillac_catera_1999", "cadillac_catera_2000", "cadillac_catera_2001", "cadillac_cts_2003", "cadillac_cts_2004", "cadillac_cts_2005", "cadillac_cts_2006", "cadillac_cts_2007", "cadillac_cts_2008", "cadillac_cts_2009", "cadillac_cts_2010", "cadillac_cts_2012", "cadillac_cts_2014", "cadillac_deville_1979", "cadillac_deville_1991", "cadillac_deville_1992", "cadillac_deville_1993", "cadillac_deville_1994", "cadillac_deville_1995", "cadillac_deville_1997", "cadillac_deville_1998", "cadillac_deville_1999", "cadillac_deville_2000", "cadillac_deville_2001", 
"cadillac_deville_2002", "cadillac_deville_2003", "cadillac_deville_2004", "cadillac_deville_2005", "cadillac_deville_2006", "cadillac_deville_2007", "cadillac_eldorado_1976", "cadillac_eldorado_1997", "cadillac_eldorado_1999", "cadillac_eldorado_2001", "cadillac_escalade_1999", "cadillac_escalade_2000", "cadillac_escalade_2002", "cadillac_escalade_2003", "cadillac_escalade_2004", "cadillac_escalade_2005", "cadillac_escalade_2006", "cadillac_escalade_2007", "cadillac_escalade_2008", "cadillac_escalade_2009", "cadillac_escalade_2010", "cadillac_escalade_2011", "cadillac_seville_1999", "cadillac_seville_2000", "cadillac_seville_2001", "cadillac_seville_2002", "cadillac_seville_2003", "cadillac_srx_2004", "cadillac_srx_2005", "cadillac_srx_2006", "cadillac_srx_2007", "cadillac_srx_2010", "cadillac_srx_2011", "cadillac_sts_1999", "cadillac_sts_2001", "cadillac_sts_2002", "cadillac_sts_2005", "cadillac_sts_2006", "cadillac_sts_2007", "cadillac_xts_2013", "chevrolet_apache_1958", "chevrolet_astro_1995", "chevrolet_astro_1997", "chevrolet_astro_1999", "chevrolet_astro_2000", "chevrolet_astro_2002", "chevrolet_astro_2003", "chevrolet_astro_2004", "chevrolet_avalanche_2002", "chevrolet_avalanche_2003", "chevrolet_avalanche_2004", "chevrolet_avalanche_2005", "chevrolet_avalanche_2007", "chevrolet_avalanche_2008", "chevrolet_aveo_2004", "chevrolet_aveo_2005", "chevrolet_aveo_2006", "chevrolet_aveo_2007", "chevrolet_aveo_2008", "chevrolet_aveo_2009", "chevrolet_aveo_2010", "chevrolet_aveo_2011", "chevrolet_bel air_1955", "chevrolet_bel air_1956", "chevrolet_bel air_1957", "chevrolet_blazer_1977", "chevrolet_blazer_1984", "chevrolet_blazer_1985", "chevrolet_blazer_1987", "chevrolet_blazer_1988", "chevrolet_blazer_1989", "chevrolet_blazer_1990", "chevrolet_blazer_1991", "chevrolet_blazer_1992", "chevrolet_blazer_1993", "chevrolet_blazer_1994", "chevrolet_blazer_1995", "chevrolet_blazer_1996", "chevrolet_blazer_1997", "chevrolet_blazer_1998", "chevrolet_blazer_1999", "chevrolet_blazer_2000", "chevrolet_blazer_2001", "chevrolet_blazer_2002", "chevrolet_blazer_2003", "chevrolet_blazer_2004", "chevrolet_c-k1500_1991", "chevrolet_c-k1500_1993", "chevrolet_c-k1500_1995", "chevrolet_c-k1500_1996", "chevrolet_c-k1500_1997", "chevrolet_c-k1500_1998", "chevrolet_c-k2500_1998", "chevrolet_c10_1962", "chevrolet_c10_1964", "chevrolet_c10_1965", "chevrolet_c10_1966", "chevrolet_c10_1967", "chevrolet_c10_1969", "chevrolet_c10_1970", "chevrolet_c10_1971", "chevrolet_c10_1972", "chevrolet_c10_1976", "chevrolet_c10_1977", "chevrolet_c10_1981", "chevrolet_c10_1983", "chevrolet_c20_1966", "chevrolet_camaro_1967", "chevrolet_camaro_1968", "chevrolet_camaro_1969", "chevrolet_camaro_1970", "chevrolet_camaro_1971", "chevrolet_camaro_1973", "chevrolet_camaro_1974", "chevrolet_camaro_1978", "chevrolet_camaro_1979", "chevrolet_camaro_1980", "chevrolet_camaro_1981", "chevrolet_camaro_1984", "chevrolet_camaro_1985", "chevrolet_camaro_1986", "chevrolet_camaro_1987", "chevrolet_camaro_1988", "chevrolet_camaro_1989", "chevrolet_camaro_1991", "chevrolet_camaro_1992", "chevrolet_camaro_1993", "chevrolet_camaro_1994", "chevrolet_camaro_1995", "chevrolet_camaro_1996", "chevrolet_camaro_1997", "chevrolet_camaro_1998", "chevrolet_camaro_1999", "chevrolet_camaro_2000", "chevrolet_camaro_2001", "chevrolet_camaro_2002", "chevrolet_camaro_2010", "chevrolet_camaro_2011", "chevrolet_camaro_2012", "chevrolet_camaro_2013", "chevrolet_camaro_2014", "chevrolet_camaro_2015", "chevrolet_caprice_1966", "chevrolet_caprice_1985", "chevrolet_caprice_1989", 
"chevrolet_caprice_1991", "chevrolet_caprice_1992", "chevrolet_caprice_1993", "chevrolet_caprice_1994", "chevrolet_caprice_1995", "chevrolet_caprice_1996", "chevrolet_cavalier_1996", "chevrolet_cavalier_1997", "chevrolet_cavalier_1998", "chevrolet_cavalier_1999", "chevrolet_cavalier_2000", "chevrolet_cavalier_2001", "chevrolet_cavalier_2002", "chevrolet_cavalier_2003", "chevrolet_cavalier_2004", "chevrolet_cavalier_2005", "chevrolet_chevelle_1967", "chevrolet_chevelle_1968", "chevrolet_chevelle_1969", "chevrolet_chevelle_1972", "chevrolet_cobalt_2005", "chevrolet_cobalt_2006", "chevrolet_cobalt_2007", "chevrolet_cobalt_2008", "chevrolet_cobalt_2009", "chevrolet_cobalt_2010", "chevrolet_colorado_2004", "chevrolet_colorado_2005", "chevrolet_colorado_2006", "chevrolet_colorado_2007", "chevrolet_colorado_2008", "chevrolet_colorado_2010", "chevrolet_colorado_2016", "chevrolet_corvair_1964", "chevrolet_corvette_1964", "chevrolet_corvette_1966", "chevrolet_corvette_1968", "chevrolet_corvette_1969", "chevrolet_corvette_1970", "chevrolet_corvette_1971", "chevrolet_corvette_1972", "chevrolet_corvette_1973", "chevrolet_corvette_1974", "chevrolet_corvette_1975", "chevrolet_corvette_1976", "chevrolet_corvette_1977", "chevrolet_corvette_1978", "chevrolet_corvette_1979", "chevrolet_corvette_1980", "chevrolet_corvette_1981", "chevrolet_corvette_1982", "chevrolet_corvette_1984", "chevrolet_corvette_1985", "chevrolet_corvette_1986", "chevrolet_corvette_1987", "chevrolet_corvette_1988", "chevrolet_corvette_1989", "chevrolet_corvette_1990", "chevrolet_corvette_1991", "chevrolet_corvette_1992", "chevrolet_corvette_1993", "chevrolet_corvette_1994", "chevrolet_corvette_1995", "chevrolet_corvette_1996", "chevrolet_corvette_1997", "chevrolet_corvette_1998", "chevrolet_corvette_1999", "chevrolet_corvette_2000", "chevrolet_corvette_2001", "chevrolet_corvette_2002", "chevrolet_corvette_2003", "chevrolet_corvette_2004", "chevrolet_corvette_2005", "chevrolet_corvette_2006", "chevrolet_corvette_2007", "chevrolet_corvette_2008", "chevrolet_corvette_2010", "chevrolet_corvette_2011", "chevrolet_corvette_2014", "chevrolet_corvette_2015", "chevrolet_cruze_2011", "chevrolet_cruze_2012", "chevrolet_cruze_2013", "chevrolet_cruze_2014", "chevrolet_cruze_2015", "chevrolet_cruze_2016", "chevrolet_deluxe_1941", "chevrolet_equinox_2005", "chevrolet_equinox_2006", "chevrolet_equinox_2007", "chevrolet_equinox_2008", "chevrolet_equinox_2009", "chevrolet_equinox_2010", "chevrolet_equinox_2011", "chevrolet_equinox_2012", "chevrolet_equinox_2013", "chevrolet_equinox_2014", "chevrolet_equinox_2015", "chevrolet_express_2000", "chevrolet_express_2001", "chevrolet_express_2003", "chevrolet_express_2004", "chevrolet_express_2005", "chevrolet_express_2006", "chevrolet_express_2007", "chevrolet_express_2008", "chevrolet_express_2009", "chevrolet_express_2015", "chevrolet_hhr_2006", "chevrolet_hhr_2007", "chevrolet_hhr_2008", "chevrolet_hhr_2009", "chevrolet_hhr_2010", "chevrolet_hhr_2011", "chevrolet_impala_1960", "chevrolet_impala_1962", "chevrolet_impala_1963", "chevrolet_impala_1964", "chevrolet_impala_1966", "chevrolet_impala_1967", "chevrolet_impala_1968", "chevrolet_impala_1995", "chevrolet_impala_1996", "chevrolet_impala_2000", "chevrolet_impala_2001", "chevrolet_impala_2002", "chevrolet_impala_2003", "chevrolet_impala_2004", "chevrolet_impala_2005", "chevrolet_impala_2006", "chevrolet_impala_2007", "chevrolet_impala_2008", "chevrolet_impala_2009", "chevrolet_impala_2010", "chevrolet_impala_2011", "chevrolet_impala_2012", 
"chevrolet_impala_2013", "chevrolet_lumina_1992", "chevrolet_lumina_1995", "chevrolet_lumina_1996", "chevrolet_lumina_1997", "chevrolet_lumina_1998", "chevrolet_lumina_1999", "chevrolet_lumina_2000", "chevrolet_lumina_2001", "chevrolet_malibu_1969", "chevrolet_malibu_1997", "chevrolet_malibu_1998", "chevrolet_malibu_1999", "chevrolet_malibu_2000", "chevrolet_malibu_2001", "chevrolet_malibu_2002", "chevrolet_malibu_2003", "chevrolet_malibu_2004", "chevrolet_malibu_2005", "chevrolet_malibu_2006", "chevrolet_malibu_2007", "chevrolet_malibu_2008", "chevrolet_malibu_2009", "chevrolet_malibu_2010", "chevrolet_malibu_2011", "chevrolet_malibu_2012", "chevrolet_malibu_2013", "chevrolet_malibu_2014", "chevrolet_malibu_2015", "chevrolet_malibu_2016", "chevrolet_montecarlo_1970", "chevrolet_montecarlo_1972", "chevrolet_montecarlo_1978", "chevrolet_montecarlo_1984", "chevrolet_montecarlo_1985", "chevrolet_montecarlo_1986", "chevrolet_montecarlo_1987", "chevrolet_montecarlo_1995", "chevrolet_montecarlo_1997", "chevrolet_montecarlo_1998", "chevrolet_montecarlo_1999", "chevrolet_montecarlo_2000", "chevrolet_montecarlo_2001", "chevrolet_montecarlo_2002", "chevrolet_montecarlo_2003", "chevrolet_montecarlo_2004", "chevrolet_montecarlo_2005", "chevrolet_montecarlo_2006", "chevrolet_montecarlo_2007", "chevrolet_nova_1966", "chevrolet_nova_1972", "chevrolet_nova_1973", "chevrolet_nova_1974", "chevrolet_nova_1975", "chevrolet_nova_1977", "chevrolet_prizm_1998", "chevrolet_prizm_2000", "chevrolet_prizm_2001", "chevrolet_s10_1987", "chevrolet_s10_1988", "chevrolet_s10_1989", "chevrolet_s10_1990", "chevrolet_s10_1991", "chevrolet_s10_1992", "chevrolet_s10_1993", "chevrolet_s10_1994", "chevrolet_s10_1995", "chevrolet_s10_1996", "chevrolet_s10_1997", "chevrolet_s10_1998", "chevrolet_s10_1999", "chevrolet_s10_2000", "chevrolet_s10_2001", "chevrolet_s10_2002", "chevrolet_s10_2003", "chevrolet_silverado_1979", "chevrolet_silverado_1984", "chevrolet_silverado_1987", "chevrolet_silverado_1988", "chevrolet_silverado_1989", "chevrolet_silverado_1990", "chevrolet_silverado_1991", "chevrolet_silverado_1992", "chevrolet_silverado_1993", "chevrolet_silverado_1994", "chevrolet_silverado_1995", "chevrolet_silverado_1996", "chevrolet_silverado_1997", "chevrolet_silverado_1998", "chevrolet_silverado_1999", "chevrolet_silverado_2000", "chevrolet_silverado_2001", "chevrolet_silverado_2002", "chevrolet_silverado_2003", "chevrolet_silverado_2004", "chevrolet_silverado_2005", "chevrolet_silverado_2006", "chevrolet_silverado_2007", "chevrolet_silverado_2008", "chevrolet_silverado_2009", "chevrolet_silverado_2010", "chevrolet_silverado_2011", "chevrolet_silverado_2012", "chevrolet_silverado_2013", "chevrolet_silverado_2014", "chevrolet_silverado_2015", "chevrolet_silverado_2016", "chevrolet_silverado_3500_2000", "chevrolet_silverado_3500_2006", "chevrolet_sonic_2012", "chevrolet_sonic_2013", "chevrolet_sonic_2014", "chevrolet_spark_2013", "chevrolet_styleline_1950", "chevrolet_suburban_1989", "chevrolet_suburban_1991", "chevrolet_suburban_1993", "chevrolet_suburban_1994", "chevrolet_suburban_1995", "chevrolet_suburban_1996", "chevrolet_suburban_1997", "chevrolet_suburban_1998", "chevrolet_suburban_1999", "chevrolet_suburban_2000", "chevrolet_suburban_2001", "chevrolet_suburban_2002", "chevrolet_suburban_2003", "chevrolet_suburban_2004", "chevrolet_suburban_2005", "chevrolet_suburban_2006", "chevrolet_suburban_2007", "chevrolet_suburban_2008", "chevrolet_suburban_2013", "chevrolet_tahoe_1995", "chevrolet_tahoe_1996", 
"chevrolet_tahoe_1997", "chevrolet_tahoe_1998", "chevrolet_tahoe_1999", "chevrolet_tahoe_2000", "chevrolet_tahoe_2001", "chevrolet_tahoe_2002", "chevrolet_tahoe_2003", "chevrolet_tahoe_2004", "chevrolet_tahoe_2005", "chevrolet_tahoe_2006", "chevrolet_tahoe_2007", "chevrolet_tahoe_2008", "chevrolet_tahoe_2009", "chevrolet_tahoe_2010", "chevrolet_tahoe_2011", "chevrolet_tahoe_2012", "chevrolet_tahoe_2015", "chevrolet_tracker_1999", "chevrolet_tracker_2000", "chevrolet_tracker_2001", "chevrolet_tracker_2002", "chevrolet_tracker_2003", "chevrolet_trailblazer_2002", "chevrolet_trailblazer_2003", "chevrolet_trailblazer_2004", "chevrolet_trailblazer_2005", "chevrolet_trailblazer_2006", "chevrolet_trailblazer_2007", "chevrolet_trailblazer_2008", "chevrolet_traverse_2009", "chevrolet_traverse_2010", "chevrolet_traverse_2011", "chevrolet_traverse_2012", "chevrolet_traverse_2013", "chevrolet_uplander_2005", "chevrolet_uplander_2006", "chevrolet_uplander_2007", "chevrolet_uplander_2008", "chevrolet_vega_1972", "chevrolet_venture_1998", "chevrolet_venture_1999", "chevrolet_venture_2000", "chevrolet_venture_2001", "chevrolet_venture_2002", "chevrolet_venture_2003", "chevrolet_venture_2004", "chevrolet_venture_2005", "chevrolet_volt_2013", "chrysler_200_2011", "chrysler_200_2012", "chrysler_200_2013", "chrysler_200_2015", "chrysler_300_1999", "chrysler_300_2000", "chrysler_300_2001", "chrysler_300_2002", "chrysler_300_2003", "chrysler_300_2004", "chrysler_300_2005", "chrysler_300_2006", "chrysler_300_2007", "chrysler_300_2008", "chrysler_300_2009", "chrysler_300_2010", "chrysler_300_2011", "chrysler_300_2012", "chrysler_300_2013", "chrysler_aspen_2007", "chrysler_aspen_2008", "chrysler_cirrus_2000", "chrysler_concorde_1999", "chrysler_concorde_2000", "chrysler_concorde_2001", "chrysler_concorde_2002", "chrysler_concorde_2003", "chrysler_concorde_2004", "chrysler_crossfire_2004", "chrysler_crossfire_2005", "chrysler_lhs_1999", "chrysler_lhs_2000", "chrysler_lhs_2001", "chrysler_pacifica_2004", "chrysler_pacifica_2005", "chrysler_pacifica_2006", "chrysler_pacifica_2007", "chrysler_pacifica_2008", "chrysler_pt cruiser_2001", "chrysler_pt cruiser_2002", "chrysler_pt cruiser_2003", "chrysler_pt cruiser_2004", "chrysler_pt cruiser_2005", "chrysler_pt cruiser_2006", "chrysler_pt cruiser_2007", "chrysler_pt cruiser_2008", "chrysler_sebring_1996", "chrysler_sebring_1997", "chrysler_sebring_1998", "chrysler_sebring_1999", "chrysler_sebring_2000", "chrysler_sebring_2001", "chrysler_sebring_2002", "chrysler_sebring_2003", "chrysler_sebring_2004", "chrysler_sebring_2005", "chrysler_sebring_2006", "chrysler_sebring_2007", "chrysler_sebring_2008", "chrysler_sebring_2009", "chrysler_sebring_2010", "chrysler_town&country_1996", "chrysler_town&country_1997", "chrysler_town&country_1998", "chrysler_town&country_1999", "chrysler_town&country_2000", "chrysler_town&country_2001", "chrysler_town&country_2002", "chrysler_town&country_2003", "chrysler_town&country_2004", "chrysler_town&country_2005", "chrysler_town&country_2006", "chrysler_town&country_2007", "chrysler_town&country_2008", "chrysler_town&country_2009", "chrysler_town&country_2010", "chrysler_town&country_2011", "chrysler_town&country_2012", "chrysler_town&country_2013", "chrysler_town&country_2014", "chrysler_voyager_2000", "chrysler_voyager_2001", "chrysler_voyager_2002", "chrysler_voyager_2003", "datsun_240z_1971", "dodge_avenger_2008", "dodge_avenger_2009", "dodge_avenger_2010", "dodge_avenger_2011", "dodge_avenger_2012", "dodge_avenger_2013", 
"dodge_avenger_2014", "dodge_caliber_2007", "dodge_caliber_2008", "dodge_caliber_2009", "dodge_caliber_2010", "dodge_caliber_2011", "dodge_challenger_2008", "dodge_challenger_2009", "dodge_challenger_2010", "dodge_challenger_2011", "dodge_challenger_2012", "dodge_challenger_2013", "dodge_challenger_2014", "dodge_challenger_2015", "dodge_challenger_2016", "dodge_charger_1973", "dodge_charger_2006", "dodge_charger_2007", "dodge_charger_2008", "dodge_charger_2009", "dodge_charger_2010", "dodge_charger_2011", "dodge_charger_2012", "dodge_charger_2013", "dodge_charger_2014", "dodge_dakota_1989", "dodge_dakota_1992", "dodge_dakota_1993", "dodge_dakota_1994", "dodge_dakota_1995", "dodge_dakota_1996", "dodge_dakota_1997", "dodge_dakota_1998", "dodge_dakota_1999", "dodge_dakota_2000", "dodge_dakota_2001", "dodge_dakota_2002", "dodge_dakota_2003", "dodge_dakota_2004", "dodge_dakota_2005", "dodge_dakota_2006", "dodge_dakota_2007", "dodge_dart_1964", "dodge_dart_1969", "dodge_dart_2013", "dodge_dart_2014", "dodge_durango_1998", "dodge_durango_1999", "dodge_durango_2000", "dodge_durango_2001", "dodge_durango_2002", "dodge_durango_2003", "dodge_durango_2004", "dodge_durango_2005", "dodge_durango_2006", "dodge_durango_2007", "dodge_durango_2008", "dodge_durango_2011", "dodge_grand caravan_1996", "dodge_grand caravan_1997", "dodge_grand caravan_1998", "dodge_grand caravan_1999", "dodge_grand caravan_2000", "dodge_grand caravan_2001", "dodge_grand caravan_2002", "dodge_grand caravan_2003", "dodge_grand caravan_2004", "dodge_grand caravan_2005", "dodge_grand caravan_2006", "dodge_grand caravan_2007", "dodge_grand caravan_2008", "dodge_grand caravan_2009", "dodge_grand caravan_2010", "dodge_grand caravan_2011", "dodge_grand caravan_2012", "dodge_grand caravan_2013", "dodge_grand caravan_2014", "dodge_grand caravan_2015", "dodge_intrepid_1998", "dodge_intrepid_1999", "dodge_intrepid_2000", "dodge_intrepid_2001", "dodge_intrepid_2002", "dodge_intrepid_2003", "dodge_intrepid_2004", "dodge_journey_2009", "dodge_journey_2010", "dodge_journey_2012", "dodge_magnum_2005", "dodge_magnum_2006", "dodge_magnum_2007", "dodge_magnum_2008", "dodge_neon_1996", "dodge_neon_1997", "dodge_neon_1998", "dodge_neon_1999", "dodge_neon_2000", "dodge_neon_2001", "dodge_neon_2002", "dodge_neon_2003", "dodge_neon_2004", "dodge_neon_2005", "dodge_nitro_2007", "dodge_nitro_2008", "dodge_nitro_2009", "dodge_ram_1500_1995", "dodge_ram_1500_1996", "dodge_ram_1500_1997", "dodge_ram_1500_1998", "dodge_ram_1500_1999", "dodge_ram_1500_2000", "dodge_ram_1500_2001", "dodge_ram_1500_2002", "dodge_ram_1500_2003", "dodge_ram_1500_2004", "dodge_ram_1500_2005", "dodge_ram_1500_2006", "dodge_ram_1500_2007", "dodge_ram_1500_2008", "dodge_ram_1500_2009", "dodge_ram_1500_2010", "dodge_ram_1500_2011", "dodge_ram_1500_2012", "dodge_ram_1500_2013", "dodge_ram_1500_2014", "dodge_ram_1500_2015", "dodge_ram_2500_1992", "dodge_ram_2500_1994", "dodge_ram_2500_1995", "dodge_ram_2500_1996", "dodge_ram_2500_1997", "dodge_ram_2500_1998", "dodge_ram_2500_1999", "dodge_ram_2500_2000", "dodge_ram_2500_2001", "dodge_ram_2500_2002", "dodge_ram_2500_2003", "dodge_ram_2500_2004", "dodge_ram_2500_2005", "dodge_ram_2500_2006", "dodge_ram_2500_2007", "dodge_ram_2500_2008", "dodge_ram_2500_2010", "dodge_ram_2500_2011", "dodge_ram_2500_2012", "dodge_ram_2500_2014", "dodge_ram_3500_1995", "dodge_ram_3500_1996", "dodge_ram_3500_1997", "dodge_ram_3500_1998", "dodge_ram_3500_1999", "dodge_ram_3500_2000", "dodge_ram_3500_2001", "dodge_ram_3500_2003", "dodge_ram_3500_2004", 
"dodge_ram_3500_2005", "dodge_ram_3500_2006", "dodge_ram_3500_2007", "dodge_ram_3500_2008", "dodge_ram_3500_2010", "dodge_ram_3500_2012", "dodge_stealth_1991", "dodge_stealth_1992", "dodge_stratus_1999", "dodge_stratus_2000", "dodge_stratus_2001", "dodge_stratus_2002", "dodge_stratus_2003", "dodge_stratus_2004", "dodge_stratus_2005", "dodge_stratus_2006", "dodge_viper_2001", "dodge_viper_2014", "eagle_talon_1995", "fiat_five hundred_2012", "fiat_five hundred_2013", "ford_bronco_1976", "ford_bronco_1979", "ford_bronco_1984", "ford_bronco_1985", "ford_bronco_1986", "ford_bronco_1987", "ford_bronco_1988", "ford_bronco_1989", "ford_bronco_1990", "ford_bronco_1991", "ford_bronco_1992", "ford_bronco_1993", "ford_bronco_1994", "ford_bronco_1995", "ford_bronco_1996", "ford_c-max_2013", "ford_contour_1996", "ford_contour_1998", "ford_contour_1999", "ford_contour_2000", "ford_coupe_1934", "ford_coupe_1946", "ford_crown victoria_1995", "ford_crown victoria_1997", "ford_crown victoria_1998", "ford_crown victoria_1999", "ford_crown victoria_2000", "ford_crown victoria_2001", "ford_crown victoria_2002", "ford_crown victoria_2003", "ford_crown victoria_2004", "ford_crown victoria_2005", "ford_crown victoria_2006", "ford_crown victoria_2007", "ford_crown victoria_2008", "ford_crown victoria_2009", "ford_crown victoria_2010", "ford_crown victoria_2011", "ford_e150_1999", "ford_e150_2004", "ford_e150_2006", "ford_e250_1997", "ford_e250_2006", "ford_e350_1998", "ford_e350_2000", "ford_e350_2001", "ford_e350_2003", "ford_e350_2005", "ford_e350_2006", "ford_e350_2007", "ford_e350_2009", "ford_edge_2007", "ford_edge_2008", "ford_edge_2010", "ford_edge_2011", "ford_edge_2012", "ford_edge_2013", "ford_escape_2001", "ford_escape_2002", "ford_escape_2003", "ford_escape_2004", "ford_escape_2005", "ford_escape_2006", "ford_escape_2007", "ford_escape_2008", "ford_escape_2009", "ford_escape_2010", "ford_escape_2011", "ford_escape_2012", "ford_escape_2013", "ford_escape_2014", "ford_escape_2015", "ford_escort_1997", "ford_escort_1998", "ford_escort_1999", "ford_escort_2000", "ford_escort_2001", "ford_escort_2002", "ford_excursion_2000", "ford_excursion_2001", "ford_excursion_2002", "ford_excursion_2003", "ford_excursion_2004", "ford_expedition_1997", "ford_expedition_1998", "ford_expedition_1999", "ford_expedition_2000", "ford_expedition_2001", "ford_expedition_2002", "ford_expedition_2003", "ford_expedition_2004", "ford_expedition_2005", "ford_expedition_2006", "ford_expedition_2007", "ford_expedition_2008", "ford_expedition_2010", "ford_explorer_1991", "ford_explorer_1992", "ford_explorer_1993", "ford_explorer_1994", "ford_explorer_1995", "ford_explorer_1996", "ford_explorer_1997", "ford_explorer_1998", "ford_explorer_1999", "ford_explorer_2000", "ford_explorer_2001", "ford_explorer_2002", "ford_explorer_2003", "ford_explorer_2004", "ford_explorer_2005", "ford_explorer_2006", "ford_explorer_2007", "ford_explorer_2008", "ford_explorer_2009", "ford_explorer_2010", "ford_explorer_2012", "ford_explorer_2013", "ford_explorer_2014", "ford_explorer_2015", "ford_f100_1963", "ford_f100_1964", "ford_f100_1968", "ford_f100_1970", "ford_f100_1978", "ford_f100_1979", "ford_f150_1976", "ford_f150_1979", "ford_f150_1985", "ford_f150_1986", "ford_f150_1987", "ford_f150_1988", "ford_f150_1989", "ford_f150_1990", "ford_f150_1991", "ford_f150_1992", "ford_f150_1993", "ford_f150_1994", "ford_f150_1995", "ford_f150_1996", "ford_f150_1997", "ford_f150_1998", "ford_f150_1999", "ford_f150_2000", "ford_f150_2001", "ford_f150_2002", 
"ford_f150_2003", "ford_f150_2004", "ford_f150_2005", "ford_f150_2006", "ford_f150_2007", "ford_f150_2008", "ford_f150_2009", "ford_f150_2010", "ford_f150_2011", "ford_f150_2012", "ford_f150_2013", "ford_f150_2014", "ford_f150_2015", "ford_f250_1971", "ford_f250_1976", "ford_f250_1978", "ford_f250_1984", "ford_f250_1985", "ford_f250_1986", "ford_f250_1987", "ford_f250_1988", "ford_f250_1989", "ford_f250_1990", "ford_f250_1991", "ford_f250_1992", "ford_f250_1993", "ford_f250_1994", "ford_f250_1995", "ford_f250_1996", "ford_f250_1997", "ford_f250_1999", "ford_f250_2000", "ford_f250_2001", "ford_f250_2002", "ford_f250_2003", "ford_f250_2004", "ford_f250_2005", "ford_f250_2006", "ford_f250_2007", "ford_f250_2008", "ford_f250_2009", "ford_f250_2010", "ford_f250_2011", "ford_f250_2012", "ford_f250_2013", "ford_f250_2015", "ford_f250_2016", "ford_f350_1989", "ford_f350_1991", "ford_f350_1993", "ford_f350_1994", "ford_f350_1995", "ford_f350_1996", "ford_f350_1997", "ford_f350_1999", "ford_f350_2000", "ford_f350_2001", "ford_f350_2002", "ford_f350_2003", "ford_f350_2004", "ford_f350_2005", "ford_f350_2006", "ford_f350_2007", "ford_f350_2008", "ford_f350_2009", "ford_f350_2010", "ford_f350_2011", "ford_f350_2012", "ford_f450_1999", "ford_f450_2008", "ford_fiesta_2011", "ford_fiesta_2012", "ford_fiesta_2013", "ford_fiesta_2014", "ford_fiesta_2015", "ford_five hundred_2005", "ford_five hundred_2006", "ford_five hundred_2007", "ford_flex_2009", "ford_focus_2000", "ford_focus_2001", "ford_focus_2002", "ford_focus_2003", "ford_focus_2004", "ford_focus_2005", "ford_focus_2006", "ford_focus_2007", "ford_focus_2008", "ford_focus_2009", "ford_focus_2010", "ford_focus_2011", "ford_focus_2012", "ford_focus_2013", "ford_focus_2014", "ford_focus_2015", "ford_freestar_2004", "ford_freestar_2005", "ford_freestar_2006", "ford_freestyle_2005", "ford_freestyle_2006", "ford_fusion_2006", "ford_fusion_2007", "ford_fusion_2008", "ford_fusion_2009", "ford_fusion_2010", "ford_fusion_2011", "ford_fusion_2012", "ford_fusion_2013", "ford_fusion_2014", "ford_fusion_2015", "ford_galaxie_1964", "ford_mustang_1965", "ford_mustang_1966", "ford_mustang_1967", "ford_mustang_1968", "ford_mustang_1969", "ford_mustang_1970", "ford_mustang_1971", "ford_mustang_1972", "ford_mustang_1973", "ford_mustang_1983", "ford_mustang_1984", "ford_mustang_1985", "ford_mustang_1986", "ford_mustang_1987", "ford_mustang_1988", "ford_mustang_1989", "ford_mustang_1990", "ford_mustang_1991", "ford_mustang_1992", "ford_mustang_1993", "ford_mustang_1994", "ford_mustang_1995", "ford_mustang_1996", "ford_mustang_1997", "ford_mustang_1998", "ford_mustang_1999", "ford_mustang_2000", "ford_mustang_2001", "ford_mustang_2002", "ford_mustang_2003", "ford_mustang_2004", "ford_mustang_2005", "ford_mustang_2006", "ford_mustang_2007", "ford_mustang_2008", "ford_mustang_2009", "ford_mustang_2010", "ford_mustang_2011", "ford_mustang_2012", "ford_mustang_2013", "ford_mustang_2014", "ford_mustang_2015", "ford_mustang_convertible_1989", "ford_mustang_convertible_1991", "ford_mustang_convertible_1999", "ford_mustang_convertible_2000", "ford_mustang_convertible_2001", "ford_mustang_convertible_2004", "ford_mustang_convertible_2005", "ford_mustang_convertible_2007", "ford_mustang_convertible_2014", "ford_mustang_gt_1995", "ford_mustang_gt_1996", "ford_mustang_gt_1999", "ford_mustang_gt_2000", "ford_mustang_gt_2001", "ford_mustang_gt_2002", "ford_mustang_gt_2004", "ford_mustang_gt_2005", "ford_mustang_gt_2006", "ford_mustang_gt_2011", "ford_mustang_gt_2014", 
"ford_mustang_gt_convertible_1994", "ford_mustang_mach_2003", "ford_ranger_1986", "ford_ranger_1987", "ford_ranger_1988", "ford_ranger_1989", "ford_ranger_1990", "ford_ranger_1991", "ford_ranger_1992", "ford_ranger_1993", "ford_ranger_1994", "ford_ranger_1995", "ford_ranger_1996", "ford_ranger_1997", "ford_ranger_1998", "ford_ranger_1999", "ford_ranger_2000", "ford_ranger_2001", "ford_ranger_2002", "ford_ranger_2003", "ford_ranger_2004", "ford_ranger_2005", "ford_ranger_2006", "ford_ranger_2007", "ford_ranger_2008", "ford_ranger_2009", "ford_ranger_2010", "ford_ranger_2011", "ford_roadster_1932", "ford_taurus_1993", "ford_taurus_1994", "ford_taurus_1995", "ford_taurus_1996", "ford_taurus_1997", "ford_taurus_1998", "ford_taurus_1999", "ford_taurus_2000", "ford_taurus_2001", "ford_taurus_2002", "ford_taurus_2003", "ford_taurus_2004", "ford_taurus_2005", "ford_taurus_2006", "ford_taurus_2007", "ford_taurus_2008", "ford_taurus_2009", "ford_taurus_2010", "ford_taurus_2011", "ford_taurus_2013", "ford_taurus_2014", "ford_thunderbird_1957", "ford_thunderbird_1964", "ford_thunderbird_1965", "ford_thunderbird_1971", "ford_thunderbird_1976", "ford_thunderbird_1993", "ford_thunderbird_1994", "ford_thunderbird_1995", "ford_thunderbird_1996", "ford_thunderbird_1997", "ford_thunderbird_2002", "ford_thunderbird_2004", "ford_transit_2012", "ford_windstar_1998", "ford_windstar_1999", "ford_windstar_2000", "ford_windstar_2001", "ford_windstar_2002", "ford_windstar_2003", "gmc_acadia_2007", "gmc_acadia_2008", "gmc_acadia_2009", "gmc_acadia_2010", "gmc_acadia_2011", "gmc_acadia_2012", "gmc_canyon_2004", "gmc_canyon_2005", "gmc_canyon_2006", "gmc_canyon_2007", "gmc_canyon_2008", "gmc_envoy_2002", "gmc_envoy_2003", "gmc_envoy_2004", "gmc_envoy_2005", "gmc_envoy_2006", "gmc_envoy_2007", "gmc_envoy_2008", "gmc_jimmy_1995", "gmc_jimmy_1996", "gmc_jimmy_1998", "gmc_jimmy_1999", "gmc_jimmy_2000", "gmc_jimmy_2001", "gmc_safari_2000", "gmc_sierra_1500_1986", "gmc_sierra_1500_1987", "gmc_sierra_1500_1989", "gmc_sierra_1500_1990", "gmc_sierra_1500_1991", "gmc_sierra_1500_1992", "gmc_sierra_1500_1993", "gmc_sierra_1500_1994", "gmc_sierra_1500_1995", "gmc_sierra_1500_1996", "gmc_sierra_1500_1997", "gmc_sierra_1500_1998", "gmc_sierra_1500_1999", "gmc_sierra_1500_2000", "gmc_sierra_1500_2001", "gmc_sierra_1500_2002", "gmc_sierra_1500_2003", "gmc_sierra_1500_2004", "gmc_sierra_1500_2005", "gmc_sierra_1500_2006", "gmc_sierra_1500_2007", "gmc_sierra_1500_2008", "gmc_sierra_1500_2009", "gmc_sierra_1500_2010", "gmc_sierra_1500_2011", "gmc_sierra_1500_2012", "gmc_sierra_1500_2013", "gmc_sierra_1500_2014", "gmc_sierra_2500_2000", "gmc_sierra_2500_2002", "gmc_sierra_2500_2004", "gmc_sierra_2500_2006", "gmc_sierra_2500_2007", "gmc_sierra_2500_2008", "gmc_sierra_3500_2001", "gmc_sonoma_1995", "gmc_sonoma_1996", "gmc_sonoma_1997", "gmc_sonoma_1998", "gmc_sonoma_1999", "gmc_sonoma_2000", "gmc_sonoma_2001", "gmc_sonoma_2002", "gmc_sonoma_2003", "gmc_suburban_1500_1994", "gmc_suburban_1500_1995", "gmc_suburban_1500_1996", "gmc_suburban_1500_1997", "gmc_suburban_1500_1999", "gmc_terrain_2010", "gmc_terrain_2011", "gmc_terrain_2012", "gmc_terrain_2013", "gmc_yukon_1500_1995", "gmc_yukon_1500_1996", "gmc_yukon_1500_1997", "gmc_yukon_1500_1998", "gmc_yukon_1500_1999", "gmc_yukon_1500_2000", "gmc_yukon_1500_2001", "gmc_yukon_1500_2002", "gmc_yukon_1500_2003", "gmc_yukon_1500_2004", "gmc_yukon_1500_2005", "gmc_yukon_1500_2006", "gmc_yukon_1500_2007", "gmc_yukon_1500_2008", "gmc_yukon_1500_2009", "gmc_yukon_1500_2011", "honda_accord_1988", 
"honda_accord_1990", "honda_accord_1991", "honda_accord_1992", "honda_accord_1993", "honda_accord_1994", "honda_accord_1995", "honda_accord_1996", "honda_accord_1997", "honda_accord_1998", "honda_accord_1999", "honda_accord_2 door_2001", "honda_accord_2 door_2003", "honda_accord_2 door_2004", "honda_accord_2 door_2005", "honda_accord_2 door_2010", "honda_accord_2000", "honda_accord_2001", "honda_accord_2002", "honda_accord_2003", "honda_accord_2004", "honda_accord_2005", "honda_accord_2006", "honda_accord_2007", "honda_accord_2008", "honda_accord_2009", "honda_accord_2010", "honda_accord_2011", "honda_accord_2012", "honda_accord_2013", "honda_accord_2014", "honda_accord_2015", "honda_accord_2016", "honda_civic_1988", "honda_civic_1989", "honda_civic_1990", "honda_civic_1991", "honda_civic_1992", "honda_civic_1993", "honda_civic_1994", "honda_civic_1995", "honda_civic_1996", "honda_civic_1997", "honda_civic_1998", "honda_civic_1999", "honda_civic_2000", "honda_civic_2001", "honda_civic_2002", "honda_civic_2003", "honda_civic_2004", "honda_civic_2005", "honda_civic_2006", "honda_civic_2007", "honda_civic_2008", "honda_civic_2009", "honda_civic_2010", "honda_civic_2011", "honda_civic_2012", "honda_civic_2013", "honda_civic_2014", "honda_civic_2015", "honda_civic_2016", "honda_civic_coupe_1993", "honda_civic_coupe_1994", "honda_civic_coupe_1995", "honda_civic_coupe_1996", "honda_civic_coupe_1997", "honda_civic_coupe_1998", "honda_civic_coupe_1999", "honda_civic_coupe_2000", "honda_civic_coupe_2001", "honda_civic_coupe_2002", "honda_civic_coupe_2003", "honda_civic_coupe_2004", "honda_civic_coupe_2005", "honda_civic_coupe_2006", "honda_civic_coupe_2007", "honda_civic_coupe_2008", "honda_civic_coupe_2009", "honda_civic_coupe_2010", "honda_civic_coupe_2011", "honda_civic_coupe_2012", "honda_civic_coupe_2013", "honda_civic_hatchback_1989", "honda_civic_hatchback_1990", "honda_civic_hatchback_1991", "honda_civic_hatchback_1992", "honda_civic_hatchback_1993", "honda_civic_hatchback_1994", "honda_civic_hatchback_1996", "honda_civic_hatchback_1998", "honda_civic_hatchback_2000", "honda_civic_hatchback_2002", "honda_civic_hatchback_2003", "honda_civic_hatchback_2004", "honda_cr-v_1997", "honda_cr-v_1998", "honda_cr-v_1999", "honda_cr-v_2000", "honda_cr-v_2001", "honda_cr-v_2002", "honda_cr-v_2003", "honda_cr-v_2004", "honda_cr-v_2005", "honda_cr-v_2006", "honda_cr-v_2007", "honda_cr-v_2008", "honda_cr-v_2009", "honda_cr-v_2010", "honda_cr-v_2011", "honda_cr-v_2012", "honda_cr-v_2013", "honda_cr-v_2014", "honda_cr-v_2015", "honda_cr-z_2011", "honda_crx_1988", "honda_crx_1989", "honda_crx_1991", "honda_delsol_1993", "honda_delsol_1994", "honda_delsol_1995", "honda_element_2003", "honda_element_2004", "honda_element_2005", "honda_element_2006", "honda_element_2007", "honda_element_2008", "honda_element_2009", "honda_element_2010", "honda_fit_2007", "honda_fit_2008", "honda_fit_2009", "honda_fit_2010", "honda_fit_2011", "honda_fit_2012", "honda_fit_2013", "honda_fit_2015", "honda_insight_2010", "honda_odyssey_1995", "honda_odyssey_1996", "honda_odyssey_1997", "honda_odyssey_1998", "honda_odyssey_1999", "honda_odyssey_2000", "honda_odyssey_2001", "honda_odyssey_2002", "honda_odyssey_2003", "honda_odyssey_2004", "honda_odyssey_2005", "honda_odyssey_2006", "honda_odyssey_2007", "honda_odyssey_2008", "honda_odyssey_2009", "honda_odyssey_2010", "honda_odyssey_2011", "honda_odyssey_2012", "honda_passport_1998", "honda_passport_1999", "honda_passport_2000", "honda_passport_2001", "honda_passport_2002", 
"honda_pilot_2003", "honda_pilot_2004", "honda_pilot_2005", "honda_pilot_2006", "honda_pilot_2007", "honda_pilot_2008", "honda_pilot_2009", "honda_pilot_2010", "honda_pilot_2011", "honda_prelude_1990", "honda_prelude_1992", "honda_prelude_1993", "honda_prelude_1994", "honda_prelude_1997", "honda_prelude_1998", "honda_prelude_1999", "honda_prelude_2000", "honda_prelude_2001", "honda_ridgeline_2006", "honda_ridgeline_2007", "honda_ridgeline_2008", "honda_ridgeline_2012", "honda_ridgeline_2013", "honda_s2000_2000", "honda_s2000_2001", "honda_s2000_2002", "honda_s2000_2003", "honda_s2000_2004", "honda_s2000_2005", "honda_s2000_2006", "hummer_h2_2003", "hummer_h2_2004", "hummer_h2_2005", "hummer_h2_2006", "hummer_h3_2006", "hummer_h3_2007", "hummer_h3_2008", "hummer_h3_2009", "hyundai_accent_2000", "hyundai_accent_2001", "hyundai_accent_2002", "hyundai_accent_2003", "hyundai_accent_2004", "hyundai_accent_2005", "hyundai_accent_2006", "hyundai_accent_2007", "hyundai_accent_2008", "hyundai_accent_2009", "hyundai_accent_2010", "hyundai_accent_2011", "hyundai_accent_2012", "hyundai_accent_2013", "hyundai_accent_2015", "hyundai_azera_2006", "hyundai_azera_2007", "hyundai_azera_2014", "hyundai_elantra_1999", "hyundai_elantra_2000", "hyundai_elantra_2001", "hyundai_elantra_2002", "hyundai_elantra_2003", "hyundai_elantra_2004", "hyundai_elantra_2005", "hyundai_elantra_2006", "hyundai_elantra_2007", "hyundai_elantra_2008", "hyundai_elantra_2009", "hyundai_elantra_2010", "hyundai_elantra_2011", "hyundai_elantra_2012", "hyundai_elantra_2013", "hyundai_elantra_2014", "hyundai_elantra_2015", "hyundai_elantra_2016", "hyundai_entourage_2007", "hyundai_genesis_2009", "hyundai_genesis_2010", "hyundai_genesis_2011", "hyundai_genesis_coupe_2010", "hyundai_genesis_coupe_2011", "hyundai_genesis_coupe_2013", "hyundai_santafe_2001", "hyundai_santafe_2002", "hyundai_santafe_2003", "hyundai_santafe_2004", "hyundai_santafe_2005", "hyundai_santafe_2006", "hyundai_santafe_2007", "hyundai_santafe_2008", "hyundai_santafe_2009", "hyundai_santafe_2014", "hyundai_sonata_1999", "hyundai_sonata_2000", "hyundai_sonata_2001", "hyundai_sonata_2002", "hyundai_sonata_2003", "hyundai_sonata_2004", "hyundai_sonata_2005", "hyundai_sonata_2006", "hyundai_sonata_2007", "hyundai_sonata_2008", "hyundai_sonata_2009", "hyundai_sonata_2010", "hyundai_sonata_2011", "hyundai_sonata_2012", "hyundai_sonata_2013", "hyundai_sonata_2014", "hyundai_tiburon_2000", "hyundai_tiburon_2001", "hyundai_tiburon_2003", "hyundai_tiburon_2004", "hyundai_tiburon_2005", "hyundai_tiburon_2006", "hyundai_tiburon_2007", "hyundai_tiburon_2008", "hyundai_tucson_2005", "hyundai_tucson_2006", "hyundai_tucson_2007", "hyundai_tucson_2010", "hyundai_tucson_2011", "hyundai_veloster_2012", "hyundai_veloster_2013", "hyundai_xg300_2001", "infiniti_fx35_2003", "infiniti_fx35_2004", "infiniti_fx35_2005", "infiniti_fx35_2006", "infiniti_fx35_2007", "infiniti_fx35_2008", "infiniti_fx35_2009", "infiniti_fx45_2003", "infiniti_g20_1999", "infiniti_g20_2002", "infiniti_g25_2012", "infiniti_g35_2003", "infiniti_g35_2004", "infiniti_g35_2005", "infiniti_g35_2006", "infiniti_g35_2007", "infiniti_g35_2008", "infiniti_g35_coupe_2003", "infiniti_g35_coupe_2004", "infiniti_g35_coupe_2005", "infiniti_g35_coupe_2006", "infiniti_g35_coupe_2007", "infiniti_g37_2008", "infiniti_g37_2009", "infiniti_g37_2010", "infiniti_g37_2011", "infiniti_g37_2012", "infiniti_g37_2013", "infiniti_i30_1996", "infiniti_i30_1997", "infiniti_i30_1998", "infiniti_i30_1999", "infiniti_i30_2000", "infiniti_i30_2001", 
"infiniti_i35_2002", "infiniti_i35_2003", "infiniti_i35_2004", "infiniti_m35_2006", "infiniti_m35_2007", "infiniti_m35_2008", "infiniti_m45_2003", "infiniti_m45_2006", "infiniti_q45_2001", "infiniti_q45_2002", "infiniti_q45_2003", "infiniti_qx4_1997", "infiniti_qx4_1998", "infiniti_qx4_1999", "infiniti_qx4_2000", "infiniti_qx4_2001", "infiniti_qx4_2002", "infiniti_qx4_2003", "infiniti_qx56_2004", "infiniti_qx56_2005", "infiniti_qx56_2006", "infiniti_qx56_2008", "infiniti_qx56_2010", "infiniti_qx56_2011", "isuzu_amigo_1998", "isuzu_axiom_2002", "isuzu_rodeo_1996", "isuzu_rodeo_1997", "isuzu_rodeo_1998", "isuzu_rodeo_1999", "isuzu_rodeo_2000", "isuzu_rodeo_2001", "isuzu_rodeo_2002", "isuzu_rodeo_2004", "isuzu_trooper_1999", "isuzu_trooper_2000", "isuzu_trooper_2001", "jaguar_s-type_2000", "jaguar_s-type_2001", "jaguar_s-type_2002", "jaguar_s-type_2003", "jaguar_s-type_2005", "jaguar_x-type_2002", "jaguar_x-type_2003", "jaguar_x-type_2004", "jaguar_x-type_2005", "jaguar_xj_1986", "jaguar_xj_1998", "jaguar_xj_1999", "jaguar_xj_2000", "jaguar_xj_2001", "jaguar_xj_2004", "jaguar_xj_2005", "jaguar_xj_2013", "jaguar_xk_1997", "jaguar_xk_2008", "jeep_cherokee_1987", "jeep_cherokee_1988", "jeep_cherokee_1989", "jeep_cherokee_1990", "jeep_cherokee_1991", "jeep_cherokee_1992", "jeep_cherokee_1993", "jeep_cherokee_1994", "jeep_cherokee_1995", "jeep_cherokee_1996", "jeep_cherokee_1997", "jeep_cherokee_1998", "jeep_cherokee_1999", "jeep_cherokee_2000", "jeep_cherokee_2001", "jeep_cherokee_2005", "jeep_cherokee_2006", "jeep_cherokee_2014", "jeep_cj5_1973", "jeep_cj5_1979", "jeep_cj7_1985", "jeep_commander_2006", "jeep_commander_2007", "jeep_commander_2008", "jeep_compass_2007", "jeep_compass_2008", "jeep_compass_2009", "jeep_compass_2010", "jeep_compass_2011", "jeep_compass_2015", "jeep_grand_cherokee_1993", "jeep_grand_cherokee_1994", "jeep_grand_cherokee_1995", "jeep_grand_cherokee_1996", "jeep_grand_cherokee_1997", "jeep_grand_cherokee_1998", "jeep_grand_cherokee_1999", "jeep_grand_cherokee_2000", "jeep_grand_cherokee_2001", "jeep_grand_cherokee_2002", "jeep_grand_cherokee_2003", "jeep_grand_cherokee_2004", "jeep_grand_cherokee_2005", "jeep_grand_cherokee_2006", "jeep_grand_cherokee_2007", "jeep_grand_cherokee_2008", "jeep_grand_cherokee_2009", "jeep_grand_cherokee_2011", "jeep_grand_cherokee_2012", "jeep_grand_cherokee_2013", "jeep_grand_cherokee_2014", "jeep_liberty_2002", "jeep_liberty_2003", "jeep_liberty_2004", "jeep_liberty_2005", "jeep_liberty_2006", "jeep_liberty_2007", "jeep_liberty_2008", "jeep_liberty_2010", "jeep_liberty_2011", "jeep_liberty_2012", "jeep_patriot_2007", "jeep_patriot_2008", "jeep_patriot_2009", "jeep_patriot_2010", "jeep_patriot_2012", "jeep_patriot_2014", "jeep_patriot_2015", "jeep_wrangler_1987", "jeep_wrangler_1988", "jeep_wrangler_1989", "jeep_wrangler_1990", "jeep_wrangler_1991", "jeep_wrangler_1992", "jeep_wrangler_1993", "jeep_wrangler_1994", "jeep_wrangler_1995", "jeep_wrangler_1997", "jeep_wrangler_1998", "jeep_wrangler_1999", "jeep_wrangler_2000", "jeep_wrangler_2001", "jeep_wrangler_2002", "jeep_wrangler_2003", "jeep_wrangler_2004", "jeep_wrangler_2005", "jeep_wrangler_2006", "jeep_wrangler_2007", "jeep_wrangler_2008", "jeep_wrangler_2009", "jeep_wrangler_2010", "jeep_wrangler_2011", "jeep_wrangler_2012", "jeep_wrangler_2013", "jeep_wrangler_2014", "jeep_wrangler_rubicon_2012", "jeep_wrangler_sahara_1999", "jeep_wrangler_tj_1999", "jeep_wrangler_tj_2001", "jeep_wrangler_unlimited_2007", "jeep_wrangler_unlimited_2008", "jeep_wrangler_unlimited_2010", 
"jeep_wrangler_unlimited_2011", "jeep_wrangler_unlimited_2012", "jeep_wrangler_unlimited_2013", "jeep_wrangler_unlimited_2014", "jeep_wrangler_unlimited_2015", "kia_amanti_2004", "kia_amanti_2005", "kia_forte_2010", "kia_forte_2011", "kia_forte_2012", "kia_forte_2013", "kia_forte_2015", "kia_optima_2001", "kia_optima_2002", "kia_optima_2003", "kia_optima_2004", "kia_optima_2005", "kia_optima_2006", "kia_optima_2007", "kia_optima_2008", "kia_optima_2009", "kia_optima_2010", "kia_optima_2011", "kia_optima_2012", "kia_optima_2013", "kia_optima_2014", "kia_optima_2015", "kia_rio_2001", "kia_rio_2002", "kia_rio_2003", "kia_rio_2004", "kia_rio_2005", "kia_rio_2006", "kia_rio_2007", "kia_rio_2008", "kia_rio_2009", "kia_rio_2010", "kia_rio_2012", "kia_rio_2013", "kia_rondo_2007", "kia_rondo_2008", "kia_sedona_2002", "kia_sedona_2003", "kia_sedona_2004", "kia_sedona_2005", "kia_sedona_2006", "kia_sedona_2007", "kia_sedona_2008", "kia_sedona_2012", "kia_sephia_1999", "kia_sephia_2000", "kia_sephia_2001", "kia_sorento_2003", "kia_sorento_2004", "kia_sorento_2005", "kia_sorento_2006", "kia_sorento_2007", "kia_sorento_2008", "kia_sorento_2011", "kia_sorento_2012", "kia_sorento_2013", "kia_sorento_2014", "kia_soul_2010", "kia_soul_2011", "kia_soul_2012", "kia_soul_2013", "kia_soul_2014", "kia_soul_2015", "kia_spectra_2001", "kia_spectra_2002", "kia_spectra_2003", "kia_spectra_2004", "kia_spectra_2005", "kia_spectra_2006", "kia_spectra_2007", "kia_spectra_2008", "kia_spectra_2009", "kia_sportage_1999", "kia_sportage_2000", "kia_sportage_2001", "kia_sportage_2002", "kia_sportage_2005", "kia_sportage_2006", "kia_sportage_2007", "kia_sportage_2008", "kia_sportage_2011", "kia_sportage_2012", "landrover_discovery_1997", "landrover_discovery_1999", "landrover_discovery_2003", "landrover_discovery_2004", "landrover_lr2_2008", "landrover_lr3_2006", "landrover_lr3_2007", "landrover_rangerover_1999", "landrover_rangerover_2003", "landrover_rangerover_2004", "landrover_rangerover_2006", "landrover_rangerover_2007", "landrover_rangerover_2011", "landrover_rangerover_2012", "lexus_ct_2013", "lexus_es300_1994", "lexus_es300_1995", "lexus_es300_1996", "lexus_es300_1997", "lexus_es300_1998", "lexus_es300_1999", "lexus_es300_2000", "lexus_es300_2001", "lexus_es300_2002", "lexus_es300_2003", "lexus_es330_2004", "lexus_es330_2005", "lexus_es330_2006", "lexus_es350_2007", "lexus_es350_2008", "lexus_es350_2013", "lexus_gs300_1998", "lexus_gs300_1999", "lexus_gs300_2000", "lexus_gs300_2001", "lexus_gs300_2002", "lexus_gs300_2003", "lexus_gs300_2006", "lexus_gs350_2007", "lexus_gs350_2010", "lexus_gs400_1998", "lexus_gs400_1999", "lexus_gs400_2000", "lexus_gs430_2003", "lexus_gx470_2003", "lexus_gx470_2004", "lexus_gx470_2005", "lexus_gx470_2006", "lexus_gx470_2008", "lexus_is250_2006", "lexus_is250_2007", "lexus_is250_2008", "lexus_is250_2009", "lexus_is250_2010", "lexus_is250_2011", "lexus_is250_2015", "lexus_is300_2001", "lexus_is300_2002", "lexus_is300_2003", "lexus_is300_2004", "lexus_is350_2006", "lexus_is350_2007", "lexus_is350_2010", "lexus_ls400_1990", "lexus_ls400_1993", "lexus_ls400_1995", "lexus_ls400_1996", "lexus_ls400_1999", "lexus_ls430_2001", "lexus_ls430_2002", "lexus_ls430_2003", "lexus_ls430_2004", "lexus_ls430_2005", "lexus_ls430_2006", "lexus_ls460_2007", "lexus_ls460_2008", "lexus_lx470_1999", "lexus_lx570_2015", "lexus_rx300_1999", "lexus_rx300_2000", "lexus_rx300_2001", "lexus_rx300_2002", "lexus_rx300_2003", "lexus_rx330_2004", "lexus_rx330_2005", "lexus_rx350_2008", "lexus_rx350_2010", 
"lexus_rx400h_2006", "lexus_sc300_1992", "lexus_sc300_1993", "lexus_sc300_1995", "lexus_sc400_1992", "lexus_sc400_1993", "lexus_sc400_1995", "lexus_sc430_2002", "lexus_sc430_2003", "lexus_sc430_2005", "lincoln_aviator_2003", "lincoln_aviator_2004", "lincoln_aviator_2005", "lincoln_continental_1998", "lincoln_continental_1999", "lincoln_continental_2000", "lincoln_continental_2001", "lincoln_continental_2002", "lincoln_ls_2000", "lincoln_ls_2001", "lincoln_ls_2002", "lincoln_ls_2003", "lincoln_ls_2004", "lincoln_ls_2005", "lincoln_ls_2006", "lincoln_mark_v_1977", "lincoln_mark_v_1979", "lincoln_mark_viii_1996", "lincoln_mark_viii_1997", "lincoln_mks_2009", "lincoln_mks_2011", "lincoln_mks_2013", "lincoln_mkx_2007", "lincoln_mkx_2008", "lincoln_mkz_2007", "lincoln_mkz_2008", "lincoln_mkz_2010", "lincoln_navigator_1998", "lincoln_navigator_1999", "lincoln_navigator_2000", "lincoln_navigator_2001", "lincoln_navigator_2002", "lincoln_navigator_2003", "lincoln_navigator_2004", "lincoln_navigator_2005", "lincoln_navigator_2006", "lincoln_navigator_2007", "lincoln_navigator_2008", "lincoln_towncar_1988", "lincoln_towncar_1989", "lincoln_towncar_1993", "lincoln_towncar_1994", "lincoln_towncar_1995", "lincoln_towncar_1996", "lincoln_towncar_1997", "lincoln_towncar_1998", "lincoln_towncar_1999", "lincoln_towncar_2000", "lincoln_towncar_2001", "lincoln_towncar_2002", "lincoln_towncar_2003", "lincoln_towncar_2004", "lincoln_towncar_2005", "lincoln_towncar_2006", "lincoln_towncar_2007", "lincoln_zephyr_2006", "mazda_2_2012", "mazda_2_2013", "mazda_2_2014", "mazda_3_2004", "mazda_3_2005", "mazda_3_2006", "mazda_3_2007", "mazda_3_2008", "mazda_3_2009", "mazda_3_2010", "mazda_3_2011", "mazda_3_2012", "mazda_3_2013", "mazda_3_2014", "mazda_3_2015", "mazda_3_2016", "mazda_3_hatchback_2004", "mazda_3_hatchback_2006", "mazda_3_hatchback_2007", "mazda_3_hatchback_2011", "mazda_5_2006", "mazda_5_2007", "mazda_5_2008", "mazda_5_2009", "mazda_5_2012", "mazda_626_1997", "mazda_626_1998", "mazda_626_1999", "mazda_626_2000", "mazda_626_2001", "mazda_626_2002", "mazda_6_2003", "mazda_6_2004", "mazda_6_2005", "mazda_6_2006", "mazda_6_2007", "mazda_6_2008", "mazda_6_2009", "mazda_6_2010", "mazda_6_2011", "mazda_6_2012", "mazda_6_2015", "mazda_6_wagon_2004", "mazda_b3000_2000", "mazda_b4000_1999", "mazda_cx7_2007", "mazda_cx7_2008", "mazda_cx9_2007", "mazda_cx9_2008", "mazda_millenia_2000", "mazda_millenia_2001", "mazda_millenia_2002", "mazda_mpv_2000", "mazda_mpv_2001", "mazda_mpv_2002", "mazda_mpv_2003", "mazda_mpv_2004", "mazda_mpv_2005", "mazda_mpv_2006", "mazda_mx5_miata_1990", "mazda_mx5_miata_1991", "mazda_mx5_miata_1992", "mazda_mx5_miata_1993", "mazda_mx5_miata_1994", "mazda_mx5_miata_1995", "mazda_mx5_miata_1996", "mazda_mx5_miata_1997", "mazda_mx5_miata_1999", "mazda_mx5_miata_2000", "mazda_mx5_miata_2001", "mazda_mx5_miata_2002", "mazda_mx5_miata_2003", "mazda_mx5_miata_2006", "mazda_mx5_miata_2007", "mazda_mx6_1993", "mazda_protege_1997", "mazda_protege_1998", "mazda_protege_1999", "mazda_protege_2000", "mazda_protege_2001", "mazda_protege_2002", "mazda_protege_2003", "mazda_rx7_1985", "mazda_rx7_1988", "mazda_rx7_1991", "mazda_rx7_1993", "mazda_rx8_2004", "mazda_rx8_2005", "mazda_rx8_2006", "mazda_speed3_2007", "mazda_speed3_2010", "mazda_tribute_2001", "mazda_tribute_2002", "mazda_tribute_2003", "mazda_tribute_2004", "mazda_tribute_2005", "mazda_tribute_2006", "mazda_tribute_2008", "mercedes benz_c230_1997", "mercedes benz_c230_1998", "mercedes benz_c230_1999", "mercedes benz_c230_2000", "mercedes 
benz_c230_2002", "mercedes benz_c230_2003", "mercedes benz_c230_2004", "mercedes benz_c230_2005", "mercedes benz_c230_2006", "mercedes benz_c230_2007", "mercedes benz_c240_2001", "mercedes benz_c240_2002", "mercedes benz_c240_2003", "mercedes benz_c240_2004", "mercedes benz_c240_2005", "mercedes benz_c250_2012", "mercedes benz_c250_2013", "mercedes benz_c250_2015", "mercedes benz_c280_1994", "mercedes benz_c280_1995", "mercedes benz_c280_1996", "mercedes benz_c280_1999", "mercedes benz_c280_2006", "mercedes benz_c300_2008", "mercedes benz_c300_2009", "mercedes benz_c300_2010", "mercedes benz_c300_2011", "mercedes benz_c300_2012", "mercedes benz_c320_2001", "mercedes benz_c320_2002", "mercedes benz_c320_2003", "mercedes benz_c320_2004", "mercedes benz_c320_2005", "mercedes benz_c350_2008", "mercedes benz_c55_2006", "mercedes benz_cl500_2006", "mercedes benz_clk320_1999", "mercedes benz_clk320_2000", "mercedes benz_clk320_2001", "mercedes benz_clk320_2004", "mercedes benz_clk320_2005", "mercedes benz_clk350_2008", "mercedes benz_clk430_1999", "mercedes benz_clk500_2003", "mercedes benz_clk500_2005", "mercedes benz_cls500_2006", "mercedes benz_cls550_2007", "mercedes benz_e300_1999", "mercedes benz_e320_1995", "mercedes benz_e320_1996", "mercedes benz_e320_1997", "mercedes benz_e320_1998", "mercedes benz_e320_1999", "mercedes benz_e320_2000", "mercedes benz_e320_2001", "mercedes benz_e320_2002", "mercedes benz_e320_2003", "mercedes benz_e320_2004", "mercedes benz_e320_2005", "mercedes benz_e320_wagon_2004", "mercedes benz_e350_2006", "mercedes benz_e350_2007", "mercedes benz_e350_2008", "mercedes benz_e350_2009", "mercedes benz_e350_2010", "mercedes benz_e350_2011", "mercedes benz_e350_2014", "mercedes benz_e420_1997", "mercedes benz_e430_1999", "mercedes benz_e430_2000", "mercedes benz_e430_2001", "mercedes benz_e500_2003", "mercedes benz_e500_2004", "mercedes benz_e500_2005", "mercedes benz_gl450_2007", "mercedes benz_gl450_2008", "mercedes benz_gla250_2016", "mercedes benz_gla45_2016", "mercedes benz_glk350_2010", "mercedes benz_glk350_2011", "mercedes benz_glk350_2013", "mercedes benz_glk350_2014", "mercedes benz_ml320_1998", "mercedes benz_ml320_1999", "mercedes benz_ml320_2000", "mercedes benz_ml320_2001", "mercedes benz_ml320_2002", "mercedes benz_ml350_2003", "mercedes benz_ml350_2004", "mercedes benz_ml350_2006", "mercedes benz_ml350_2007", "mercedes benz_ml350_2008", "mercedes benz_ml350_2010", "mercedes benz_ml430_1999", "mercedes benz_ml430_2000", "mercedes benz_ml500_2002", "mercedes benz_ml500_2004", "mercedes benz_ml500_2006", "mercedes benz_r350_2006", "mercedes benz_r350_2010", "mercedes benz_s320_1997", "mercedes benz_s350_2012", "mercedes benz_s420_1997", "mercedes benz_s430_2000", "mercedes benz_s430_2001", "mercedes benz_s430_2002", "mercedes benz_s430_2003", "mercedes benz_s500_1995", "mercedes benz_s500_2000", "mercedes benz_s500_2001", "mercedes benz_s500_2002", "mercedes benz_s500_2003", "mercedes benz_s500_2005", "mercedes benz_s550_2007", "mercedes benz_s550_2010", "mercedes benz_s600_2002", "mercedes benz_sl450_1980", "mercedes benz_sl500_1999", "mercedes benz_sl500_2000", "mercedes benz_sl500_2003", "mercedes benz_sl500_2004", "mercedes benz_sl550_2007", "mercedes benz_slk230_1999", "mercedes benz_slk230_2000", "mercedes benz_slk230_2001", "mercedes benz_slk350_2006", "mercedes benz_sprinter_2014", "mercury_cougar_1996", "mercury_cougar_1997", "mercury_cougar_1999", "mercury_cougar_2000", "mercury_cougar_2001", "mercury_cougar_2002", "mercury_grandmarquis_1995", 
"mercury_grandmarquis_1996", "mercury_grandmarquis_1997", "mercury_grandmarquis_1998", "mercury_grandmarquis_1999", "mercury_grandmarquis_2000", "mercury_grandmarquis_2001", "mercury_grandmarquis_2002", "mercury_grandmarquis_2003", "mercury_grandmarquis_2004", "mercury_grandmarquis_2005", "mercury_grandmarquis_2006", "mercury_grandmarquis_2007", "mercury_marauder_2003", "mercury_mariner_2005", "mercury_mariner_2006", "mercury_mariner_2007", "mercury_mariner_2008", "mercury_mariner_2009", "mercury_milan_2006", "mercury_milan_2007", "mercury_milan_2008", "mercury_montego_2006", "mercury_monterey_2004", "mercury_mountaineer_1997", "mercury_mountaineer_1998", "mercury_mountaineer_1999", "mercury_mountaineer_2000", "mercury_mountaineer_2001", "mercury_mountaineer_2002", "mercury_mountaineer_2003", "mercury_mountaineer_2004", "mercury_mountaineer_2005", "mercury_mountaineer_2006", "mercury_mountaineer_2008", "mercury_mystique_2000", "mercury_sable_1996", "mercury_sable_1997", "mercury_sable_1998", "mercury_sable_1999", "mercury_sable_2000", "mercury_sable_2001", "mercury_sable_2002", "mercury_sable_2003", "mercury_sable_2004", "mercury_sable_2005", "mercury_sable_2008", "mercury_villager_1998", "mercury_villager_1999", "mercury_villager_2001", "mercury_villager_2002", "mg_midget_1976", "mini_austin_1975", "mini_cooper_2002", "mini_cooper_2003", "mini_cooper_2004", "mini_cooper_2005", "mini_cooper_2006", "mini_cooper_2007", "mini_cooper_2008", "mini_cooper_2009", "mini_cooper_2010", "mini_cooper_2011", "mini_cooper_2012", "mini_cooper_2013", "mini_cooper_2014", "mini_cooper_countryman_2012", "mitsubishi_3000gt_1993", "mitsubishi_3000gt_1994", "mitsubishi_3000gt_1995", "mitsubishi_3000gt_1997", "mitsubishi_diamante_2001", "mitsubishi_diamante_2002", "mitsubishi_eclipse_1995", "mitsubishi_eclipse_1996", "mitsubishi_eclipse_1997", "mitsubishi_eclipse_1998", "mitsubishi_eclipse_1999", "mitsubishi_eclipse_2000", "mitsubishi_eclipse_2001", "mitsubishi_eclipse_2002", "mitsubishi_eclipse_2003", "mitsubishi_eclipse_2004", "mitsubishi_eclipse_2006", "mitsubishi_eclipse_2007", "mitsubishi_eclipse_2008", "mitsubishi_eclipse_2009", "mitsubishi_eclipse_convertible_2001", "mitsubishi_eclipse_convertible_2003", "mitsubishi_eclipse_convertible_2007", "mitsubishi_endeavor_2004", "mitsubishi_endeavor_2005", "mitsubishi_endeavor_2007", "mitsubishi_galant_1994", "mitsubishi_galant_1998", "mitsubishi_galant_1999", "mitsubishi_galant_2000", "mitsubishi_galant_2001", "mitsubishi_galant_2002", "mitsubishi_galant_2003", "mitsubishi_galant_2004", "mitsubishi_galant_2005", "mitsubishi_galant_2006", "mitsubishi_galant_2007", "mitsubishi_galant_2008", "mitsubishi_galant_2009", "mitsubishi_galant_2011", "mitsubishi_lancer_2002", "mitsubishi_lancer_2003", "mitsubishi_lancer_2004", "mitsubishi_lancer_2005", "mitsubishi_lancer_2006", "mitsubishi_lancer_2008", "mitsubishi_lancer_2009", "mitsubishi_lancer_2010", "mitsubishi_lancer_2011", "mitsubishi_lancer_2012", "mitsubishi_mirage_1999", "mitsubishi_mirage_2000", "mitsubishi_mirage_2001", "mitsubishi_mirage_2014", "mitsubishi_montero_2000", "mitsubishi_montero_2001", "mitsubishi_montero_2002", "mitsubishi_montero_2003", "mitsubishi_outlander_2003", "mitsubishi_outlander_2004", "mitsubishi_outlander_2005", "mitsubishi_outlander_2007", "mitsubishi_outlander_2014", "nissan_200sx_1996", "nissan_200sx_1998", "nissan_240sx_1989", "nissan_240sx_1990", "nissan_240sx_1991", "nissan_240sx_1992", "nissan_240sx_1995", "nissan_300zx_1985", "nissan_300zx_1986", "nissan_300zx_1990", 
"nissan_300zx_1991", "nissan_300zx_1993", "nissan_350z_2003", "nissan_350z_2004", "nissan_350z_2005", "nissan_350z_2006", "nissan_350z_2007", "nissan_350z_2008", "nissan_370z_2009", "nissan_370z_2010", "nissan_370z_2012", "nissan_altima_1994", "nissan_altima_1995", "nissan_altima_1996", "nissan_altima_1997", "nissan_altima_1998", "nissan_altima_1999", "nissan_altima_2000", "nissan_altima_2001", "nissan_altima_2002", "nissan_altima_2003", "nissan_altima_2004", "nissan_altima_2005", "nissan_altima_2006", "nissan_altima_2007", "nissan_altima_2008", "nissan_altima_2009", "nissan_altima_2010", "nissan_altima_2011", "nissan_altima_2012", "nissan_altima_2013", "nissan_altima_2014", "nissan_altima_2015", "nissan_altima_2016", "nissan_armada_2004", "nissan_armada_2005", "nissan_armada_2006", "nissan_armada_2007", "nissan_armada_2008", "nissan_armada_2011", "nissan_cube_2013", "nissan_frontier_1998", "nissan_frontier_1999", "nissan_frontier_2000", "nissan_frontier_2001", "nissan_frontier_2002", "nissan_frontier_2003", "nissan_frontier_2004", "nissan_frontier_2005", "nissan_frontier_2006", "nissan_frontier_2007", "nissan_gtr_2010", "nissan_juke_2011", "nissan_juke_2012", "nissan_maxima_1993", "nissan_maxima_1995", "nissan_maxima_1996", "nissan_maxima_1997", "nissan_maxima_1998", "nissan_maxima_1999", "nissan_maxima_2000", "nissan_maxima_2001", "nissan_maxima_2002", "nissan_maxima_2003", "nissan_maxima_2004", "nissan_maxima_2005", "nissan_maxima_2006", "nissan_maxima_2007", "nissan_maxima_2008", "nissan_maxima_2009", "nissan_maxima_2010", "nissan_maxima_2011", "nissan_maxima_2012", "nissan_maxima_2013", "nissan_murano_2003", "nissan_murano_2004", "nissan_murano_2005", "nissan_murano_2006", "nissan_murano_2007", "nissan_murano_2009", "nissan_murano_2010", "nissan_murano_2011", "nissan_pathfinder_1995", "nissan_pathfinder_1996", "nissan_pathfinder_1997", "nissan_pathfinder_1998", "nissan_pathfinder_1999", "nissan_pathfinder_2000", "nissan_pathfinder_2001", "nissan_pathfinder_2002", "nissan_pathfinder_2003", "nissan_pathfinder_2004", "nissan_pathfinder_2005", "nissan_pathfinder_2006", "nissan_pathfinder_2007", "nissan_pathfinder_2008", "nissan_pathfinder_2013", "nissan_quest_1995", "nissan_quest_1996", "nissan_quest_1997", "nissan_quest_1998", "nissan_quest_1999", "nissan_quest_2000", "nissan_quest_2001", "nissan_quest_2002", "nissan_quest_2004", "nissan_quest_2005", "nissan_quest_2006", "nissan_quest_2007", "nissan_quest_2008", "nissan_quest_2012", "nissan_rogue_2008", "nissan_rogue_2009", "nissan_rogue_2010", "nissan_rogue_2011", "nissan_rogue_2012", "nissan_rogue_2013", "nissan_rogue_2014", "nissan_rogue_2015", "nissan_sentra_1991", "nissan_sentra_1993", "nissan_sentra_1994", "nissan_sentra_1995", "nissan_sentra_1996", "nissan_sentra_1997", "nissan_sentra_1998", "nissan_sentra_1999", "nissan_sentra_2000", "nissan_sentra_2001", "nissan_sentra_2002", "nissan_sentra_2003", "nissan_sentra_2004", "nissan_sentra_2005", "nissan_sentra_2006", "nissan_sentra_2007", "nissan_sentra_2008", "nissan_sentra_2009", "nissan_sentra_2010", "nissan_sentra_2011", "nissan_sentra_2012", "nissan_sentra_2013", "nissan_sentra_2014", "nissan_sentra_2015", "nissan_titan_2004", "nissan_titan_2005", "nissan_titan_2006", "nissan_titan_2007", "nissan_titan_2008", "nissan_titan_2009", "nissan_titan_2012", "nissan_titan_2013", "nissan_versa_2007", "nissan_versa_2008", "nissan_versa_2009", "nissan_versa_2010", "nissan_versa_2011", "nissan_versa_2012", "nissan_versa_2013", "nissan_versa_2014", "nissan_versa_2015", "nissan_versa_2016", 
"nissan_xterra_2000", "nissan_xterra_2001", "nissan_xterra_2002", "nissan_xterra_2003", "nissan_xterra_2004", "nissan_xterra_2005", "nissan_xterra_2006", "nissan_xterra_2007", "nissan_xterra_2008", "oldsmobile_alero_1999", "oldsmobile_alero_2000", "oldsmobile_alero_2001", "oldsmobile_alero_2002", "oldsmobile_alero_2003", "oldsmobile_alero_2004", "oldsmobile_aurora_1998", "oldsmobile_aurora_2001", "oldsmobile_aurora_2002", "oldsmobile_bravada_1997", "oldsmobile_bravada_1998", "oldsmobile_bravada_1999", "oldsmobile_bravada_2000", "oldsmobile_bravada_2001", "oldsmobile_bravada_2002", "oldsmobile_cutlass_1972", "oldsmobile_cutlass_1987", "oldsmobile_cutlass_1994", "oldsmobile_cutlass_1995", "oldsmobile_cutlass_1996", "oldsmobile_cutlass_1998", "oldsmobile_intrigue_1998", "oldsmobile_intrigue_1999", "oldsmobile_intrigue_2000", "oldsmobile_intrigue_2001", "oldsmobile_intrigue_2002", "oldsmobile_silhouette_1999", "oldsmobile_silhouette_2000", "oldsmobile_silhouette_2001", "oldsmobile_silhouette_2002", "oldsmobile_silhouette_2003", "oldsmobile_silhouette_2004", "plymouth_belvedere_1966", "plymouth_breeze_1997", "plymouth_breeze_1999", "plymouth_gtx_1968", "plymouth_neon_1998", "plymouth_neon_2000", "plymouth_neon_2001", "plymouth_roadrunner_1969", "plymouth_valiant_1965", "plymouth_voyager_1995", "plymouth_voyager_1997", "plymouth_voyager_1998", "plymouth_voyager_1999", "plymouth_voyager_2000", "pontiac_aztek_2001", "pontiac_aztek_2002", "pontiac_aztek_2003", "pontiac_aztek_2004", "pontiac_bonneville_1997", "pontiac_bonneville_1998", "pontiac_bonneville_1999", "pontiac_bonneville_2000", "pontiac_bonneville_2001", "pontiac_bonneville_2002", "pontiac_bonneville_2003", "pontiac_bonneville_2004", "pontiac_bonneville_2005", "pontiac_fiero_1986", "pontiac_firebird_1969", "pontiac_firebird_1981", "pontiac_firebird_1988", "pontiac_firebird_1994", "pontiac_firebird_1995", "pontiac_firebird_1996", "pontiac_firebird_1997", "pontiac_firebird_1998", "pontiac_firebird_1999", "pontiac_firebird_2000", "pontiac_firebird_2001", "pontiac_firebird_2002", "pontiac_g5_2007", "pontiac_g5_2008", "pontiac_g5_2009", "pontiac_g6_2005", "pontiac_g6_2006", "pontiac_g6_2007", "pontiac_g6_2008", "pontiac_g6_2009", "pontiac_g6_2010", "pontiac_g8_2008", "pontiac_g8_2009", "pontiac_grandam_1996", "pontiac_grandam_1997", "pontiac_grandam_1998", "pontiac_grandam_1999", "pontiac_grandam_2000", "pontiac_grandam_2001", "pontiac_grandam_2002", "pontiac_grandam_2003", "pontiac_grandam_2004", "pontiac_grandam_2005", "pontiac_grandprix_1997", "pontiac_grandprix_1998", "pontiac_grandprix_1999", "pontiac_grandprix_2000", "pontiac_grandprix_2001", "pontiac_grandprix_2002", "pontiac_grandprix_2003", "pontiac_grandprix_2004", "pontiac_grandprix_2005", "pontiac_grandprix_2006", "pontiac_grandprix_2007", "pontiac_grandprix_2008", "pontiac_gto_1970", "pontiac_gto_2004", "pontiac_gto_2005", "pontiac_gto_2006", "pontiac_lemans_1971", "pontiac_montana_1999", "pontiac_montana_2000", "pontiac_montana_2001", "pontiac_montana_2002", "pontiac_montana_2003", "pontiac_montana_2004", "pontiac_montana_2005", "pontiac_montana_2006", "pontiac_solstice_2006", "pontiac_solstice_2007", "pontiac_solstice_2008", "pontiac_starchief_1954", "pontiac_sunfire_1997", "pontiac_sunfire_1998", "pontiac_sunfire_1999", "pontiac_sunfire_2000", "pontiac_sunfire_2001", "pontiac_sunfire_2002", "pontiac_sunfire_2003", "pontiac_sunfire_2004", "pontiac_sunfire_2005", "pontiac_torrent_2006", "pontiac_torrent_2007", "pontiac_torrent_2008", "pontiac_torrent_2009", 
"pontiac_transam_1978", "pontiac_transam_1979", "pontiac_transam_1986", "pontiac_transam_1995", "pontiac_transam_1997", "pontiac_transam_1998", "pontiac_transam_1999", "pontiac_transam_2000", "pontiac_transam_2001", "pontiac_vibe_2003", "pontiac_vibe_2004", "pontiac_vibe_2005", "pontiac_vibe_2006", "pontiac_vibe_2007", "pontiac_vibe_2008", "pontiac_vibe_2009", "porsche_911_1991", "porsche_911_1999", "porsche_911_2002", "porsche_911_2006", "porsche_911_2008", "porsche_911_2013", "porsche_944_1983", "porsche_944_1987", "porsche_boxster_1997", "porsche_boxster_1998", "porsche_boxster_1999", "porsche_boxster_2000", "porsche_boxster_2001", "porsche_boxster_2002", "porsche_cayenne_2004", "porsche_cayenne_2005", "porsche_cayenne_2006", "porsche_cayenne_2010", "porsche_panamera_2011", "ram_1500_2003", "ram_1500_2012", "ram_1500_2014", "ram_1500_2016", "saab_9-3_1999", "saab_9-3_2000", "saab_9-3_2002", "saab_9-3_2003", "saab_9-3_2004", "saab_9-3_2005", "saab_9-3_2006", "saab_9-3_2007", "saab_9-3_2008", "saab_9-3_convertible_2003", "saab_9-3_convertible_2006", "saab_9-3_convertible_2007", "saab_9-5_1999", "saab_9-5_2001", "saab_9-5_2002", "saab_9-5_2003", "saab_9-5_2004", "saturn_astra_2008", "saturn_aura_2007", "saturn_aura_2008", "saturn_aura_2009", "saturn_ion_2003", "saturn_ion_2004", "saturn_ion_2005", "saturn_ion_2006", "saturn_ion_2007", "saturn_l200_2001", "saturn_l200_2002", "saturn_l200_2003", "saturn_l300_2002", "saturn_l300_2003", "saturn_l300_2004", "saturn_l300_2005", "saturn_ls_2000", "saturn_ls_2002", "saturn_outlook_2007", "saturn_outlook_2008", "saturn_outlook_2009", "saturn_relay_2005", "saturn_sky_2007", "saturn_sky_2009", "saturn_sl1_2000", "saturn_sl1_2001", "saturn_sl2_1997", "saturn_sl2_2000", "saturn_sl2_2001", "saturn_sl2_2002", "saturn_sl_2002", "saturn_vue_2002", "saturn_vue_2003", "saturn_vue_2004", "saturn_vue_2005", "saturn_vue_2006", "saturn_vue_2007", "saturn_vue_2008", "saturn_vue_2009", "scion_frs_2013", "scion_iq_2013", "scion_tc_2005", "scion_tc_2006", "scion_tc_2007", "scion_tc_2008", "scion_tc_2009", "scion_tc_2010", "scion_tc_2011", "scion_tc_2012", "scion_tc_2014", "scion_xa_2004", "scion_xa_2005", "scion_xa_2006", "scion_xb_2004", "scion_xb_2005", "scion_xb_2006", "scion_xb_2008", "scion_xb_2009", "scion_xb_2010", "scion_xb_2012", "scion_xd_2008", "scion_xd_2009", "scion_xd_2010", "scion_xd_2012", "scion_xd_2014", "smart_fortwo_2008", "smart_fortwo_2009", "smart_fortwo_2011", "smart_fortwo_2014", "subaru_baja_2003", "subaru_baja_2006", "subaru_brz_2014", "subaru_brz_2015", "subaru_forester_1998", "subaru_forester_1999", "subaru_forester_2000", "subaru_forester_2001", "subaru_forester_2002", "subaru_forester_2003", "subaru_forester_2004", "subaru_forester_2005", "subaru_forester_2006", "subaru_forester_2007", "subaru_forester_2008", "subaru_forester_2009", "subaru_forester_2010", "subaru_forester_2011", "subaru_forester_2012", "subaru_forester_2014", "subaru_forester_2016", "subaru_impreza_1993", "subaru_impreza_1998", "subaru_impreza_1999", "subaru_impreza_2000", "subaru_impreza_2002", "subaru_impreza_2003", "subaru_impreza_2004", "subaru_impreza_2005", "subaru_impreza_2006", "subaru_impreza_2007", "subaru_impreza_2008", "subaru_impreza_2009", "subaru_impreza_2010", "subaru_impreza_2011", "subaru_impreza_2012", "subaru_impreza_2013", "subaru_impreza_wagon_2002", "subaru_impreza_wagon_2004", "subaru_impreza_wagon_2005", "subaru_impreza_wagon_2008", "subaru_impreza_wagon_2012", "subaru_legacy_1998", "subaru_legacy_1999", "subaru_legacy_2003", 
"subaru_legacy_2004", "subaru_legacy_2005", "subaru_legacy_2006", "subaru_legacy_2008", "subaru_legacy_2009", "subaru_legacy_2010", "subaru_legacy_2011", "subaru_legacy_2012", "subaru_legacy_2013", "subaru_outback_1996", "subaru_outback_1997", "subaru_outback_1998", "subaru_outback_1999", "subaru_outback_2000", "subaru_outback_2001", "subaru_outback_2002", "subaru_outback_2003", "subaru_outback_2004", "subaru_outback_2005", "subaru_outback_2006", "subaru_outback_2007", "subaru_outback_2008", "subaru_outback_2009", "subaru_outback_2010", "subaru_outback_2011", "subaru_outback_2012", "subaru_outback_2013", "subaru_outback_2014", "subaru_tribeca_2006", "subaru_tribeca_2008", "subaru_wrx_2002", "subaru_wrx_2004", "subaru_wrx_2008", "subaru_wrx_2009", "subaru_wrx_2010", "subaru_wrx_2013", "subaru_wrx_2015", "suzuki_aerio_2003", "suzuki_forenza_2004", "suzuki_forenza_2005", "suzuki_forenza_2006", "suzuki_forenza_2007", "suzuki_forenza_2008", "suzuki_reno_2006", "suzuki_reno_2007", "suzuki_samurai_1987", "suzuki_samurai_1988", "suzuki_sx4_2007", "suzuki_sx4_2008", "suzuki_sx4_2011", "suzuki_sx4_2012", "suzuki_verona_2004", "suzuki_verona_2005", "suzuki_xl7_2002", "suzuki_xl7_2004", "suzuki_xl7_2007", "toyota_4runner_1986", "toyota_4runner_1987", "toyota_4runner_1988", "toyota_4runner_1989", "toyota_4runner_1990", "toyota_4runner_1991", "toyota_4runner_1992", "toyota_4runner_1993", "toyota_4runner_1994", "toyota_4runner_1995", "toyota_4runner_1996", "toyota_4runner_1997", "toyota_4runner_1998", "toyota_4runner_1999", "toyota_4runner_2000", "toyota_4runner_2001", "toyota_4runner_2002", "toyota_4runner_2003", "toyota_4runner_2004", "toyota_4runner_2005", "toyota_4runner_2006", "toyota_4runner_2007", "toyota_4runner_2008", "toyota_4runner_2010", "toyota_avalon_1995", "toyota_avalon_1996", "toyota_avalon_1997", "toyota_avalon_1998", "toyota_avalon_1999", "toyota_avalon_2000", "toyota_avalon_2001", "toyota_avalon_2002", "toyota_avalon_2003", "toyota_avalon_2004", "toyota_avalon_2005", "toyota_avalon_2006", "toyota_avalon_2007", "toyota_avalon_2008", "toyota_avalon_2011", "toyota_camry_1989", "toyota_camry_1990", "toyota_camry_1991", "toyota_camry_1992", "toyota_camry_1993", "toyota_camry_1994", "toyota_camry_1995", "toyota_camry_1996", "toyota_camry_1997", "toyota_camry_1998", "toyota_camry_1999", "toyota_camry_2000", "toyota_camry_2001", "toyota_camry_2002", "toyota_camry_2003", "toyota_camry_2004", "toyota_camry_2005", "toyota_camry_2006", "toyota_camry_2007", "toyota_camry_2008", "toyota_camry_2009", "toyota_camry_2010", "toyota_camry_2011", "toyota_camry_2012", "toyota_camry_2013", "toyota_camry_2014", "toyota_camry_2015", "toyota_camry_le_1999", "toyota_camry_le_2000", "toyota_camry_le_2002", "toyota_camry_le_2003", "toyota_camry_le_2004", "toyota_camry_le_2005", "toyota_camry_le_2007", "toyota_camry_le_2009", "toyota_camry_le_2010", "toyota_camry_le_2012", "toyota_camry_se_2012", "toyota_camry_se_2013", "toyota_camry_se_2015", "toyota_camry_solara_1999", "toyota_camry_xle_2007", "toyota_celica_1990", "toyota_celica_1994", "toyota_celica_1995", "toyota_celica_2000", "toyota_celica_2001", "toyota_celica_2002", "toyota_celica_2003", "toyota_corolla_1990", "toyota_corolla_1991", "toyota_corolla_1992", "toyota_corolla_1993", "toyota_corolla_1994", "toyota_corolla_1995", "toyota_corolla_1996", "toyota_corolla_1997", "toyota_corolla_1998", "toyota_corolla_1999", "toyota_corolla_2000", "toyota_corolla_2001", "toyota_corolla_2002", "toyota_corolla_2003", "toyota_corolla_2004", "toyota_corolla_2005", 
"toyota_corolla_2006", "toyota_corolla_2007", "toyota_corolla_2008", "toyota_corolla_2009", "toyota_corolla_2010", "toyota_corolla_2011", "toyota_corolla_2012", "toyota_corolla_2013", "toyota_corolla_2014", "toyota_corolla_2015", "toyota_corolla_le_2004", "toyota_corolla_le_2010", "toyota_corolla_le_2012", "toyota_corolla_le_2015", "toyota_corolla_s_2005", "toyota_corolla_s_2007", "toyota_corolla_s_2010", "toyota_corolla_s_2011", "toyota_corolla_s_2012", "toyota_echo_2000", "toyota_echo_2001", "toyota_echo_2002", "toyota_echo_2003", "toyota_fjcruiser_2007", "toyota_fjcruiser_2008", "toyota_fjcruiser_2010", "toyota_fjcruiser_2012", "toyota_highlander_2001", "toyota_highlander_2002", "toyota_highlander_2003", "toyota_highlander_2004", "toyota_highlander_2005", "toyota_highlander_2006", "toyota_highlander_2007", "toyota_highlander_2008", "toyota_highlander_2009", "toyota_highlander_2010", "toyota_highlander_2011", "toyota_highlander_2012", "toyota_highlander_2014", "toyota_landcruiser_1983", "toyota_landcruiser_1991", "toyota_landcruiser_1999", "toyota_landcruiser_2000", "toyota_matrix_2003", "toyota_matrix_2004", "toyota_matrix_2005", "toyota_matrix_2006", "toyota_matrix_2007", "toyota_matrix_2008", "toyota_matrix_2009", "toyota_matrix_2010", "toyota_mr2_1987", "toyota_mr2_1989", "toyota_mr2_1991", "toyota_mr2_1993", "toyota_mr2_2000", "toyota_mr2_2001", "toyota_mr2_2002", "toyota_mr2_2003", "toyota_pickup_1986", "toyota_pickup_1988", "toyota_pickup_1989", "toyota_pickup_1991", "toyota_pickup_1994", "toyota_previa_1992", "toyota_previa_1993", "toyota_prius_2001", "toyota_prius_2002", "toyota_prius_2003", "toyota_prius_2004", "toyota_prius_2005", "toyota_prius_2006", "toyota_prius_2007", "toyota_prius_2008", "toyota_prius_2009", "toyota_prius_2010", "toyota_prius_2011", "toyota_prius_2012", "toyota_prius_2013", "toyota_prius_2014", "toyota_prius_2015", "toyota_rav4_1996", "toyota_rav4_1997", "toyota_rav4_1998", "toyota_rav4_1999", "toyota_rav4_2000", "toyota_rav4_2001", "toyota_rav4_2002", "toyota_rav4_2003", "toyota_rav4_2004", "toyota_rav4_2005", "toyota_rav4_2006", "toyota_rav4_2007", "toyota_rav4_2008", "toyota_rav4_2009", "toyota_rav4_2010", "toyota_rav4_2011", "toyota_rav4_2012", "toyota_rav4_2015", "toyota_scion_2005", "toyota_scion_2006", "toyota_scion_2009", "toyota_sequoia_2001", "toyota_sequoia_2002", "toyota_sequoia_2003", "toyota_sequoia_2004", "toyota_sequoia_2005", "toyota_sequoia_2006", "toyota_sequoia_2007", "toyota_sequoia_2008", "toyota_sequoia_2010", "toyota_sienna_1998", "toyota_sienna_1999", "toyota_sienna_2000", "toyota_sienna_2001", "toyota_sienna_2002", "toyota_sienna_2003", "toyota_sienna_2004", "toyota_sienna_2005", "toyota_sienna_2006", "toyota_sienna_2007", "toyota_sienna_2008", "toyota_sienna_2009", "toyota_sienna_2010", "toyota_sienna_2011", "toyota_sienna_2012", "toyota_sienna_2013", "toyota_sienna_2015", "toyota_solara_1999", "toyota_solara_2000", "toyota_solara_2001", "toyota_solara_2002", "toyota_solara_2003", "toyota_solara_2004", "toyota_solara_2005", "toyota_solara_2006", "toyota_supra_1989", "toyota_supra_1990", "toyota_supra_1994", "toyota_t100_1993", "toyota_t100_1995", "toyota_t100_1996", "toyota_t100_1997", "toyota_tacoma_1995", "toyota_tacoma_1996", "toyota_tacoma_1997", "toyota_tacoma_1998", "toyota_tacoma_1999", "toyota_tacoma_2000", "toyota_tacoma_2001", "toyota_tacoma_2002", "toyota_tacoma_2003", "toyota_tacoma_2004", "toyota_tacoma_2005", "toyota_tacoma_2006", "toyota_tacoma_2007", "toyota_tacoma_2008", "toyota_tacoma_2009", 
"toyota_tacoma_2010", "toyota_tacoma_2011", "toyota_tacoma_2012", "toyota_tacoma_2013", "toyota_tacoma_2014", "toyota_tacoma_2015", "toyota_tacoma_2016", "toyota_tercel_1994", "toyota_tercel_1995", "toyota_tercel_1996", "toyota_tercel_1997", "toyota_tundra_2000", "toyota_tundra_2001", "toyota_tundra_2002", "toyota_tundra_2003", "toyota_tundra_2004", "toyota_tundra_2005", "toyota_tundra_2006", "toyota_tundra_2007", "toyota_tundra_2008", "toyota_tundra_2010", "toyota_tundra_2011", "toyota_tundra_2012", "toyota_tundra_2013", "toyota_tundra_2014", "toyota_tundra_2015", "toyota_venza_2009", "toyota_venza_2010", "toyota_venza_2013", "toyota_yaris_2007", "toyota_yaris_2008", "toyota_yaris_2009", "toyota_yaris_2010", "toyota_yaris_2011", "toyota_yaris_2012", "toyota_yaris_2013", "toyota_yaris_2014", "volkswagen_beetle_1967", "volkswagen_beetle_1969", "volkswagen_beetle_1971", "volkswagen_beetle_1974", "volkswagen_beetle_1978", "volkswagen_beetle_1998", "volkswagen_beetle_1999", "volkswagen_beetle_2000", "volkswagen_beetle_2001", "volkswagen_beetle_2002", "volkswagen_beetle_2003", "volkswagen_beetle_2004", "volkswagen_beetle_2005", "volkswagen_beetle_2006", "volkswagen_beetle_2007", "volkswagen_beetle_2008", "volkswagen_bug_1967", "volkswagen_bug_1968", "volkswagen_cabrio_1999", "volkswagen_cabriolet_1988", "volkswagen_cc_2009", "volkswagen_cc_2010", "volkswagen_cc_2013", "volkswagen_eos_2007", "volkswagen_eos_2008", "volkswagen_eos_2009", "volkswagen_eos_2010", "volkswagen_golf_1997", "volkswagen_golf_2000", "volkswagen_golf_2001", "volkswagen_golf_2002", "volkswagen_golf_2003", "volkswagen_golf_2004", "volkswagen_golf_2005", "volkswagen_golf_2006", "volkswagen_golf_2007", "volkswagen_golf_2012", "volkswagen_gti_1998", "volkswagen_gti_2000", "volkswagen_gti_2001", "volkswagen_gti_2002", "volkswagen_gti_2003", "volkswagen_gti_2004", "volkswagen_gti_2005", "volkswagen_gti_2006", "volkswagen_gti_2007", "volkswagen_gti_2008", "volkswagen_gti_2009", "volkswagen_gti_2010", "volkswagen_gti_2011", "volkswagen_gti_2012", "volkswagen_gti_2013", "volkswagen_jetta_1995", "volkswagen_jetta_1996", "volkswagen_jetta_1997", "volkswagen_jetta_1998", "volkswagen_jetta_1999", "volkswagen_jetta_2000", "volkswagen_jetta_2001", "volkswagen_jetta_2002", "volkswagen_jetta_2003", "volkswagen_jetta_2004", "volkswagen_jetta_2005", "volkswagen_jetta_2006", "volkswagen_jetta_2007", "volkswagen_jetta_2008", "volkswagen_jetta_2009", "volkswagen_jetta_2010", "volkswagen_jetta_2011", "volkswagen_jetta_2012", "volkswagen_jetta_2013", "volkswagen_jetta_2014", "volkswagen_jetta_2015", "volkswagen_newbeetle_1999", "volkswagen_newbeetle_2000", "volkswagen_newbeetle_2001", "volkswagen_newbeetle_2002", "volkswagen_newbeetle_2003", "volkswagen_newbeetle_2004", "volkswagen_newbeetle_2006", "volkswagen_newbeetle_2008", "volkswagen_newbeetle_2009", "volkswagen_passat_1998", "volkswagen_passat_1999", "volkswagen_passat_2000", "volkswagen_passat_2001", "volkswagen_passat_2002", "volkswagen_passat_2003", "volkswagen_passat_2004", "volkswagen_passat_2005", "volkswagen_passat_2006", "volkswagen_passat_2007", "volkswagen_passat_2008", "volkswagen_passat_2009", "volkswagen_passat_2012", "volkswagen_passat_2013", "volkswagen_passat_2016", "volkswagen_passat_wagon_1999", "volkswagen_passat_wagon_2000", "volkswagen_passat_wagon_2001", "volkswagen_passat_wagon_2002", "volkswagen_passat_wagon_2003", "volkswagen_passat_wagon_2004", "volkswagen_passat_wagon_2005", "volkswagen_passat_wagon_2007", "volkswagen_r32_2004", "volkswagen_r32_2008", 
"volkswagen_rabbit_1981", "volkswagen_rabbit_2006", "volkswagen_rabbit_2007", "volkswagen_rabbit_2008", "volkswagen_rabbit_2009", "volkswagen_routan_2009", "volkswagen_routan_2010", "volkswagen_squareback_1973", "volkswagen_tiguan_2009", "volkswagen_tiguan_2011", "volkswagen_tiguan_2012", "volkswagen_tiguan_2013", "volkswagen_touareg_2004", "volkswagen_touareg_2005", "volkswagen_touareg_2006", "volkswagen_touareg_2008", "volkswagen_vanagon_1982", "volvo_850_1995", "volvo_850_1996", "volvo_850_1997", "volvo_850_wagon_1994", "volvo_850_wagon_1997", "volvo_c30_2009", "volvo_c70_convertible_2000", "volvo_c70_convertible_2001", "volvo_c70_convertible_2004", "volvo_c70_convertible_2008", "volvo_c70_convertible_2009", "volvo_s40_2000", "volvo_s40_2001", "volvo_s40_2002", "volvo_s40_2003", "volvo_s40_2004", "volvo_s40_2005", "volvo_s40_2006", "volvo_s40_2007", "volvo_s40_2008", "volvo_s40_2009", "volvo_s60_2001", "volvo_s60_2002", "volvo_s60_2003", "volvo_s60_2004", "volvo_s60_2005", "volvo_s60_2006", "volvo_s60_2007", "volvo_s60_2008", "volvo_s70_1998", "volvo_s70_1999", "volvo_s70_2000", "volvo_s80_1999", "volvo_s80_2000", "volvo_s80_2001", "volvo_s80_2002", "volvo_s80_2003", "volvo_s80_2004", "volvo_s80_2005", "volvo_s80_2006", "volvo_s80_2007", "volvo_s90_1998", "volvo_v40_2000", "volvo_v40_2001", "volvo_v40_2002", "volvo_v50_2005", "volvo_v50_2006", "volvo_v70_1998", "volvo_v70_1999", "volvo_v70_2000", "volvo_v70_2001", "volvo_v70_2002", "volvo_v70_2004", "volvo_v70_2006", "volvo_xc70_2003", "volvo_xc70_2004", "volvo_xc70_2006", "volvo_xc70_2011", "volvo_xc90_2003", "volvo_xc90_2004", "volvo_xc90_2005", "volvo_xc90_2006", "volvo_xc90_2007", "volvo_xc90_2008", "volvo_xc90_2010" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV35
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# swinv2-tiny-patch4-window8-256-dmae-humeda-DAV35

This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2578
- Accuracy: 0.7

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.85 | 1.0 | 36 | 1.4133 | 0.5333 |
| 1.9294 | 2.0 | 72 | 0.9294 | 0.6333 |
| 1.1818 | 3.0 | 108 | 0.7700 | 0.65 |
| 0.7534 | 4.0 | 144 | 0.7531 | 0.7167 |
| 0.4285 | 5.0 | 180 | 0.9580 | 0.7 |
| 0.08 | 6.0 | 216 | 1.1785 | 0.75 |
| 0.0891 | 7.0 | 252 | 1.4686 | 0.7333 |
| 0.0602 | 8.0 | 288 | 1.7816 | 0.7 |
| 0.0284 | 9.0 | 324 | 1.5790 | 0.7667 |
| 0.0513 | 10.0 | 360 | 1.8933 | 0.7 |
| 0.0335 | 11.0 | 396 | 2.1433 | 0.65 |
| 0.025 | 12.0 | 432 | 2.3483 | 0.6667 |
| 0.0246 | 13.0 | 468 | 2.6426 | 0.6667 |
| 0.0306 | 14.0 | 504 | 3.0153 | 0.65 |
| 0.016 | 15.0 | 540 | 3.1259 | 0.6833 |
| 0.006 | 16.0 | 576 | 2.7612 | 0.7167 |
| 0.0234 | 17.0 | 612 | 2.5334 | 0.7167 |
| 0.0025 | 18.0 | 648 | 2.1768 | 0.7667 |
| 0.0001 | 19.0 | 684 | 2.6585 | 0.7167 |
| 0.0007 | 20.0 | 720 | 2.3282 | 0.7167 |
| 0.0003 | 21.0 | 756 | 2.6975 | 0.7333 |
| 0.0003 | 22.0 | 792 | 2.6186 | 0.7 |
| 0.0006 | 23.0 | 828 | 2.9600 | 0.7167 |
| 0.0008 | 24.0 | 864 | 2.9623 | 0.7333 |
| 0.0002 | 25.0 | 900 | 2.8632 | 0.7167 |
| 0.0143 | 26.0 | 936 | 2.8460 | 0.7167 |
| 0.0 | 27.0 | 972 | 2.9372 | 0.7167 |
| 0.0002 | 28.0 | 1008 | 2.8056 | 0.75 |
| 0.0001 | 29.0 | 1044 | 3.0591 | 0.7167 |
| 0.0001 | 30.0 | 1080 | 3.3295 | 0.6833 |
| 0.0 | 31.0 | 1116 | 3.2851 | 0.6833 |
| 0.0001 | 32.0 | 1152 | 3.4065 | 0.7 |
| 0.0 | 33.0 | 1188 | 3.3669 | 0.7 |
| 0.0 | 34.0 | 1224 | 3.3185 | 0.7167 |
| 0.0006 | 35.0 | 1260 | 3.2563 | 0.7 |
| 0.0004 | 36.0 | 1296 | 3.2831 | 0.7 |
| 0.0001 | 37.0 | 1332 | 3.2594 | 0.7 |
| 0.0 | 38.0 | 1368 | 3.2576 | 0.7 |
| 0.0 | 38.9014 | 1400 | 3.2578 | 0.7 |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV36
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV36 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6972 - Accuracy: 0.68 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.214 | 1.0 | 22 | 1.4507 | 0.46 | | 2.298 | 2.0 | 44 | 1.0632 | 0.62 | | 0.9579 | 3.0 | 66 | 1.1191 | 0.64 | | 0.4479 | 4.0 | 88 | 0.9825 | 0.6467 | | 0.1963 | 5.0 | 110 | 1.2844 | 0.6467 | | 0.1663 | 6.0 | 132 | 1.2373 | 0.6667 | | 0.1188 | 7.0 | 154 | 1.4338 | 0.6933 | | 0.0526 | 8.0 | 176 | 1.6726 | 0.7133 | | 0.01 | 9.0 | 198 | 2.5248 | 0.6267 | | 0.028 | 10.0 | 220 | 2.6156 | 0.6467 | | 0.0296 | 11.0 | 242 | 2.8334 | 0.6533 | | 0.0074 | 12.0 | 264 | 2.2200 | 0.6867 | | 0.0022 | 13.0 | 286 | 2.2802 | 0.7467 | | 0.0225 | 14.0 | 308 | 2.1764 | 0.6933 | | 0.0058 | 15.0 | 330 | 3.0594 | 0.62 | | 0.0075 | 16.0 | 352 | 3.2166 | 0.6333 | | 0.0163 | 17.0 | 374 | 2.4014 | 0.6933 | | 0.0033 | 18.0 | 396 | 2.9112 | 0.6733 | | 0.0036 | 19.0 | 418 | 2.8147 | 0.6533 | | 0.0033 | 20.0 | 440 | 2.7731 | 0.6733 | | 0.0161 | 21.0 | 462 | 2.0340 | 0.7467 | | 0.0012 | 22.0 | 484 | 2.4596 | 0.6867 | | 0.0009 | 23.0 | 506 | 2.7352 | 0.6667 | | 0.0101 | 24.0 | 528 | 2.8204 | 0.6667 | | 0.0011 | 25.0 | 550 | 2.8091 | 0.6733 | | 0.0005 | 26.0 | 572 | 2.8126 | 0.6667 | | 0.0007 | 27.0 | 594 | 2.7742 | 0.6733 | | 0.0004 | 28.0 | 616 | 2.7208 | 0.6733 | | 0.0025 | 29.0 | 638 | 2.7022 | 0.6733 | | 0.0025 | 30.0 | 660 | 2.6972 | 0.68 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
corranm/test_model_6
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model_6 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8693 - Accuracy: 0.2121 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 1.9228 | 0.1818 | | No log | 1.8 | 6 | 1.8828 | 0.2197 | | No log | 2.8 | 9 | 1.8726 | 0.2273 | | 2.1938 | 3.8 | 12 | 1.8746 | 0.1970 | | 2.1938 | 4.8 | 15 | 1.8680 | 0.2273 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/test_model_7
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model_7 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8939 - F1 Macro: 0.0651 - F1 Micro: 0.2045 - F1 Weighted: 0.0913 - Precision Macro: 0.0760 - Precision Micro: 0.2045 - Precision Weighted: 0.1037 - Recall Macro: 0.1437 - Recall Micro: 0.2045 - Recall Weighted: 0.2045 - Accuracy: 0.2045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | No log | 0.8 | 3 | 1.9112 | 0.0464 | 0.1894 | 0.0664 | 0.0281 | 0.1894 | 0.0403 | 0.1323 | 0.1894 | 0.1894 | 0.1894 | | No log | 1.8 | 6 | 1.8938 | 0.0654 | 0.2045 | 0.0917 | 0.0762 | 0.2045 | 0.1040 | 0.1437 | 0.2045 | 0.2045 | 0.2045 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/test_model_8
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model_8 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8797 - F1 Macro: 0.0598 - F1 Micro: 0.2121 - F1 Weighted: 0.0845 - Precision Macro: 0.1723 - Precision Micro: 0.2121 - Precision Weighted: 0.2316 - Recall Macro: 0.1486 - Recall Micro: 0.2121 - Recall Weighted: 0.2121 - Accuracy: 0.2121 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9439 | 0.8 | 3 | 1.9065 | 0.0541 | 0.1894 | 0.0764 | 0.0625 | 0.1894 | 0.0857 | 0.1327 | 0.1894 | 0.1894 | 0.1894 | | 1.9049 | 1.8 | 6 | 1.8820 | 0.0578 | 0.2045 | 0.0818 | 0.0501 | 0.2045 | 0.0696 | 0.1433 | 0.2045 | 0.2045 | 0.2045 | | 2.3436 | 2.8 | 9 | 1.8773 | 0.0738 | 0.1894 | 0.1022 | 0.0567 | 0.1894 | 0.0780 | 0.1348 | 0.1894 | 0.1894 | 0.1894 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/test_model_88
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model_88 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the corranm/first_vote_100_per_new2 dataset. It achieves the following results on the evaluation set: - Loss: 1.8934 - F1 Macro: 0.0606 - F1 Micro: 0.1591 - F1 Weighted: 0.0846 - Precision Macro: 0.0421 - Precision Micro: 0.1591 - Precision Weighted: 0.0586 - Recall Macro: 0.1132 - Recall Micro: 0.1591 - Recall Weighted: 0.1591 - Accuracy: 0.1591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9682 | 0.8 | 3 | 1.9070 | 0.0599 | 0.2121 | 0.0848 | 0.0661 | 0.2121 | 0.0908 | 0.1486 | 0.2121 | 0.2121 | 0.2121 | | 1.8993 | 1.8 | 6 | 1.8860 | 0.0902 | 0.2197 | 0.1243 | 0.0630 | 0.2197 | 0.0867 | 0.1594 | 0.2197 | 0.2197 | 0.2197 | | 2.3539 | 2.8 | 9 | 1.8915 | 0.0637 | 0.1591 | 0.0887 | 0.0443 | 0.1591 | 0.0616 | 0.1141 | 0.1591 | 0.1591 | 0.1591 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/test_model_90
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model_90 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the corranm/first_vote_100_per_new2 dataset. It achieves the following results on the evaluation set: - Loss: 1.8966 - F1 Macro: 0.1255 - F1 Micro: 0.2652 - F1 Weighted: 0.1671 - Precision Macro: 0.1232 - Precision Micro: 0.2652 - Precision Weighted: 0.1573 - Recall Macro: 0.1971 - Recall Micro: 0.2652 - Recall Weighted: 0.2652 - Accuracy: 0.2652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9497 | 0.8 | 3 | 1.8943 | 0.1087 | 0.2197 | 0.1434 | 0.1559 | 0.2197 | 0.1899 | 0.1632 | 0.2197 | 0.2197 | 0.2197 | | 1.8932 | 1.8 | 6 | 1.8811 | 0.0832 | 0.2121 | 0.1143 | 0.0925 | 0.2121 | 0.1296 | 0.1579 | 0.2121 | 0.2121 | 0.2121 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/test_model_94
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_model_94 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the corranm/first_vote_100_per_new2 dataset. It achieves the following results on the evaluation set: - Loss: 1.8933 - F1 Macro: 0.0863 - F1 Micro: 0.2197 - F1 Weighted: 0.1195 - Precision Macro: 0.0630 - Precision Micro: 0.2197 - Precision Weighted: 0.0868 - Recall Macro: 0.1568 - Recall Micro: 0.2197 - Recall Weighted: 0.2197 - Accuracy: 0.2197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9541 | 0.8 | 3 | 1.9150 | 0.0426 | 0.1591 | 0.0609 | 0.0263 | 0.1591 | 0.0377 | 0.1111 | 0.1591 | 0.1591 | 0.1591 | | 1.9037 | 1.8 | 6 | 1.8975 | 0.0848 | 0.2121 | 0.1175 | 0.0601 | 0.2121 | 0.0831 | 0.1520 | 0.2121 | 0.2121 | 0.2121 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
HieuVo/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-beans-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0153 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1884 | 1.5385 | 100 | 0.1875 | 0.9323 | | 0.0213 | 3.0769 | 200 | 0.0153 | 1.0 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
LarsSto/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3069 - Accuracy: 0.8797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.8326 | 1.0 | 17 | 0.4572 | 0.8340 | | 1.8933 | 2.0 | 34 | 0.3210 | 0.8797 | | 1.5638 | 3.0 | 51 | 0.3069 | 0.8797 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "abnormal_sperm", "non-sperm", "normal_sperm" ]
dima806/smart_tv_hand_gestures_image_detection
Returns a hand gesture type for smart TV given an image. See https://www.kaggle.com/code/dima806/smart-tv-hand-gestures-image-detection-vit for details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/YQKXqTq3L3TJEFGuOt-Ko.png) ``` Classification report: precision recall f1-score support Down 1.0000 1.0000 1.0000 1350 Left Swipe 1.0000 1.0000 1.0000 1350 Right Swipe 1.0000 1.0000 1.0000 1350 Stop 0.9912 1.0000 0.9956 1350 Stop Gesture 1.0000 1.0000 1.0000 1350 Swipe 1.0000 0.9948 0.9974 1350 Thumbs Down 1.0000 1.0000 1.0000 1350 Thumbs Up 1.0000 1.0000 1.0000 1350 Up 1.0000 0.9963 0.9981 1350 accuracy 0.9990 12150 macro avg 0.9990 0.9990 0.9990 12150 weighted avg 0.9990 0.9990 0.9990 12150 ```
[ "down", "left swipe", "right swipe", "stop", "stop gesture", "swipe", "thumbs down", "thumbs up", "up" ]
OhHaewon/my-finetuned-cifar10-model
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9" ]
gryzaq1337/autotrain-45ui2-ce6i6
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.9045712351799011 f1_macro: 0.23360438147930815 f1_micro: 0.5124378109452736 f1_weighted: 0.3526136606764151 precision_macro: 0.5033333333333333 precision_micro: 0.5124378109452736 precision_weighted: 0.6468656716417911 recall_macro: 0.3376068376068376 recall_micro: 0.5124378109452736 recall_weighted: 0.5124378109452736 accuracy: 0.5124378109452736
[ "down", "empty", "up" ]
KFrimps/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2028 - Accuracy: 0.9323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3898 | 1.0 | 370 | 0.2945 | 0.9215 | | 0.2182 | 2.0 | 740 | 0.2318 | 0.9296 | | 0.1703 | 3.0 | 1110 | 0.2156 | 0.9310 | | 0.1584 | 4.0 | 1480 | 0.2069 | 0.9310 | | 0.1408 | 5.0 | 1850 | 0.2028 | 0.9323 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
milotix/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1989 - Accuracy: 0.9445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3671 | 1.0 | 370 | 0.3067 | 0.9296 | | 0.2071 | 2.0 | 740 | 0.2404 | 0.9310 | | 0.1681 | 3.0 | 1110 | 0.2167 | 0.9350 | | 0.1519 | 4.0 | 1480 | 0.2081 | 0.9391 | | 0.147 | 5.0 | 1850 | 0.2039 | 0.9364 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
corranm/squarerun
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squarerun This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3394 - F1 Macro: 0.4627 - F1 Micro: 0.5606 - F1 Weighted: 0.5294 - Precision Macro: 0.4704 - Precision Micro: 0.5606 - Precision Weighted: 0.5310 - Recall Macro: 0.4855 - Recall Micro: 0.5606 - Recall Weighted: 0.5606 - Accuracy: 0.5606 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.903 | 1.0 | 29 | 1.8868 | 0.0658 | 0.1742 | 0.0900 | 0.0502 | 0.1742 | 0.0693 | 0.1293 | 0.1742 | 0.1742 | 0.1742 | | 1.8662 | 2.0 | 58 | 1.8740 | 0.0754 | 0.2197 | 0.1004 | 0.0603 | 0.2197 | 0.0773 | 0.1580 | 0.2197 | 0.2197 | 0.2197 | | 1.9291 | 3.0 | 87 | 1.8862 | 0.0485 | 0.2045 | 0.0695 | 0.0292 | 0.2045 | 0.0418 | 0.1429 | 0.2045 | 0.2045 | 0.2045 | | 1.7838 | 4.0 | 116 | 1.8127 | 0.1171 | 0.2652 | 0.1474 | 0.1092 | 0.2652 | 0.1321 | 0.1973 | 0.2652 | 0.2652 | 0.2652 | | 1.7113 | 5.0 | 145 | 1.6979 | 0.2133 | 0.3485 | 0.2592 | 0.3189 | 0.3485 | 0.3631 | 0.2822 | 0.3485 | 0.3485 | 0.3485 | | 1.6459 | 6.0 | 174 | 1.5577 | 0.2714 | 0.3939 | 0.3225 | 0.4296 | 0.3939 | 0.4531 | 0.3198 | 0.3939 | 0.3939 | 0.3939 | | 1.4829 | 7.0 | 203 | 1.3814 | 0.4069 | 0.5227 | 0.4611 | 0.3786 | 0.5227 | 0.4216 | 0.4511 | 0.5227 | 0.5227 | 0.5227 | | 1.2847 | 8.0 | 232 | 1.3783 | 0.3675 | 0.4545 | 0.4176 | 0.4992 | 0.4545 | 0.5702 | 0.4080 | 0.4545 | 0.4545 | 0.4545 | | 0.7746 | 9.0 | 261 | 1.1536 | 0.4579 | 0.5758 | 0.5298 | 0.5301 | 0.5758 | 0.5896 | 0.4853 | 0.5758 | 0.5758 | 0.5758 | | 1.0172 | 10.0 | 290 | 1.2211 | 0.4700 | 0.5909 | 0.5365 | 0.5722 | 0.5909 | 0.6399 | 0.5182 | 0.5909 | 0.5909 | 0.5909 | | 0.7865 | 11.0 | 319 | 1.1357 | 0.5282 | 0.6136 | 0.5961 | 0.5342 | 0.6136 | 0.6009 | 0.5432 | 0.6136 | 0.6136 | 0.6136 | | 0.8335 | 12.0 | 348 | 1.1530 | 0.5315 | 0.6061 | 0.6017 | 0.5365 | 0.6061 | 0.6209 | 0.5489 | 0.6061 | 0.6061 | 0.6061 | | 0.6959 | 13.0 | 377 | 1.1307 | 0.5638 | 0.6667 | 0.6451 | 0.5912 | 0.6667 | 0.6615 | 0.5773 | 0.6667 | 0.6667 | 0.6667 | | 0.5864 | 14.0 | 406 | 1.1957 | 0.5211 | 0.5985 | 0.5894 | 0.5537 | 0.5985 | 0.6275 | 0.5389 | 0.5985 | 0.5985 | 0.5985 | | 0.6145 | 15.0 | 435 | 0.9957 | 0.6086 | 0.7045 | 0.6833 | 0.6164 | 0.7045 | 0.6791 | 0.6160 | 0.7045 | 0.7045 | 0.7045 | | 0.5632 | 16.0 | 
464 | 1.2302 | 0.5112 | 0.5985 | 0.5781 | 0.5219 | 0.5985 | 0.5853 | 0.5236 | 0.5985 | 0.5985 | 0.5985 | | 0.3392 | 17.0 | 493 | 1.1925 | 0.5335 | 0.6288 | 0.6043 | 0.5903 | 0.6288 | 0.6435 | 0.5355 | 0.6288 | 0.6288 | 0.6288 | | 0.2998 | 18.0 | 522 | 1.1444 | 0.5544 | 0.6364 | 0.6251 | 0.5520 | 0.6364 | 0.6248 | 0.5670 | 0.6364 | 0.6364 | 0.6364 | | 0.2706 | 19.0 | 551 | 1.1072 | 0.5579 | 0.6439 | 0.6308 | 0.5790 | 0.6439 | 0.6404 | 0.5571 | 0.6439 | 0.6439 | 0.6439 | | 0.2012 | 20.0 | 580 | 1.1353 | 0.5278 | 0.6212 | 0.6012 | 0.5433 | 0.6212 | 0.6063 | 0.5346 | 0.6212 | 0.6212 | 0.6212 | | 0.532 | 21.0 | 609 | 1.2503 | 0.5421 | 0.6212 | 0.6079 | 0.5651 | 0.6212 | 0.6253 | 0.5488 | 0.6212 | 0.6212 | 0.6212 | | 0.0963 | 22.0 | 638 | 1.2203 | 0.5702 | 0.6288 | 0.6227 | 0.5807 | 0.6288 | 0.6327 | 0.5745 | 0.6288 | 0.6288 | 0.6288 | | 0.1076 | 23.0 | 667 | 1.3798 | 0.5216 | 0.6136 | 0.5894 | 0.5339 | 0.6136 | 0.5971 | 0.5370 | 0.6136 | 0.6136 | 0.6136 | | 0.1773 | 24.0 | 696 | 1.3129 | 0.5422 | 0.6288 | 0.6169 | 0.5581 | 0.6288 | 0.6253 | 0.5453 | 0.6288 | 0.6288 | 0.6288 | | 0.0598 | 25.0 | 725 | 1.2855 | 0.5633 | 0.6515 | 0.6381 | 0.5846 | 0.6515 | 0.6562 | 0.5713 | 0.6515 | 0.6515 | 0.6515 | | 0.0632 | 26.0 | 754 | 1.3155 | 0.6414 | 0.6591 | 0.6643 | 0.6525 | 0.6591 | 0.6925 | 0.6585 | 0.6591 | 0.6591 | 0.6591 | | 0.0644 | 27.0 | 783 | 1.3211 | 0.5588 | 0.6439 | 0.6315 | 0.5745 | 0.6439 | 0.6357 | 0.5595 | 0.6439 | 0.6439 | 0.6439 | | 0.1495 | 28.0 | 812 | 1.4196 | 0.5539 | 0.6364 | 0.6245 | 0.5650 | 0.6364 | 0.6270 | 0.5556 | 0.6364 | 0.6364 | 0.6364 | | 0.0413 | 29.0 | 841 | 1.4027 | 0.5378 | 0.6136 | 0.6102 | 0.5405 | 0.6136 | 0.6100 | 0.5380 | 0.6136 | 0.6136 | 0.6136 | | 0.0323 | 30.0 | 870 | 1.4302 | 0.5641 | 0.6364 | 0.6329 | 0.5689 | 0.6364 | 0.6430 | 0.5712 | 0.6364 | 0.6364 | 0.6364 | | 0.0452 | 31.0 | 899 | 1.4577 | 0.5706 | 0.6515 | 0.6412 | 0.5835 | 0.6515 | 0.6478 | 0.5738 | 0.6515 | 0.6515 | 0.6515 | | 0.0285 | 32.0 | 928 | 1.4224 | 0.5597 | 0.6439 | 0.6300 | 0.5618 | 0.6439 | 0.6250 | 0.5657 | 0.6439 | 0.6439 | 0.6439 | | 0.0241 | 33.0 | 957 | 1.4513 | 0.5542 | 0.6364 | 0.6252 | 0.5700 | 0.6364 | 0.6309 | 0.5533 | 0.6364 | 0.6364 | 0.6364 | | 0.0224 | 34.0 | 986 | 1.4701 | 0.5795 | 0.6742 | 0.6545 | 0.5856 | 0.6742 | 0.6523 | 0.5902 | 0.6742 | 0.6742 | 0.6742 | | 0.0228 | 35.0 | 1015 | 1.4697 | 0.5772 | 0.6591 | 0.6489 | 0.5870 | 0.6591 | 0.6497 | 0.5774 | 0.6591 | 0.6591 | 0.6591 | | 0.0231 | 36.0 | 1044 | 1.5315 | 0.5745 | 0.6591 | 0.6491 | 0.5783 | 0.6591 | 0.6483 | 0.5788 | 0.6591 | 0.6591 | 0.6591 | | 0.0457 | 37.0 | 1073 | 1.5210 | 0.5532 | 0.6439 | 0.6277 | 0.5641 | 0.6439 | 0.6317 | 0.5606 | 0.6439 | 0.6439 | 0.6439 | | 0.0197 | 38.0 | 1102 | 1.4956 | 0.5636 | 0.6515 | 0.6386 | 0.5590 | 0.6515 | 0.6296 | 0.5714 | 0.6515 | 0.6515 | 0.6515 | | 0.0219 | 39.0 | 1131 | 1.4910 | 0.5981 | 0.6591 | 0.6540 | 0.6063 | 0.6591 | 0.6554 | 0.5970 | 0.6591 | 0.6591 | 0.6591 | | 0.0212 | 40.0 | 1160 | 1.5050 | 0.5912 | 0.6515 | 0.6462 | 0.5997 | 0.6515 | 0.6472 | 0.5898 | 0.6515 | 0.6515 | 0.6515 | | 0.0212 | 41.0 | 1189 | 1.5091 | 0.5977 | 0.6591 | 0.6537 | 0.6080 | 0.6591 | 0.6558 | 0.5955 | 0.6591 | 0.6591 | 0.6591 | | 0.0202 | 42.0 | 1218 | 1.4961 | 0.5655 | 0.6515 | 0.6411 | 0.5708 | 0.6515 | 0.6411 | 0.5695 | 0.6515 | 0.6515 | 0.6515 | | 0.0216 | 43.0 | 1247 | 1.4917 | 0.5655 | 0.6515 | 0.6411 | 0.5708 | 0.6515 | 0.6411 | 0.5695 | 0.6515 | 0.6515 | 0.6515 | | 0.0199 | 44.0 | 1276 | 1.4855 | 0.5674 | 0.6515 | 0.6423 | 0.5694 | 0.6515 | 0.6401 | 0.5717 | 0.6515 | 0.6515 | 
0.6515 | | 0.027 | 45.0 | 1305 | 1.4832 | 0.5674 | 0.6515 | 0.6423 | 0.5694 | 0.6515 | 0.6401 | 0.5717 | 0.6515 | 0.6515 | 0.6515 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
nemik/vit-base-patch16-224-in21k-v2025-1-31
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-v2025-1-31 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the webdataset dataset. It achieves the following results on the evaluation set: - Loss: 0.3391 - Accuracy: 0.8973 - F1: 0.7668 - Precision: 0.7866 - Recall: 0.7480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.4871 | 0.5682 | 100 | 0.4866 | 0.7903 | 0.1400 | 0.9449 | 0.0756 | | 0.4151 | 1.1364 | 200 | 0.4007 | 0.8361 | 0.4540 | 0.9159 | 0.3018 | | 0.3517 | 1.7045 | 300 | 0.3460 | 0.8671 | 0.6481 | 0.8060 | 0.5419 | | 0.3337 | 2.2727 | 400 | 0.3202 | 0.8777 | 0.7034 | 0.7768 | 0.6427 | | 0.3128 | 2.8409 | 500 | 0.2995 | 0.8774 | 0.6943 | 0.7940 | 0.6169 | | 0.3199 | 3.4091 | 600 | 0.2980 | 0.8771 | 0.6960 | 0.7880 | 0.6232 | | 0.3094 | 3.9773 | 700 | 0.3051 | 0.8764 | 0.7031 | 0.7679 | 0.6484 | | 0.3068 | 4.5455 | 800 | 0.2753 | 0.8900 | 0.7409 | 0.7915 | 0.6963 | | 0.3003 | 5.1136 | 900 | 0.2699 | 0.8890 | 0.7351 | 0.7973 | 0.6818 | | 0.3012 | 5.6818 | 1000 | 0.2860 | 0.8799 | 0.7256 | 0.7495 | 0.7032 | | 0.267 | 6.25 | 1100 | 0.2848 | 0.8832 | 0.7216 | 0.7812 | 0.6704 | | 0.2364 | 6.8182 | 1200 | 0.2608 | 0.8896 | 0.7399 | 0.7903 | 0.6957 | | 0.2401 | 7.3864 | 1300 | 0.2695 | 0.8885 | 0.7406 | 0.7798 | 0.7051 | | 0.219 | 7.9545 | 1400 | 0.2599 | 0.8909 | 0.7413 | 0.7975 | 0.6925 | | 0.1985 | 8.5227 | 1500 | 0.2668 | 0.8898 | 0.7421 | 0.7863 | 0.7026 | | 0.1986 | 9.0909 | 1600 | 0.2762 | 0.8851 | 0.7316 | 0.7737 | 0.6938 | | 0.1988 | 9.6591 | 1700 | 0.2765 | 0.8862 | 0.7404 | 0.7632 | 0.7190 | | 0.167 | 10.2273 | 1800 | 0.2630 | 0.8940 | 0.7594 | 0.7788 | 0.7410 | | 0.207 | 10.7955 | 1900 | 0.2637 | 0.8923 | 0.7557 | 0.7745 | 0.7379 | | 0.1811 | 11.3636 | 2000 | 0.2568 | 0.8946 | 0.7609 | 0.7798 | 0.7429 | | 0.171 | 11.9318 | 2100 | 0.2607 | 0.8935 | 0.7527 | 0.7906 | 0.7183 | | 0.1571 | 12.5 | 2200 | 0.2552 | 0.8972 | 0.7708 | 0.7755 | 0.7662 | | 0.1234 | 13.0682 | 2300 | 0.2676 | 0.8993 | 0.7694 | 0.7964 | 0.7442 | | 0.1299 | 13.6364 | 2400 | 0.2683 | 0.8970 | 0.7655 | 0.7875 | 0.7448 | | 0.1335 | 14.2045 | 2500 | 0.2823 | 0.8949 | 0.7559 | 0.7944 | 0.7209 | | 0.1235 | 14.7727 | 2600 | 0.2753 | 0.8976 | 0.7671 | 0.7880 | 0.7473 | | 0.1163 | 15.3409 | 2700 | 0.2884 | 0.8962 | 0.7644 | 0.7836 | 0.7461 | | 0.1111 | 15.9091 | 2800 | 0.2770 | 0.8973 | 0.7675 | 0.7847 | 0.7511 | | 0.1128 | 16.4773 | 2900 | 0.2773 | 0.8987 | 0.7722 | 0.7843 | 0.7606 | | 0.0982 | 17.0455 | 3000 | 0.2754 | 0.8993 | 0.7716 | 0.7905 | 0.7536 | | 0.1115 | 17.6136 | 3100 | 0.2956 | 0.8972 | 0.7640 | 0.7927 | 0.7372 | 
| 0.07 | 18.1818 | 3200 | 0.2961 | 0.8977 | 0.7683 | 0.7863 | 0.7511 | | 0.0993 | 18.75 | 3300 | 0.3041 | 0.8959 | 0.7639 | 0.7826 | 0.7461 | | 0.0779 | 19.3182 | 3400 | 0.3012 | 0.9 | 0.7745 | 0.7889 | 0.7606 | | 0.0691 | 19.8864 | 3500 | 0.3075 | 0.8964 | 0.7674 | 0.7784 | 0.7568 | | 0.063 | 20.4545 | 3600 | 0.3271 | 0.8912 | 0.7509 | 0.7770 | 0.7265 | | 0.0668 | 21.0227 | 3700 | 0.3229 | 0.8952 | 0.7649 | 0.7745 | 0.7555 | | 0.0573 | 21.5909 | 3800 | 0.3236 | 0.8960 | 0.7626 | 0.7869 | 0.7398 | | 0.0668 | 22.1591 | 3900 | 0.3251 | 0.8972 | 0.7629 | 0.7955 | 0.7328 | | 0.062 | 22.7273 | 4000 | 0.3221 | 0.8987 | 0.7702 | 0.7895 | 0.7517 | | 0.0647 | 23.2955 | 4100 | 0.3179 | 0.8959 | 0.7663 | 0.7767 | 0.7561 | | 0.0417 | 23.8636 | 4200 | 0.3323 | 0.8969 | 0.7662 | 0.7847 | 0.7486 | | 0.0623 | 24.4318 | 4300 | 0.3396 | 0.8945 | 0.7602 | 0.7804 | 0.7410 | | 0.0361 | 25.0 | 4400 | 0.3418 | 0.8959 | 0.7623 | 0.7863 | 0.7398 | | 0.0334 | 25.5682 | 4500 | 0.3404 | 0.8984 | 0.7703 | 0.7870 | 0.7543 | | 0.0326 | 26.1364 | 4600 | 0.3376 | 0.8967 | 0.7676 | 0.7801 | 0.7555 | | 0.052 | 26.7045 | 4700 | 0.3395 | 0.8972 | 0.7679 | 0.7827 | 0.7536 | | 0.0341 | 27.2727 | 4800 | 0.3440 | 0.8953 | 0.7638 | 0.7783 | 0.7498 | | 0.0459 | 27.8409 | 4900 | 0.3406 | 0.8980 | 0.7689 | 0.7869 | 0.7517 | | 0.0392 | 28.4091 | 5000 | 0.3389 | 0.8977 | 0.7680 | 0.7870 | 0.7498 | | 0.0407 | 28.9773 | 5100 | 0.3410 | 0.8976 | 0.7677 | 0.7865 | 0.7498 | | 0.0445 | 29.5455 | 5200 | 0.3395 | 0.8969 | 0.7661 | 0.7851 | 0.7480 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "snowing", "raining", "sunny", "cloudy", "night", "snow_on_road", "partial_snow_on_road", "clear_pavement", "wet_pavement", "iced_lens" ]
nemik/frost-vision-v2-google__vit-base-patch16-224-in21k-v2025-1-31
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "snowing", "raining", "sunny", "cloudy", "night", "snow_on_road", "partial_snow_on_road", "clear_pavement", "wet_pavement", "iced_lens" ]
cvmil/dinov2-base_rice-leaf-disease-augmented_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-base_rice-leaf-disease-augmented_tl_020125 This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1132 - Accuracy: 0.966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.1218 | 1.0 | 250 | 0.825 | 0.5307 | | 0.4069 | 2.0 | 500 | 0.885 | 0.3582 | | 0.2746 | 3.0 | 750 | 0.92 | 0.2464 | | 0.2157 | 4.0 | 1000 | 0.9315 | 0.2224 | | 0.1779 | 5.0 | 1250 | 0.949 | 0.1753 | | 0.1539 | 6.0 | 1500 | 0.942 | 0.1718 | | 0.1361 | 7.0 | 1750 | 0.9515 | 0.1603 | | 0.1271 | 8.0 | 2000 | 0.958 | 0.1448 | | 0.1114 | 9.0 | 2250 | 0.951 | 0.1444 | | 0.1023 | 10.0 | 2500 | 0.959 | 0.1309 | | 0.0968 | 11.0 | 2750 | 0.9625 | 0.1240 | | 0.0911 | 12.0 | 3000 | 0.9645 | 0.1248 | | 0.0858 | 13.0 | 3250 | 0.962 | 0.1189 | | 0.0818 | 14.0 | 3500 | 0.9645 | 0.1136 | | 0.0789 | 15.0 | 3750 | 0.966 | 0.1132 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/beit-base-patch16-224_rice-leaf-disease-augmented_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224_rice-leaf-disease-augmented_tl_020125 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4361 - Accuracy: 0.8575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.9619 | 1.0 | 250 | 0.496 | 1.4686 | | 1.1487 | 2.0 | 500 | 0.716 | 0.8858 | | 0.8166 | 3.0 | 750 | 0.7685 | 0.7041 | | 0.691 | 4.0 | 1000 | 0.797 | 0.6216 | | 0.6259 | 5.0 | 1250 | 0.824 | 0.5643 | | 0.582 | 6.0 | 1500 | 0.8305 | 0.5286 | | 0.547 | 7.0 | 1750 | 0.836 | 0.5041 | | 0.5244 | 8.0 | 2000 | 0.844 | 0.4842 | | 0.5081 | 9.0 | 2250 | 0.843 | 0.4707 | | 0.493 | 10.0 | 2500 | 0.8475 | 0.4589 | | 0.4824 | 11.0 | 2750 | 0.85 | 0.4509 | | 0.4759 | 12.0 | 3000 | 0.853 | 0.4437 | | 0.4677 | 13.0 | 3250 | 0.855 | 0.4398 | | 0.4651 | 14.0 | 3500 | 0.856 | 0.4373 | | 0.4625 | 15.0 | 3750 | 0.8575 | 0.4361 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/deit-base-patch16-224_rice-leaf-disease-augmented_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-patch16-224_rice-leaf-disease-augmented_tl_020125 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7678 - Accuracy: 0.757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 2.0085 | 1.0 | 250 | 0.4305 | 1.7920 | | 1.5371 | 2.0 | 500 | 0.5935 | 1.3467 | | 1.2212 | 3.0 | 750 | 0.658 | 1.1434 | | 1.0695 | 4.0 | 1000 | 0.6845 | 1.0324 | | 0.9787 | 5.0 | 1250 | 0.707 | 0.9592 | | 0.916 | 6.0 | 1500 | 0.7175 | 0.9066 | | 0.8713 | 7.0 | 1750 | 0.7255 | 0.8705 | | 0.8382 | 8.0 | 2000 | 0.736 | 0.8404 | | 0.8123 | 9.0 | 2250 | 0.7405 | 0.8190 | | 0.7927 | 10.0 | 2500 | 0.746 | 0.8022 | | 0.7772 | 11.0 | 2750 | 0.753 | 0.7892 | | 0.7657 | 12.0 | 3000 | 0.753 | 0.7795 | | 0.7574 | 13.0 | 3250 | 0.7565 | 0.7731 | | 0.7518 | 14.0 | 3500 | 0.757 | 0.7693 | | 0.749 | 15.0 | 3750 | 0.757 | 0.7678 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/swin-base-patch4-window7-224_rice-leaf-disease-augmented_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224_rice-leaf-disease-augmented_tl_020125 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6198 - Accuracy: 0.7955 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.9251 | 1.0 | 250 | 0.492 | 1.5964 | | 1.3185 | 2.0 | 500 | 0.669 | 1.0944 | | 1.0055 | 3.0 | 750 | 0.721 | 0.9142 | | 0.8773 | 4.0 | 1000 | 0.7405 | 0.8254 | | 0.8008 | 5.0 | 1250 | 0.7565 | 0.7678 | | 0.7496 | 6.0 | 1500 | 0.7655 | 0.7288 | | 0.7126 | 7.0 | 1750 | 0.7733 | 0.6998 | | 0.685 | 8.0 | 2000 | 0.777 | 0.6766 | | 0.6637 | 9.0 | 2250 | 0.783 | 0.6594 | | 0.6488 | 10.0 | 2500 | 0.7865 | 0.6473 | | 0.6356 | 11.0 | 2750 | 0.7905 | 0.6373 | | 0.6268 | 12.0 | 3000 | 0.793 | 0.6296 | | 0.6191 | 13.0 | 3250 | 0.7955 | 0.6238 | | 0.6143 | 14.0 | 3500 | 0.795 | 0.6210 | | 0.6115 | 15.0 | 3750 | 0.7955 | 0.6198 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/vit-base-patch16-224_rice-leaf-disease-augmented_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224_rice-leaf-disease-augmented_tl_020125 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6522 - Accuracy: 0.7885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 2.023 | 1.0 | 250 | 0.406 | 1.7331 | | 1.4339 | 2.0 | 500 | 0.638 | 1.1983 | | 1.0891 | 3.0 | 750 | 0.6985 | 0.9888 | | 0.9387 | 4.0 | 1000 | 0.728 | 0.8833 | | 0.8519 | 5.0 | 1250 | 0.7455 | 0.8170 | | 0.7933 | 6.0 | 1500 | 0.7575 | 0.7714 | | 0.7515 | 7.0 | 1750 | 0.7635 | 0.7390 | | 0.7208 | 8.0 | 2000 | 0.7695 | 0.7137 | | 0.6967 | 9.0 | 2250 | 0.774 | 0.6958 | | 0.6787 | 10.0 | 2500 | 0.7795 | 0.6809 | | 0.6644 | 11.0 | 2750 | 0.783 | 0.6697 | | 0.6539 | 12.0 | 3000 | 0.7825 | 0.6619 | | 0.6463 | 13.0 | 3250 | 0.7875 | 0.6563 | | 0.6411 | 14.0 | 3500 | 0.789 | 0.6533 | | 0.6385 | 15.0 | 3750 | 0.7885 | 0.6522 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
YaswanthReddy23/ViT_Sunflower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_Sunflower This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1157 - Accuracy: 0.9709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0478 | 1.2048 | 100 | 0.1879 | 0.9524 | | 0.0526 | 2.4096 | 200 | 0.1999 | 0.9444 | | 0.013 | 3.6145 | 300 | 0.1157 | 0.9709 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
[ "downy mildew", "gray mold", "healthy", "leaf scars" ]
YaswanthReddy23/Vit_Guava
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Vit_Guava This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0036 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1092 | 0.4651 | 100 | 0.0675 | 0.9918 | | 0.0334 | 0.9302 | 200 | 0.0861 | 0.9785 | | 0.0188 | 1.3953 | 300 | 0.0506 | 0.9847 | | 0.0074 | 1.8605 | 400 | 0.0236 | 0.9949 | | 0.016 | 2.3256 | 500 | 0.0092 | 0.9980 | | 0.0041 | 2.7907 | 600 | 0.0044 | 1.0 | | 0.0038 | 3.2558 | 700 | 0.0039 | 1.0 | | 0.0035 | 3.7209 | 800 | 0.0036 | 1.0 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
[ "healthy", "phytopthora", "red_rust", "scab", "styler and root" ]
YaswanthReddy23/ViT_Cucumber
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_Cucumber This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0155 - Accuracy: 0.9976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1694 | 0.3571 | 100 | 0.1965 | 0.9607 | | 0.1409 | 0.7143 | 200 | 0.2409 | 0.9261 | | 0.1024 | 1.0714 | 300 | 0.0903 | 0.9780 | | 0.0326 | 1.4286 | 400 | 0.0630 | 0.9866 | | 0.0338 | 1.7857 | 500 | 0.0675 | 0.9843 | | 0.0082 | 2.1429 | 600 | 0.0508 | 0.9882 | | 0.0072 | 2.5 | 700 | 0.0609 | 0.9874 | | 0.0056 | 2.8571 | 800 | 0.0175 | 0.9976 | | 0.0044 | 3.2143 | 900 | 0.0154 | 0.9976 | | 0.0042 | 3.5714 | 1000 | 0.0151 | 0.9976 | | 0.0045 | 3.9286 | 1100 | 0.0155 | 0.9976 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.3.0+cu118 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "anthracnose", "bacterial wilt", "belly rot", "downy mildew", "gummy stem blight", "healthy cucumber", "healthy leaf", "pythium fruit rot" ]
YaswanthReddy23/ViT_Cotton
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_Cotton This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0678 - Accuracy: 0.9859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2409 | 1.0638 | 100 | 0.2505 | 0.9366 | | 0.0502 | 2.1277 | 200 | 0.1396 | 0.9718 | | 0.0257 | 3.1915 | 300 | 0.0678 | 0.9859 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
[ "bacterial blight", "curl virus", "healthy leaf", "herbicide growth damage", "leaf hopper jassids", "leaf redding", "leaf variegation" ]
cvmil/resnet-50_rice-leaf-disease-augmented_tl
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50_rice-leaf-disease-augmented_tl_020125 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7926 - Accuracy: 0.739 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.8935 | 1.0 | 250 | 1.5964 | 0.5395 | | 1.4144 | 2.0 | 500 | 1.2223 | 0.6045 | | 1.1814 | 3.0 | 750 | 1.0813 | 0.645 | | 1.0714 | 4.0 | 1000 | 1.0132 | 0.6575 | | 0.9906 | 5.0 | 1250 | 0.9498 | 0.6865 | | 0.9428 | 6.0 | 1500 | 0.9129 | 0.7085 | | 0.9026 | 7.0 | 1750 | 0.8716 | 0.722 | | 0.8749 | 8.0 | 2000 | 0.8627 | 0.717 | | 0.8501 | 9.0 | 2250 | 0.8443 | 0.726 | | 0.828 | 10.0 | 2500 | 0.8177 | 0.737 | | 0.8126 | 11.0 | 2750 | 0.8112 | 0.736 | | 0.8036 | 12.0 | 3000 | 0.8031 | 0.744 | | 0.79 | 13.0 | 3250 | 0.8043 | 0.735 | | 0.7925 | 14.0 | 3500 | 0.7939 | 0.7385 | | 0.7838 | 15.0 | 3750 | 0.7926 | 0.739 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
cvmil/resnet-50_rice-leaf-disease-augmented_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50_rice-leaf-disease-augmented_fft_020125 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6406 - Accuracy: 0.779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0532 | 1.0 | 250 | 1.9924 | 0.3795 | | 1.8952 | 2.0 | 500 | 1.7562 | 0.5 | | 1.6491 | 3.0 | 750 | 1.5051 | 0.5685 | | 1.4229 | 4.0 | 1000 | 1.2998 | 0.6105 | | 1.2276 | 5.0 | 1250 | 1.1287 | 0.661 | | 1.0723 | 6.0 | 1500 | 0.9887 | 0.6965 | | 0.9462 | 7.0 | 1750 | 0.8832 | 0.7235 | | 0.8542 | 8.0 | 2000 | 0.8107 | 0.7375 | | 0.7818 | 9.0 | 2250 | 0.7554 | 0.754 | | 0.7259 | 10.0 | 2500 | 0.7115 | 0.7585 | | 0.6918 | 11.0 | 2750 | 0.6865 | 0.7685 | | 0.6616 | 12.0 | 3000 | 0.6611 | 0.77 | | 0.6407 | 13.0 | 3250 | 0.6528 | 0.774 | | 0.6286 | 14.0 | 3500 | 0.6438 | 0.7795 | | 0.6218 | 15.0 | 3750 | 0.6406 | 0.779 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
ssale2/dummy_output
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dummy_output This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
[ "sfw", "nsfw" ]
prithivMLmods/Deep-Fake-Detector-v2-Model
![fake q.gif](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/PVkTbLOEBr-qNkTws3UsD.gif) # **Deep-Fake-Detector-v2-Model** # **Overview** The **Deep-Fake-Detector-v2-Model** is a state-of-the-art deep learning model designed to detect deepfake images. It leverages the **Vision Transformer (ViT)** architecture, specifically the `google/vit-base-patch16-224-in21k` model, fine-tuned on a dataset of real and deepfake images. The model is trained to classify images as either "Realism" or "Deepfake" with high accuracy, making it a powerful tool for detecting manipulated media. ``` Classification report: precision recall f1-score support Realism 0.9683 0.8708 0.9170 28001 Deepfake 0.8826 0.9715 0.9249 28000 accuracy 0.9212 56001 macro avg 0.9255 0.9212 0.9210 56001 weighted avg 0.9255 0.9212 0.9210 56001 ``` **Confusion Matrix**: ``` [[True Positives, False Negatives], [False Positives, True Negatives]] ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/VLX0QDcKkSLIJ9c5LX-wt.png) **<span style="color:red;">Update :</span>** The previous model checkpoint was obtained using a smaller classification dataset. Although it performed well in evaluation scores, its real-time performance was average due to limited variations in the training set. The new update includes a larger dataset to improve the detection of fake images. | Repository | Link | |------------|------| | Deep Fake Detector v2 Model | [GitHub Repository](https://github.com/PRITHIVSAKTHIUR/Deep-Fake-Detector-Model) | # **Key Features** - **Architecture**: Vision Transformer (ViT) - `google/vit-base-patch16-224-in21k`. - **Input**: RGB images resized to 224x224 pixels. - **Output**: Binary classification ("Realism" or "Deepfake"). - **Training Dataset**: A curated dataset of real and deepfake images. - **Fine-Tuning**: The model is fine-tuned using Hugging Face's `Trainer` API with advanced data augmentation techniques. - **Performance**: Achieves high accuracy and F1 score on validation and test datasets. # **Model Architecture** The model is based on the **Vision Transformer (ViT)**, which treats images as sequences of patches and applies a transformer encoder to learn spatial relationships. Key components include: - **Patch Embedding**: Divides the input image into fixed-size patches (16x16 pixels). - **Transformer Encoder**: Processes patch embeddings using multi-head self-attention mechanisms. - **Classification Head**: A fully connected layer for binary classification. # **Training Details** - **Optimizer**: AdamW with a learning rate of `1e-6`. - **Batch Size**: 32 for training, 8 for evaluation. - **Epochs**: 2. - **Data Augmentation**: - Random rotation (±90 degrees). - Random sharpness adjustment. - Random resizing and cropping. - **Loss Function**: Cross-Entropy Loss. - **Evaluation Metrics**: Accuracy, F1 Score, and Confusion Matrix. 
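The augmentations listed above map naturally onto torchvision transforms. The snippet below is only an illustration of that pipeline; the probability, sharpness factor, and crop parameters are assumptions, since the exact values used for this checkpoint are not published.

```python
from torchvision import transforms

# Illustrative training-time augmentation pipeline for the operations listed above.
train_transforms = transforms.Compose([
    transforms.RandomRotation(degrees=90),                        # rotation in [-90°, +90°]
    transforms.RandomAdjustSharpness(sharpness_factor=2, p=0.5),  # random sharpness adjustment (assumed values)
    transforms.RandomResizedCrop(224),                            # random resize + crop to the ViT input size
    transforms.ToTensor(),
])
```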
# **Inference with Hugging Face Pipeline**

```python
from transformers import pipeline

# Load the model
pipe = pipeline('image-classification', model="prithivMLmods/Deep-Fake-Detector-v2-Model", device=0)

# Predict on an image
result = pipe("path_to_image.jpg")
print(result)
```

# **Inference with PyTorch**

```python
from transformers import ViTForImageClassification, ViTImageProcessor
from PIL import Image
import torch

# Load the model and processor
model = ViTForImageClassification.from_pretrained("prithivMLmods/Deep-Fake-Detector-v2-Model")
processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deep-Fake-Detector-v2-Model")

# Load and preprocess the image
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()

# Map class index to label
label = model.config.id2label[predicted_class]
print(f"Predicted Label: {label}")
```

# **Dataset**
The model is fine-tuned on a dataset that contains:
- **Real Images**: Authentic images of human faces.
- **Fake Images**: Deepfake images generated using advanced AI techniques.

# **Limitations**
- The model is trained on a specific dataset and may not generalize well to other deepfake datasets or domains.
- Performance may degrade on low-resolution or heavily compressed images.
- The model is designed for image classification and does not detect deepfake videos directly.

# **Ethical Considerations**
- **Misuse**: This model should not be used for malicious purposes, such as creating or spreading deepfakes.
- **Bias**: The model may inherit biases from the training dataset. Care should be taken to ensure fairness and inclusivity.
- **Transparency**: Users should be informed when deepfake detection tools are used to analyze their content.

# **Future Work**
- Extend the model to detect deepfake videos.
- Improve generalization by training on larger and more diverse datasets.
- Incorporate explainability techniques to provide insights into model predictions.

# **Citation**

```bibtex
@misc{Deep-Fake-Detector-v2-Model,
  author = {prithivMLmods},
  title = {Deep-Fake-Detector-v2-Model},
  initial = {21 Mar 2024},
  second_updated = {31 Jan 2025},
  latest_updated = {02 Feb 2025}
}
```
[ "realism", "deepfake" ]
ckappel/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1684 - Accuracy: 0.9567 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3751 | 1.0 | 370 | 0.3121 | 0.9296 | | 0.2139 | 2.0 | 740 | 0.2491 | 0.9310 | | 0.17 | 3.0 | 1110 | 0.2301 | 0.9378 | | 0.1451 | 4.0 | 1480 | 0.2213 | 0.9378 | | 0.1443 | 5.0 | 1850 | 0.2203 | 0.9418 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Eraly-ml/centraasia-ResNet-50
# ResNet-50 Model for Central Asian Image Classification

## Model Description
This is a pre-trained ResNet-50 model fine-tuned on the Central Asian Food Dataset. The model classifies images across multiple food classes. The data was split into training, validation, and test sets. The model was trained with the SGD optimizer and CrossEntropyLoss as the loss function.

## Training Parameters
- **Epochs:** 25
- **Batch Size:** 32
- **Learning Rate:** 0.001
- **Optimizer:** SGD with momentum of 0.9
- **Loss Function:** CrossEntropyLoss

## Results

### Training and Validation
| Stage | Loss (train) | Accuracy (train) | Loss (val) | Accuracy (val) |
|--------------|--------------|------------------|------------|----------------|
| Epoch 1 | 2.1171 | 47.00% | 0.8727 | 75.00% |
| Epoch 2 | 1.0462 | 69.00% | 0.6721 | 78.00% |
| ... | ... | ... | ... | ... |
| Epoch 25 | 0.4286 | 86.00% | 0.4349 | 86.00% |

**The model was trained on two T4 GPUs in a Kaggle notebook; training took 36m 7s.**

**Best validation accuracy:** 86.54%

```
precision recall f1-score support

achichuk 0.91 0.98 0.94 41
airan-katyk 0.84 0.93 0.89 46
asip 0.78 0.57 0.66 37
bauyrsak 0.90 0.90 0.90 62
beshbarmak-w-kazy 0.71 0.84 0.77 44
beshbarmak-wo-kazy 0.86 0.69 0.76 61
chak-chak 0.94 0.94 0.94 93
cheburek 0.92 0.88 0.90 94
doner-lavash 0.77 1.00 0.87 20
doner-nan 0.86 0.82 0.84 22
hvorost 0.98 0.86 0.91 141
irimshik 0.96 0.94 0.95 175
kattama-nan 0.84 0.88 0.86 66
kazy-karta 0.72 0.78 0.75 46
kurt 0.86 0.97 0.91 61
kuyrdak 0.92 0.93 0.92 58
kymyz-kymyran 0.93 0.82 0.87 49
lagman-fried 0.86 0.95 0.90 38
lagman-w-soup 0.90 0.80 0.85 75
lagman-wo-soup 0.58 0.86 0.69 22
manty 0.91 0.95 0.93 63
naryn 0.97 0.99 0.98 84
nauryz-kozhe 0.88 0.96 0.92 52
orama 0.68 0.84 0.75 38
plov 0.95 0.98 0.97 101
samsa 0.91 0.93 0.92 106
shashlyk-chicken 0.68 0.65 0.66 62
shashlyk-chicken-v 0.74 0.76 0.75 33
shashlyk-kuskovoi 0.75 0.75 0.75 71
shashlyk-kuskovoi-v 0.53 0.79 0.64 29
shashlyk-minced-meat 0.74 0.69 0.72 42
sheep-head 0.75 0.94 0.83 16
shelpek 0.77 0.86 0.81 64
shorpa 0.95 0.88 0.91 80
soup-plain 0.96 0.94 0.95 71
sushki 0.83 1.00 0.91 43
suzbe 0.89 0.82 0.86 62
taba-nan 0.92 0.80 0.86 136
talkan-zhent 0.86 0.80 0.83 90
tushpara-fried 0.79 0.74 0.76 46
tushpara-w-soup 0.94 0.94 0.94 67
tushpara-wo-soup 0.92 0.87 0.89 91

accuracy 0.87 2698
macro avg 0.84 0.86 0.85 2698
weighted avg 0.88 0.87 0.87 2698
```

![confusion matrix](matrix.png)

### Testing
After training, the model was evaluated on the test set:
- **Test accuracy:** 87%

## Repository Structure
- `main.py` — Code for training and testing the model
- `model/` — Saved model in SafeTensors format

## Usage Instructions

```python
from transformers import AutoModelForImageClassification
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

repo_id = "Eraly-ml/centraasia-ResNet-50"
filename = "model.safetensors"

# Load model
model_path = hf_hub_download(repo_id=repo_id, filename=filename)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.load_state_dict(load_file(model_path))
```

A minimal end-to-end prediction example is sketched below.

My Telegram: @eralyf
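For a full prediction, the loaded model can be paired with an image processor as in the sketch below; the image path is a placeholder, and it is assumed that the repository (or the base `microsoft/resnet-50` checkpoint) provides the preprocessing config.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

repo_id = "Eraly-ml/centraasia-ResNet-50"

# Assumes the repo ships an image-processor config; if not, substitute
# AutoImageProcessor.from_pretrained("microsoft/resnet-50").
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("path_to_image.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```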
[ "achichuk", "airan-katyk", "asip", "bauyrsak", "beshbarmak-w-kazy", "beshbarmak-wo-kazy", "chak-chak", "cheburek", "doner-lavash", "doner-nan", "hvorost", "irimshik", "kattama-nan", "kazy-karta", "kurt", "kuyrdak", "kymyz-kymyran", "lagman-fried", "lagman-w-soup", "lagman-wo-soup", "manty", "naryn", "nauryz-kozhe", "orama", "plov", "samsa", "shashlyk-chicken", "shashlyk-chicken-v", "shashlyk-kuskovoi", "shashlyk-kuskovoi-v", "shashlyk-minced-meat", "sheep-head", "shelpek", "shorpa", "soup-plain", "sushki", "suzbe", "taba-nan", "talkan-zhent", "tushpara-fried", "tushpara-w-soup", "tushpara-wo-soup" ]
Kankanaghosh/vit-fashion-mnist
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fashion-mnist This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1755 - Accuracy: 0.9504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | 0.6439 | 0.0267 | 100 | 0.6483 | 0.7925 | | 0.3972 | 0.0533 | 200 | 0.4405 | 0.8598 | | 0.4898 | 0.08 | 300 | 0.4771 | 0.8344 | | 0.4585 | 0.1067 | 400 | 0.4260 | 0.8533 | | 0.4513 | 0.1333 | 500 | 0.4276 | 0.8582 | | 0.3669 | 0.16 | 600 | 0.3700 | 0.8728 | | 0.3053 | 0.1867 | 700 | 0.3351 | 0.8878 | | 0.3537 | 0.2133 | 800 | 0.3868 | 0.8632 | | 0.3253 | 0.24 | 900 | 0.2819 | 0.9023 | | 0.6373 | 0.2667 | 1000 | 0.4660 | 0.8436 | | 0.3327 | 0.2933 | 1100 | 0.2756 | 0.9068 | | 0.2778 | 0.32 | 1200 | 0.3304 | 0.8892 | | 0.2734 | 0.3467 | 1300 | 0.3733 | 0.8688 | | 0.3481 | 0.3733 | 1400 | 0.3195 | 0.892 | | 0.194 | 0.4 | 1500 | 0.2794 | 0.9059 | | 0.3727 | 0.4267 | 1600 | 0.3116 | 0.8932 | | 0.379 | 0.4533 | 1700 | 0.2742 | 0.9016 | | 0.2764 | 0.48 | 1800 | 0.3533 | 0.8782 | | 0.2362 | 0.5067 | 1900 | 0.2735 | 0.9062 | | 0.333 | 0.5333 | 2000 | 0.2844 | 0.9065 | | 0.2024 | 0.56 | 2100 | 0.3169 | 0.8871 | | 0.2167 | 0.5867 | 2200 | 0.2575 | 0.9097 | | 0.2368 | 0.6133 | 2300 | 0.2612 | 0.9103 | | 0.3344 | 0.64 | 2400 | 0.2549 | 0.91 | | 0.168 | 0.6667 | 2500 | 0.2792 | 0.9076 | | 0.2709 | 0.6933 | 2600 | 0.2769 | 0.9034 | | 0.2131 | 0.72 | 2700 | 0.2900 | 0.895 | | 0.2265 | 0.7467 | 2800 | 0.2394 | 0.9141 | | 0.3461 | 0.7733 | 2900 | 0.3260 | 0.8868 | | 0.3012 | 0.8 | 3000 | 0.4391 | 0.8687 | | 0.2332 | 0.8267 | 3100 | 0.2320 | 0.9189 | | 0.2458 | 0.8533 | 3200 | 0.2460 | 0.9148 | | 0.3271 | 0.88 | 3300 | 0.2724 | 0.9031 | | 0.1846 | 0.9067 | 3400 | 0.2359 | 0.9173 | | 0.1764 | 0.9333 | 3500 | 0.2712 | 0.9035 | | 0.1818 | 0.96 | 3600 | 0.2453 | 0.9152 | | 0.1628 | 0.9867 | 3700 | 0.2307 | 0.9189 | | 0.2072 | 1.0133 | 3800 | 0.2309 | 0.9207 | | 0.182 | 1.04 | 3900 | 0.2980 | 0.9015 | | 0.1572 | 1.0667 | 4000 | 0.2553 | 0.917 | | 0.2 | 1.0933 | 4100 | 0.2203 | 0.9216 | | 0.1475 | 1.12 | 4200 | 0.2635 | 0.91 | | 0.2729 | 1.1467 | 4300 | 0.2382 | 0.9151 | | 0.2978 | 1.1733 | 4400 | 0.2469 | 0.9157 | | 0.2117 | 1.2 | 4500 | 0.2546 | 0.9104 | | 0.2361 | 1.2267 | 4600 | 0.2434 | 0.9143 | | 0.3054 | 1.2533 | 4700 | 0.2272 | 0.9193 | | 0.1032 | 1.28 | 4800 | 0.2392 | 0.9172 | | 0.1405 | 1.3067 | 4900 | 0.2269 | 0.9205 | | 0.2779 | 1.3333 | 5000 | 0.2037 | 0.9293 | | 0.2025 | 1.3600 | 5100 | 0.2238 | 0.9231 | | 0.3432 | 1.3867 | 5200 | 0.2428 | 0.9139 | | 0.1422 | 1.4133 | 5300 | 0.2443 | 0.9181 | | 0.2444 | 1.44 | 5400 | 0.2395 | 0.919 | | 0.1836 | 1.4667 | 
5500 | 0.2089 | 0.9277 | | 0.2308 | 1.4933 | 5600 | 0.2120 | 0.926 | | 0.1877 | 1.52 | 5700 | 0.2000 | 0.9305 | | 0.2019 | 1.5467 | 5800 | 0.2278 | 0.9229 | | 0.2829 | 1.5733 | 5900 | 0.1935 | 0.9315 | | 0.1262 | 1.6 | 6000 | 0.2274 | 0.92 | | 0.1152 | 1.6267 | 6100 | 0.2849 | 0.9082 | | 0.2012 | 1.6533 | 6200 | 0.2272 | 0.921 | | 0.1806 | 1.6800 | 6300 | 0.1932 | 0.9324 | | 0.1769 | 1.7067 | 6400 | 0.2020 | 0.9293 | | 0.2793 | 1.7333 | 6500 | 0.2052 | 0.927 | | 0.0894 | 1.76 | 6600 | 0.2147 | 0.9238 | | 0.2441 | 1.7867 | 6700 | 0.2020 | 0.93 | | 0.2366 | 1.8133 | 6800 | 0.2125 | 0.9264 | | 0.1992 | 1.8400 | 6900 | 0.1930 | 0.9316 | | 0.1936 | 1.8667 | 7000 | 0.2038 | 0.93 | | 0.2093 | 1.8933 | 7100 | 0.2100 | 0.9321 | | 0.2183 | 1.92 | 7200 | 0.2287 | 0.9267 | | 0.1483 | 1.9467 | 7300 | 0.1954 | 0.934 | | 0.1828 | 1.9733 | 7400 | 0.1922 | 0.9345 | | 0.1424 | 2.0 | 7500 | 0.1732 | 0.9388 | | 0.1396 | 2.0267 | 7600 | 0.1920 | 0.9312 | | 0.1433 | 2.0533 | 7700 | 0.1966 | 0.9316 | | 0.0639 | 2.08 | 7800 | 0.1811 | 0.9358 | | 0.1334 | 2.1067 | 7900 | 0.1962 | 0.9338 | | 0.2618 | 2.1333 | 8000 | 0.2176 | 0.9307 | | 0.1167 | 2.16 | 8100 | 0.1869 | 0.9369 | | 0.0498 | 2.1867 | 8200 | 0.2008 | 0.9357 | | 0.0647 | 2.2133 | 8300 | 0.2179 | 0.9295 | | 0.1444 | 2.24 | 8400 | 0.1934 | 0.9368 | | 0.1431 | 2.2667 | 8500 | 0.2257 | 0.9256 | | 0.1464 | 2.2933 | 8600 | 0.1796 | 0.9397 | | 0.1152 | 2.32 | 8700 | 0.1746 | 0.9422 | | 0.1679 | 2.3467 | 8800 | 0.1796 | 0.9416 | | 0.1404 | 2.3733 | 8900 | 0.1949 | 0.9357 | | 0.2441 | 2.4 | 9000 | 0.1742 | 0.9421 | | 0.1206 | 2.4267 | 9100 | 0.1953 | 0.9366 | | 0.2064 | 2.4533 | 9200 | 0.1908 | 0.9371 | | 0.0851 | 2.48 | 9300 | 0.1915 | 0.9369 | | 0.1101 | 2.5067 | 9400 | 0.1830 | 0.9411 | | 0.1081 | 2.5333 | 9500 | 0.1938 | 0.9387 | | 0.1559 | 2.56 | 9600 | 0.1692 | 0.9435 | | 0.0974 | 2.5867 | 9700 | 0.1735 | 0.9426 | | 0.1344 | 2.6133 | 9800 | 0.1834 | 0.9411 | | 0.0983 | 2.64 | 9900 | 0.1915 | 0.9367 | | 0.0941 | 2.6667 | 10000 | 0.1842 | 0.9399 | | 0.127 | 2.6933 | 10100 | 0.2004 | 0.938 | | 0.1112 | 2.7200 | 10200 | 0.1829 | 0.9395 | | 0.1898 | 2.7467 | 10300 | 0.1872 | 0.9384 | | 0.088 | 2.7733 | 10400 | 0.1831 | 0.9417 | | 0.1301 | 2.8 | 10500 | 0.1819 | 0.9408 | | 0.129 | 2.8267 | 10600 | 0.1831 | 0.9394 | | 0.1225 | 2.8533 | 10700 | 0.1778 | 0.9406 | | 0.1084 | 2.88 | 10800 | 0.1754 | 0.9399 | | 0.1159 | 2.9067 | 10900 | 0.1696 | 0.9432 | | 0.1037 | 2.9333 | 11000 | 0.1731 | 0.9431 | | 0.1173 | 2.96 | 11100 | 0.1817 | 0.9406 | | 0.0524 | 2.9867 | 11200 | 0.1703 | 0.9439 | | 0.0635 | 3.0133 | 11300 | 0.1689 | 0.9436 | | 0.0662 | 3.04 | 11400 | 0.1726 | 0.9454 | | 0.068 | 3.0667 | 11500 | 0.1777 | 0.9449 | | 0.0441 | 3.0933 | 11600 | 0.1942 | 0.9408 | | 0.0397 | 3.12 | 11700 | 0.1794 | 0.9478 | | 0.0804 | 3.1467 | 11800 | 0.1859 | 0.9467 | | 0.0193 | 3.1733 | 11900 | 0.1991 | 0.9431 | | 0.1243 | 3.2 | 12000 | 0.1867 | 0.946 | | 0.062 | 3.2267 | 12100 | 0.1877 | 0.9465 | | 0.032 | 3.2533 | 12200 | 0.2086 | 0.9432 | | 0.0177 | 3.2800 | 12300 | 0.1971 | 0.9458 | | 0.0582 | 3.3067 | 12400 | 0.1875 | 0.9467 | | 0.0584 | 3.3333 | 12500 | 0.1805 | 0.9484 | | 0.0814 | 3.36 | 12600 | 0.1829 | 0.9487 | | 0.1127 | 3.3867 | 12700 | 0.1875 | 0.9466 | | 0.0515 | 3.4133 | 12800 | 0.1906 | 0.9452 | | 0.0568 | 3.44 | 12900 | 0.1794 | 0.9488 | | 0.0642 | 3.4667 | 13000 | 0.1820 | 0.9479 | | 0.1252 | 3.4933 | 13100 | 0.1844 | 0.9491 | | 0.0512 | 3.52 | 13200 | 0.1787 | 0.9495 | | 0.0241 | 3.5467 | 13300 | 0.1772 | 0.9486 | | 0.0239 | 3.5733 | 13400 | 0.1723 | 0.952 | | 
0.0796 | 3.6 | 13500 | 0.1792 | 0.9494 | | 0.0507 | 3.6267 | 13600 | 0.1744 | 0.9513 | | 0.0443 | 3.6533 | 13700 | 0.1745 | 0.9505 | | 0.1451 | 3.68 | 13800 | 0.1796 | 0.9483 | | 0.0799 | 3.7067 | 13900 | 0.1800 | 0.9491 | | 0.0416 | 3.7333 | 14000 | 0.1799 | 0.9481 | | 0.0758 | 3.76 | 14100 | 0.1767 | 0.9496 | | 0.0472 | 3.7867 | 14200 | 0.1776 | 0.9495 | | 0.0325 | 3.8133 | 14300 | 0.1745 | 0.9506 | | 0.0388 | 3.84 | 14400 | 0.1748 | 0.951 | | 0.0579 | 3.8667 | 14500 | 0.1763 | 0.9504 | | 0.0784 | 3.8933 | 14600 | 0.1759 | 0.9508 | | 0.0811 | 3.92 | 14700 | 0.1750 | 0.951 | | 0.0204 | 3.9467 | 14800 | 0.1749 | 0.9508 | | 0.0767 | 3.9733 | 14900 | 0.1757 | 0.9502 | | 0.0661 | 4.0 | 15000 | 0.1755 | 0.9504 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "t - shirt / top", "trouser", "pullover", "dress", "coat", "sandal", "shirt", "sneaker", "bag", "ankle boot" ]
corranm/squarerun2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squarerun2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4284 - F1 Macro: 0.4676 - F1 Micro: 0.5606 - F1 Weighted: 0.5361 - Precision Macro: 0.4718 - Precision Micro: 0.5606 - Precision Weighted: 0.5334 - Recall Macro: 0.4835 - Recall Micro: 0.5606 - Recall Weighted: 0.5606 - Accuracy: 0.5606 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9016 | 1.0 | 29 | 1.8764 | 0.1011 | 0.2424 | 0.1401 | 0.0721 | 0.2424 | 0.1001 | 0.1761 | 0.2424 | 0.2424 | 0.2424 | | 1.8787 | 2.0 | 58 | 1.8750 | 0.0485 | 0.2045 | 0.0695 | 0.0292 | 0.2045 | 0.0418 | 0.1429 | 0.2045 | 0.2045 | 0.2045 | | 1.9345 | 3.0 | 87 | 1.8624 | 0.0485 | 0.2045 | 0.0695 | 0.0292 | 0.2045 | 0.0418 | 0.1429 | 0.2045 | 0.2045 | 0.2045 | | 1.6663 | 4.0 | 116 | 1.7239 | 0.2230 | 0.3561 | 0.2738 | 0.3173 | 0.3561 | 0.3549 | 0.2725 | 0.3561 | 0.3561 | 0.3561 | | 1.3847 | 5.0 | 145 | 1.4880 | 0.3420 | 0.4697 | 0.4038 | 0.4521 | 0.4697 | 0.4846 | 0.3893 | 0.4697 | 0.4697 | 0.4697 | | 1.6559 | 6.0 | 174 | 1.4056 | 0.3479 | 0.4773 | 0.4108 | 0.3865 | 0.4773 | 0.4276 | 0.3870 | 0.4773 | 0.4773 | 0.4773 | | 1.335 | 7.0 | 203 | 1.3768 | 0.3875 | 0.5152 | 0.4527 | 0.3933 | 0.5152 | 0.4447 | 0.4265 | 0.5152 | 0.5152 | 0.5152 | | 1.2514 | 8.0 | 232 | 1.2345 | 0.4536 | 0.5606 | 0.5207 | 0.4701 | 0.5606 | 0.5257 | 0.4766 | 0.5606 | 0.5606 | 0.5606 | | 0.6979 | 9.0 | 261 | 1.1501 | 0.5305 | 0.6364 | 0.6097 | 0.5491 | 0.6364 | 0.6127 | 0.5391 | 0.6364 | 0.6364 | 0.6364 | | 1.0417 | 10.0 | 290 | 1.1654 | 0.5206 | 0.6136 | 0.5900 | 0.5215 | 0.6136 | 0.5935 | 0.5464 | 0.6136 | 0.6136 | 0.6136 | | 0.7314 | 11.0 | 319 | 1.1566 | 0.5376 | 0.6212 | 0.6109 | 0.5387 | 0.6212 | 0.6154 | 0.5514 | 0.6212 | 0.6212 | 0.6212 | | 0.7902 | 12.0 | 348 | 1.1624 | 0.5397 | 0.6212 | 0.6140 | 0.5422 | 0.6212 | 0.6209 | 0.5505 | 0.6212 | 0.6212 | 0.6212 | | 0.7503 | 13.0 | 377 | 1.1359 | 0.5377 | 0.6288 | 0.6126 | 0.5472 | 0.6288 | 0.6143 | 0.5455 | 0.6288 | 0.6288 | 0.6288 | | 0.586 | 14.0 | 406 | 1.1512 | 0.5441 | 0.6288 | 0.6141 | 0.5361 | 0.6288 | 0.6033 | 0.5557 | 0.6288 | 0.6288 | 0.6288 | | 0.6869 | 15.0 | 435 | 1.1306 | 0.5323 | 0.6288 | 0.6117 | 0.5270 | 0.6288 | 0.6043 | 0.5475 | 0.6288 | 0.6288 | 0.6288 | | 0.5498 | 16.0 | 
464 | 1.1293 | 0.5373 | 0.6288 | 0.6117 | 0.5353 | 0.6288 | 0.6039 | 0.5471 | 0.6288 | 0.6288 | 0.6288 | | 0.5037 | 17.0 | 493 | 1.1635 | 0.5290 | 0.6212 | 0.6005 | 0.5374 | 0.6212 | 0.6022 | 0.5398 | 0.6212 | 0.6212 | 0.6212 | | 0.3624 | 18.0 | 522 | 1.0994 | 0.5700 | 0.6591 | 0.6414 | 0.5815 | 0.6591 | 0.6409 | 0.5743 | 0.6591 | 0.6591 | 0.6591 | | 0.3387 | 19.0 | 551 | 1.0944 | 0.5643 | 0.6515 | 0.6367 | 0.5556 | 0.6515 | 0.6268 | 0.5781 | 0.6515 | 0.6515 | 0.6515 | | 0.4052 | 20.0 | 580 | 1.0934 | 0.5683 | 0.6591 | 0.6432 | 0.5681 | 0.6591 | 0.6393 | 0.5798 | 0.6591 | 0.6591 | 0.6591 | ### Framework versions - Transformers 4.48.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
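The macro, micro, and weighted F1/precision/recall columns in these tables are the standard multi-class averages. With the `Trainer`, they are typically produced by a `compute_metrics` callback along the following lines; this is a sketch using scikit-learn, not the exact evaluation code behind this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels) as passed by the Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    metrics = {"accuracy": accuracy_score(labels, preds)}
    for avg in ("macro", "micro", "weighted"):
        metrics[f"f1_{avg}"] = f1_score(labels, preds, average=avg)
        metrics[f"precision_{avg}"] = precision_score(labels, preds, average=avg, zero_division=0)
        metrics[f"recall_{avg}"] = recall_score(labels, preds, average=avg, zero_division=0)
    return metrics
```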
[ "-", "0", "1", "2", "3", "4", "5" ]
platzi/beans-vit-base-hector-nieto
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beans-vit-base-hector-nieto This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0221 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1213 | 3.8462 | 500 | 0.0221 | 0.9925 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
corranm/squarerun_earlystop
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squarerun_earlystop This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2750 - F1 Macro: 0.4568 - F1 Micro: 0.5455 - F1 Weighted: 0.5111 - Precision Macro: 0.4686 - Precision Micro: 0.5455 - Precision Weighted: 0.5173 - Recall Macro: 0.4845 - Recall Micro: 0.5455 - Recall Weighted: 0.5455 - Accuracy: 0.5455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9437 | 1.0 | 29 | 1.8987 | 0.1485 | 0.2576 | 0.1680 | 0.1192 | 0.2576 | 0.1321 | 0.2207 | 0.2576 | 0.2576 | 0.2576 | | 1.4616 | 2.0 | 58 | 1.5844 | 0.3569 | 0.4242 | 0.4076 | 0.4336 | 0.4242 | 0.4738 | 0.3657 | 0.4242 | 0.4242 | 0.4242 | | 1.9935 | 3.0 | 87 | 1.4952 | 0.3059 | 0.4242 | 0.3585 | 0.3795 | 0.4242 | 0.4097 | 0.3387 | 0.4242 | 0.4242 | 0.4242 | | 1.3601 | 4.0 | 116 | 1.4319 | 0.3275 | 0.4167 | 0.3720 | 0.3223 | 0.4167 | 0.3618 | 0.3614 | 0.4167 | 0.4167 | 0.4167 | | 1.1685 | 5.0 | 145 | 1.1508 | 0.4913 | 0.5833 | 0.5550 | 0.4887 | 0.5833 | 0.5484 | 0.5139 | 0.5833 | 0.5833 | 0.5833 | | 1.2228 | 6.0 | 174 | 1.2663 | 0.4865 | 0.5076 | 0.5046 | 0.5339 | 0.5076 | 0.5644 | 0.4964 | 0.5076 | 0.5076 | 0.5076 | | 1.2811 | 7.0 | 203 | 1.4596 | 0.4084 | 0.5303 | 0.4752 | 0.5582 | 0.5303 | 0.6068 | 0.4383 | 0.5303 | 0.5303 | 0.5303 | | 1.7256 | 8.0 | 232 | 1.4908 | 0.4805 | 0.5682 | 0.5435 | 0.5333 | 0.5682 | 0.6122 | 0.5219 | 0.5682 | 0.5682 | 0.5682 | | 0.4549 | 9.0 | 261 | 1.2969 | 0.5270 | 0.6136 | 0.5648 | 0.6664 | 0.6136 | 0.6757 | 0.5526 | 0.6136 | 0.6136 | 0.6136 | | 0.5877 | 10.0 | 290 | 1.3581 | 0.4638 | 0.5758 | 0.5271 | 0.5632 | 0.5758 | 0.6293 | 0.5095 | 0.5758 | 0.5758 | 0.5758 | | 0.3451 | 11.0 | 319 | 1.2491 | 0.5613 | 0.6136 | 0.6066 | 0.5909 | 0.6136 | 0.6111 | 0.5589 | 0.6136 | 0.6136 | 0.6136 | | 0.4885 | 12.0 | 348 | 1.6862 | 0.5381 | 0.6288 | 0.6087 | 0.5515 | 0.6288 | 0.6225 | 0.5576 | 0.6288 | 0.6288 | 0.6288 | | 0.3835 | 13.0 | 377 | 1.8354 | 0.5318 | 0.5379 | 0.5440 | 0.6396 | 0.5379 | 0.6577 | 0.5264 | 0.5379 | 0.5379 | 0.5379 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
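This run is configured for 40 epochs but the log ends at epoch 13, which is consistent with early stopping on the validation metric (as the model name suggests). With the `transformers` `Trainer`, that behaviour is usually wired up roughly as below; the patience value and monitored metric here are assumptions, not values taken from this card.

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="squarerun_earlystop",       # placeholder
    num_train_epochs=40,
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,            # required for EarlyStoppingCallback
    metric_for_best_model="eval_loss",      # assumed; could also be accuracy or F1
    greater_is_better=False,
)

# trainer = Trainer(
#     model=model, args=args,
#     train_dataset=train_ds, eval_dataset=eval_ds,
#     callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # assumed patience
# )
# trainer.train()
```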
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/squarerun_large_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squarerun_large_model This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5150 - F1 Macro: 0.4837 - F1 Micro: 0.5909 - F1 Weighted: 0.5569 - Precision Macro: 0.5183 - Precision Micro: 0.5909 - Precision Weighted: 0.5764 - Recall Macro: 0.5013 - Recall Micro: 0.5909 - Recall Weighted: 0.5909 - Accuracy: 0.5909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.917 | 1.0 | 29 | 1.9115 | 0.1066 | 0.2197 | 0.1273 | 0.0780 | 0.2197 | 0.0923 | 0.1832 | 0.2197 | 0.2197 | 0.2197 | | 1.6762 | 2.0 | 58 | 1.6722 | 0.2733 | 0.3561 | 0.3005 | 0.3141 | 0.3561 | 0.3684 | 0.3355 | 0.3561 | 0.3561 | 0.3561 | | 1.9664 | 3.0 | 87 | 1.5057 | 0.3554 | 0.4545 | 0.4060 | 0.3734 | 0.4545 | 0.4129 | 0.3857 | 0.4545 | 0.4545 | 0.4545 | | 1.1934 | 4.0 | 116 | 1.4217 | 0.3130 | 0.4091 | 0.3530 | 0.3414 | 0.4091 | 0.3818 | 0.3629 | 0.4091 | 0.4091 | 0.4091 | | 1.0968 | 5.0 | 145 | 1.1879 | 0.4608 | 0.5758 | 0.5258 | 0.4807 | 0.5758 | 0.5438 | 0.5045 | 0.5758 | 0.5758 | 0.5758 | | 1.1313 | 6.0 | 174 | 1.2307 | 0.4964 | 0.5530 | 0.5243 | 0.5850 | 0.5530 | 0.6114 | 0.5196 | 0.5530 | 0.5530 | 0.5530 | | 1.0807 | 7.0 | 203 | 1.2771 | 0.4088 | 0.5303 | 0.4772 | 0.5393 | 0.5303 | 0.5816 | 0.4304 | 0.5303 | 0.5303 | 0.5303 | | 1.1825 | 8.0 | 232 | 1.2339 | 0.4528 | 0.5682 | 0.5175 | 0.5544 | 0.5682 | 0.6169 | 0.4920 | 0.5682 | 0.5682 | 0.5682 | | 0.4454 | 9.0 | 261 | 1.0474 | 0.6064 | 0.6970 | 0.6763 | 0.6334 | 0.6970 | 0.6868 | 0.6100 | 0.6970 | 0.6970 | 0.6970 | | 0.5439 | 10.0 | 290 | 1.6815 | 0.4580 | 0.5152 | 0.4920 | 0.5394 | 0.5152 | 0.5951 | 0.4903 | 0.5152 | 0.5152 | 0.5152 | | 0.4256 | 11.0 | 319 | 1.1378 | 0.5800 | 0.6667 | 0.6495 | 0.5801 | 0.6667 | 0.6435 | 0.5907 | 0.6667 | 0.6667 | 0.6667 | | 0.4968 | 12.0 | 348 | 1.4229 | 0.5307 | 0.6136 | 0.6013 | 0.5348 | 0.6136 | 0.6095 | 0.5486 | 0.6136 | 0.6136 | 0.6136 | | 0.3408 | 13.0 | 377 | 1.4445 | 0.5426 | 0.6288 | 0.6095 | 0.5559 | 0.6288 | 0.6307 | 0.5621 | 0.6288 | 0.6288 | 0.6288 | | 0.2914 | 14.0 | 406 | 1.4277 | 0.6009 | 0.6515 | 0.6470 | 0.7068 | 0.6515 | 0.6868 | 0.5958 | 0.6515 | 0.6515 | 0.6515 | | 0.2003 | 15.0 | 435 | 1.5517 | 0.5770 | 0.6288 | 0.6296 | 0.5890 | 0.6288 | 0.6475 | 0.5792 | 0.6288 | 0.6288 | 0.6288 | | 0.0871 | 16.0 
| 464 | 1.4812 | 0.5702 | 0.6515 | 0.6407 | 0.5777 | 0.6515 | 0.6491 | 0.5785 | 0.6515 | 0.6515 | 0.6515 | | 0.0352 | 17.0 | 493 | 2.1052 | 0.5007 | 0.5985 | 0.5744 | 0.5466 | 0.5985 | 0.6130 | 0.5127 | 0.5985 | 0.5985 | 0.5985 | | 0.0101 | 18.0 | 522 | 1.9978 | 0.5725 | 0.6212 | 0.6223 | 0.6152 | 0.6212 | 0.6559 | 0.5672 | 0.6212 | 0.6212 | 0.6212 | | 0.0035 | 19.0 | 551 | 2.0304 | 0.5880 | 0.6439 | 0.6388 | 0.6698 | 0.6439 | 0.6936 | 0.5805 | 0.6439 | 0.6439 | 0.6439 | | 0.0013 | 20.0 | 580 | 2.1374 | 0.5514 | 0.6364 | 0.6224 | 0.6025 | 0.6364 | 0.6765 | 0.5685 | 0.6364 | 0.6364 | 0.6364 | | 0.0589 | 21.0 | 609 | 1.7676 | 0.5879 | 0.6439 | 0.6396 | 0.5940 | 0.6439 | 0.6407 | 0.5889 | 0.6439 | 0.6439 | 0.6439 | | 0.0263 | 22.0 | 638 | 1.8416 | 0.5785 | 0.6439 | 0.6327 | 0.6016 | 0.6439 | 0.6454 | 0.5758 | 0.6439 | 0.6439 | 0.6439 | | 0.0028 | 23.0 | 667 | 1.9843 | 0.6068 | 0.6667 | 0.6569 | 0.6631 | 0.6667 | 0.6882 | 0.6069 | 0.6667 | 0.6667 | 0.6667 | | 0.0006 | 24.0 | 696 | 1.9432 | 0.6157 | 0.6742 | 0.6655 | 0.6603 | 0.6742 | 0.6853 | 0.6152 | 0.6742 | 0.6742 | 0.6742 | | 0.0004 | 25.0 | 725 | 1.9346 | 0.6089 | 0.6667 | 0.6569 | 0.6548 | 0.6667 | 0.6763 | 0.6073 | 0.6667 | 0.6667 | 0.6667 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/vit-tiny-patch16-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-tiny-patch16-224 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6141 - F1 Macro: 0.4385 - F1 Micro: 0.5303 - F1 Weighted: 0.4856 - Precision Macro: 0.5225 - Precision Micro: 0.5303 - Precision Weighted: 0.5788 - Recall Macro: 0.4858 - Recall Micro: 0.5303 - Recall Weighted: 0.5303 - Accuracy: 0.5303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9719 | 1.0 | 29 | 1.9209 | 0.0949 | 0.2197 | 0.1155 | 0.0891 | 0.2197 | 0.1051 | 0.1722 | 0.2197 | 0.2197 | 0.2197 | | 1.8717 | 2.0 | 58 | 2.0378 | 0.0953 | 0.1970 | 0.1069 | 0.1996 | 0.1970 | 0.2660 | 0.1794 | 0.1970 | 0.1970 | 0.1970 | | 1.9326 | 3.0 | 87 | 1.7680 | 0.2290 | 0.3939 | 0.2939 | 0.2151 | 0.3939 | 0.2682 | 0.3004 | 0.3939 | 0.3939 | 0.3939 | | 1.2873 | 4.0 | 116 | 1.5892 | 0.3502 | 0.4470 | 0.4082 | 0.4831 | 0.4470 | 0.5140 | 0.3646 | 0.4470 | 0.4470 | 0.4470 | | 1.3997 | 5.0 | 145 | 1.4773 | 0.3481 | 0.5 | 0.4245 | 0.3463 | 0.5 | 0.4119 | 0.4052 | 0.5 | 0.5 | 0.5 | | 1.7041 | 6.0 | 174 | 1.4406 | 0.4266 | 0.5379 | 0.5005 | 0.5011 | 0.5379 | 0.5628 | 0.4529 | 0.5379 | 0.5379 | 0.5379 | | 1.1863 | 7.0 | 203 | 1.3680 | 0.4759 | 0.5682 | 0.5400 | 0.5559 | 0.5682 | 0.6032 | 0.4831 | 0.5682 | 0.5682 | 0.5682 | | 0.9817 | 8.0 | 232 | 1.3515 | 0.4399 | 0.5227 | 0.4969 | 0.4445 | 0.5227 | 0.5088 | 0.4722 | 0.5227 | 0.5227 | 0.5227 | | 0.617 | 9.0 | 261 | 1.3867 | 0.4895 | 0.5909 | 0.5555 | 0.5136 | 0.5909 | 0.5776 | 0.5183 | 0.5909 | 0.5909 | 0.5909 | | 1.0365 | 10.0 | 290 | 1.4607 | 0.4313 | 0.5379 | 0.4961 | 0.4371 | 0.5379 | 0.4997 | 0.4674 | 0.5379 | 0.5379 | 0.5379 | | 0.6815 | 11.0 | 319 | 1.3133 | 0.4962 | 0.5909 | 0.5664 | 0.5087 | 0.5909 | 0.5742 | 0.5133 | 0.5909 | 0.5909 | 0.5909 | | 0.4153 | 12.0 | 348 | 1.3528 | 0.5082 | 0.5909 | 0.5735 | 0.5185 | 0.5909 | 0.5820 | 0.5202 | 0.5909 | 0.5909 | 0.5909 | | 0.3396 | 13.0 | 377 | 1.3856 | 0.5372 | 0.5909 | 0.5830 | 0.5623 | 0.5909 | 0.6018 | 0.5387 | 0.5909 | 0.5909 | 0.5909 | | 0.5415 | 14.0 | 406 | 1.4252 | 0.5132 | 0.5909 | 0.5795 | 0.5223 | 0.5909 | 0.5893 | 0.5255 | 0.5909 | 0.5909 | 0.5909 | | 0.4421 | 15.0 | 435 | 1.4081 | 0.5574 | 0.6136 | 0.6086 | 0.5753 | 0.6136 | 0.6149 | 0.5532 | 0.6136 | 0.6136 | 0.6136 | | 0.2893 | 16.0 | 464 | 1.5285 | 0.5127 | 0.5985 | 0.5833 | 0.5059 | 0.5985 | 0.5752 | 0.5253 | 0.5985 | 0.5985 | 
0.5985 | | 0.2403 | 17.0 | 493 | 1.4820 | 0.5395 | 0.6288 | 0.6065 | 0.5808 | 0.6288 | 0.6380 | 0.5460 | 0.6288 | 0.6288 | 0.6288 | | 0.1087 | 18.0 | 522 | 1.3999 | 0.5320 | 0.6061 | 0.6009 | 0.5612 | 0.6061 | 0.6211 | 0.5261 | 0.6061 | 0.6061 | 0.6061 | | 0.2619 | 19.0 | 551 | 1.4408 | 0.5618 | 0.6136 | 0.6037 | 0.6154 | 0.6136 | 0.6225 | 0.5501 | 0.6136 | 0.6136 | 0.6136 | | 0.1154 | 20.0 | 580 | 1.4516 | 0.5402 | 0.6288 | 0.6090 | 0.5538 | 0.6288 | 0.6145 | 0.5492 | 0.6288 | 0.6288 | 0.6288 | | 0.1367 | 21.0 | 609 | 1.5306 | 0.5254 | 0.6136 | 0.5942 | 0.5321 | 0.6136 | 0.5923 | 0.5340 | 0.6136 | 0.6136 | 0.6136 | | 0.0839 | 22.0 | 638 | 1.6397 | 0.5154 | 0.5833 | 0.5756 | 0.5274 | 0.5833 | 0.5895 | 0.5252 | 0.5833 | 0.5833 | 0.5833 | | 0.1818 | 23.0 | 667 | 1.6416 | 0.5656 | 0.6515 | 0.6359 | 0.5848 | 0.6515 | 0.6456 | 0.5696 | 0.6515 | 0.6515 | 0.6515 | | 0.0781 | 24.0 | 696 | 1.6026 | 0.5393 | 0.6212 | 0.6079 | 0.5524 | 0.6212 | 0.6118 | 0.5412 | 0.6212 | 0.6212 | 0.6212 | | 0.0792 | 25.0 | 725 | 1.5997 | 0.5494 | 0.6288 | 0.6180 | 0.5716 | 0.6288 | 0.6297 | 0.5480 | 0.6288 | 0.6288 | 0.6288 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/resnet-50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6576 - F1 Macro: 0.2323 - F1 Micro: 0.3485 - F1 Weighted: 0.2841 - Precision Macro: 0.2908 - Precision Micro: 0.3485 - Precision Weighted: 0.3373 - Recall Macro: 0.2776 - Recall Micro: 0.3485 - Recall Weighted: 0.3485 - Accuracy: 0.3485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9401 | 1.0 | 29 | 1.9376 | 0.0673 | 0.1364 | 0.0898 | 0.0524 | 0.1364 | 0.0693 | 0.1020 | 0.1364 | 0.1364 | 0.1364 | | 1.9122 | 2.0 | 58 | 1.9165 | 0.0601 | 0.2045 | 0.0852 | 0.0463 | 0.2045 | 0.0648 | 0.1433 | 0.2045 | 0.2045 | 0.2045 | | 1.9226 | 3.0 | 87 | 1.8974 | 0.0729 | 0.2197 | 0.1023 | 0.0542 | 0.2197 | 0.0754 | 0.1547 | 0.2197 | 0.2197 | 0.2197 | | 1.8609 | 4.0 | 116 | 1.8879 | 0.0479 | 0.1894 | 0.0686 | 0.0293 | 0.1894 | 0.0419 | 0.1323 | 0.1894 | 0.1894 | 0.1894 | | 1.8345 | 5.0 | 145 | 1.8808 | 0.0498 | 0.2045 | 0.0713 | 0.0301 | 0.2045 | 0.0431 | 0.1429 | 0.2045 | 0.2045 | 0.2045 | | 1.8965 | 6.0 | 174 | 1.8803 | 0.0555 | 0.1894 | 0.0787 | 0.0379 | 0.1894 | 0.0534 | 0.1327 | 0.1894 | 0.1894 | 0.1894 | | 1.8651 | 7.0 | 203 | 1.8732 | 0.0787 | 0.2273 | 0.1100 | 0.0607 | 0.2273 | 0.0840 | 0.1604 | 0.2273 | 0.2273 | 0.2273 | | 1.8235 | 8.0 | 232 | 1.8693 | 0.0573 | 0.1970 | 0.0813 | 0.0393 | 0.1970 | 0.0554 | 0.1380 | 0.1970 | 0.1970 | 0.1970 | | 1.7786 | 9.0 | 261 | 1.8613 | 0.1112 | 0.25 | 0.1502 | 0.2131 | 0.25 | 0.2558 | 0.1808 | 0.25 | 0.25 | 0.25 | | 1.9601 | 10.0 | 290 | 1.8535 | 0.1144 | 0.2576 | 0.1549 | 0.1437 | 0.2576 | 0.1793 | 0.1865 | 0.2576 | 0.2576 | 0.2576 | | 1.7922 | 11.0 | 319 | 1.8492 | 0.1222 | 0.2727 | 0.1652 | 0.1487 | 0.2727 | 0.1860 | 0.1983 | 0.2727 | 0.2727 | 0.2727 | | 1.8398 | 12.0 | 348 | 1.8497 | 0.1371 | 0.2727 | 0.1810 | 0.1368 | 0.2727 | 0.1724 | 0.2022 | 0.2727 | 0.2727 | 0.2727 | | 1.8811 | 13.0 | 377 | 1.8354 | 0.1099 | 0.2424 | 0.1484 | 0.1170 | 0.2424 | 0.1490 | 0.1780 | 0.2424 | 0.2424 | 0.2424 | | 1.7813 | 14.0 | 406 | 1.8299 | 0.1445 | 0.2955 | 0.1925 | 0.1274 | 0.2955 | 0.1643 | 0.2164 | 0.2955 | 0.2955 | 0.2955 | | 1.8719 | 15.0 | 435 | 1.8213 | 0.1608 | 0.2955 | 0.2083 | 0.1462 | 0.2955 | 0.1838 | 0.2213 | 0.2955 | 0.2955 | 0.2955 | | 1.7755 | 16.0 | 464 | 1.8057 | 0.1735 | 0.3182 | 0.2247 | 0.1522 | 0.3182 | 0.1921 | 0.2392 | 0.3182 | 0.3182 | 0.3182 | 
| 1.7729 | 17.0 | 493 | 1.7964 | 0.1625 | 0.3106 | 0.2129 | 0.1450 | 0.3106 | 0.1843 | 0.2313 | 0.3106 | 0.3106 | 0.3106 | | 1.687 | 18.0 | 522 | 1.7865 | 0.1719 | 0.3182 | 0.2237 | 0.1576 | 0.3182 | 0.1987 | 0.2381 | 0.3182 | 0.3182 | 0.3182 | | 1.7207 | 19.0 | 551 | 1.7771 | 0.1823 | 0.3485 | 0.2394 | 0.1572 | 0.3485 | 0.2012 | 0.2592 | 0.3485 | 0.3485 | 0.3485 | | 1.7066 | 20.0 | 580 | 1.7672 | 0.1857 | 0.3485 | 0.2424 | 0.1578 | 0.3485 | 0.2015 | 0.2607 | 0.3485 | 0.3485 | 0.3485 | | 1.7726 | 21.0 | 609 | 1.7596 | 0.2147 | 0.3636 | 0.2710 | 0.2530 | 0.3636 | 0.2931 | 0.2766 | 0.3636 | 0.3636 | 0.3636 | | 1.7349 | 22.0 | 638 | 1.7517 | 0.2081 | 0.3485 | 0.2627 | 0.2145 | 0.3485 | 0.2554 | 0.2660 | 0.3485 | 0.3485 | 0.3485 | | 1.7956 | 23.0 | 667 | 1.7437 | 0.2018 | 0.3561 | 0.2590 | 0.1970 | 0.3561 | 0.2402 | 0.2687 | 0.3561 | 0.3561 | 0.3561 | | 1.4672 | 24.0 | 696 | 1.7264 | 0.2033 | 0.3636 | 0.2611 | 0.2975 | 0.3636 | 0.3356 | 0.2740 | 0.3636 | 0.3636 | 0.3636 | | 1.6008 | 25.0 | 725 | 1.7233 | 0.2323 | 0.3788 | 0.2905 | 0.2533 | 0.3788 | 0.2963 | 0.2898 | 0.3788 | 0.3788 | 0.3788 | | 1.6899 | 26.0 | 754 | 1.7199 | 0.2261 | 0.3788 | 0.2852 | 0.2426 | 0.3788 | 0.2863 | 0.2887 | 0.3788 | 0.3788 | 0.3788 | | 1.7073 | 27.0 | 783 | 1.7113 | 0.2171 | 0.3712 | 0.2752 | 0.2305 | 0.3712 | 0.2729 | 0.2819 | 0.3712 | 0.3712 | 0.3712 | | 1.6558 | 28.0 | 812 | 1.6996 | 0.2311 | 0.3864 | 0.2923 | 0.2212 | 0.3864 | 0.2677 | 0.2955 | 0.3864 | 0.3864 | 0.3864 | | 1.4732 | 29.0 | 841 | 1.7078 | 0.2320 | 0.3788 | 0.2901 | 0.2301 | 0.3788 | 0.2742 | 0.2909 | 0.3788 | 0.3788 | 0.3788 | | 1.6134 | 30.0 | 870 | 1.7132 | 0.2248 | 0.3788 | 0.2845 | 0.2245 | 0.3788 | 0.2692 | 0.2887 | 0.3788 | 0.3788 | 0.3788 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/vit-base-patch16-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3122 - F1 Macro: 0.5397 - F1 Micro: 0.6212 - F1 Weighted: 0.6077 - Precision Macro: 0.5343 - Precision Micro: 0.6212 - Precision Weighted: 0.6084 - Recall Macro: 0.5571 - Recall Micro: 0.6212 - Recall Weighted: 0.6212 - Accuracy: 0.6212 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9037 | 1.0 | 29 | 1.8618 | 0.1250 | 0.2197 | 0.1592 | 0.1401 | 0.2197 | 0.1692 | 0.1674 | 0.2197 | 0.2197 | 0.2197 | | 1.6981 | 2.0 | 58 | 1.8760 | 0.1537 | 0.2424 | 0.1896 | 0.2152 | 0.2424 | 0.2787 | 0.2068 | 0.2424 | 0.2424 | 0.2424 | | 1.7426 | 3.0 | 87 | 1.6971 | 0.2272 | 0.3333 | 0.2622 | 0.1959 | 0.3333 | 0.2233 | 0.2846 | 0.3333 | 0.3333 | 0.3333 | | 1.1847 | 4.0 | 116 | 1.5082 | 0.3360 | 0.4242 | 0.3911 | 0.3925 | 0.4242 | 0.4281 | 0.3553 | 0.4242 | 0.4242 | 0.4242 | | 1.3906 | 5.0 | 145 | 1.4063 | 0.3152 | 0.4621 | 0.3815 | 0.2727 | 0.4621 | 0.3279 | 0.3785 | 0.4621 | 0.4621 | 0.4621 | | 1.5575 | 6.0 | 174 | 1.3833 | 0.4414 | 0.4621 | 0.4526 | 0.4850 | 0.4621 | 0.4941 | 0.4402 | 0.4621 | 0.4621 | 0.4621 | | 1.1063 | 7.0 | 203 | 1.2431 | 0.4750 | 0.5833 | 0.5453 | 0.5898 | 0.5833 | 0.6329 | 0.4890 | 0.5833 | 0.5833 | 0.5833 | | 1.1503 | 8.0 | 232 | 1.3635 | 0.4036 | 0.4924 | 0.4586 | 0.4145 | 0.4924 | 0.4762 | 0.4436 | 0.4924 | 0.4924 | 0.4924 | | 0.5124 | 9.0 | 261 | 1.1603 | 0.5463 | 0.6288 | 0.6136 | 0.5488 | 0.6288 | 0.6113 | 0.5551 | 0.6288 | 0.6288 | 0.6288 | | 0.6648 | 10.0 | 290 | 1.4136 | 0.4184 | 0.5 | 0.4713 | 0.4713 | 0.5 | 0.5275 | 0.4413 | 0.5 | 0.5 | 0.5 | | 0.2917 | 11.0 | 319 | 1.2004 | 0.5155 | 0.6061 | 0.5892 | 0.5180 | 0.6061 | 0.5882 | 0.5268 | 0.6061 | 0.6061 | 0.6061 | | 0.4962 | 12.0 | 348 | 1.3730 | 0.4970 | 0.5682 | 0.5671 | 0.5094 | 0.5682 | 0.5909 | 0.5109 | 0.5682 | 0.5682 | 0.5682 | | 0.5723 | 13.0 | 377 | 1.3377 | 0.5705 | 0.6136 | 0.6077 | 0.7050 | 0.6136 | 0.6879 | 0.5756 | 0.6136 | 0.6136 | 0.6136 | | 0.4589 | 14.0 | 406 | 1.3717 | 0.5648 | 0.6136 | 0.6094 | 0.6239 | 0.6136 | 0.6458 | 0.5609 | 0.6136 | 0.6136 | 0.6136 | | 0.2544 | 15.0 | 435 | 1.4129 | 0.5086 | 0.5985 | 0.5793 | 0.5140 | 0.5985 | 0.5772 | 0.5187 | 0.5985 | 0.5985 | 0.5985 | | 0.3179 | 16.0 | 464 | 1.3589 | 0.5882 | 0.6439 | 0.6347 | 0.6912 | 0.6439 | 0.6603 | 0.5777 | 0.6439 | 0.6439 | 
0.6439 | | 0.1304 | 17.0 | 493 | 1.5604 | 0.5010 | 0.5758 | 0.5606 | 0.5123 | 0.5758 | 0.5669 | 0.5076 | 0.5758 | 0.5758 | 0.5758 | | 0.0887 | 18.0 | 522 | 1.6231 | 0.5091 | 0.6061 | 0.5800 | 0.5344 | 0.6061 | 0.5917 | 0.5190 | 0.6061 | 0.6061 | 0.6061 | | 0.0254 | 19.0 | 551 | 1.6095 | 0.5625 | 0.6136 | 0.6070 | 0.6642 | 0.6136 | 0.6353 | 0.5520 | 0.6136 | 0.6136 | 0.6136 | | 0.0908 | 20.0 | 580 | 1.6941 | 0.5270 | 0.6136 | 0.5962 | 0.5331 | 0.6136 | 0.6004 | 0.5381 | 0.6136 | 0.6136 | 0.6136 | | 0.0913 | 21.0 | 609 | 1.6917 | 0.5537 | 0.6136 | 0.6018 | 0.5909 | 0.6136 | 0.6169 | 0.5579 | 0.6136 | 0.6136 | 0.6136 | | 0.015 | 22.0 | 638 | 1.8274 | 0.4866 | 0.5682 | 0.5512 | 0.4855 | 0.5682 | 0.5477 | 0.5003 | 0.5682 | 0.5682 | 0.5682 | | 0.0156 | 23.0 | 667 | 1.7322 | 0.5772 | 0.6439 | 0.6233 | 0.6870 | 0.6439 | 0.6598 | 0.5802 | 0.6439 | 0.6439 | 0.6439 | | 0.0275 | 24.0 | 696 | 1.6262 | 0.5293 | 0.6212 | 0.6006 | 0.5274 | 0.6212 | 0.5913 | 0.5422 | 0.6212 | 0.6212 | 0.6212 | | 0.0034 | 25.0 | 725 | 1.7278 | 0.5680 | 0.6591 | 0.6409 | 0.5674 | 0.6591 | 0.6333 | 0.5786 | 0.6591 | 0.6591 | 0.6591 | | 0.0021 | 26.0 | 754 | 1.7111 | 0.5542 | 0.6439 | 0.6250 | 0.5506 | 0.6439 | 0.6148 | 0.5657 | 0.6439 | 0.6439 | 0.6439 | | 0.0021 | 27.0 | 783 | 1.7412 | 0.5556 | 0.6439 | 0.6257 | 0.5507 | 0.6439 | 0.6163 | 0.5684 | 0.6439 | 0.6439 | 0.6439 | | 0.0079 | 28.0 | 812 | 1.8651 | 0.5506 | 0.6364 | 0.6176 | 0.5427 | 0.6364 | 0.6078 | 0.5670 | 0.6364 | 0.6364 | 0.6364 | | 0.0018 | 29.0 | 841 | 1.8016 | 0.5508 | 0.6364 | 0.6184 | 0.5425 | 0.6364 | 0.6074 | 0.5654 | 0.6364 | 0.6364 | 0.6364 | | 0.0068 | 30.0 | 870 | 1.7936 | 0.5510 | 0.6364 | 0.6182 | 0.5436 | 0.6364 | 0.6073 | 0.5650 | 0.6364 | 0.6364 | 0.6364 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
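The hyperparameter list above maps directly onto Hugging Face `TrainingArguments`. As a rough sketch (the `output_dir` and anything not listed in the card are placeholder assumptions, not values taken from this run), the same settings could be written as:

```python
# Hedged sketch of the hyperparameters listed above; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224",    # placeholder, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,        # effective train batch size of 16
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    seed=42,
)
```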
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/vit-base-patch16-224-in21k_16batch
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k_16batch This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2813 - F1 Macro: 0.4280 - F1 Micro: 0.5455 - F1 Weighted: 0.4882 - Precision Macro: 0.4004 - Precision Micro: 0.5455 - Precision Weighted: 0.4529 - Recall Macro: 0.4762 - Recall Micro: 0.5455 - Recall Weighted: 0.5455 - Accuracy: 0.5455 ## Model description Using a batch size of 16 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9371 | 1.0 | 29 | 1.9372 | 0.0504 | 0.1212 | 0.0604 | 0.0334 | 0.1212 | 0.0403 | 0.1029 | 0.1212 | 0.1212 | 0.1212 | | 1.9078 | 2.0 | 58 | 1.9066 | 0.0454 | 0.1818 | 0.0602 | 0.0272 | 0.1818 | 0.0361 | 0.1371 | 0.1818 | 0.1818 | 0.1818 | | 1.9276 | 3.0 | 87 | 1.8808 | 0.0696 | 0.1818 | 0.0968 | 0.0492 | 0.1818 | 0.0682 | 0.1295 | 0.1818 | 0.1818 | 0.1818 | | 1.8373 | 4.0 | 116 | 1.8696 | 0.0485 | 0.2045 | 0.0695 | 0.0292 | 0.2045 | 0.0418 | 0.1429 | 0.2045 | 0.2045 | 0.2045 | | 1.8152 | 5.0 | 145 | 1.8490 | 0.1339 | 0.2576 | 0.1745 | 0.1298 | 0.2576 | 0.1640 | 0.1944 | 0.2576 | 0.2576 | 0.2576 | | 1.8488 | 6.0 | 174 | 1.8281 | 0.1379 | 0.2727 | 0.1817 | 0.1512 | 0.2727 | 0.1891 | 0.1997 | 0.2727 | 0.2727 | 0.2727 | | 1.7626 | 7.0 | 203 | 1.7917 | 0.2271 | 0.3333 | 0.2718 | 0.1922 | 0.3333 | 0.2298 | 0.2783 | 0.3333 | 0.3333 | 0.3333 | | 1.7169 | 8.0 | 232 | 1.7478 | 0.2887 | 0.4242 | 0.3465 | 0.2706 | 0.4242 | 0.3154 | 0.3426 | 0.4242 | 0.4242 | 0.4242 | | 1.5364 | 9.0 | 261 | 1.7098 | 0.2835 | 0.4091 | 0.3409 | 0.2720 | 0.4091 | 0.3245 | 0.3324 | 0.4091 | 0.4091 | 0.4091 | | 1.7373 | 10.0 | 290 | 1.6765 | 0.2906 | 0.4167 | 0.3463 | 0.2726 | 0.4167 | 0.3157 | 0.3386 | 0.4167 | 0.4167 | 0.4167 | | 1.5345 | 11.0 | 319 | 1.6423 | 0.2805 | 0.3939 | 0.3342 | 0.3728 | 0.3939 | 0.4258 | 0.3275 | 0.3939 | 0.3939 | 0.3939 | | 1.6421 | 12.0 | 348 | 1.6103 | 0.3324 | 0.4697 | 0.3978 | 0.4583 | 0.4697 | 0.5178 | 0.3760 | 0.4697 | 0.4697 | 0.4697 | | 1.5266 | 13.0 | 377 | 1.5835 | 0.3171 | 0.4621 | 0.3822 | 0.2917 | 0.4621 | 0.3483 | 0.3748 | 0.4621 | 0.4621 | 0.4621 | | 1.5182 | 14.0 | 406 | 1.5633 | 0.3133 | 0.4242 | 0.3680 | 0.3634 | 0.4242 | 0.4009 | 0.3568 | 0.4242 | 0.4242 | 0.4242 | | 1.5341 | 15.0 | 435 | 1.5528 | 0.3015 | 0.4167 | 0.3585 | 0.3109 | 0.4167 | 0.3638 | 0.3499 | 0.4167 | 0.4167 | 
0.4167 | | 1.3961 | 16.0 | 464 | 1.5273 | 0.3449 | 0.4545 | 0.3991 | 0.4329 | 0.4545 | 0.4704 | 0.3839 | 0.4545 | 0.4545 | 0.4545 | | 1.3601 | 17.0 | 493 | 1.4971 | 0.3670 | 0.5 | 0.4357 | 0.5047 | 0.5 | 0.5382 | 0.4078 | 0.5 | 0.5 | 0.5 | | 1.2535 | 18.0 | 522 | 1.5006 | 0.3511 | 0.4621 | 0.4138 | 0.4778 | 0.4621 | 0.5101 | 0.3872 | 0.4621 | 0.4621 | 0.4621 | | 1.2375 | 19.0 | 551 | 1.4659 | 0.3655 | 0.4924 | 0.4345 | 0.4298 | 0.4924 | 0.4797 | 0.4020 | 0.4924 | 0.4924 | 0.4924 | | 1.2141 | 20.0 | 580 | 1.4407 | 0.3914 | 0.5076 | 0.4565 | 0.4650 | 0.5076 | 0.5087 | 0.4217 | 0.5076 | 0.5076 | 0.5076 | | 1.2831 | 21.0 | 609 | 1.4454 | 0.3965 | 0.5152 | 0.4645 | 0.4801 | 0.5152 | 0.5265 | 0.4214 | 0.5152 | 0.5152 | 0.5152 | | 1.1543 | 22.0 | 638 | 1.4167 | 0.4285 | 0.5455 | 0.4997 | 0.4781 | 0.5455 | 0.5309 | 0.4521 | 0.5455 | 0.5455 | 0.5455 | | 1.4079 | 23.0 | 667 | 1.4465 | 0.3675 | 0.4621 | 0.4269 | 0.4187 | 0.4621 | 0.4676 | 0.3929 | 0.4621 | 0.4621 | 0.4621 | | 1.0619 | 24.0 | 696 | 1.4249 | 0.4092 | 0.5076 | 0.4724 | 0.4659 | 0.5076 | 0.5180 | 0.4336 | 0.5076 | 0.5076 | 0.5076 | | 1.1059 | 25.0 | 725 | 1.3834 | 0.4356 | 0.5530 | 0.5061 | 0.5025 | 0.5530 | 0.5491 | 0.4594 | 0.5530 | 0.5530 | 0.5530 | | 1.192 | 26.0 | 754 | 1.3784 | 0.4286 | 0.5379 | 0.4893 | 0.4566 | 0.5379 | 0.4969 | 0.4544 | 0.5379 | 0.5379 | 0.5379 | | 1.21 | 27.0 | 783 | 1.3874 | 0.4409 | 0.5379 | 0.5060 | 0.4709 | 0.5379 | 0.5258 | 0.4616 | 0.5379 | 0.5379 | 0.5379 | | 1.0901 | 28.0 | 812 | 1.3621 | 0.4402 | 0.5379 | 0.5074 | 0.4635 | 0.5379 | 0.5204 | 0.4557 | 0.5379 | 0.5379 | 0.5379 | | 1.1254 | 29.0 | 841 | 1.3714 | 0.4265 | 0.5227 | 0.4873 | 0.4492 | 0.5227 | 0.4984 | 0.4449 | 0.5227 | 0.5227 | 0.5227 | | 0.9345 | 30.0 | 870 | 1.3525 | 0.4425 | 0.5379 | 0.5074 | 0.4736 | 0.5379 | 0.5264 | 0.4557 | 0.5379 | 0.5379 | 0.5379 | | 1.2036 | 31.0 | 899 | 1.3592 | 0.4363 | 0.5379 | 0.5020 | 0.4869 | 0.5379 | 0.5368 | 0.4533 | 0.5379 | 0.5379 | 0.5379 | | 1.036 | 32.0 | 928 | 1.3362 | 0.4451 | 0.5455 | 0.5109 | 0.4673 | 0.5455 | 0.5226 | 0.4637 | 0.5455 | 0.5455 | 0.5455 | | 0.9979 | 33.0 | 957 | 1.3492 | 0.4454 | 0.5455 | 0.5134 | 0.4808 | 0.5455 | 0.5358 | 0.4620 | 0.5455 | 0.5455 | 0.5455 | | 0.8353 | 34.0 | 986 | 1.3402 | 0.4635 | 0.5606 | 0.5301 | 0.4659 | 0.5606 | 0.5268 | 0.4854 | 0.5606 | 0.5606 | 0.5606 | | 0.9384 | 35.0 | 1015 | 1.3414 | 0.4408 | 0.5455 | 0.5088 | 0.4664 | 0.5455 | 0.5237 | 0.4602 | 0.5455 | 0.5455 | 0.5455 | | 0.996 | 36.0 | 1044 | 1.3405 | 0.4559 | 0.5530 | 0.5235 | 0.4795 | 0.5530 | 0.5377 | 0.4715 | 0.5530 | 0.5530 | 0.5530 | | 0.9613 | 37.0 | 1073 | 1.3357 | 0.4847 | 0.5833 | 0.5535 | 0.5011 | 0.5833 | 0.5612 | 0.5020 | 0.5833 | 0.5833 | 0.5833 | | 0.8507 | 38.0 | 1102 | 1.3347 | 0.4760 | 0.5758 | 0.5454 | 0.4897 | 0.5758 | 0.5510 | 0.4940 | 0.5758 | 0.5758 | 0.5758 | | 1.1563 | 39.0 | 1131 | 1.3396 | 0.4553 | 0.5530 | 0.5250 | 0.4608 | 0.5530 | 0.5234 | 0.4735 | 0.5530 | 0.5530 | 0.5530 | | 0.9681 | 40.0 | 1160 | 1.3371 | 0.4703 | 0.5682 | 0.5396 | 0.4816 | 0.5682 | 0.5445 | 0.4887 | 0.5682 | 0.5682 | 0.5682 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
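The macro, micro, and weighted F1 / precision / recall columns reported above follow scikit-learn's averaging conventions. A minimal sketch with placeholder predictions (not this card's actual data) illustrates how the three averages differ:

```python
# Hedged sketch: toy labels for a 7-class problem; y_true / y_pred are placeholders.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 3, 4, 5, 6]   # placeholder gold labels
y_pred = [0, 1, 2, 1, 3, 4, 5, 0]   # placeholder model predictions

for average in ("macro", "micro", "weighted"):
    f1 = f1_score(y_true, y_pred, average=average)
    prec = precision_score(y_true, y_pred, average=average, zero_division=0)
    rec = recall_score(y_true, y_pred, average=average, zero_division=0)
    print(f"{average}: F1={f1:.4f} precision={prec:.4f} recall={rec:.4f}")
```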
[ "-", "0", "1", "2", "3", "4", "5" ]
rescu/deit-base-patch16-224-finetuned-plantvillage
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-patch16-224-finetuned-plantvillage This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0061 - Accuracy: 0.9989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0029 | 1.0 | 2223 | 0.0100 | 0.9972 | | 0.0001 | 1.9994 | 4444 | 0.0061 | 0.9989 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
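For inference, the fine-tuned checkpoint can be loaded with the image-classification pipeline. A minimal sketch, assuming the model is pulled from this repository and given a local leaf photo (the file path is a placeholder):

```python
# Hedged sketch: pipeline inference; "leaf.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="rescu/deit-base-patch16-224-finetuned-plantvillage",
)
predictions = classifier("leaf.jpg")
print(predictions[0])   # e.g. {'label': 'tomato___healthy', 'score': ...}
```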
[ "apple___apple_scab", "apple___black_rot", "apple___cedar_apple_rust", "apple___healthy", "background_without_leaves", "blueberry___healthy", "cherry___powdery_mildew", "cherry___healthy", "corn___cercospora_leaf_spot gray_leaf_spot", "corn___common_rust", "corn___northern_leaf_blight", "corn___healthy", "grape___black_rot", "grape___esca_(black_measles)", "grape___leaf_blight_(isariopsis_leaf_spot)", "grape___healthy", "orange___haunglongbing_(citrus_greening)", "peach___bacterial_spot", "peach___healthy", "pepper,_bell___bacterial_spot", "pepper,_bell___healthy", "potato___early_blight", "potato___late_blight", "potato___healthy", "raspberry___healthy", "soybean___healthy", "squash___powdery_mildew", "strawberry___leaf_scorch", "strawberry___healthy", "tomato___bacterial_spot", "tomato___early_blight", "tomato___late_blight", "tomato___leaf_mold", "tomato___septoria_leaf_spot", "tomato___spider_mites two-spotted_spider_mite", "tomato___target_spot", "tomato___tomato_yellow_leaf_curl_virus", "tomato___tomato_mosaic_virus", "tomato___healthy" ]
corranm/square_run_with_16_batch_size
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_with_16_batch_size This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4457 - F1 Macro: 0.4685 - F1 Micro: 0.5455 - F1 Weighted: 0.5242 - Precision Macro: 0.5341 - Precision Micro: 0.5455 - Precision Weighted: 0.5870 - Recall Macro: 0.4829 - Recall Micro: 0.5455 - Recall Weighted: 0.5455 - Accuracy: 0.5455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9976 | 1.0 | 29 | 1.9107 | 0.0915 | 0.2045 | 0.1209 | 0.0793 | 0.2045 | 0.1019 | 0.1535 | 0.2045 | 0.2045 | 0.2045 | | 1.7575 | 2.0 | 58 | 1.8877 | 0.1474 | 0.2348 | 0.1805 | 0.1704 | 0.2348 | 0.2163 | 0.1989 | 0.2348 | 0.2348 | 0.2348 | | 1.8336 | 3.0 | 87 | 1.7319 | 0.1659 | 0.3182 | 0.2117 | 0.1611 | 0.3182 | 0.2133 | 0.2586 | 0.3182 | 0.3182 | 0.3182 | | 1.452 | 4.0 | 116 | 1.5316 | 0.3336 | 0.4167 | 0.3752 | 0.3518 | 0.4167 | 0.3903 | 0.3682 | 0.4167 | 0.4167 | 0.4167 | | 1.2545 | 5.0 | 145 | 1.4192 | 0.3999 | 0.4848 | 0.4447 | 0.4601 | 0.4848 | 0.5021 | 0.4318 | 0.4848 | 0.4848 | 0.4848 | | 1.6479 | 6.0 | 174 | 1.3642 | 0.4649 | 0.5455 | 0.5265 | 0.5072 | 0.5455 | 0.5559 | 0.4750 | 0.5455 | 0.5455 | 0.5455 | | 1.301 | 7.0 | 203 | 1.3015 | 0.4178 | 0.5303 | 0.4735 | 0.4090 | 0.5303 | 0.4503 | 0.4535 | 0.5303 | 0.5303 | 0.5303 | | 0.9006 | 8.0 | 232 | 1.4861 | 0.4234 | 0.4924 | 0.4699 | 0.4586 | 0.4924 | 0.5286 | 0.4621 | 0.4924 | 0.4924 | 0.4924 | | 0.4134 | 9.0 | 261 | 1.2101 | 0.4852 | 0.5833 | 0.5545 | 0.5427 | 0.5833 | 0.5894 | 0.5010 | 0.5833 | 0.5833 | 0.5833 | | 0.9532 | 10.0 | 290 | 1.3783 | 0.4577 | 0.5682 | 0.5204 | 0.4557 | 0.5682 | 0.5160 | 0.4972 | 0.5682 | 0.5682 | 0.5682 | | 0.4521 | 11.0 | 319 | 1.3602 | 0.5266 | 0.6136 | 0.5923 | 0.5296 | 0.6136 | 0.5907 | 0.5403 | 0.6136 | 0.6136 | 0.6136 | | 0.633 | 12.0 | 348 | 1.4293 | 0.5032 | 0.5833 | 0.5727 | 0.4969 | 0.5833 | 0.5674 | 0.5140 | 0.5833 | 0.5833 | 0.5833 | | 0.4268 | 13.0 | 377 | 1.4388 | 0.5031 | 0.5833 | 0.5676 | 0.5543 | 0.5833 | 0.6189 | 0.5124 | 0.5833 | 0.5833 | 0.5833 | | 0.2857 | 14.0 | 406 | 1.6012 | 0.5071 | 0.5833 | 0.5676 | 0.5209 | 0.5833 | 0.5879 | 0.5211 | 0.5833 | 0.5833 | 0.5833 | | 0.2606 | 15.0 | 435 | 1.5817 | 0.5579 | 0.6136 | 0.6109 | 0.5657 | 0.6136 | 0.6178 | 0.5590 | 0.6136 | 0.6136 | 0.6136 | | 0.2028 | 
16.0 | 464 | 1.8048 | 0.4526 | 0.5227 | 0.5112 | 0.4703 | 0.5227 | 0.5378 | 0.4668 | 0.5227 | 0.5227 | 0.5227 | | 0.3251 | 17.0 | 493 | 1.6340 | 0.4942 | 0.5833 | 0.5625 | 0.5049 | 0.5833 | 0.5631 | 0.5031 | 0.5833 | 0.5833 | 0.5833 | | 0.0369 | 18.0 | 522 | 1.5847 | 0.5860 | 0.6439 | 0.6349 | 0.6267 | 0.6439 | 0.6476 | 0.5824 | 0.6439 | 0.6439 | 0.6439 | | 0.1133 | 19.0 | 551 | 1.5825 | 0.5457 | 0.6288 | 0.6157 | 0.5377 | 0.6288 | 0.6111 | 0.5615 | 0.6288 | 0.6288 | 0.6288 | | 0.0457 | 20.0 | 580 | 1.7253 | 0.5258 | 0.6136 | 0.5938 | 0.5229 | 0.6136 | 0.5854 | 0.5391 | 0.6136 | 0.6136 | 0.6136 | | 0.1109 | 21.0 | 609 | 1.7898 | 0.5708 | 0.6212 | 0.6154 | 0.6150 | 0.6212 | 0.6283 | 0.5613 | 0.6212 | 0.6212 | 0.6212 | | 0.046 | 22.0 | 638 | 1.7368 | 0.5656 | 0.6136 | 0.6029 | 0.6021 | 0.6136 | 0.6103 | 0.5615 | 0.6136 | 0.6136 | 0.6136 | | 0.0553 | 23.0 | 667 | 2.2478 | 0.4822 | 0.5682 | 0.5430 | 0.4851 | 0.5682 | 0.5380 | 0.4975 | 0.5682 | 0.5682 | 0.5682 | | 0.0047 | 24.0 | 696 | 2.1705 | 0.5133 | 0.5909 | 0.5750 | 0.5158 | 0.5909 | 0.5716 | 0.5220 | 0.5909 | 0.5909 | 0.5909 | | 0.0104 | 25.0 | 725 | 2.2669 | 0.4950 | 0.5833 | 0.5622 | 0.5035 | 0.5833 | 0.5609 | 0.5038 | 0.5833 | 0.5833 | 0.5833 | | 0.0287 | 26.0 | 754 | 2.0390 | 0.5267 | 0.6061 | 0.5935 | 0.5265 | 0.6061 | 0.5898 | 0.5346 | 0.6061 | 0.6061 | 0.6061 | | 0.0212 | 27.0 | 783 | 2.1345 | 0.5344 | 0.6136 | 0.6005 | 0.5308 | 0.6136 | 0.5946 | 0.5449 | 0.6136 | 0.6136 | 0.6136 | | 0.0221 | 28.0 | 812 | 2.1555 | 0.5607 | 0.6136 | 0.6035 | 0.5953 | 0.6136 | 0.6107 | 0.5583 | 0.6136 | 0.6136 | 0.6136 | | 0.001 | 29.0 | 841 | 2.1102 | 0.5833 | 0.6364 | 0.6289 | 0.6172 | 0.6364 | 0.6353 | 0.5789 | 0.6364 | 0.6364 | 0.6364 | | 0.0045 | 30.0 | 870 | 2.0669 | 0.5862 | 0.6364 | 0.6290 | 0.6164 | 0.6364 | 0.6326 | 0.5831 | 0.6364 | 0.6364 | 0.6364 | | 0.0021 | 31.0 | 899 | 2.1442 | 0.5833 | 0.6364 | 0.6282 | 0.6165 | 0.6364 | 0.6330 | 0.5789 | 0.6364 | 0.6364 | 0.6364 | | 0.0014 | 32.0 | 928 | 2.1435 | 0.5616 | 0.6136 | 0.6049 | 0.5957 | 0.6136 | 0.6099 | 0.5569 | 0.6136 | 0.6136 | 0.6136 | | 0.0016 | 33.0 | 957 | 2.1279 | 0.5621 | 0.6136 | 0.6047 | 0.5966 | 0.6136 | 0.6093 | 0.5569 | 0.6136 | 0.6136 | 0.6136 | | 0.0008 | 34.0 | 986 | 2.1310 | 0.5691 | 0.6212 | 0.6127 | 0.6030 | 0.6212 | 0.6170 | 0.5641 | 0.6212 | 0.6212 | 0.6212 | | 0.0006 | 35.0 | 1015 | 2.1338 | 0.5690 | 0.6212 | 0.6130 | 0.6026 | 0.6212 | 0.6176 | 0.5641 | 0.6212 | 0.6212 | 0.6212 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_with_actual_16_batch_size
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_with_actual_16_batch_size This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2489 - F1 Macro: 0.5325 - F1 Micro: 0.6212 - F1 Weighted: 0.6021 - Precision Macro: 0.5262 - Precision Micro: 0.6212 - Precision Weighted: 0.5980 - Recall Macro: 0.5529 - Recall Micro: 0.6212 - Recall Weighted: 0.6212 - Accuracy: 0.6212 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9666 | 1.0 | 15 | 1.9122 | 0.0522 | 0.1818 | 0.0701 | 0.0544 | 0.1818 | 0.0752 | 0.1367 | 0.1818 | 0.1818 | 0.1818 | | 1.3456 | 2.0 | 30 | 1.8989 | 0.1141 | 0.2045 | 0.1357 | 0.0978 | 0.2045 | 0.1212 | 0.1792 | 0.2045 | 0.2045 | 0.2045 | | 1.8934 | 3.0 | 45 | 1.8013 | 0.1472 | 0.25 | 0.1719 | 0.2585 | 0.25 | 0.3007 | 0.1956 | 0.25 | 0.25 | 0.25 | | 1.0877 | 4.0 | 60 | 1.5598 | 0.3297 | 0.3864 | 0.3581 | 0.3466 | 0.3864 | 0.3737 | 0.3502 | 0.3864 | 0.3864 | 0.3864 | | 1.2233 | 5.0 | 75 | 1.4987 | 0.3442 | 0.4318 | 0.3877 | 0.4069 | 0.4318 | 0.4401 | 0.3723 | 0.4318 | 0.4318 | 0.4318 | | 1.0714 | 6.0 | 90 | 1.4548 | 0.3909 | 0.4848 | 0.4447 | 0.4532 | 0.4848 | 0.5043 | 0.4156 | 0.4848 | 0.4848 | 0.4848 | | 1.0008 | 7.0 | 105 | 1.3781 | 0.4699 | 0.5455 | 0.5277 | 0.4927 | 0.5455 | 0.5384 | 0.4728 | 0.5455 | 0.5455 | 0.5455 | | 0.8822 | 8.0 | 120 | 1.4939 | 0.4440 | 0.5076 | 0.4959 | 0.4501 | 0.5076 | 0.5066 | 0.4594 | 0.5076 | 0.5076 | 0.5076 | | 0.6024 | 9.0 | 135 | 1.2516 | 0.4922 | 0.5682 | 0.5559 | 0.4841 | 0.5682 | 0.5477 | 0.5037 | 0.5682 | 0.5682 | 0.5682 | | 0.2719 | 10.0 | 150 | 1.4020 | 0.4582 | 0.5455 | 0.5177 | 0.4610 | 0.5455 | 0.5195 | 0.4821 | 0.5455 | 0.5455 | 0.5455 | | 0.2165 | 11.0 | 165 | 1.4121 | 0.4974 | 0.5758 | 0.5601 | 0.4906 | 0.5758 | 0.5530 | 0.5121 | 0.5758 | 0.5758 | 0.5758 | | 0.2752 | 12.0 | 180 | 1.4315 | 0.5750 | 0.5909 | 0.5923 | 0.6618 | 0.5909 | 0.6319 | 0.5545 | 0.5909 | 0.5909 | 0.5909 | | 0.4386 | 13.0 | 195 | 1.4823 | 0.5170 | 0.5606 | 0.5607 | 0.5515 | 0.5606 | 0.5955 | 0.5150 | 0.5606 | 0.5606 | 0.5606 | | 0.2152 | 14.0 | 210 | 1.4962 | 0.5371 | 0.5833 | 0.5795 | 0.5796 | 0.5833 | 0.6003 | 0.5334 | 0.5833 | 0.5833 | 0.5833 | | 0.2021 | 15.0 | 225 | 1.5027 | 0.4698 | 0.5606 | 0.5383 | 0.4781 | 0.5606 | 0.5353 | 0.4829 | 0.5606 | 0.5606 | 0.5606 | | 0.1147 | 
16.0 | 240 | 1.5977 | 0.4771 | 0.5606 | 0.5447 | 0.4901 | 0.5606 | 0.5638 | 0.4976 | 0.5606 | 0.5606 | 0.5606 | | 0.0648 | 17.0 | 255 | 1.6214 | 0.4894 | 0.5682 | 0.5551 | 0.4829 | 0.5682 | 0.5474 | 0.5009 | 0.5682 | 0.5682 | 0.5682 | | 0.0958 | 18.0 | 270 | 1.6267 | 0.4911 | 0.5379 | 0.5332 | 0.5140 | 0.5379 | 0.5539 | 0.4899 | 0.5379 | 0.5379 | 0.5379 | | 0.0468 | 19.0 | 285 | 1.6385 | 0.5462 | 0.6061 | 0.5975 | 0.5685 | 0.6061 | 0.6104 | 0.5458 | 0.6061 | 0.6061 | 0.6061 | | 0.0344 | 20.0 | 300 | 1.7048 | 0.5815 | 0.6136 | 0.6076 | 0.6000 | 0.6136 | 0.6225 | 0.5844 | 0.6136 | 0.6136 | 0.6136 | | 0.1686 | 21.0 | 315 | 1.8050 | 0.5545 | 0.6061 | 0.5975 | 0.5939 | 0.6061 | 0.6116 | 0.5538 | 0.6061 | 0.6061 | 0.6061 | | 0.044 | 22.0 | 330 | 1.7228 | 0.5362 | 0.5909 | 0.5779 | 0.5746 | 0.5909 | 0.5894 | 0.5380 | 0.5909 | 0.5909 | 0.5909 | | 0.0185 | 23.0 | 345 | 1.9158 | 0.5024 | 0.6061 | 0.5756 | 0.5249 | 0.6061 | 0.6034 | 0.5363 | 0.6061 | 0.6061 | 0.6061 | | 0.0065 | 24.0 | 360 | 1.7524 | 0.5881 | 0.6212 | 0.6111 | 0.6132 | 0.6212 | 0.6234 | 0.5850 | 0.6212 | 0.6212 | 0.6212 | | 0.1169 | 25.0 | 375 | 1.7258 | 0.5801 | 0.6364 | 0.6246 | 0.6131 | 0.6364 | 0.6282 | 0.5780 | 0.6364 | 0.6364 | 0.6364 | | 0.0092 | 26.0 | 390 | 1.8467 | 0.5427 | 0.5985 | 0.5849 | 0.5862 | 0.5985 | 0.5954 | 0.5379 | 0.5985 | 0.5985 | 0.5985 | | 0.0286 | 27.0 | 405 | 1.8018 | 0.5670 | 0.6212 | 0.6108 | 0.6027 | 0.6212 | 0.6171 | 0.5636 | 0.6212 | 0.6212 | 0.6212 | | 0.0263 | 28.0 | 420 | 1.8319 | 0.5621 | 0.6212 | 0.6079 | 0.5969 | 0.6212 | 0.6121 | 0.5598 | 0.6212 | 0.6212 | 0.6212 | | 0.0049 | 29.0 | 435 | 1.8276 | 0.5637 | 0.6136 | 0.6066 | 0.5962 | 0.6136 | 0.6099 | 0.5574 | 0.6136 | 0.6136 | 0.6136 | | 0.0039 | 30.0 | 450 | 1.8631 | 0.5480 | 0.5985 | 0.5900 | 0.5879 | 0.5985 | 0.5988 | 0.5405 | 0.5985 | 0.5985 | 0.5985 | | 0.0636 | 31.0 | 465 | 1.8579 | 0.5697 | 0.6212 | 0.6118 | 0.6075 | 0.6212 | 0.6181 | 0.5629 | 0.6212 | 0.6212 | 0.6212 | | 0.0023 | 32.0 | 480 | 1.8398 | 0.5773 | 0.6288 | 0.6204 | 0.6144 | 0.6288 | 0.6265 | 0.57 | 0.6288 | 0.6288 | 0.6288 | | 0.0028 | 32.6897 | 490 | 1.8415 | 0.5773 | 0.6288 | 0.6204 | 0.6144 | 0.6288 | 0.6265 | 0.57 | 0.6288 | 0.6288 | 0.6288 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_age_gender
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_age_gender This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4067 - F1 Macro: 0.4365 - F1 Micro: 0.5152 - F1 Weighted: 0.4956 - Precision Macro: 0.4384 - Precision Micro: 0.5152 - Precision Weighted: 0.4986 - Recall Macro: 0.4561 - Recall Micro: 0.5152 - Recall Weighted: 0.5152 - Accuracy: 0.5152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.8891 | 1.0 | 29 | 1.8671 | 0.1742 | 0.2576 | 0.2101 | 0.1681 | 0.2576 | 0.2045 | 0.2142 | 0.2576 | 0.2576 | 0.2576 | | 1.8327 | 2.0 | 58 | 1.8124 | 0.1570 | 0.3182 | 0.1937 | 0.1335 | 0.3182 | 0.1611 | 0.2508 | 0.3182 | 0.3182 | 0.3182 | | 1.9127 | 3.0 | 87 | 1.7830 | 0.2085 | 0.3182 | 0.2576 | 0.2128 | 0.3182 | 0.2618 | 0.2625 | 0.3182 | 0.3182 | 0.3182 | | 1.4498 | 4.0 | 116 | 1.5796 | 0.2936 | 0.3864 | 0.3438 | 0.4342 | 0.3864 | 0.4527 | 0.3179 | 0.3864 | 0.3864 | 0.3864 | | 1.2166 | 5.0 | 145 | 1.3485 | 0.3868 | 0.4773 | 0.4442 | 0.5068 | 0.4773 | 0.5373 | 0.4077 | 0.4773 | 0.4773 | 0.4773 | | 1.5704 | 6.0 | 174 | 1.2560 | 0.4853 | 0.5606 | 0.5510 | 0.4906 | 0.5606 | 0.5679 | 0.5026 | 0.5606 | 0.5606 | 0.5606 | | 1.2465 | 7.0 | 203 | 1.4968 | 0.3854 | 0.4924 | 0.4393 | 0.5611 | 0.4924 | 0.5975 | 0.4107 | 0.4924 | 0.4924 | 0.4924 | | 1.2531 | 8.0 | 232 | 1.4663 | 0.4380 | 0.5 | 0.4841 | 0.4623 | 0.5 | 0.5302 | 0.4693 | 0.5 | 0.5 | 0.5 | | 0.5318 | 9.0 | 261 | 1.1161 | 0.4938 | 0.5909 | 0.5646 | 0.4892 | 0.5909 | 0.5595 | 0.5176 | 0.5909 | 0.5909 | 0.5909 | | 0.6824 | 10.0 | 290 | 1.1811 | 0.4802 | 0.5909 | 0.5515 | 0.4814 | 0.5909 | 0.5498 | 0.5148 | 0.5909 | 0.5909 | 0.5909 | | 0.6324 | 11.0 | 319 | 1.2358 | 0.4927 | 0.5758 | 0.5506 | 0.5015 | 0.5758 | 0.5690 | 0.5226 | 0.5758 | 0.5758 | 0.5758 | | 0.4145 | 12.0 | 348 | 1.1608 | 0.5846 | 0.6742 | 0.6643 | 0.5822 | 0.6742 | 0.6681 | 0.6005 | 0.6742 | 0.6742 | 0.6742 | | 0.4805 | 13.0 | 377 | 1.3200 | 0.5276 | 0.5758 | 0.5689 | 0.5767 | 0.5758 | 0.6138 | 0.5269 | 0.5758 | 0.5758 | 0.5758 | | 0.6232 | 14.0 | 406 | 1.3190 | 0.4790 | 0.5758 | 0.5517 | 0.5025 | 0.5758 | 0.5734 | 0.5006 | 0.5758 | 0.5758 | 0.5758 | | 0.3475 | 15.0 | 435 | 1.1853 | 0.6303 | 0.6970 | 0.6894 | 0.6717 | 0.6970 | 0.7088 | 0.6312 | 0.6970 | 0.6970 | 0.6970 | | 0.1956 | 16.0 | 464 | 1.5695 
| 0.4323 | 0.5152 | 0.4974 | 0.4755 | 0.5152 | 0.5334 | 0.4358 | 0.5152 | 0.5152 | 0.5152 | | 0.1519 | 17.0 | 493 | 1.4404 | 0.5819 | 0.6439 | 0.6317 | 0.6438 | 0.6439 | 0.6577 | 0.5706 | 0.6439 | 0.6439 | 0.6439 | | 0.1031 | 18.0 | 522 | 1.4877 | 0.5370 | 0.6136 | 0.6041 | 0.5351 | 0.6136 | 0.5975 | 0.5422 | 0.6136 | 0.6136 | 0.6136 | | 0.0615 | 19.0 | 551 | 1.4801 | 0.6013 | 0.6061 | 0.6106 | 0.6476 | 0.6061 | 0.6581 | 0.5951 | 0.6061 | 0.6061 | 0.6061 | | 0.0249 | 20.0 | 580 | 1.6082 | 0.5198 | 0.5909 | 0.5825 | 0.5149 | 0.5909 | 0.5770 | 0.5272 | 0.5909 | 0.5909 | 0.5909 | | 0.374 | 21.0 | 609 | 1.7594 | 0.6084 | 0.6288 | 0.6185 | 0.6712 | 0.6288 | 0.6679 | 0.6049 | 0.6288 | 0.6288 | 0.6288 | | 0.025 | 22.0 | 638 | 1.4723 | 0.6446 | 0.6515 | 0.6520 | 0.6543 | 0.6515 | 0.6660 | 0.6479 | 0.6515 | 0.6515 | 0.6515 | | 0.0096 | 23.0 | 667 | 1.5689 | 0.5899 | 0.6136 | 0.6089 | 0.6170 | 0.6136 | 0.6315 | 0.5878 | 0.6136 | 0.6136 | 0.6136 | | 0.0661 | 24.0 | 696 | 1.6276 | 0.6056 | 0.6667 | 0.6576 | 0.6690 | 0.6667 | 0.6867 | 0.5949 | 0.6667 | 0.6667 | 0.6667 | | 0.0463 | 25.0 | 725 | 1.6761 | 0.5591 | 0.6136 | 0.6085 | 0.6193 | 0.6136 | 0.6401 | 0.5521 | 0.6136 | 0.6136 | 0.6136 | | 0.0118 | 26.0 | 754 | 1.6210 | 0.5353 | 0.6288 | 0.6075 | 0.5716 | 0.6288 | 0.6263 | 0.5410 | 0.6288 | 0.6288 | 0.6288 | | 0.0018 | 27.0 | 783 | 1.6073 | 0.5860 | 0.6742 | 0.6575 | 0.5956 | 0.6742 | 0.6587 | 0.5929 | 0.6742 | 0.6742 | 0.6742 | | 0.0336 | 28.0 | 812 | 1.5964 | 0.6086 | 0.6439 | 0.6411 | 0.6379 | 0.6439 | 0.6566 | 0.5979 | 0.6439 | 0.6439 | 0.6439 | | 0.0014 | 29.0 | 841 | 1.5290 | 0.6873 | 0.7121 | 0.7083 | 0.7263 | 0.7121 | 0.7308 | 0.6734 | 0.7121 | 0.7121 | 0.7121 | | 0.021 | 30.0 | 870 | 1.5440 | 0.6982 | 0.6970 | 0.6974 | 0.7076 | 0.6970 | 0.7170 | 0.7086 | 0.6970 | 0.6970 | 0.6970 | | 0.0065 | 31.0 | 899 | 1.6576 | 0.6869 | 0.6970 | 0.6915 | 0.7430 | 0.6970 | 0.7270 | 0.6699 | 0.6970 | 0.6970 | 0.6970 | | 0.0013 | 32.0 | 928 | 1.5603 | 0.7124 | 0.7197 | 0.7173 | 0.7508 | 0.7197 | 0.7411 | 0.6987 | 0.7197 | 0.7197 | 0.7197 | | 0.0129 | 33.0 | 957 | 1.6028 | 0.6842 | 0.6894 | 0.6870 | 0.7153 | 0.6894 | 0.7059 | 0.6731 | 0.6894 | 0.6894 | 0.6894 | | 0.0006 | 34.0 | 986 | 1.6075 | 0.6787 | 0.6818 | 0.6800 | 0.7094 | 0.6818 | 0.6991 | 0.6678 | 0.6818 | 0.6818 | 0.6818 | | 0.0022 | 35.0 | 1015 | 1.6009 | 0.6848 | 0.6894 | 0.6869 | 0.7171 | 0.6894 | 0.7062 | 0.6731 | 0.6894 | 0.6894 | 0.6894 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_min_loss
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_min_loss This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5286 - F1 Macro: 0.4619 - F1 Micro: 0.5455 - F1 Weighted: 0.5156 - Precision Macro: 0.4696 - Precision Micro: 0.5455 - Precision Weighted: 0.5176 - Recall Macro: 0.4841 - Recall Micro: 0.5455 - Recall Weighted: 0.5455 - Accuracy: 0.5455 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.934 | 1.0 | 58 | 1.8780 | 0.0664 | 0.2045 | 0.0901 | 0.1708 | 0.2045 | 0.2415 | 0.1534 | 0.2045 | 0.2045 | 0.2045 | | 1.8145 | 2.0 | 116 | 1.8828 | 0.0691 | 0.1742 | 0.0755 | 0.0608 | 0.1742 | 0.0658 | 0.1575 | 0.1742 | 0.1742 | 0.1742 | | 1.8527 | 3.0 | 174 | 1.7131 | 0.2503 | 0.3788 | 0.3053 | 0.2573 | 0.3788 | 0.3062 | 0.3094 | 0.3788 | 0.3788 | 0.3788 | | 1.6734 | 4.0 | 232 | 1.7940 | 0.1621 | 0.2803 | 0.2087 | 0.2145 | 0.2803 | 0.2624 | 0.2076 | 0.2803 | 0.2803 | 0.2803 | | 1.6408 | 5.0 | 290 | 1.6808 | 0.1570 | 0.3333 | 0.1965 | 0.1432 | 0.3333 | 0.1858 | 0.2702 | 0.3333 | 0.3333 | 0.3333 | | 1.5696 | 6.0 | 348 | 1.5061 | 0.3172 | 0.4470 | 0.3802 | 0.3895 | 0.4470 | 0.4186 | 0.3618 | 0.4470 | 0.4470 | 0.4470 | | 1.4543 | 7.0 | 406 | 1.3674 | 0.4113 | 0.5152 | 0.4708 | 0.4077 | 0.5152 | 0.4630 | 0.4479 | 0.5152 | 0.5152 | 0.5152 | | 1.2349 | 8.0 | 464 | 1.3137 | 0.4024 | 0.5 | 0.4550 | 0.4050 | 0.5 | 0.4606 | 0.4479 | 0.5 | 0.5 | 0.5 | | 1.2544 | 9.0 | 522 | 1.3322 | 0.4209 | 0.5076 | 0.4748 | 0.4224 | 0.5076 | 0.4737 | 0.4480 | 0.5076 | 0.5076 | 0.5076 | | 1.206 | 10.0 | 580 | 1.3818 | 0.3555 | 0.4621 | 0.4009 | 0.3931 | 0.4621 | 0.4372 | 0.4129 | 0.4621 | 0.4621 | 0.4621 | | 1.0416 | 11.0 | 638 | 1.3142 | 0.4610 | 0.5606 | 0.5249 | 0.5218 | 0.5606 | 0.5872 | 0.4951 | 0.5606 | 0.5606 | 0.5606 | | 1.1494 | 12.0 | 696 | 1.3793 | 0.4106 | 0.4773 | 0.4652 | 0.4619 | 0.4773 | 0.5256 | 0.4227 | 0.4773 | 0.4773 | 0.4773 | | 0.7366 | 13.0 | 754 | 1.1936 | 0.5656 | 0.6515 | 0.6383 | 0.5708 | 0.6515 | 0.6446 | 0.5790 | 0.6515 | 0.6515 | 0.6515 | | 1.3729 | 14.0 | 812 | 1.2285 | 0.5151 | 0.6061 | 0.5861 | 0.5714 | 0.6061 | 0.6314 | 0.5225 | 0.6061 | 0.6061 | 0.6061 | | 1.3638 | 15.0 | 870 | 1.1742 | 0.5389 | 0.6212 | 0.6055 | 0.5617 | 0.6212 | 0.6334 | 0.5513 | 0.6212 | 0.6212 | 0.6212 | | 0.9063 | 16.0 | 928 | 1.2325 | 0.5079 | 0.5985 | 0.5770 | 0.5077 | 0.5985 | 0.5715 
| 0.5215 | 0.5985 | 0.5985 | 0.5985 | | 0.4584 | 17.0 | 986 | 1.1497 | 0.5515 | 0.6364 | 0.6210 | 0.5676 | 0.6364 | 0.6286 | 0.5575 | 0.6364 | 0.6364 | 0.6364 | | 0.86 | 18.0 | 1044 | 1.2673 | 0.4925 | 0.5909 | 0.5719 | 0.4968 | 0.5909 | 0.5681 | 0.5031 | 0.5909 | 0.5909 | 0.5909 | | 0.2113 | 19.0 | 1102 | 1.2132 | 0.5180 | 0.6212 | 0.5986 | 0.5386 | 0.6212 | 0.6049 | 0.5257 | 0.6212 | 0.6212 | 0.6212 | | 0.1168 | 20.0 | 1160 | 1.2442 | 0.5543 | 0.6136 | 0.6070 | 0.5742 | 0.6136 | 0.6164 | 0.5517 | 0.6136 | 0.6136 | 0.6136 | | 0.3149 | 21.0 | 1218 | 1.2900 | 0.5446 | 0.6288 | 0.6146 | 0.5463 | 0.6288 | 0.6120 | 0.5534 | 0.6288 | 0.6288 | 0.6288 | | 0.0793 | 22.0 | 1276 | 1.3290 | 0.5692 | 0.6288 | 0.6210 | 0.5960 | 0.6288 | 0.6359 | 0.5651 | 0.6288 | 0.6288 | 0.6288 | | 0.1761 | 23.0 | 1334 | 1.4284 | 0.5572 | 0.6212 | 0.6032 | 0.6454 | 0.6212 | 0.6563 | 0.5516 | 0.6212 | 0.6212 | 0.6212 | | 0.1714 | 24.0 | 1392 | 1.2994 | 0.5782 | 0.6288 | 0.6344 | 0.5899 | 0.6288 | 0.6461 | 0.5728 | 0.6288 | 0.6288 | 0.6288 | | 0.465 | 25.0 | 1450 | 1.4011 | 0.5581 | 0.6136 | 0.6134 | 0.5662 | 0.6136 | 0.6188 | 0.5556 | 0.6136 | 0.6136 | 0.6136 | | 0.2203 | 26.0 | 1508 | 1.4701 | 0.5741 | 0.6288 | 0.6266 | 0.6167 | 0.6288 | 0.6553 | 0.5676 | 0.6288 | 0.6288 | 0.6288 | | 0.0574 | 27.0 | 1566 | 1.4511 | 0.5800 | 0.6364 | 0.6352 | 0.6073 | 0.6364 | 0.6546 | 0.5738 | 0.6364 | 0.6364 | 0.6364 | | 0.0399 | 28.0 | 1624 | 1.4921 | 0.5674 | 0.6061 | 0.6133 | 0.5933 | 0.6061 | 0.6390 | 0.5645 | 0.6061 | 0.6061 | 0.6061 | | 0.0269 | 29.0 | 1682 | 1.4752 | 0.5563 | 0.6288 | 0.6283 | 0.5686 | 0.6288 | 0.6350 | 0.5515 | 0.6288 | 0.6288 | 0.6288 | | 0.0267 | 30.0 | 1740 | 1.5353 | 0.5621 | 0.6136 | 0.6142 | 0.5859 | 0.6136 | 0.6324 | 0.5565 | 0.6136 | 0.6136 | 0.6136 | | 0.1094 | 31.0 | 1798 | 1.5126 | 0.5912 | 0.6515 | 0.6529 | 0.6028 | 0.6515 | 0.6604 | 0.5867 | 0.6515 | 0.6515 | 0.6515 | | 0.0243 | 32.0 | 1856 | 1.4900 | 0.5985 | 0.6591 | 0.6563 | 0.6103 | 0.6591 | 0.6604 | 0.5935 | 0.6591 | 0.6591 | 0.6591 | | 0.0366 | 33.0 | 1914 | 1.4680 | 0.6275 | 0.6894 | 0.6851 | 0.6369 | 0.6894 | 0.6855 | 0.6241 | 0.6894 | 0.6894 | 0.6894 | | 0.0235 | 34.0 | 1972 | 1.4772 | 0.6216 | 0.6818 | 0.6795 | 0.6324 | 0.6818 | 0.6836 | 0.6173 | 0.6818 | 0.6818 | 0.6818 | | 0.0345 | 35.0 | 2030 | 1.4754 | 0.6556 | 0.6970 | 0.6961 | 0.6722 | 0.6970 | 0.7038 | 0.6479 | 0.6970 | 0.6970 | 0.6970 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
prithivMLmods/Fire-Detection-Engine
![ccccccccccccccccccccccccccc.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/WP5cejlMirz3t6o2_YzVQ.png)

# **Fire-Detection-Engine**

The **Fire-Detection-Engine** is a state-of-the-art deep learning model designed to detect fire-related conditions in images. It leverages the **Vision Transformer (ViT)** architecture, specifically the `google/vit-base-patch16-224-in21k` checkpoint, fine-tuned on a dataset of fire and non-fire images. The model classifies each image into one of three categories: "Fire Needed Action," "Normal Conditions," or "Smoky Environment," making it a powerful tool for flagging fire hazards.

```python
Classification report:

                    precision    recall  f1-score   support

Fire Needed Action     0.9708    0.9864    0.9785       808
 Normal Conditions     0.9872    0.9530    0.9698       808
 Smoky Environment     0.9818    1.0000    0.9908       808

          accuracy                         0.9798      2424
         macro avg     0.9799    0.9798    0.9797      2424
      weighted avg     0.9799    0.9798    0.9797      2424
```

![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/kDS5cVee2ZVOv92rY0lGw.png)

# **Mappers**

```python
Mapping of IDs to Labels: {0: 'Fire Needed Action', 1: 'Normal Conditions', 2: 'Smoky Environment'}

Mapping of Labels to IDs: {'Fire Needed Action': 0, 'Normal Conditions': 1, 'Smoky Environment': 2}
```

# **Key Features**

- **Architecture**: Vision Transformer (ViT) - `google/vit-base-patch16-224-in21k`.
- **Input**: RGB images resized to 224x224 pixels.
- **Output**: Three-class prediction ("Fire Needed Action", "Normal Conditions", or "Smoky Environment").
- **Training Dataset**: A curated dataset of fire, smoke, and normal scene conditions.
- **Fine-Tuning**: The model is fine-tuned using Hugging Face's `Trainer` API with advanced data augmentation techniques.
- **Performance**: Achieves high accuracy and F1 score on the validation and test sets.
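A minimal inference sketch using the mappers above, assuming the checkpoint is pulled from this repository (the image path is a placeholder):

```python
# Hedged sketch: manual inference with the ViT fine-tune; "scene.jpg" is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "prithivMLmods/Fire-Detection-Engine"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("scene.jpg").convert("RGB")        # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(-1).item()
print(model.config.id2label[pred_id])   # 'Fire Needed Action', 'Normal Conditions', or 'Smoky Environment'
```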
[ "fire needed action", "normal conditions", "smoky environment" ]
corranm/square_run_32_batch
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_32_batch This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6241 - F1 Macro: 0.5019 - F1 Micro: 0.5758 - F1 Weighted: 0.5679 - Precision Macro: 0.5021 - Precision Micro: 0.5758 - Precision Weighted: 0.5657 - Recall Macro: 0.5073 - Recall Micro: 0.5758 - Recall Weighted: 0.5758 - Accuracy: 0.5758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.9373 | 1.0 | 15 | 1.8818 | 0.0464 | 0.1894 | 0.0615 | 0.0277 | 0.1894 | 0.0367 | 0.1429 | 0.1894 | 0.1894 | 0.1894 | | 1.869 | 2.0 | 30 | 1.8642 | 0.1100 | 0.2652 | 0.1418 | 0.075 | 0.2652 | 0.0968 | 0.2063 | 0.2652 | 0.2652 | 0.2652 | | 1.9218 | 3.0 | 45 | 1.8754 | 0.1163 | 0.2576 | 0.1460 | 0.1316 | 0.2576 | 0.1566 | 0.1905 | 0.2576 | 0.2576 | 0.2576 | | 1.6733 | 4.0 | 60 | 1.6881 | 0.2445 | 0.3864 | 0.3053 | 0.2427 | 0.3864 | 0.2917 | 0.2992 | 0.3864 | 0.3864 | 0.3864 | | 1.54 | 5.0 | 75 | 1.5528 | 0.3252 | 0.4242 | 0.3856 | 0.3429 | 0.4242 | 0.4101 | 0.3570 | 0.4242 | 0.4242 | 0.4242 | | 1.4418 | 6.0 | 90 | 1.5737 | 0.2858 | 0.3864 | 0.3213 | 0.2846 | 0.3864 | 0.3243 | 0.3398 | 0.3864 | 0.3864 | 0.3864 | | 0.8592 | 7.0 | 105 | 1.5408 | 0.3444 | 0.4394 | 0.3965 | 0.3208 | 0.4394 | 0.3674 | 0.3791 | 0.4394 | 0.4394 | 0.4394 | | 1.1427 | 8.0 | 120 | 1.2804 | 0.4638 | 0.5606 | 0.5317 | 0.4698 | 0.5606 | 0.5280 | 0.4831 | 0.5606 | 0.5606 | 0.5606 | | 0.7849 | 9.0 | 135 | 1.2880 | 0.4649 | 0.5530 | 0.5291 | 0.4804 | 0.5530 | 0.5401 | 0.4823 | 0.5530 | 0.5530 | 0.5530 | | 0.6846 | 10.0 | 150 | 1.3130 | 0.4298 | 0.5152 | 0.4811 | 0.4404 | 0.5152 | 0.5005 | 0.4671 | 0.5152 | 0.5152 | 0.5152 | | 0.4006 | 11.0 | 165 | 1.2958 | 0.4931 | 0.5833 | 0.5598 | 0.4983 | 0.5833 | 0.5756 | 0.5229 | 0.5833 | 0.5833 | 0.5833 | | 0.4329 | 12.0 | 180 | 1.2990 | 0.5062 | 0.5530 | 0.5562 | 0.5315 | 0.5530 | 0.5874 | 0.5133 | 0.5530 | 0.5530 | 0.5530 | | 0.482 | 13.0 | 195 | 1.3831 | 0.4842 | 0.5152 | 0.5233 | 0.5517 | 0.5152 | 0.5803 | 0.4839 | 0.5152 | 0.5152 | 0.5152 | | 0.6409 | 14.0 | 210 | 1.4066 | 0.5081 | 0.5985 | 0.5765 | 0.5194 | 0.5985 | 0.5820 | 0.5232 | 0.5985 | 0.5985 | 0.5985 | | 0.3206 | 15.0 | 225 | 1.3690 | 0.5155 | 0.5606 | 0.5520 | 0.6158 | 0.5606 | 0.5890 | 0.5170 | 0.5606 | 0.5606 | 0.5606 | | 0.1773 | 16.0 | 240 | 1.2568 | 0.5920 | 0.6515 | 0.6408 | 0.6894 | 0.6515 | 0.6623 | 
0.5843 | 0.6515 | 0.6515 | 0.6515 | | 0.3259 | 17.0 | 255 | 1.3406 | 0.5467 | 0.6061 | 0.5961 | 0.5615 | 0.6061 | 0.6033 | 0.5467 | 0.6061 | 0.6061 | 0.6061 | | 0.1123 | 18.0 | 270 | 1.3767 | 0.5868 | 0.6364 | 0.6306 | 0.6258 | 0.6364 | 0.6413 | 0.5785 | 0.6364 | 0.6364 | 0.6364 | | 0.1129 | 19.0 | 285 | 1.4680 | 0.5879 | 0.6439 | 0.6306 | 0.6809 | 0.6439 | 0.6933 | 0.5806 | 0.6439 | 0.6439 | 0.6439 | | 0.0651 | 20.0 | 300 | 1.4981 | 0.6655 | 0.6894 | 0.6876 | 0.7115 | 0.6894 | 0.7224 | 0.6511 | 0.6894 | 0.6894 | 0.6894 | | 0.0685 | 21.0 | 315 | 1.4621 | 0.6091 | 0.6515 | 0.6494 | 0.6303 | 0.6515 | 0.6641 | 0.6040 | 0.6515 | 0.6515 | 0.6515 | | 0.1469 | 22.0 | 330 | 1.5347 | 0.5330 | 0.6212 | 0.6040 | 0.5477 | 0.6212 | 0.6149 | 0.5440 | 0.6212 | 0.6212 | 0.6212 | | 0.0289 | 23.0 | 345 | 1.5417 | 0.5466 | 0.6288 | 0.6180 | 0.5409 | 0.6288 | 0.6108 | 0.5549 | 0.6288 | 0.6288 | 0.6288 | | 0.01 | 24.0 | 360 | 1.5670 | 0.5475 | 0.6364 | 0.6187 | 0.5435 | 0.6364 | 0.6104 | 0.5594 | 0.6364 | 0.6364 | 0.6364 | | 0.035 | 25.0 | 375 | 1.6037 | 0.5529 | 0.6364 | 0.6209 | 0.5470 | 0.6364 | 0.6156 | 0.5679 | 0.6364 | 0.6364 | 0.6364 | | 0.0109 | 26.0 | 390 | 1.6752 | 0.5897 | 0.6212 | 0.6203 | 0.6145 | 0.6212 | 0.6527 | 0.6000 | 0.6212 | 0.6212 | 0.6212 | | 0.038 | 27.0 | 405 | 1.6724 | 0.5344 | 0.6136 | 0.6008 | 0.5332 | 0.6136 | 0.6005 | 0.5468 | 0.6136 | 0.6136 | 0.6136 | | 0.0116 | 28.0 | 420 | 1.6252 | 0.5384 | 0.6212 | 0.6090 | 0.5337 | 0.6212 | 0.6033 | 0.5491 | 0.6212 | 0.6212 | 0.6212 | | 0.006 | 29.0 | 435 | 1.5980 | 0.5572 | 0.6364 | 0.6294 | 0.5529 | 0.6364 | 0.6246 | 0.5634 | 0.6364 | 0.6364 | 0.6364 | | 0.0046 | 30.0 | 450 | 1.5939 | 0.5605 | 0.6439 | 0.6342 | 0.5546 | 0.6439 | 0.6269 | 0.5687 | 0.6439 | 0.6439 | 0.6439 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
corranm/square_run_second_vote
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # square_run_second_vote This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1557 - F1 Macro: 0.5777 - F1 Micro: 0.6667 - F1 Weighted: 0.6629 - Precision Macro: 0.5756 - Precision Micro: 0.6667 - Precision Weighted: 0.6734 - Recall Macro: 0.5912 - Recall Micro: 0.6667 - Recall Weighted: 0.6667 - Accuracy: 0.6667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:| | 1.8754 | 1.0 | 58 | 1.7961 | 0.1385 | 0.2803 | 0.1722 | 0.1426 | 0.2803 | 0.1603 | 0.2098 | 0.2803 | 0.2803 | 0.2803 | | 2.0246 | 2.0 | 116 | 2.0138 | 0.2236 | 0.3106 | 0.2484 | 0.2558 | 0.3106 | 0.2692 | 0.2842 | 0.3106 | 0.3106 | 0.3106 | | 1.6189 | 3.0 | 174 | 1.5039 | 0.2444 | 0.3864 | 0.3195 | 0.2633 | 0.3864 | 0.3301 | 0.2847 | 0.3864 | 0.3864 | 0.3864 | | 1.3445 | 4.0 | 232 | 1.3982 | 0.3287 | 0.4394 | 0.3866 | 0.3186 | 0.4394 | 0.3730 | 0.3696 | 0.4394 | 0.4394 | 0.4394 | | 1.3387 | 5.0 | 290 | 1.1920 | 0.4401 | 0.5758 | 0.5265 | 0.4315 | 0.5758 | 0.5031 | 0.4683 | 0.5758 | 0.5758 | 0.5758 | | 1.1664 | 6.0 | 348 | 1.1778 | 0.4179 | 0.5076 | 0.4988 | 0.5068 | 0.5076 | 0.5862 | 0.4395 | 0.5076 | 0.5076 | 0.5076 | | 1.1622 | 7.0 | 406 | 1.1723 | 0.4518 | 0.5379 | 0.5251 | 0.4514 | 0.5379 | 0.5526 | 0.4867 | 0.5379 | 0.5379 | 0.5379 | | 0.9827 | 8.0 | 464 | 1.0619 | 0.5084 | 0.6212 | 0.6074 | 0.5037 | 0.6212 | 0.6140 | 0.5345 | 0.6212 | 0.6212 | 0.6212 | | 1.3416 | 9.0 | 522 | 1.3995 | 0.3997 | 0.5 | 0.4690 | 0.4218 | 0.5 | 0.5024 | 0.4509 | 0.5 | 0.5 | 0.5 | | 0.758 | 10.0 | 580 | 1.1693 | 0.5066 | 0.5985 | 0.5836 | 0.5262 | 0.5985 | 0.6031 | 0.5279 | 0.5985 | 0.5985 | 0.5985 | | 0.7758 | 11.0 | 638 | 1.0800 | 0.5491 | 0.6515 | 0.6320 | 0.5729 | 0.6515 | 0.6501 | 0.5710 | 0.6515 | 0.6515 | 0.6515 | | 0.2319 | 12.0 | 696 | 1.1553 | 0.5467 | 0.6742 | 0.6410 | 0.5816 | 0.6742 | 0.6699 | 0.5711 | 0.6742 | 0.6742 | 0.6742 | | 0.3528 | 13.0 | 754 | 1.1685 | 0.5794 | 0.6894 | 0.6711 | 0.5887 | 0.6894 | 0.6752 | 0.5955 | 0.6894 | 0.6894 | 0.6894 | | 0.6238 | 14.0 | 812 | 1.1781 | 0.5579 | 0.6439 | 0.6285 | 0.5451 | 0.6439 | 0.6278 | 0.5856 | 0.6439 | 0.6439 | 0.6439 | | 0.1869 | 15.0 | 870 | 1.2305 | 0.5146 | 0.6061 | 0.5983 | 0.5032 | 0.6061 | 0.6013 | 0.5369 | 0.6061 | 0.6061 | 0.6061 | | 0.1015 | 16.0 | 928 | 1.3576 | 0.5019 | 0.5909 | 0.5932 | 0.5440 | 0.5909 | 0.6312 | 0.4959 
| 0.5909 | 0.5909 | 0.5909 | | 0.3809 | 17.0 | 986 | 1.2998 | 0.5667 | 0.6591 | 0.6527 | 0.5828 | 0.6591 | 0.6885 | 0.5838 | 0.6591 | 0.6591 | 0.6591 | | 0.0887 | 18.0 | 1044 | 1.4154 | 0.5572 | 0.6667 | 0.6489 | 0.5682 | 0.6667 | 0.6518 | 0.5683 | 0.6667 | 0.6667 | 0.6667 | | 0.1422 | 19.0 | 1102 | 1.3989 | 0.5609 | 0.6667 | 0.6472 | 0.5672 | 0.6667 | 0.6420 | 0.5695 | 0.6667 | 0.6667 | 0.6667 | | 0.0037 | 20.0 | 1160 | 1.5134 | 0.5242 | 0.6212 | 0.6078 | 0.5263 | 0.6212 | 0.6093 | 0.5374 | 0.6212 | 0.6212 | 0.6212 | | 0.0602 | 21.0 | 1218 | 1.5349 | 0.5660 | 0.6667 | 0.6544 | 0.5710 | 0.6667 | 0.6503 | 0.5671 | 0.6667 | 0.6667 | 0.6667 | | 0.0353 | 22.0 | 1276 | 1.4489 | 0.6137 | 0.7045 | 0.6919 | 0.6146 | 0.7045 | 0.6909 | 0.6242 | 0.7045 | 0.7045 | 0.7045 | | 0.001 | 23.0 | 1334 | 1.4781 | 0.5715 | 0.6667 | 0.6541 | 0.5657 | 0.6667 | 0.6449 | 0.5805 | 0.6667 | 0.6667 | 0.6667 | | 0.0007 | 24.0 | 1392 | 1.6326 | 0.5713 | 0.6591 | 0.6511 | 0.5871 | 0.6591 | 0.6648 | 0.5786 | 0.6591 | 0.6591 | 0.6591 | | 0.0084 | 25.0 | 1450 | 1.5856 | 0.5684 | 0.6591 | 0.6569 | 0.5662 | 0.6591 | 0.6672 | 0.5802 | 0.6591 | 0.6591 | 0.6591 | | 0.0008 | 26.0 | 1508 | 1.5799 | 0.5826 | 0.6818 | 0.6675 | 0.5849 | 0.6818 | 0.6632 | 0.5884 | 0.6818 | 0.6818 | 0.6818 | | 0.0053 | 27.0 | 1566 | 1.5308 | 0.5719 | 0.6667 | 0.6556 | 0.5667 | 0.6667 | 0.6524 | 0.5843 | 0.6667 | 0.6667 | 0.6667 | | 0.0004 | 28.0 | 1624 | 1.5639 | 0.5732 | 0.6667 | 0.6617 | 0.5684 | 0.6667 | 0.6673 | 0.5867 | 0.6667 | 0.6667 | 0.6667 | | 0.0007 | 29.0 | 1682 | 1.5346 | 0.5835 | 0.6742 | 0.6678 | 0.5786 | 0.6742 | 0.6703 | 0.5965 | 0.6742 | 0.6742 | 0.6742 | | 0.0004 | 30.0 | 1740 | 1.5232 | 0.5791 | 0.6742 | 0.6661 | 0.5707 | 0.6742 | 0.6628 | 0.5918 | 0.6742 | 0.6742 | 0.6742 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "-", "0", "1", "2", "3", "4", "5" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV37
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV37 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8351 - Accuracy: 0.6538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 1.6284 | 0.1346 | | No log | 1.8 | 6 | 1.5966 | 0.2404 | | No log | 2.8 | 9 | 1.5076 | 0.3942 | | 6.28 | 3.8 | 12 | 1.2912 | 0.4615 | | 6.28 | 4.8 | 15 | 1.2137 | 0.5096 | | 6.28 | 5.8 | 18 | 1.1917 | 0.5385 | | 6.28 | 6.8 | 21 | 1.1498 | 0.5673 | | 2.9539 | 7.8 | 24 | 1.2026 | 0.5865 | | 2.9539 | 8.8 | 27 | 1.2711 | 0.5962 | | 2.9539 | 9.8 | 30 | 1.3534 | 0.625 | | 2.9539 | 10.8 | 33 | 1.3210 | 0.625 | | 0.9643 | 11.8 | 36 | 1.3940 | 0.6346 | | 0.9643 | 12.8 | 39 | 1.4859 | 0.6346 | | 0.9643 | 13.8 | 42 | 1.4965 | 0.6346 | | 0.9643 | 14.8 | 45 | 1.5463 | 0.625 | | 0.3275 | 15.8 | 48 | 1.5885 | 0.6346 | | 0.3275 | 16.8 | 51 | 1.6466 | 0.6442 | | 0.3275 | 17.8 | 54 | 1.8351 | 0.6538 | | 0.3275 | 18.8 | 57 | 1.8326 | 0.6442 | | 0.1501 | 19.8 | 60 | 1.7521 | 0.6346 | | 0.1501 | 20.8 | 63 | 1.7806 | 0.6538 | | 0.1501 | 21.8 | 66 | 1.7669 | 0.6538 | | 0.1501 | 22.8 | 69 | 1.8874 | 0.6346 | | 0.09 | 23.8 | 72 | 1.8827 | 0.6538 | | 0.09 | 24.8 | 75 | 1.8330 | 0.6538 | | 0.09 | 25.8 | 78 | 1.8331 | 0.6538 | | 0.09 | 26.8 | 81 | 1.8410 | 0.6538 | | 0.0595 | 27.8 | 84 | 1.8441 | 0.6442 | | 0.0595 | 28.8 | 87 | 1.8444 | 0.6538 | | 0.0595 | 29.8 | 90 | 1.8446 | 0.6538 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
pipidepulus/hojas
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hojas This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0200 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1305 | 3.8462 | 500 | 0.0200 | 0.9925 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV38
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV38 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1368 - Accuracy: 0.6591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 1.0 | 4 | 1.4929 | 0.4545 | | No log | 2.0 | 8 | 1.2769 | 0.4545 | | 4.8085 | 3.0 | 12 | 1.2009 | 0.5682 | | 4.8085 | 4.0 | 16 | 1.0885 | 0.5114 | | 4.8085 | 5.0 | 20 | 0.9977 | 0.6591 | | 3.4418 | 6.0 | 24 | 0.9279 | 0.7045 | | 3.4418 | 7.0 | 28 | 0.9311 | 0.6591 | | 3.4418 | 8.0 | 32 | 0.9485 | 0.6591 | | 2.4374 | 9.0 | 36 | 0.9279 | 0.6705 | | 2.4374 | 10.0 | 40 | 0.9567 | 0.6705 | | 2.4374 | 11.0 | 44 | 0.9832 | 0.6705 | | 1.6575 | 12.0 | 48 | 0.9993 | 0.6705 | | 1.6575 | 13.0 | 52 | 1.0360 | 0.6705 | | 1.6575 | 14.0 | 56 | 1.0418 | 0.6591 | | 1.1852 | 15.0 | 60 | 1.0619 | 0.6477 | | 1.1852 | 16.0 | 64 | 1.0820 | 0.6705 | | 1.1852 | 17.0 | 68 | 1.1155 | 0.6477 | | 0.9213 | 18.0 | 72 | 1.1154 | 0.6591 | | 0.9213 | 19.0 | 76 | 1.1240 | 0.6591 | | 0.9213 | 20.0 | 80 | 1.1349 | 0.6591 | | 0.7877 | 21.0 | 84 | 1.1367 | 0.6591 | | 0.7877 | 22.0 | 88 | 1.1367 | 0.6591 | | 0.7877 | 22.6154 | 90 | 1.1368 | 0.6591 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV39
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV39 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9091 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 1.0 | 4 | 1.3890 | 0.4318 | | No log | 2.0 | 8 | 1.2016 | 0.5682 | | 4.424 | 3.0 | 12 | 1.0279 | 0.6364 | | 4.424 | 4.0 | 16 | 0.9227 | 0.6477 | | 4.424 | 5.0 | 20 | 0.9294 | 0.6818 | | 2.6292 | 6.0 | 24 | 0.9936 | 0.625 | | 2.6292 | 7.0 | 28 | 0.9816 | 0.6705 | | 2.6292 | 8.0 | 32 | 0.9091 | 0.75 | | 1.5117 | 9.0 | 36 | 0.8995 | 0.7159 | | 1.5117 | 10.0 | 40 | 1.0646 | 0.7045 | | 1.5117 | 11.0 | 44 | 1.1826 | 0.6705 | | 0.7053 | 12.0 | 48 | 1.2360 | 0.6932 | | 0.7053 | 13.0 | 52 | 1.4215 | 0.6818 | | 0.7053 | 14.0 | 56 | 1.3694 | 0.7045 | | 0.2599 | 15.0 | 60 | 1.2998 | 0.7159 | | 0.2599 | 16.0 | 64 | 1.3703 | 0.7159 | | 0.2599 | 17.0 | 68 | 1.4626 | 0.7159 | | 0.1282 | 18.0 | 72 | 1.5181 | 0.7159 | | 0.1282 | 19.0 | 76 | 1.5692 | 0.7045 | | 0.1282 | 20.0 | 80 | 1.5965 | 0.6932 | | 0.0816 | 21.0 | 84 | 1.6018 | 0.7045 | | 0.0816 | 22.0 | 88 | 1.6046 | 0.7045 | | 0.0816 | 22.6154 | 90 | 1.6049 | 0.7045 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
lingjy/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.3501 - Accuracy: 0.9269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 47 | 1.1416 | 0.8525 | | No log | 2.0 | 94 | 0.5193 | 0.9120 | | 1.4502 | 3.0 | 141 | 0.3979 | 0.9175 | | 1.4502 | 4.0 | 188 | 0.3580 | 0.9269 | | 0.376 | 5.0 | 235 | 0.3475 | 0.9283 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.0.0+cu117 - Datasets 3.2.0 - Tokenizers 0.21.0
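For quick checks, the model can also be queried through the high-level `pipeline` API; the image path below is a placeholder rather than anything referenced by the card.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="lingjy/vit-base-oxford-iiit-pets")

# Placeholder image path; returns the top-3 breed predictions with scores.
for prediction in classifier("path/to/pet_photo.jpg", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```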
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Ivanrs/vit-base-kidney-stone-Michel_Daudon_-w256_1k_v1-_SUR
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-kidney-stone-Michel_Daudon_-w256_1k_v1-_SUR This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8337 - Accuracy: 0.7580 - Precision: 0.7873 - Recall: 0.7580 - F1: 0.7485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.1701 | 0.6667 | 100 | 0.8337 | 0.7580 | 0.7873 | 0.7580 | 0.7485 | | 0.1078 | 1.3333 | 200 | 0.9744 | 0.7392 | 0.7683 | 0.7392 | 0.7328 | | 0.0149 | 2.0 | 300 | 1.1815 | 0.7490 | 0.8429 | 0.7490 | 0.7488 | | 0.0518 | 2.6667 | 400 | 1.3244 | 0.7522 | 0.8024 | 0.7522 | 0.7474 | | 0.008 | 3.3333 | 500 | 1.0330 | 0.7727 | 0.8049 | 0.7727 | 0.7753 | | 0.0058 | 4.0 | 600 | 1.2145 | 0.7490 | 0.7861 | 0.7490 | 0.7510 | | 0.0031 | 4.6667 | 700 | 0.9566 | 0.8013 | 0.7999 | 0.8013 | 0.7994 | | 0.0026 | 5.3333 | 800 | 1.3827 | 0.7678 | 0.8112 | 0.7678 | 0.7710 | | 0.0141 | 6.0 | 900 | 1.0396 | 0.8078 | 0.8238 | 0.8078 | 0.8029 | | 0.0194 | 6.6667 | 1000 | 1.3622 | 0.7514 | 0.7612 | 0.7514 | 0.7525 | | 0.0015 | 7.3333 | 1100 | 1.1867 | 0.7784 | 0.8293 | 0.7784 | 0.7784 | | 0.0012 | 8.0 | 1200 | 1.5671 | 0.7269 | 0.7813 | 0.7269 | 0.7367 | | 0.0011 | 8.6667 | 1300 | 1.2410 | 0.7629 | 0.7779 | 0.7629 | 0.7682 | | 0.001 | 9.3333 | 1400 | 1.2369 | 0.7899 | 0.8155 | 0.7899 | 0.7849 | | 0.0009 | 10.0 | 1500 | 1.2282 | 0.7915 | 0.8187 | 0.7915 | 0.7878 | | 0.0008 | 10.6667 | 1600 | 1.2243 | 0.7948 | 0.8223 | 0.7948 | 0.7917 | | 0.0008 | 11.3333 | 1700 | 1.2258 | 0.7989 | 0.8256 | 0.7989 | 0.7957 | | 0.0007 | 12.0 | 1800 | 1.2286 | 0.7997 | 0.8262 | 0.7997 | 0.7965 | | 0.0007 | 12.6667 | 1900 | 1.2296 | 0.7989 | 0.8245 | 0.7989 | 0.7957 | | 0.0007 | 13.3333 | 2000 | 1.2314 | 0.7989 | 0.8245 | 0.7989 | 0.7957 | | 0.0006 | 14.0 | 2100 | 1.2325 | 0.7997 | 0.8252 | 0.7997 | 0.7967 | | 0.0006 | 14.6667 | 2200 | 1.2330 | 0.8005 | 0.8258 | 0.8005 | 0.7978 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
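The card reports accuracy, precision, recall, and F1 at every evaluation step; a `compute_metrics` function along the lines below would produce those four values when passed to the `Trainer`. The `weighted` averaging mode is an assumption, since the card does not state how class-wise scores were aggregated.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "weighted" averaging is assumed; the card does not specify it.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```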
[ "sur-subtype_iva", "sur-subtype_iva2", "sur-subtype_ivc", "sur-subtype_ivd", "sur-subtype_ia", "sur-subtype_va" ]
Ivanrs/vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_MIX
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_MIX This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4058 - Accuracy: 0.8942 - Precision: 0.9042 - Recall: 0.8942 - F1: 0.8940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2458 | 0.3333 | 100 | 0.6117 | 0.8183 | 0.8403 | 0.8183 | 0.8152 | | 0.1311 | 0.6667 | 200 | 0.4116 | 0.8696 | 0.8705 | 0.8696 | 0.8694 | | 0.037 | 1.0 | 300 | 0.4058 | 0.8942 | 0.9042 | 0.8942 | 0.8940 | | 0.149 | 1.3333 | 400 | 0.4525 | 0.8846 | 0.8926 | 0.8846 | 0.8818 | | 0.1007 | 1.6667 | 500 | 0.8220 | 0.7908 | 0.8404 | 0.7908 | 0.7917 | | 0.0189 | 2.0 | 600 | 0.5199 | 0.8762 | 0.8808 | 0.8762 | 0.8756 | | 0.0531 | 2.3333 | 700 | 0.5875 | 0.8804 | 0.8944 | 0.8804 | 0.8784 | | 0.0169 | 2.6667 | 800 | 0.7323 | 0.8488 | 0.8554 | 0.8488 | 0.8479 | | 0.0076 | 3.0 | 900 | 0.4755 | 0.8954 | 0.9015 | 0.8954 | 0.8931 | | 0.0015 | 3.3333 | 1000 | 0.4957 | 0.9025 | 0.9070 | 0.9025 | 0.9006 | | 0.012 | 3.6667 | 1100 | 0.8585 | 0.8367 | 0.8589 | 0.8367 | 0.8292 | | 0.1429 | 4.0 | 1200 | 0.5490 | 0.8804 | 0.8904 | 0.8804 | 0.8785 | | 0.0242 | 4.3333 | 1300 | 0.4934 | 0.9021 | 0.9144 | 0.9021 | 0.8970 | | 0.001 | 4.6667 | 1400 | 0.5054 | 0.9062 | 0.9195 | 0.9062 | 0.9039 | | 0.0012 | 5.0 | 1500 | 0.7132 | 0.8675 | 0.8886 | 0.8675 | 0.8680 | | 0.0043 | 5.3333 | 1600 | 0.7203 | 0.8871 | 0.9069 | 0.8871 | 0.8844 | | 0.0007 | 5.6667 | 1700 | 0.5250 | 0.9079 | 0.9097 | 0.9079 | 0.9072 | | 0.043 | 6.0 | 1800 | 0.6485 | 0.8788 | 0.8943 | 0.8788 | 0.8740 | | 0.0006 | 6.3333 | 1900 | 0.5322 | 0.8996 | 0.9015 | 0.8996 | 0.8996 | | 0.0005 | 6.6667 | 2000 | 0.6328 | 0.8904 | 0.9044 | 0.8904 | 0.8872 | | 0.0004 | 7.0 | 2100 | 0.6130 | 0.8942 | 0.9061 | 0.8942 | 0.8916 | | 0.0004 | 7.3333 | 2200 | 0.6070 | 0.8967 | 0.9076 | 0.8967 | 0.8942 | | 0.0003 | 7.6667 | 2300 | 0.6067 | 0.8983 | 0.9095 | 0.8983 | 0.8960 | | 0.0003 | 8.0 | 2400 | 0.6028 | 0.9004 | 0.9107 | 0.9004 | 0.8981 | | 0.0003 | 8.3333 | 2500 | 0.6009 | 0.9021 | 0.9118 | 0.9021 | 0.8999 | | 0.0003 | 8.6667 | 2600 | 0.6020 | 0.9042 | 0.9132 | 0.9042 | 0.9021 | | 0.0003 | 9.0 | 2700 | 0.6018 | 0.9042 | 0.9130 | 0.9042 | 0.9022 | | 0.0002 | 9.3333 | 2800 | 0.6026 | 0.9042 | 0.9125 | 0.9042 | 0.9022 | | 0.0002 | 9.6667 | 2900 | 0.6037 | 0.9042 | 0.9125 | 0.9042 | 0.9022 | | 0.0002 | 10.0 | 3000 | 0.6053 | 0.905 | 0.9128 | 0.905 | 0.9031 | | 0.0002 | 10.3333 | 3100 | 0.6060 | 0.9058 | 0.9133 | 0.9058 | 0.9040 | | 0.0002 | 10.6667 | 3200 | 0.6082 | 0.9058 | 0.9133 | 0.9058 | 0.9040 | | 0.0002 | 11.0 | 3300 | 0.6095 | 0.9058 | 0.9133 | 0.9058 | 0.9040 | | 0.0002 | 11.3333 | 3400 | 0.6109 | 0.9062 | 0.9136 | 0.9062 | 0.9045 | | 0.0002 | 11.6667 | 3500 | 0.6122 | 0.9062 | 0.9136 | 0.9062 | 0.9045 | | 0.0002 | 12.0 | 3600 | 0.6135 | 0.9062 | 0.9136 | 0.9062 | 0.9045 | | 0.0002 | 12.3333 | 3700 | 0.6150 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0002 | 12.6667 | 3800 | 0.6159 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0002 | 13.0 | 3900 | 0.6169 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0002 | 13.3333 | 4000 | 0.6179 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0001 | 13.6667 | 4100 | 0.6187 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0001 | 14.0 | 4200 | 0.6193 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0001 | 14.3333 | 4300 | 0.6198 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0001 | 14.6667 | 4400 | 0.6201 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | | 0.0001 | 15.0 | 4500 | 0.6202 | 0.9067 | 0.9139 | 0.9067 | 0.9050 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.6.0+cu126 - Datasets 3.2.0 - Tokenizers 0.21.0
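For the six MIX subtype classes listed below, a label mapping can be attached to the base checkpoint when setting up fine-tuning, as in this illustrative sketch; the index-to-class ordering is assumed, since the card does not record it.

```python
from transformers import AutoModelForImageClassification

# Illustrative id2label mapping; the actual class order used in training is not stated.
labels = [
    "mix-subtype_iiia", "mix-subtype_iia", "mix-subtype_ivc",
    "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va",
]
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in id2label.items()}

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
)
```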
[ "mix-subtype_iiia", "mix-subtype_iia", "mix-subtype_ivc", "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va" ]