model_id: string, length 7 to 105 characters
model_card: string, length 1 to 130k characters
model_labels: list, 2 to 80k items
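The rows below follow the three-column schema summarized above. As a minimal sketch of how such a dump might be consumed with the `datasets` library (the dataset path is a hypothetical placeholder, not the actual source of this dump):

```python
# Minimal sketch: iterate over rows with the model_id / model_card / model_labels
# schema. "your-namespace/model-cards-dump" is a placeholder path, not the real
# source of this dump.
from datasets import load_dataset

ds = load_dataset("your-namespace/model-cards-dump", split="train")

for row in ds.select(range(3)):
    print(row["model_id"])                   # e.g. "cotysong113/my_awesome_food_model"
    print(len(row["model_card"]), "chars")   # raw README/model-card text
    print(len(row["model_labels"]), "labels")
```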
cotysong113/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5974 - Accuracy: 0.899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7165 | 0.992 | 62 | 2.5197 | 0.82 | | 1.8377 | 2.0 | 125 | 1.7734 | 0.868 | | 1.5955 | 2.976 | 186 | 1.5974 | 0.899 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-004
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-004 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0895 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.8954 | 5.7143 | 10 | 0.3421 | 0.9444 | | 0.2087 | 11.4286 | 20 | 0.1405 | 0.9444 | | 0.062 | 17.1429 | 30 | 0.0895 | 0.9444 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-005
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-005 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0719 - Accuracy: 0.9722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.582 | 6.5714 | 10 | 0.1846 | 0.9722 | | 0.0293 | 13.2857 | 20 | 0.0766 | 0.9722 | | 0.0021 | 19.8571 | 30 | 0.0719 | 0.9722 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-006
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-006 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1481 - Accuracy: 0.9722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 32 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0005 | 6.5714 | 10 | 0.1356 | 0.9722 | | 0.0 | 13.2857 | 20 | 0.1541 | 0.9722 | | 0.0 | 19.8571 | 30 | 0.1481 | 0.9722 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
Zainajabroh/image_emotion_classification_project_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_emotion_classification_project_4 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.9052 - Accuracy: 0.5188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: reduce_lr_on_plateau - lr_scheduler_warmup_steps: 50 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6977 | 1.0 | 640 | 1.5713 | 0.325 | | 1.7006 | 2.0 | 1280 | 1.4543 | 0.4562 | | 1.6725 | 3.0 | 1920 | 1.6124 | 0.4625 | | 1.2312 | 4.0 | 2560 | 1.6711 | 0.5 | | 0.6097 | 5.0 | 3200 | 1.8838 | 0.5312 | | 1.264 | 6.0 | 3840 | 2.0933 | 0.4875 | | 2.4064 | 7.0 | 4480 | 2.0628 | 0.5188 | | 2.0741 | 8.0 | 5120 | 2.6505 | 0.4625 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-007
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-007 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0674 - Accuracy: 0.9722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 31 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0 | 6.5714 | 10 | 0.0367 | 0.9722 | | 0.0 | 13.2857 | 20 | 0.0632 | 0.9722 | | 0.0 | 19.8571 | 30 | 0.0674 | 0.9722 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-008
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-008 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0935 - Accuracy: 0.9722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0018 | 6.5714 | 10 | 0.1107 | 0.9722 | | 0.0002 | 13.2857 | 20 | 0.1064 | 0.9722 | | 0.0001 | 19.8571 | 30 | 0.0935 | 0.9722 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-009
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-009 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0429 - Accuracy: 1.0 ## Model description Dungeon Maps - Geo Morphs - with 0 to 3 entrances ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.7303 | 6.5714 | 10 | 0.2536 | 0.9444 | | 0.0464 | 13.2857 | 20 | 0.0737 | 0.9444 | | 0.0017 | 19.8571 | 30 | 0.0429 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-010
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-010 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1230 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.9545 | 6.5714 | 10 | 0.3644 | 0.9444 | | 0.2033 | 13.2857 | 20 | 0.1559 | 0.9444 | | 0.0472 | 19.8571 | 30 | 0.1230 | 0.9444 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-011
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-011 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1657 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0008 | 6.5714 | 10 | 0.1917 | 0.9444 | | 0.0 | 13.2857 | 20 | 0.1489 | 0.9444 | | 0.0 | 19.8571 | 30 | 0.1657 | 0.9444 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-012
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-012 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1487 - Accuracy: 0.9722 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0 | 6.5714 | 10 | 0.1662 | 0.9722 | | 0.0044 | 13.2857 | 20 | 0.2218 | 0.9444 | | 0.0 | 19.8571 | 30 | 0.1487 | 0.9722 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
ansaritk/vit-base-patch16-224-finetuned-flower-classify
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower-classify This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
chun061205/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0645 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2821 | 1.0 | 130 | 0.2170 | 0.9624 | | 0.1291 | 2.0 | 260 | 0.1299 | 0.9699 | | 0.1379 | 3.0 | 390 | 0.0972 | 0.9774 | | 0.0803 | 4.0 | 520 | 0.0645 | 0.9850 | | 0.1123 | 5.0 | 650 | 0.0791 | 0.9774 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu118 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
AhmadIshaqai/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6109 - Accuracy: 0.901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7014 | 0.992 | 62 | 2.5097 | 0.847 | | 1.8804 | 2.0 | 125 | 1.7599 | 0.89 | | 1.6054 | 2.976 | 186 | 1.6109 | 0.901 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
masafresh/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8358 - Accuracy: 0.6580 - F1: 0.6497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 384 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | 1.0613 | 0.9997 | 1973 | 0.9913 | 0.5967 | 0.5794 | | 1.0041 | 2.0 | 3947 | 0.8844 | 0.6358 | 0.6275 | | 0.9508 | 2.9992 | 5919 | 0.8358 | 0.6580 | 0.6497 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
[ "hglo", "mglo", "lglo", "hgso", "mwl", "uglo", "mgso", "lgso", "mws" ]
brigettesegovia/plant_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # plant_model This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1169 - Accuracy: 0.9469 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1893 | 1.0 | 13 | 0.1545 | 0.9420 | | 0.1421 | 2.0 | 26 | 0.1169 | 0.9469 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular leaf spot", "bean rust", "healthy" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-009-dungeon-geo-morphs-001
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-009-dungeon-geo-morphs-001 This model is a fine-tuned version of [griffio/vit-large-patch16-224-dungeon-geo-morphs-009](https://huggingface.co/griffio/vit-large-patch16-224-dungeon-geo-morphs-009) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0213 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0288 | 5.7143 | 10 | 0.0213 | 1.0 | | 0.0002 | 11.4286 | 20 | 0.0726 | 0.9722 | | 0.0001 | 17.1429 | 30 | 0.0599 | 0.9722 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-009-dungeon-geo-morphs-002
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-009-dungeon-geo-morphs-002 This model is a fine-tuned version of [griffio/vit-large-patch16-224-dungeon-geo-morphs-009](https://huggingface.co/griffio/vit-large-patch16-224-dungeon-geo-morphs-009) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0846 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.1962 | 5.7143 | 10 | 0.0717 | 1.0 | | 0.0443 | 11.4286 | 20 | 0.0866 | 0.9722 | | 0.0128 | 17.1429 | 30 | 0.0846 | 0.9444 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-009-dungeon-geo-morphs-003
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-009-dungeon-geo-morphs-003 This model is a fine-tuned version of [griffio/vit-large-patch16-224-dungeon-geo-morphs-009](https://huggingface.co/griffio/vit-large-patch16-224-dungeon-geo-morphs-009) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0943 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.2469 | 5.7143 | 10 | 0.0943 | 0.9444 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-1005
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-1005 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1175 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.7083 | 5.7143 | 10 | 0.3063 | 0.8611 | | 0.1533 | 11.4286 | 20 | 0.1348 | 0.9444 | | 0.0426 | 17.1429 | 30 | 0.1175 | 0.9444 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-1006
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-1006 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0782 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.7742 | 5.7143 | 10 | 0.2863 | 0.9444 | | 0.162 | 11.4286 | 20 | 0.1305 | 0.9444 | | 0.039 | 17.1429 | 30 | 0.0782 | 0.9444 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-1007
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-1007 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0094 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0076 | 5.7143 | 10 | 0.0398 | 1.0 | | 0.0006 | 11.4286 | 20 | 0.0392 | 1.0 | | 0.0003 | 17.1429 | 30 | 0.0094 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "three", "two", "zero" ]
n1hal/swinv2-plantclef
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-plantclef This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0548 - Accuracy: 0.8199 - F1: 0.8190 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 1.1414 | 1.0 | 897 | 0.9819 | 0.7171 | 0.7046 | | 0.654 | 2.0 | 1794 | 0.7608 | 0.7694 | 0.7688 | | 0.394 | 3.0 | 2691 | 0.7461 | 0.7795 | 0.7767 | | 0.2437 | 4.0 | 3588 | 0.7369 | 0.7917 | 0.7908 | | 0.1428 | 5.0 | 4485 | 0.7939 | 0.7945 | 0.7929 | | 0.0878 | 6.0 | 5382 | 0.8352 | 0.7958 | 0.7950 | | 0.0621 | 7.0 | 6279 | 0.8802 | 0.7945 | 0.7928 | | 0.0353 | 8.0 | 7176 | 0.9028 | 0.8011 | 0.8005 | | 0.0241 | 9.0 | 8073 | 0.9592 | 0.8043 | 0.8045 | | 0.0241 | 10.0 | 8970 | 1.0075 | 0.8068 | 0.8047 | | 0.0129 | 11.0 | 9867 | 1.0254 | 0.8127 | 0.8120 | | 0.0058 | 12.0 | 10764 | 1.0340 | 0.8162 | 0.8151 | | 0.007 | 13.0 | 11661 | 1.0661 | 0.8165 | 0.8159 | | 0.0052 | 14.0 | 12558 | 1.0533 | 0.8168 | 0.8166 | | 0.0049 | 15.0 | 13455 | 1.0660 | 0.8174 | 0.8164 | | 0.015 | 16.0 | 14352 | 1.0548 | 0.8199 | 0.8190 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0 - Datasets 3.1.0 - Tokenizers 0.20.1
[ "1355868", "1355869", "1355885", "1355886", "1355894", "1355897", "1355898", "1355899", "1355900", "1355901", "1355902", "1355903", "1355870", "1355907", "1355908", "1355914", "1355926", "1355927", "1355928", "1355932", "1355934", "1355935", "1355936", "1355871", "1355937", "1355941", "1355948", "1355950", "1355952", "1355953", "1355964", "1355967", "1355968", "1355969", "1355872", "1355970", "1355971", "1355972", "1355977", "1355978", "1355984", "1355986", "1355987", "1355989", "1355990", "1355873", "1355991", "1355992", "1355993", "1355994", "1355995", "1355997", "1355998", "1356001", "1356007", "1356008", "1355880", "1356012", "1356013", "1356017", "1356022", "1356023", "1356024", "1356033", "1356040", "1356042", "1356044", "1355881", "1356045", "1356046", "1356052", "1356054", "1356055", "1356058", "1356062", "1356063", "1356064", "1356065", "1355882", "1356066", "1356067", "1356069", "1356070", "1356072", "1356075", "1356078", "1356079", "1356081", "1356082", "1355884", "1356083", "1356084", "1356086", "1356089", "1356091", "1356094", "1356095", "1356105", "1356106", "1356107" ]
nemik/frost-vision-v2-google_vit-base-patch16-224-v2024-11-14
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # frost-vision-v2-google_vit-base-patch16-224-v2024-11-14 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the webdataset dataset. It achieves the following results on the evaluation set: - Loss: 0.1577 - Accuracy: 0.9389 - F1: 0.8436 - Precision: 0.8655 - Recall: 0.8228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.3381 | 1.2346 | 100 | 0.3271 | 0.8660 | 0.5669 | 0.8045 | 0.4376 | | 0.2067 | 2.4691 | 200 | 0.2080 | 0.9194 | 0.7827 | 0.8514 | 0.7242 | | 0.1745 | 3.7037 | 300 | 0.1864 | 0.9228 | 0.8003 | 0.8308 | 0.7720 | | 0.1724 | 4.9383 | 400 | 0.1792 | 0.9299 | 0.8188 | 0.8493 | 0.7904 | | 0.128 | 6.1728 | 500 | 0.1736 | 0.9327 | 0.8292 | 0.8437 | 0.8151 | | 0.1034 | 7.4074 | 600 | 0.1672 | 0.9355 | 0.8348 | 0.8571 | 0.8136 | | 0.0944 | 8.6420 | 700 | 0.1579 | 0.9392 | 0.8452 | 0.8622 | 0.8290 | | 0.0919 | 9.8765 | 800 | 0.1631 | 0.9364 | 0.8347 | 0.8710 | 0.8012 | | 0.0791 | 11.1111 | 900 | 0.1592 | 0.9380 | 0.8383 | 0.8771 | 0.8028 | | 0.0684 | 12.3457 | 1000 | 0.1577 | 0.9389 | 0.8436 | 0.8655 | 0.8228 | | 0.0737 | 13.5802 | 1100 | 0.1678 | 0.9380 | 0.8416 | 0.8613 | 0.8228 | | 0.0625 | 14.8148 | 1200 | 0.1646 | 0.9426 | 0.8542 | 0.8692 | 0.8398 | | 0.0591 | 16.0494 | 1300 | 0.1625 | 0.9432 | 0.8549 | 0.8756 | 0.8351 | | 0.0464 | 17.2840 | 1400 | 0.1722 | 0.9386 | 0.8422 | 0.8676 | 0.8182 | | 0.048 | 18.5185 | 1500 | 0.1694 | 0.9401 | 0.8472 | 0.8663 | 0.8290 | | 0.0353 | 19.7531 | 1600 | 0.1715 | 0.9392 | 0.8462 | 0.8576 | 0.8351 | | 0.0434 | 20.9877 | 1700 | 0.1817 | 0.9370 | 0.8386 | 0.8618 | 0.8166 | | 0.0332 | 22.2222 | 1800 | 0.1797 | 0.9383 | 0.8423 | 0.8627 | 0.8228 | | 0.0283 | 23.4568 | 1900 | 0.1810 | 0.9401 | 0.8482 | 0.8617 | 0.8351 | | 0.0474 | 24.6914 | 2000 | 0.1765 | 0.9398 | 0.8454 | 0.8709 | 0.8213 | | 0.0365 | 25.9259 | 2100 | 0.1835 | 0.9414 | 0.8516 | 0.8637 | 0.8398 | | 0.0244 | 27.1605 | 2200 | 0.1822 | 0.9404 | 0.8479 | 0.8677 | 0.8290 | | 0.0242 | 28.3951 | 2300 | 0.1808 | 0.9407 | 0.8483 | 0.8703 | 0.8274 | | 0.0296 | 29.6296 | 2400 | 0.1817 | 0.9401 | 0.8477 | 0.864 | 0.8320 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "snowing", "raining", "sunny", "cloudy", "night", "snow_on_road", "partial_snow_on_road", "clear_pavement", "wet_pavement", "iced_lens" ]
Dev176/21BAI1229
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 21BAI1229 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4078 - Accuracy: 0.8734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 2.6034 | 0.9873 | 39 | 2.0544 | 0.4520 | | 1.4429 | 2.0 | 79 | 0.7736 | 0.7849 | | 0.8307 | 2.9873 | 118 | 0.5456 | 0.8413 | | 0.6814 | 4.0 | 158 | 0.4881 | 0.8516 | | 0.6199 | 4.9873 | 197 | 0.4614 | 0.8528 | | 0.5578 | 6.0 | 237 | 0.4419 | 0.8615 | | 0.5198 | 6.9873 | 276 | 0.4485 | 0.8603 | | 0.4811 | 8.0 | 316 | 0.4355 | 0.8659 | | 0.4568 | 8.9873 | 355 | 0.4182 | 0.8651 | | 0.4268 | 10.0 | 395 | 0.4094 | 0.8702 | | 0.4281 | 10.9873 | 434 | 0.4158 | 0.8706 | | 0.4143 | 12.0 | 474 | 0.4078 | 0.8734 | | 0.4009 | 12.9873 | 513 | 0.4066 | 0.8714 | | 0.3642 | 14.0 | 553 | 0.4131 | 0.8683 | | 0.3659 | 14.9873 | 592 | 0.4047 | 0.8726 | | 0.3487 | 16.0 | 632 | 0.4054 | 0.8710 | | 0.35 | 16.9873 | 671 | 0.4107 | 0.8722 | | 0.3291 | 18.0 | 711 | 0.4099 | 0.8698 | | 0.338 | 18.9873 | 750 | 0.4063 | 0.8718 | | 0.3419 | 19.7468 | 780 | 0.4066 | 0.8702 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "calling", "clapping", "cycling", "dancing", "drinking", "eating", "fighting", "hugging", "laughing", "listening_to_music", "running", "sitting", "sleeping", "texting", "using_laptop" ]
Docty/nose-mask-classification
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nose-mask-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2744 - Validation Loss: 0.0564 - Train Accuracy: 1.0 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2744 | 0.0564 | 1.0 | 0 | ### Framework versions - Transformers 4.46.2 - TensorFlow 2.17.0 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "withmask", "withoutmask" ]
masafresh/swin-transformer
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-transformer This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7366 - Accuracy: 0.39 - F1: 0.2753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 384 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | No log | 0.7273 | 2 | 2.0766 | 0.3 | 0.2161 | | No log | 1.8182 | 5 | 1.7687 | 0.37 | 0.2461 | | No log | 2.1818 | 6 | 1.7366 | 0.39 | 0.2753 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
[ "hglo", "mglo", "lglo", "hgso", "mwl", "uglo", "mgso", "lgso", "mws" ]
CGscorpion/vit-base-patch32-384-finetuned-eurosat-albumentations
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch32-384-finetuned-eurosat-albumentations This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1871 - Accuracy: 0.9726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.7204 | 0.9412 | 12 | 0.5695 | 0.7397 | | 0.4269 | 1.9804 | 25 | 0.2537 | 0.9178 | | 0.1605 | 2.9412 | 37 | 0.3347 | 0.8767 | | 0.0758 | 3.9804 | 50 | 0.2203 | 0.9041 | | 0.0405 | 4.9412 | 62 | 0.3563 | 0.9178 | | 0.0358 | 5.9804 | 75 | 0.2326 | 0.9315 | | 0.0188 | 6.9412 | 87 | 0.2046 | 0.9315 | | 0.026 | 7.9804 | 100 | 0.2195 | 0.8904 | | 0.0582 | 8.9412 | 112 | 0.3378 | 0.9178 | | 0.0113 | 9.9804 | 125 | 0.2685 | 0.9178 | | 0.0081 | 10.9412 | 137 | 0.2443 | 0.9315 | | 0.0091 | 11.9804 | 150 | 0.4675 | 0.9041 | | 0.0065 | 12.9412 | 162 | 0.3252 | 0.9452 | | 0.0026 | 13.9804 | 175 | 0.1871 | 0.9726 | | 0.0043 | 14.9412 | 187 | 0.2256 | 0.9589 | | 0.0094 | 15.9804 | 200 | 0.1980 | 0.9452 | | 0.0028 | 16.9412 | 212 | 0.2928 | 0.9315 | | 0.0003 | 17.9804 | 225 | 0.2241 | 0.9726 | | 0.0006 | 18.9412 | 237 | 0.2396 | 0.9726 | | 0.0012 | 19.9804 | 250 | 0.2663 | 0.9315 | | 0.0001 | 20.9412 | 262 | 0.2266 | 0.9726 | | 0.0002 | 21.9804 | 275 | 0.2637 | 0.9452 | | 0.0001 | 22.9412 | 287 | 0.2873 | 0.9452 | | 0.0003 | 23.9804 | 300 | 0.2068 | 0.9589 | | 0.0001 | 24.9412 | 312 | 0.2485 | 0.9452 | | 0.0047 | 25.9804 | 325 | 0.3375 | 0.9178 | | 0.0015 | 26.9412 | 337 | 0.3132 | 0.9589 | | 0.0001 | 27.9804 | 350 | 0.3148 | 0.9452 | | 0.0025 | 28.9412 | 362 | 0.2533 | 0.9452 | | 0.0038 | 29.9804 | 375 | 0.2860 | 0.9315 | | 0.0025 | 30.9412 | 387 | 0.2785 | 0.9452 | | 0.0031 | 31.9804 | 400 | 0.3246 | 0.9452 | | 0.0 | 32.9412 | 412 | 0.3367 | 0.9452 | | 0.0006 | 33.9804 | 425 | 0.2625 | 0.9726 | | 0.0 | 34.9412 | 437 | 0.2689 | 0.9589 | | 0.0007 | 35.9804 | 450 | 0.2891 | 0.9726 | | 0.0003 | 36.9412 | 462 | 0.4523 | 0.9315 | | 0.0003 | 37.9804 | 475 | 0.3426 | 0.9452 | | 0.0001 | 38.9412 | 487 | 0.3167 | 0.9589 | | 0.0 | 39.9804 | 500 | 0.3237 | 0.9589 | | 0.0002 | 40.9412 | 512 | 0.3085 | 0.9589 | | 0.0 | 41.9804 | 525 | 0.3095 | 0.9589 | | 0.0 | 42.9412 | 537 | 0.3049 | 0.9589 | | 0.0002 | 43.9804 | 550 | 0.3039 | 0.9589 | | 0.0001 | 44.9412 | 562 | 0.3044 | 0.9589 | | 0.0001 | 45.9804 | 575 | 0.3031 | 0.9726 | | 0.0 | 46.9412 | 587 | 0.3028 | 0.9726 | | 0.0 | 47.9804 | 600 | 0.3027 | 0.9726 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 2.21.0 - Tokenizers 0.20.3
[ "answer", "delete_line" ]
Twipsy/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1763 - Accuracy: 0.9499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3698 | 1.0 | 370 | 0.2753 | 0.9296 | | 0.2212 | 2.0 | 740 | 0.2142 | 0.9378 | | 0.1741 | 3.0 | 1110 | 0.1975 | 0.9432 | | 0.1546 | 4.0 | 1480 | 0.1899 | 0.9432 | | 0.1355 | 5.0 | 1850 | 0.1883 | 0.9472 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.2.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
wagodo/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2294 - Accuracy: 0.9364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3645 | 1.0 | 370 | 0.2793 | 0.9296 | | 0.2042 | 2.0 | 740 | 0.2111 | 0.9310 | | 0.1733 | 3.0 | 1110 | 0.1835 | 0.9405 | | 0.15 | 4.0 | 1480 | 0.1776 | 0.9432 | | 0.1223 | 5.0 | 1850 | 0.1761 | 0.9459 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.2.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
sogueeti/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2022 - Accuracy: 0.9391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3716 | 1.0 | 370 | 0.3101 | 0.9283 | | 0.2157 | 2.0 | 740 | 0.2396 | 0.9323 | | 0.1558 | 3.0 | 1110 | 0.2290 | 0.9350 | | 0.1375 | 4.0 | 1480 | 0.2166 | 0.9364 | | 0.1301 | 5.0 | 1850 | 0.2135 | 0.9418 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.2.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
nemethomas/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2038 - Accuracy: 0.9445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.373 | 1.0 | 370 | 0.2732 | 0.9337 | | 0.2127 | 2.0 | 740 | 0.2148 | 0.9405 | | 0.1801 | 3.0 | 1110 | 0.1918 | 0.9445 | | 0.1448 | 4.0 | 1480 | 0.1857 | 0.9472 | | 0.1308 | 5.0 | 1850 | 0.1814 | 0.9445 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.2.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
mahmuili/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1733 - Accuracy: 0.9553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3654 | 1.0 | 370 | 0.3021 | 0.9378 | | 0.2271 | 2.0 | 740 | 0.2237 | 0.9418 | | 0.1618 | 3.0 | 1110 | 0.2024 | 0.9472 | | 0.1535 | 4.0 | 1480 | 0.1923 | 0.9445 | | 0.1349 | 5.0 | 1850 | 0.1886 | 0.9472 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.2.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
majorSeaweed/SWIN_BASE_PRETRAINED
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SWIN_BASE_PRETRAINED This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "0", "1", "2", "3", "4" ]
masafresh/swin-transformer2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-transformer2 This model is a fine-tuned version of [microsoft/swin-large-patch4-window12-384](https://huggingface.co/microsoft/swin-large-patch4-window12-384) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2129 - Accuracy: 0.6386 - F1: 0.6328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 1.6336 | 0.9840 | 46 | 1.6510 | 0.2530 | 0.1876 | | 1.2894 | 1.9893 | 93 | 1.2218 | 0.4458 | 0.3780 | | 1.0959 | 2.9947 | 140 | 1.1383 | 0.5060 | 0.3518 | | 1.0467 | 4.0 | 187 | 0.9372 | 0.5542 | 0.4352 | | 0.9879 | 4.9840 | 233 | 1.0139 | 0.5301 | 0.4718 | | 0.9086 | 5.9893 | 280 | 0.8822 | 0.6627 | 0.6359 | | 0.9776 | 6.9947 | 327 | 1.0269 | 0.5542 | 0.5139 | | 0.9715 | 8.0 | 374 | 0.7964 | 0.5663 | 0.5588 | | 0.9049 | 8.9840 | 420 | 0.7839 | 0.5904 | 0.5346 | | 0.8697 | 9.9893 | 467 | 1.0379 | 0.5663 | 0.4921 | | 0.882 | 10.9947 | 514 | 0.9132 | 0.5663 | 0.5379 | | 0.832 | 12.0 | 561 | 0.8513 | 0.5783 | 0.5008 | | 0.7475 | 12.9840 | 607 | 0.7612 | 0.6627 | 0.6427 | | 0.9056 | 13.9893 | 654 | 0.8431 | 0.6145 | 0.5725 | | 0.9978 | 14.9947 | 701 | 0.7221 | 0.7108 | 0.6983 | | 0.6956 | 16.0 | 748 | 0.7545 | 0.6145 | 0.5888 | | 0.7185 | 16.9840 | 794 | 0.6561 | 0.6627 | 0.6499 | | 0.8139 | 17.9893 | 841 | 0.7512 | 0.6506 | 0.6386 | | 0.6837 | 18.9947 | 888 | 0.6491 | 0.6988 | 0.6849 | | 0.5191 | 20.0 | 935 | 0.7290 | 0.6386 | 0.6336 | | 0.6538 | 20.9840 | 981 | 0.8000 | 0.6988 | 0.6621 | | 0.7912 | 21.9893 | 1028 | 1.0183 | 0.6145 | 0.5824 | | 0.6093 | 22.9947 | 1075 | 0.9124 | 0.6506 | 0.6396 | | 0.5312 | 24.0 | 1122 | 0.9098 | 0.6024 | 0.5581 | | 0.6654 | 24.9840 | 1168 | 1.0432 | 0.5422 | 0.5028 | | 0.5798 | 25.9893 | 1215 | 0.7369 | 0.6627 | 0.6553 | | 0.506 | 26.9947 | 1262 | 0.9057 | 0.6265 | 0.6236 | | 0.4638 | 28.0 | 1309 | 0.7950 | 0.6867 | 0.6644 | | 0.371 | 28.9840 | 1355 | 1.0368 | 0.6627 | 0.6473 | | 0.4721 | 29.9893 | 1402 | 0.8129 | 0.6747 | 0.6673 | | 0.54 | 30.9947 | 1449 | 1.0379 | 0.6627 | 0.6491 | | 0.3978 | 32.0 | 1496 | 1.3857 | 0.5904 | 0.5481 | | 0.3503 | 32.9840 | 1542 | 1.0920 | 0.6024 | 0.5847 | | 0.4407 | 33.9893 | 1589 | 1.1912 | 0.5904 | 0.5505 | | 0.3786 | 34.9947 | 1636 | 1.5071 | 0.6024 | 0.5915 | | 0.3482 | 36.0 | 1683 | 1.1161 | 0.6386 | 0.6240 | | 0.2695 | 36.9840 | 1729 | 1.2040 | 0.5904 | 0.5704 | | 0.2296 | 37.9893 | 1776 | 1.5781 | 0.5181 | 0.4691 | | 0.2922 | 38.9947 | 1823 | 1.3713 | 0.6024 | 0.5879 | | 0.1511 | 40.0 | 1870 | 1.1638 | 0.6506 | 0.6553 | | 0.2814 | 40.9840 | 1916 | 1.3384 | 0.6988 | 0.6939 | | 0.2196 | 41.9893 | 1963 | 1.2872 | 0.6506 | 0.6330 | | 0.2477 | 42.9947 | 2010 | 1.5322 | 0.6627 | 0.6375 | | 0.3296 | 44.0 | 2057 | 1.3479 | 0.6506 | 0.6353 | | 0.2015 | 44.9840 | 2103 
| 1.2521 | 0.6145 | 0.6044 | | 0.3476 | 45.9893 | 2150 | 1.2464 | 0.6747 | 0.6641 | | 0.189 | 46.9947 | 2197 | 1.4480 | 0.6506 | 0.6235 | | 0.1852 | 48.0 | 2244 | 1.3611 | 0.6747 | 0.6594 | | 0.2798 | 48.9840 | 2290 | 1.4427 | 0.6988 | 0.6957 | | 0.1523 | 49.9893 | 2337 | 1.3352 | 0.6506 | 0.6450 | | 0.1224 | 50.9947 | 2384 | 1.8088 | 0.6386 | 0.6201 | | 0.0926 | 52.0 | 2431 | 1.4695 | 0.6506 | 0.6296 | | 0.2071 | 52.9840 | 2477 | 1.4673 | 0.6867 | 0.6806 | | 0.1063 | 53.9893 | 2524 | 1.4862 | 0.7108 | 0.6975 | | 0.1831 | 54.9947 | 2571 | 1.4666 | 0.6506 | 0.6161 | | 0.158 | 56.0 | 2618 | 1.8832 | 0.6988 | 0.6673 | | 0.26 | 56.9840 | 2664 | 1.5855 | 0.6386 | 0.5986 | | 0.1697 | 57.9893 | 2711 | 1.2184 | 0.7470 | 0.7434 | | 0.2024 | 58.9947 | 2758 | 1.3524 | 0.6867 | 0.6682 | | 0.2495 | 60.0 | 2805 | 1.7523 | 0.6627 | 0.6427 | | 0.1247 | 60.9840 | 2851 | 1.7007 | 0.6506 | 0.6372 | | 0.1436 | 61.9893 | 2898 | 1.9171 | 0.6386 | 0.6120 | | 0.1438 | 62.9947 | 2945 | 1.8998 | 0.6265 | 0.5897 | | 0.1137 | 64.0 | 2992 | 2.4028 | 0.5904 | 0.5498 | | 0.1619 | 64.9840 | 3038 | 1.7087 | 0.7470 | 0.7473 | | 0.1105 | 65.9893 | 3085 | 1.6545 | 0.6988 | 0.6975 | | 0.1597 | 66.9947 | 3132 | 1.8024 | 0.6747 | 0.6758 | | 0.0338 | 68.0 | 3179 | 1.8962 | 0.6747 | 0.6706 | | 0.1184 | 68.9840 | 3225 | 2.1642 | 0.7108 | 0.7102 | | 0.0878 | 69.9893 | 3272 | 2.0974 | 0.6506 | 0.6610 | | 0.0963 | 70.9947 | 3319 | 1.8719 | 0.7108 | 0.7162 | | 0.0827 | 72.0 | 3366 | 1.7538 | 0.6988 | 0.7000 | | 0.0933 | 72.9840 | 3412 | 1.9357 | 0.6988 | 0.6988 | | 0.0593 | 73.9893 | 3459 | 1.9924 | 0.6506 | 0.6420 | | 0.0423 | 74.9947 | 3506 | 2.2029 | 0.6627 | 0.6702 | | 0.0311 | 76.0 | 3553 | 1.9236 | 0.7108 | 0.7155 | | 0.1881 | 76.9840 | 3599 | 1.9606 | 0.6747 | 0.6787 | | 0.0566 | 77.9893 | 3646 | 2.1122 | 0.6265 | 0.6206 | | 0.0266 | 78.9947 | 3693 | 2.1469 | 0.6506 | 0.6536 | | 0.1015 | 80.0 | 3740 | 2.0335 | 0.6506 | 0.6587 | | 0.1083 | 80.9840 | 3786 | 2.2123 | 0.6506 | 0.6509 | | 0.0161 | 81.9893 | 3833 | 2.3094 | 0.6988 | 0.7064 | | 0.0194 | 82.9947 | 3880 | 2.3315 | 0.6145 | 0.6101 | | 0.113 | 84.0 | 3927 | 2.5276 | 0.6867 | 0.6908 | | 0.0653 | 84.9840 | 3973 | 2.0321 | 0.6265 | 0.6263 | | 0.0684 | 85.9893 | 4020 | 2.0302 | 0.6627 | 0.6706 | | 0.1724 | 86.9947 | 4067 | 2.5865 | 0.5904 | 0.5860 | | 0.028 | 88.0 | 4114 | 2.3814 | 0.5904 | 0.5804 | | 0.0528 | 88.9840 | 4160 | 2.2804 | 0.6386 | 0.6410 | | 0.0341 | 89.9893 | 4207 | 2.0635 | 0.5783 | 0.5736 | | 0.0074 | 90.9947 | 4254 | 2.3491 | 0.6024 | 0.5993 | | 0.0165 | 92.0 | 4301 | 2.2152 | 0.6145 | 0.6036 | | 0.0157 | 92.9840 | 4347 | 2.3380 | 0.6145 | 0.6036 | | 0.0544 | 93.9893 | 4394 | 2.3319 | 0.6265 | 0.6221 | | 0.0577 | 94.9947 | 4441 | 2.2671 | 0.6265 | 0.6221 | | 0.1516 | 96.0 | 4488 | 2.2034 | 0.6265 | 0.6204 | | 0.0318 | 96.9840 | 4534 | 2.1932 | 0.6265 | 0.6204 | | 0.043 | 97.9893 | 4581 | 2.2178 | 0.6265 | 0.6204 | | 0.0099 | 98.3957 | 4600 | 2.2129 | 0.6386 | 0.6328 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
[ "lglo", "mglo", "hglo", "uglo", "mgso", "lgso", "hgso", "mws", "mwl" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-002
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-002 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0232 - Accuracy: 0.9787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.0832 | 4.4444 | 10 | 0.4421 | 0.9149 | | 0.1422 | 8.8889 | 20 | 0.0481 | 1.0 | | 0.0055 | 13.3333 | 30 | 0.0213 | 1.0 | | 0.0007 | 17.7778 | 40 | 0.0223 | 0.9787 | | 0.0003 | 22.2222 | 50 | 0.0205 | 0.9787 | | 0.0002 | 26.6667 | 60 | 0.0223 | 0.9787 | | 0.0002 | 31.1111 | 70 | 0.0232 | 0.9787 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-004
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-004 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0309 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.2665 | 4.4444 | 10 | 0.5981 | 0.8936 | | 0.4646 | 8.8889 | 20 | 0.1825 | 0.9787 | | 0.135 | 13.3333 | 30 | 0.0772 | 0.9574 | | 0.0486 | 17.7778 | 40 | 0.0660 | 0.9574 | | 0.0254 | 22.2222 | 50 | 0.0434 | 0.9574 | | 0.0108 | 26.6667 | 60 | 0.0309 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
masafresh/swin-transformer3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-transformer3 This model is a fine-tuned version of [microsoft/swin-large-patch4-window12-384](https://huggingface.co/microsoft/swin-large-patch4-window12-384) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.1081 - Accuracy: 0.5667 - F1: 0.5667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 1.6659 | 0.9925 | 33 | 1.0639 | 0.6333 | 0.6250 | | 0.7561 | 1.9850 | 66 | 0.7258 | 0.5167 | 0.3520 | | 0.7106 | 2.9774 | 99 | 0.7334 | 0.5 | 0.3755 | | 0.6749 | 4.0 | 133 | 0.7088 | 0.4833 | 0.3661 | | 0.751 | 4.9925 | 166 | 0.7356 | 0.4833 | 0.3661 | | 0.7146 | 5.9850 | 199 | 0.7837 | 0.4833 | 0.3150 | | 0.6699 | 6.9774 | 232 | 0.7569 | 0.4833 | 0.3424 | | 0.6521 | 8.0 | 266 | 0.7255 | 0.5333 | 0.4674 | | 0.6885 | 8.9925 | 299 | 0.7253 | 0.5167 | 0.4070 | | 0.6407 | 9.9850 | 332 | 0.6506 | 0.6 | 0.5909 | | 0.6436 | 10.9774 | 365 | 0.6720 | 0.55 | 0.4442 | | 0.7865 | 12.0 | 399 | 0.6606 | 0.55 | 0.4792 | | 0.7191 | 12.9925 | 432 | 0.6407 | 0.65 | 0.6466 | | 0.5889 | 13.9850 | 465 | 0.8008 | 0.4833 | 0.3619 | | 0.5489 | 14.9774 | 498 | 0.7298 | 0.5333 | 0.4674 | | 0.596 | 16.0 | 532 | 0.7465 | 0.6667 | 0.6591 | | 0.6136 | 16.9925 | 565 | 0.9118 | 0.5333 | 0.4692 | | 0.5961 | 17.9850 | 598 | 0.6902 | 0.65 | 0.6298 | | 0.6327 | 18.9774 | 631 | 0.8260 | 0.5667 | 0.5190 | | 0.6518 | 20.0 | 665 | 0.6919 | 0.5833 | 0.5715 | | 0.5551 | 20.9925 | 698 | 1.1780 | 0.55 | 0.516 | | 0.511 | 21.9850 | 731 | 0.7414 | 0.6 | 0.6 | | 0.4749 | 22.9774 | 764 | 0.7978 | 0.6167 | 0.6129 | | 0.4607 | 24.0 | 798 | 0.8087 | 0.55 | 0.5420 | | 0.5837 | 24.9925 | 831 | 0.8271 | 0.5667 | 0.5456 | | 0.4608 | 25.9850 | 864 | 0.8539 | 0.6 | 0.5863 | | 0.536 | 26.9774 | 897 | 0.9802 | 0.5333 | 0.5026 | | 0.4225 | 28.0 | 931 | 0.9275 | 0.6 | 0.5910 | | 0.4325 | 28.9925 | 964 | 0.8834 | 0.6167 | 0.6099 | | 0.4874 | 29.9850 | 997 | 0.8721 | 0.6167 | 0.6168 | | 0.4165 | 30.9774 | 1030 | 1.0360 | 0.6167 | 0.6163 | | 0.4773 | 32.0 | 1064 | 1.2210 | 0.5833 | 0.5759 | | 0.3756 | 32.9925 | 1097 | 1.1291 | 0.5833 | 0.5830 | | 0.636 | 33.9850 | 1130 | 1.0178 | 0.5833 | 0.5830 | | 0.5474 | 34.9774 | 1163 | 0.9479 | 0.5667 | 0.5608 | | 0.3462 | 36.0 | 1197 | 0.9585 | 0.6167 | 0.6163 | | 0.3057 | 36.9925 | 1230 | 1.2014 | 0.6167 | 0.6163 | | 0.2304 | 37.9850 | 1263 | 1.1975 | 0.6333 | 0.6333 | | 0.2628 | 38.9774 | 1296 | 1.5224 | 0.5833 | 0.5793 | | 0.3774 | 40.0 | 1330 | 1.2903 | 0.5667 | 0.5516 | | 0.2604 | 40.9925 | 1363 | 1.4082 | 0.5667 | 0.5608 | | 0.2522 | 41.9850 | 1396 | 1.1783 | 0.6167 | 0.6163 | | 0.1925 | 42.9774 | 1429 | 1.3613 | 0.6167 | 0.6163 | | 0.3436 | 44.0 | 1463 | 1.6383 | 0.5333 | 0.5173 | | 0.1955 | 44.9925 | 1496 | 1.8947 | 0.5 | 0.4829 | | 0.2206 | 
45.9850 | 1529 | 1.4390 | 0.6 | 0.6 | | 0.1912 | 46.9774 | 1562 | 1.5288 | 0.65 | 0.6400 | | 0.2794 | 48.0 | 1596 | 1.7393 | 0.55 | 0.5420 | | 0.3166 | 48.9925 | 1629 | 2.0414 | 0.5667 | 0.5608 | | 0.173 | 49.9850 | 1662 | 1.6377 | 0.6 | 0.5991 | | 0.1375 | 50.9774 | 1695 | 1.6228 | 0.6 | 0.6 | | 0.2659 | 52.0 | 1729 | 1.6452 | 0.6333 | 0.6333 | | 0.2045 | 52.9925 | 1762 | 1.9706 | 0.5667 | 0.5608 | | 0.1081 | 53.9850 | 1795 | 1.9546 | 0.6167 | 0.6009 | | 0.1782 | 54.9774 | 1828 | 2.1268 | 0.5667 | 0.5608 | | 0.244 | 56.0 | 1862 | 1.8301 | 0.6167 | 0.6098 | | 0.1783 | 56.9925 | 1895 | 2.5808 | 0.5667 | 0.5071 | | 0.2429 | 57.9850 | 1928 | 2.1214 | 0.6167 | 0.6059 | | 0.2 | 58.9774 | 1961 | 2.2282 | 0.5667 | 0.5657 | | 0.1646 | 60.0 | 1995 | 2.3272 | 0.5833 | 0.5662 | | 0.1663 | 60.9925 | 2028 | 2.4723 | 0.5333 | 0.5323 | | 0.1935 | 61.9850 | 2061 | 2.3384 | 0.6 | 0.5973 | | 0.2079 | 62.9774 | 2094 | 1.9271 | 0.5833 | 0.5830 | | 0.1797 | 64.0 | 2128 | 1.8707 | 0.6167 | 0.6151 | | 0.173 | 64.9925 | 2161 | 2.6292 | 0.5167 | 0.5031 | | 0.1815 | 65.9850 | 2194 | 2.6567 | 0.6 | 0.5973 | | 0.0665 | 66.9774 | 2227 | 3.2104 | 0.5167 | 0.5031 | | 0.1084 | 68.0 | 2261 | 3.6692 | 0.5333 | 0.5228 | | 0.1298 | 68.9925 | 2294 | 3.4104 | 0.55 | 0.5373 | | 0.1338 | 69.9850 | 2327 | 2.8215 | 0.6 | 0.5973 | | 0.0795 | 70.9774 | 2360 | 2.9208 | 0.5833 | 0.5830 | | 0.1138 | 72.0 | 2394 | 3.4277 | 0.5333 | 0.5302 | | 0.1644 | 72.9925 | 2427 | 2.8141 | 0.5833 | 0.5830 | | 0.1659 | 73.9850 | 2460 | 2.8723 | 0.6 | 0.6 | | 0.0453 | 74.9774 | 2493 | 2.8769 | 0.6333 | 0.6309 | | 0.0956 | 76.0 | 2527 | 3.2970 | 0.6167 | 0.6098 | | 0.1581 | 76.9925 | 2560 | 3.6672 | 0.5833 | 0.5816 | | 0.157 | 77.9850 | 2593 | 3.5317 | 0.55 | 0.5501 | | 0.0662 | 78.9774 | 2626 | 3.9003 | 0.55 | 0.5456 | | 0.1954 | 80.0 | 2660 | 3.3000 | 0.5833 | 0.5834 | | 0.0527 | 80.9925 | 2693 | 3.9596 | 0.5667 | 0.5638 | | 0.1578 | 81.9850 | 2726 | 3.6724 | 0.55 | 0.5481 | | 0.0737 | 82.9774 | 2759 | 4.0222 | 0.5167 | 0.5119 | | 0.0617 | 84.0 | 2793 | 3.5510 | 0.5833 | 0.5834 | | 0.0531 | 84.9925 | 2826 | 3.5110 | 0.6 | 0.6 | | 0.0993 | 85.9850 | 2859 | 4.0699 | 0.55 | 0.5481 | | 0.1545 | 86.9774 | 2892 | 3.6860 | 0.5667 | 0.5667 | | 0.0554 | 88.0 | 2926 | 3.4409 | 0.6 | 0.6 | | 0.0641 | 88.9925 | 2959 | 3.8304 | 0.55 | 0.5496 | | 0.0633 | 89.9850 | 2992 | 4.0899 | 0.55 | 0.5456 | | 0.0991 | 90.9774 | 3025 | 3.7344 | 0.6 | 0.6 | | 0.0772 | 92.0 | 3059 | 3.8448 | 0.6 | 0.5991 | | 0.0646 | 92.9925 | 3092 | 3.7794 | 0.6 | 0.5991 | | 0.0562 | 93.9850 | 3125 | 3.9340 | 0.5833 | 0.5830 | | 0.0475 | 94.9774 | 3158 | 4.2388 | 0.55 | 0.5481 | | 0.0715 | 96.0 | 3192 | 4.2732 | 0.5333 | 0.5302 | | 0.0875 | 96.9925 | 3225 | 4.1521 | 0.5667 | 0.5657 | | 0.0253 | 97.9850 | 3258 | 4.0813 | 0.5667 | 0.5667 | | 0.1037 | 98.9774 | 3291 | 4.1074 | 0.5667 | 0.5667 | | 0.1094 | 99.2481 | 3300 | 4.1081 | 0.5667 | 0.5667 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
[ "lglo", "mglo", "hglo", "uglo", "mgso", "lgso", "hgso", "mws", "mwl" ]
theofilusdf/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7049 - Accuracy: 0.3875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 1.9292 | 0.2625 | | No log | 2.0 | 80 | 1.7516 | 0.3187 | | No log | 3.0 | 120 | 1.7049 | 0.3875 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-006
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-006 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1766 - Accuracy: 0.94 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.1261 | 4.4444 | 10 | 0.5915 | 0.84 | | 0.3737 | 8.8889 | 20 | 0.1990 | 0.94 | | 0.1009 | 13.3333 | 30 | 0.1418 | 0.94 | | 0.0351 | 17.7778 | 40 | 0.1632 | 0.94 | | 0.02 | 22.2222 | 50 | 0.1713 | 0.94 | | 0.0117 | 26.6667 | 60 | 0.1766 | 0.94 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
haywoodsloan/ai-image-detector-deploy
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.09541802108287811 f1: 0.9853826686447335 precision: 0.9808886765408504 recall: 0.9899180291938807 auc: 0.9957081876919603 accuracy: 0.9794339738473816
[ "artificial", "real" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-009
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-009 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0997 - Accuracy: 0.96 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.1702 | 4.4444 | 10 | 0.5499 | 0.94 | | 0.1925 | 8.8889 | 20 | 0.1645 | 0.94 | | 0.0112 | 13.3333 | 30 | 0.0997 | 0.96 | | 0.0011 | 17.7778 | 40 | 0.1255 | 0.96 | | 0.0005 | 22.2222 | 50 | 0.1313 | 0.96 | | 0.0004 | 26.6667 | 60 | 0.1238 | 0.96 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
masafresh/vit-transformer3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-transformer3 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8890 - Accuracy: 0.6833 - F1: 0.6840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 1.8752 | 0.9552 | 16 | 0.9886 | 0.6667 | 0.6484 | | 0.7728 | 1.9701 | 33 | 0.6862 | 0.5667 | 0.4099 | | 0.7065 | 2.9851 | 50 | 0.6627 | 0.6333 | 0.6132 | | 0.6845 | 4.0 | 67 | 0.7065 | 0.55 | 0.4922 | | 0.6513 | 4.9552 | 83 | 0.7202 | 0.4667 | 0.3905 | | 0.6567 | 5.9701 | 100 | 0.7677 | 0.5333 | 0.4667 | | 0.6539 | 6.9851 | 117 | 0.6269 | 0.6167 | 0.6047 | | 0.7025 | 8.0 | 134 | 0.6838 | 0.65 | 0.6107 | | 0.6698 | 8.9552 | 150 | 0.6313 | 0.6667 | 0.6337 | | 0.6986 | 9.9701 | 167 | 0.6200 | 0.6667 | 0.6484 | | 0.6811 | 10.9851 | 184 | 0.5869 | 0.6833 | 0.6840 | | 0.6132 | 12.0 | 201 | 0.5881 | 0.6833 | 0.6687 | | 0.7235 | 12.9552 | 217 | 0.5732 | 0.65 | 0.6274 | | 0.5768 | 13.9701 | 234 | 0.5802 | 0.6833 | 0.6825 | | 0.5307 | 14.9851 | 251 | 0.6610 | 0.7 | 0.7010 | | 0.552 | 16.0 | 268 | 0.6229 | 0.7333 | 0.7296 | | 0.5548 | 16.9552 | 284 | 0.6186 | 0.7167 | 0.7036 | | 0.4863 | 17.9701 | 301 | 0.8409 | 0.5667 | 0.5366 | | 0.5048 | 18.9851 | 318 | 1.0019 | 0.4833 | 0.4015 | | 0.4919 | 20.0 | 335 | 0.6475 | 0.7333 | 0.7333 | | 0.4788 | 20.9552 | 351 | 0.6931 | 0.6333 | 0.6282 | | 0.5076 | 21.9701 | 368 | 0.6798 | 0.7 | 0.6983 | | 0.5047 | 22.9851 | 385 | 0.6784 | 0.7 | 0.7 | | 0.3477 | 24.0 | 402 | 0.8261 | 0.7 | 0.6983 | | 0.4508 | 24.9552 | 418 | 0.6846 | 0.6833 | 0.6825 | | 0.4948 | 25.9701 | 435 | 0.7509 | 0.6833 | 0.6804 | | 0.3661 | 26.9851 | 452 | 0.7321 | 0.6667 | 0.6678 | | 0.3072 | 28.0 | 469 | 0.8338 | 0.6833 | 0.6839 | | 0.3573 | 28.9552 | 485 | 0.9031 | 0.65 | 0.6434 | | 0.3828 | 29.9701 | 502 | 0.8582 | 0.6667 | 0.6667 | | 0.2931 | 30.9851 | 519 | 0.7648 | 0.65 | 0.6515 | | 0.3193 | 32.0 | 536 | 0.9218 | 0.6333 | 0.6333 | | 0.2783 | 32.9552 | 552 | 0.8452 | 0.7 | 0.7013 | | 0.2816 | 33.9701 | 569 | 0.8310 | 0.6833 | 0.6735 | | 0.3018 | 34.9851 | 586 | 0.8437 | 0.7 | 0.6960 | | 0.2256 | 36.0 | 603 | 1.0684 | 0.65 | 0.6507 | | 0.2609 | 36.9552 | 619 | 0.9117 | 0.65 | 0.6491 | | 0.2198 | 37.9701 | 636 | 1.1688 | 0.5833 | 0.5652 | | 0.306 | 38.9851 | 653 | 0.9001 | 0.6167 | 0.6130 | | 0.2243 | 40.0 | 670 | 1.2253 | 0.6333 | 0.6313 | | 0.3482 | 40.9552 | 686 | 1.0028 | 0.65 | 0.6491 | | 0.196 | 41.9701 | 703 | 0.8747 | 0.6667 | 0.6682 | | 0.2261 | 42.9851 | 720 | 1.3642 | 0.65 | 0.6468 | | 0.2802 | 44.0 | 737 | 1.3271 | 0.5833 | 0.5704 | | 0.1965 | 44.9552 | 753 | 1.3784 | 0.6 | 0.6018 | | 0.2198 | 45.9701 | 770 | 1.3224 | 0.6667 | 
0.6682 | | 0.1852 | 46.9851 | 787 | 1.5364 | 0.6333 | 0.6243 | | 0.197 | 48.0 | 804 | 1.5706 | 0.6167 | 0.6174 | | 0.1932 | 48.9552 | 820 | 1.3610 | 0.6667 | 0.6648 | | 0.1495 | 49.9701 | 837 | 1.4687 | 0.6167 | 0.6174 | | 0.1404 | 50.9851 | 854 | 1.3438 | 0.7 | 0.6983 | | 0.1275 | 52.0 | 871 | 1.4674 | 0.6 | 0.5978 | | 0.1545 | 52.9552 | 887 | 1.3120 | 0.6167 | 0.6183 | | 0.147 | 53.9701 | 904 | 1.5816 | 0.6167 | 0.6183 | | 0.1541 | 54.9851 | 921 | 1.5117 | 0.6667 | 0.6678 | | 0.1283 | 56.0 | 938 | 1.5965 | 0.6667 | 0.6678 | | 0.1715 | 56.9552 | 954 | 1.6750 | 0.65 | 0.6491 | | 0.1513 | 57.9701 | 971 | 1.9170 | 0.5333 | 0.5164 | | 0.2349 | 58.9851 | 988 | 1.5358 | 0.6333 | 0.6346 | | 0.1248 | 60.0 | 1005 | 1.6686 | 0.6833 | 0.6840 | | 0.1076 | 60.9552 | 1021 | 1.7018 | 0.6333 | 0.6346 | | 0.1431 | 61.9701 | 1038 | 1.9088 | 0.6333 | 0.6333 | | 0.0838 | 62.9851 | 1055 | 1.8821 | 0.6333 | 0.6346 | | 0.0989 | 64.0 | 1072 | 1.6053 | 0.65 | 0.6491 | | 0.1323 | 64.9552 | 1088 | 1.7114 | 0.6333 | 0.6312 | | 0.0908 | 65.9701 | 1105 | 1.7326 | 0.65 | 0.6491 | | 0.2056 | 66.9851 | 1122 | 1.7166 | 0.6167 | 0.6130 | | 0.0752 | 68.0 | 1139 | 1.8009 | 0.65 | 0.6467 | | 0.1116 | 68.9552 | 1155 | 1.6964 | 0.6667 | 0.6678 | | 0.0821 | 69.9701 | 1172 | 1.7557 | 0.6167 | 0.6094 | | 0.1284 | 70.9851 | 1189 | 1.8039 | 0.65 | 0.6491 | | 0.1905 | 72.0 | 1206 | 1.7951 | 0.6167 | 0.6094 | | 0.1031 | 72.9552 | 1222 | 1.6888 | 0.6667 | 0.6648 | | 0.0706 | 73.9701 | 1239 | 1.8992 | 0.65 | 0.6467 | | 0.0944 | 74.9851 | 1256 | 1.6965 | 0.6833 | 0.6840 | | 0.1042 | 76.0 | 1273 | 1.6756 | 0.6833 | 0.6825 | | 0.1599 | 76.9552 | 1289 | 1.4360 | 0.7333 | 0.7342 | | 0.0896 | 77.9701 | 1306 | 1.5759 | 0.65 | 0.6467 | | 0.0674 | 78.9851 | 1323 | 1.7071 | 0.7 | 0.7010 | | 0.1133 | 80.0 | 1340 | 1.6499 | 0.6833 | 0.6840 | | 0.0506 | 80.9552 | 1356 | 1.6546 | 0.6833 | 0.6825 | | 0.1015 | 81.9701 | 1373 | 1.6468 | 0.7 | 0.7013 | | 0.0923 | 82.9851 | 1390 | 1.8567 | 0.6667 | 0.6622 | | 0.0752 | 84.0 | 1407 | 1.8140 | 0.7 | 0.7010 | | 0.0768 | 84.9552 | 1423 | 1.8225 | 0.6667 | 0.6678 | | 0.0683 | 85.9701 | 1440 | 1.8094 | 0.6833 | 0.6840 | | 0.0454 | 86.9851 | 1457 | 1.8892 | 0.65 | 0.6491 | | 0.054 | 88.0 | 1474 | 1.8180 | 0.7 | 0.7010 | | 0.0449 | 88.9552 | 1490 | 1.7891 | 0.7333 | 0.7345 | | 0.0645 | 89.9701 | 1507 | 1.8262 | 0.7 | 0.7010 | | 0.0632 | 90.9851 | 1524 | 1.8187 | 0.7167 | 0.7179 | | 0.0795 | 92.0 | 1541 | 1.7941 | 0.7333 | 0.7345 | | 0.0923 | 92.9552 | 1557 | 1.8340 | 0.6833 | 0.6840 | | 0.0486 | 93.9701 | 1574 | 1.8843 | 0.6667 | 0.6667 | | 0.0821 | 94.9851 | 1591 | 1.8907 | 0.6667 | 0.6667 | | 0.0384 | 95.5224 | 1600 | 1.8890 | 0.6833 | 0.6840 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
[ "lglo", "mglo", "hglo", "uglo", "mgso", "lgso", "hgso", "mws", "mwl" ]
SABR22/food_models
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # food_models This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7386 - Accuracy: 0.8545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.7386 | 0.9994 | 1183 | 1.5360 | 0.7945 | | 1.0097 | 1.9998 | 2367 | 0.8811 | 0.8401 | | 0.8608 | 2.9985 | 3549 | 0.7386 | 0.8545 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Soponnnn/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Soponnnn/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3916 - Validation Loss: 0.3630 - Train Accuracy: 0.916 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.8264 | 1.7259 | 0.779 | 0 | | 1.2602 | 0.8512 | 0.871 | 1 | | 0.7141 | 0.5674 | 0.885 | 2 | | 0.5119 | 0.4395 | 0.908 | 3 | | 0.3916 | 0.3630 | 0.916 | 4 | ### Framework versions - Transformers 4.46.2 - TensorFlow 2.17.1 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
SABR22/ViT-threat-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT-threat-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on a threat classification dataset. This model was created for a Carleton University computer vision hacking event and serves as a proof of concept rather than a complete model. It is trained on an extremely small and limited dataset, so its performance is limited. It achieves the following results on the evaluation set: - Loss: 0.4568 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.328 | 0.9756 | 10 | 0.4556 | 0.875 | | 0.3226 | 1.9512 | 20 | 0.4736 | 0.75 | | 0.3619 | 2.9268 | 30 | 0.4568 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "non-threat", "threat" ]
cvmil/vit-base-patch16-224_rice-disease-02
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224_rice-disease-02_111724 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3312 - Accuracy: 0.9029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9444 | 1.0 | 423 | 1.3919 | 0.6420 | | 0.9896 | 2.0 | 846 | 0.7862 | 0.7838 | | 0.6372 | 3.0 | 1269 | 0.6040 | 0.8164 | | 0.5079 | 4.0 | 1692 | 0.5136 | 0.8450 | | 0.4377 | 5.0 | 2115 | 0.4580 | 0.8623 | | 0.3922 | 6.0 | 2538 | 0.4210 | 0.8769 | | 0.3608 | 7.0 | 2961 | 0.3966 | 0.8809 | | 0.3386 | 8.0 | 3384 | 0.3762 | 0.8882 | | 0.3207 | 9.0 | 3807 | 0.3641 | 0.8916 | | 0.3078 | 10.0 | 4230 | 0.3519 | 0.8935 | | 0.2975 | 11.0 | 4653 | 0.3441 | 0.8969 | | 0.2898 | 12.0 | 5076 | 0.3380 | 0.9009 | | 0.2845 | 13.0 | 5499 | 0.3341 | 0.9029 | | 0.2805 | 14.0 | 5922 | 0.3319 | 0.9035 | | 0.2786 | 15.0 | 6345 | 0.3312 | 0.9029 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "bacterial_leaf_blight", "brown_spot", "healthy", "leaf_blast", "leaf_scald", "narrow_brown_spot", "neck_blast", "rice_hispa", "sheath_blight", "tungro" ]
cvmil/beit-base-patch16-224_rice-disease-02
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224_rice-disease-02_111724 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1321 - Accuracy: 0.9574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.3364 | 1.0 | 845 | 0.5174 | 0.8430 | | 0.344 | 2.0 | 1690 | 0.2503 | 0.9222 | | 0.2076 | 3.0 | 2535 | 0.1983 | 0.9375 | | 0.1649 | 4.0 | 3380 | 0.1730 | 0.9468 | | 0.1443 | 5.0 | 4225 | 0.1581 | 0.9528 | | 0.1261 | 6.0 | 5070 | 0.1544 | 0.9554 | | 0.1166 | 7.0 | 5915 | 0.1498 | 0.9528 | | 0.1097 | 8.0 | 6760 | 0.1479 | 0.9554 | | 0.1017 | 9.0 | 7605 | 0.1477 | 0.9501 | | 0.1016 | 10.0 | 8450 | 0.1382 | 0.9561 | | 0.0946 | 11.0 | 9295 | 0.1362 | 0.9574 | | 0.0934 | 12.0 | 10140 | 0.1330 | 0.9587 | | 0.0903 | 13.0 | 10985 | 0.1330 | 0.9548 | | 0.0863 | 14.0 | 11830 | 0.1323 | 0.9568 | | 0.0877 | 15.0 | 12675 | 0.1321 | 0.9574 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "bacterial_leaf_blight", "brown_spot", "healthy", "leaf_blast", "leaf_scald", "narrow_brown_spot", "neck_blast", "rice_hispa", "sheath_blight", "tungro" ]
theofilusdf/emotion-classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion-classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9405 - Accuracy: 0.2938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 2.0322 | 0.2 | | No log | 2.0 | 80 | 1.9634 | 0.2562 | | No log | 3.0 | 120 | 1.9405 | 0.2938 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
cvmil/resnet-50_rice-disease-02
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50_rice-disease-02_111724 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6774 - Accuracy: 0.8044 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1567 | 1.0 | 212 | 1.9092 | 0.5476 | | 1.6124 | 2.0 | 424 | 1.3708 | 0.6773 | | 1.2221 | 3.0 | 636 | 1.1384 | 0.7186 | | 1.0356 | 4.0 | 848 | 0.9888 | 0.7339 | | 0.9297 | 5.0 | 1060 | 0.9108 | 0.7425 | | 0.8599 | 6.0 | 1272 | 0.8448 | 0.7538 | | 0.8082 | 7.0 | 1484 | 0.8129 | 0.7645 | | 0.7648 | 8.0 | 1696 | 0.7604 | 0.7864 | | 0.7368 | 9.0 | 1908 | 0.7597 | 0.7738 | | 0.7092 | 10.0 | 2120 | 0.7230 | 0.7884 | | 0.6928 | 11.0 | 2332 | 0.7014 | 0.7884 | | 0.6797 | 12.0 | 2544 | 0.6970 | 0.7917 | | 0.6686 | 13.0 | 2756 | 0.6933 | 0.8017 | | 0.6642 | 14.0 | 2968 | 0.6813 | 0.8024 | | 0.6601 | 15.0 | 3180 | 0.6774 | 0.8044 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "bacterial_leaf_blight", "brown_spot", "healthy", "leaf_blast", "leaf_scald", "narrow_brown_spot", "neck_blast", "rice_hispa", "sheath_blight", "tungro" ]
damelia/emotion_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3105 - Accuracy: 0.5188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0819 | 1.0 | 10 | 2.0549 | 0.2375 | | 2.0249 | 2.0 | 20 | 1.9696 | 0.3625 | | 1.8988 | 3.0 | 30 | 1.8123 | 0.3937 | | 1.7331 | 4.0 | 40 | 1.6707 | 0.4375 | | 1.5894 | 5.0 | 50 | 1.5504 | 0.4938 | | 1.4997 | 6.0 | 60 | 1.4963 | 0.5188 | | 1.424 | 7.0 | 70 | 1.4749 | 0.4688 | | 1.3576 | 8.0 | 80 | 1.4223 | 0.5125 | | 1.2986 | 9.0 | 90 | 1.3850 | 0.5312 | | 1.2358 | 10.0 | 100 | 1.3588 | 0.5375 | | 1.2052 | 11.0 | 110 | 1.3226 | 0.55 | | 1.1699 | 12.0 | 120 | 1.3446 | 0.525 | | 1.1334 | 13.0 | 130 | 1.3223 | 0.525 | | 1.1178 | 14.0 | 140 | 1.3089 | 0.575 | | 1.1062 | 15.0 | 150 | 1.2776 | 0.5625 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
tdhcuong/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0701 - Accuracy: 0.9737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2713 | 1.0 | 190 | 0.1596 | 0.9470 | | 0.1888 | 2.0 | 380 | 0.0995 | 0.9644 | | 0.1643 | 3.0 | 570 | 0.0701 | 0.9737 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
ArjTheHacker/vit_detection_of_retinology
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
Docty/Blood-Cell
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Blood-Cell This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9897 - Validation Loss: 1.9904 - Train Accuracy: 0.3905 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.9904 | 1.9904 | 0.3905 | 0 | | 1.9894 | 1.9904 | 0.3905 | 1 | | 1.9898 | 1.9904 | 0.3905 | 2 | | 1.9897 | 1.9904 | 0.3905 | 3 | | 1.9897 | 1.9904 | 0.3905 | 4 | | 1.9901 | 1.9904 | 0.3905 | 5 | | 1.9897 | 1.9904 | 0.3905 | 6 | | 1.9897 | 1.9904 | 0.3905 | 7 | | 1.9902 | 1.9904 | 0.3905 | 8 | | 1.9897 | 1.9904 | 0.3905 | 9 | ### Framework versions - Transformers 4.46.2 - TensorFlow 2.17.1 - Datasets 3.1.0 - Tokenizers 0.20.3
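The optimizer entry above is a serialized Keras config. As a rough illustration only, the same AdamWeightDecay-with-PolynomialDecay setup can be rebuilt with the `create_optimizer` helper from `transformers`; this is a minimal sketch using just the numbers from the card, and the surrounding Keras training loop is assumed rather than shown.

```python
# Minimal sketch: rebuild the optimizer described in the hyperparameters above.
# Only the numeric values come from the card; everything else is assumed.
from transformers import create_optimizer  # TF/Keras helper

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,            # PolynomialDecay initial_learning_rate
    num_train_steps=10,      # PolynomialDecay decay_steps
    num_warmup_steps=0,
    weight_decay_rate=0.01,  # AdamWeightDecay weight_decay_rate
)
```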
[ "monocyte", "ig", "neutrophil", "basophil", "lymphocyte", "erythroblast", "eosinophil", "platelet" ]
RenSurii/vit-base-patch16-224-in21k-finetuned-image-classification
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # RenSurii/vit-base-patch16-224-in21k-finetuned-image-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3712 - Train Accuracy: 0.9621 - Validation Loss: 0.3312 - Validation Accuracy: 0.9621 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': 1.0, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 2.0107 | 0.8548 | 1.5288 | 0.8548 | 0 | | 1.3538 | 0.9149 | 0.9913 | 0.9149 | 1 | | 0.9517 | 0.934 | 0.7421 | 0.9340 | 2 | | 0.6882 | 0.9467 | 0.5690 | 0.9467 | 3 | | 0.4999 | 0.9554 | 0.4264 | 0.9554 | 4 | | 0.3712 | 0.9621 | 0.3312 | 0.9621 | 5 | ### Framework versions - Transformers 4.47.0.dev0 - TensorFlow 2.18.0 - Datasets 3.1.0 - Tokenizers 0.20.3
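Since this card reports a Keras/TensorFlow run, a TensorFlow inference sketch may be the most natural way to try the checkpoint. This assumes the repository contains TF weights and the standard image-classification head; the digit image path is a placeholder.

```python
# Hedged TF inference sketch; the file name and TF-weight availability are assumptions.
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "RenSurii/vit-base-patch16-224-in21k-finetuned-image-classification"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("digit.png").convert("RGB")   # placeholder image of a handwritten digit
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```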
[ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" ]
masafresh/swin-transformer-class
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-transformer-class This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2549 - Accuracy: 0.4953 - F1: 0.4547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:--------:|:----:|:---------------:|:--------:|:------:| | 2.1381 | 0.9748 | 29 | 2.1103 | 0.2594 | 0.1420 | | 1.9462 | 1.9832 | 59 | 1.8963 | 0.2783 | 0.1481 | | 1.7299 | 2.9916 | 89 | 1.6978 | 0.3066 | 0.2504 | | 1.6406 | 4.0 | 119 | 1.5954 | 0.3585 | 0.3221 | | 1.5067 | 4.9748 | 148 | 1.5339 | 0.3915 | 0.3527 | | 1.4566 | 5.9832 | 178 | 1.4972 | 0.4151 | 0.3769 | | 1.4487 | 6.9916 | 208 | 1.4635 | 0.4387 | 0.3369 | | 1.4335 | 8.0 | 238 | 1.4377 | 0.4481 | 0.3958 | | 1.3974 | 8.9748 | 267 | 1.4213 | 0.4623 | 0.4066 | | 1.3542 | 9.9832 | 297 | 1.4004 | 0.4575 | 0.4090 | | 1.2964 | 10.9916 | 327 | 1.3880 | 0.4434 | 0.3832 | | 1.3073 | 12.0 | 357 | 1.3716 | 0.4906 | 0.4449 | | 1.3256 | 12.9748 | 386 | 1.3664 | 0.4528 | 0.4175 | | 1.2867 | 13.9832 | 416 | 1.3622 | 0.4434 | 0.4033 | | 1.3096 | 14.9916 | 446 | 1.3418 | 0.4764 | 0.4281 | | 1.3012 | 16.0 | 476 | 1.3321 | 0.4528 | 0.4161 | | 1.3086 | 16.9748 | 505 | 1.3248 | 0.4481 | 0.3578 | | 1.2646 | 17.9832 | 535 | 1.3164 | 0.4717 | 0.4269 | | 1.2647 | 18.9916 | 565 | 1.3140 | 0.4811 | 0.4394 | | 1.2673 | 20.0 | 595 | 1.3073 | 0.4670 | 0.4311 | | 1.2649 | 20.9748 | 624 | 1.2999 | 0.4906 | 0.4319 | | 1.2721 | 21.9832 | 654 | 1.3007 | 0.4764 | 0.4236 | | 1.317 | 22.9916 | 684 | 1.2982 | 0.4670 | 0.4167 | | 1.2397 | 24.0 | 714 | 1.3031 | 0.4623 | 0.4115 | | 1.209 | 24.9748 | 743 | 1.3075 | 0.4811 | 0.4379 | | 1.1994 | 25.9832 | 773 | 1.3091 | 0.4245 | 0.3765 | | 1.2695 | 26.9916 | 803 | 1.3017 | 0.4717 | 0.4362 | | 1.2167 | 28.0 | 833 | 1.2986 | 0.4575 | 0.4153 | | 1.234 | 28.9748 | 862 | 1.3082 | 0.4292 | 0.3773 | | 1.2726 | 29.9832 | 892 | 1.3003 | 0.4670 | 0.4238 | | 1.207 | 30.9916 | 922 | 1.2964 | 0.4670 | 0.4260 | | 1.1534 | 32.0 | 952 | 1.3059 | 0.4292 | 0.3727 | | 1.2477 | 32.9748 | 981 | 1.2924 | 0.4858 | 0.4397 | | 1.2202 | 33.9832 | 1011 | 1.2924 | 0.4623 | 0.3850 | | 1.2248 | 34.9916 | 1041 | 1.2969 | 0.4434 | 0.3680 | | 1.1775 | 36.0 | 1071 | 1.2848 | 0.4953 | 0.4485 | | 1.2401 | 36.9748 | 1100 | 1.2887 | 0.4575 | 0.4214 | | 1.2311 | 37.9832 | 1130 | 1.2838 | 0.4858 | 0.4420 | | 1.2143 | 38.9916 | 1160 | 1.2846 | 0.4906 | 0.4354 | | 1.1548 | 40.0 | 1190 | 1.2828 | 0.4481 | 0.4057 | | 1.1405 | 40.9748 | 1219 | 1.2878 | 0.4717 | 0.4356 | | 1.1957 | 41.9832 | 1249 | 1.2839 | 0.4528 | 0.4063 | | 1.211 | 42.9916 | 1279 | 1.2853 | 0.4670 | 0.4097 | | 1.1849 | 44.0 | 1309 | 1.2779 | 0.4811 | 0.4360 | | 1.1466 | 44.9748 | 1338 | 
1.2765 | 0.4764 | 0.4341 | | 1.1386 | 45.9832 | 1368 | 1.2836 | 0.4623 | 0.4184 | | 1.2258 | 46.9916 | 1398 | 1.2718 | 0.4717 | 0.4293 | | 1.2139 | 48.0 | 1428 | 1.2695 | 0.4906 | 0.4409 | | 1.1938 | 48.9748 | 1457 | 1.2737 | 0.4764 | 0.4385 | | 1.2171 | 49.9832 | 1487 | 1.2709 | 0.4670 | 0.4189 | | 1.1804 | 50.9916 | 1517 | 1.2657 | 0.4764 | 0.4327 | | 1.143 | 52.0 | 1547 | 1.2701 | 0.4764 | 0.4345 | | 1.1723 | 52.9748 | 1576 | 1.2783 | 0.4717 | 0.4152 | | 1.1454 | 53.9832 | 1606 | 1.2670 | 0.5047 | 0.4496 | | 1.1957 | 54.9916 | 1636 | 1.2709 | 0.4670 | 0.4211 | | 1.2383 | 56.0 | 1666 | 1.2752 | 0.4670 | 0.4136 | | 1.1935 | 56.9748 | 1695 | 1.2670 | 0.4623 | 0.4201 | | 1.159 | 57.9832 | 1725 | 1.2696 | 0.4717 | 0.4199 | | 1.2267 | 58.9916 | 1755 | 1.2676 | 0.4858 | 0.4404 | | 1.2047 | 60.0 | 1785 | 1.2659 | 0.4764 | 0.4336 | | 1.1168 | 60.9748 | 1814 | 1.2680 | 0.4953 | 0.4466 | | 1.2396 | 61.9832 | 1844 | 1.2741 | 0.4481 | 0.4045 | | 1.1193 | 62.9916 | 1874 | 1.2791 | 0.4623 | 0.4184 | | 1.1587 | 64.0 | 1904 | 1.2657 | 0.4858 | 0.4369 | | 1.1492 | 64.9748 | 1933 | 1.2736 | 0.4717 | 0.4367 | | 1.1303 | 65.9832 | 1963 | 1.2683 | 0.4811 | 0.4300 | | 1.1672 | 66.9916 | 1993 | 1.2683 | 0.4953 | 0.4494 | | 1.2035 | 68.0 | 2023 | 1.2667 | 0.4811 | 0.4447 | | 1.1494 | 68.9748 | 2052 | 1.2645 | 0.4858 | 0.4476 | | 1.1537 | 69.9832 | 2082 | 1.2714 | 0.4811 | 0.4434 | | 1.18 | 70.9916 | 2112 | 1.2701 | 0.4811 | 0.4344 | | 1.1386 | 72.0 | 2142 | 1.2688 | 0.4858 | 0.4440 | | 1.1757 | 72.9748 | 2171 | 1.2694 | 0.4906 | 0.4514 | | 1.1335 | 73.9832 | 2201 | 1.2712 | 0.4858 | 0.4419 | | 1.1669 | 74.9916 | 2231 | 1.2701 | 0.5094 | 0.4651 | | 1.1862 | 76.0 | 2261 | 1.2684 | 0.4764 | 0.4316 | | 1.1695 | 76.9748 | 2290 | 1.2642 | 0.4906 | 0.4509 | | 1.1317 | 77.9832 | 2320 | 1.2687 | 0.4811 | 0.4391 | | 1.2023 | 78.9916 | 2350 | 1.2647 | 0.5 | 0.4579 | | 1.1603 | 80.0 | 2380 | 1.2650 | 0.5 | 0.4596 | | 1.1461 | 80.9748 | 2409 | 1.2623 | 0.4811 | 0.4396 | | 1.1356 | 81.9832 | 2439 | 1.2621 | 0.4953 | 0.4449 | | 1.1646 | 82.9916 | 2469 | 1.2713 | 0.4953 | 0.4526 | | 1.152 | 84.0 | 2499 | 1.2661 | 0.5047 | 0.4632 | | 1.0999 | 84.9748 | 2528 | 1.2685 | 0.5047 | 0.4576 | | 1.1749 | 85.9832 | 2558 | 1.2716 | 0.4858 | 0.4459 | | 1.1823 | 86.9916 | 2588 | 1.2624 | 0.4906 | 0.4441 | | 1.1736 | 88.0 | 2618 | 1.2650 | 0.4811 | 0.4377 | | 1.1565 | 88.9748 | 2647 | 1.2667 | 0.4670 | 0.4226 | | 1.1565 | 89.9832 | 2677 | 1.2667 | 0.4953 | 0.4453 | | 1.192 | 90.9916 | 2707 | 1.2634 | 0.5047 | 0.4635 | | 1.1271 | 92.0 | 2737 | 1.2639 | 0.4764 | 0.4303 | | 1.19 | 92.9748 | 2766 | 1.2631 | 0.4858 | 0.4412 | | 1.1866 | 93.9832 | 2796 | 1.2616 | 0.4953 | 0.4555 | | 1.0829 | 94.9916 | 2826 | 1.2586 | 0.4953 | 0.4522 | | 1.1692 | 96.0 | 2856 | 1.2608 | 0.4906 | 0.4497 | | 1.1503 | 96.9748 | 2885 | 1.2607 | 0.4953 | 0.4551 | | 1.1263 | 97.9832 | 2915 | 1.2577 | 0.4953 | 0.4543 | | 1.2199 | 98.9916 | 2945 | 1.2570 | 0.5047 | 0.4601 | | 1.1347 | 100.0 | 2975 | 1.2555 | 0.4953 | 0.4503 | | 1.1583 | 100.9748 | 3004 | 1.2557 | 0.5 | 0.4592 | | 1.1697 | 101.9832 | 3034 | 1.2578 | 0.4858 | 0.4467 | | 1.1918 | 102.9916 | 3064 | 1.2572 | 0.5047 | 0.4598 | | 1.1959 | 104.0 | 3094 | 1.2563 | 0.5094 | 0.4649 | | 1.2032 | 104.9748 | 3123 | 1.2551 | 0.4906 | 0.4480 | | 1.2031 | 105.9832 | 3153 | 1.2552 | 0.4906 | 0.4491 | | 1.1565 | 106.9916 | 3183 | 1.2544 | 0.5142 | 0.4668 | | 1.1703 | 108.0 | 3213 | 1.2570 | 0.5 | 0.4598 | | 1.2085 | 108.9748 | 3242 | 1.2550 | 0.5094 | 0.4639 | | 1.1641 | 109.9832 | 3272 | 1.2578 | 0.4953 | 0.4551 | | 1.1846 
| 110.9916 | 3302 | 1.2579 | 0.4906 | 0.4510 | | 1.1989 | 112.0 | 3332 | 1.2560 | 0.5 | 0.4579 | | 1.111 | 112.9748 | 3361 | 1.2561 | 0.4953 | 0.4545 | | 1.1703 | 113.9832 | 3391 | 1.2561 | 0.5047 | 0.4567 | | 1.165 | 114.9916 | 3421 | 1.2567 | 0.5 | 0.4480 | | 1.1295 | 116.0 | 3451 | 1.2582 | 0.4953 | 0.4475 | | 1.1084 | 116.9748 | 3480 | 1.2574 | 0.5 | 0.4571 | | 1.1577 | 117.9832 | 3510 | 1.2573 | 0.5047 | 0.4617 | | 1.156 | 118.9916 | 3540 | 1.2565 | 0.4953 | 0.4559 | | 1.1491 | 120.0 | 3570 | 1.2564 | 0.5 | 0.4573 | | 1.1396 | 120.9748 | 3599 | 1.2572 | 0.5 | 0.4534 | | 1.1545 | 121.9832 | 3629 | 1.2565 | 0.5 | 0.4604 | | 1.1796 | 122.9916 | 3659 | 1.2563 | 0.5 | 0.4593 | | 1.2012 | 124.0 | 3689 | 1.2559 | 0.4858 | 0.4454 | | 1.1396 | 124.9748 | 3718 | 1.2567 | 0.4953 | 0.4555 | | 1.1999 | 125.9832 | 3748 | 1.2558 | 0.4858 | 0.4450 | | 1.1524 | 126.9916 | 3778 | 1.2569 | 0.4953 | 0.4554 | | 1.2299 | 128.0 | 3808 | 1.2560 | 0.4953 | 0.4525 | | 1.1548 | 128.9748 | 3837 | 1.2553 | 0.4764 | 0.4375 | | 1.1869 | 129.9832 | 3867 | 1.2554 | 0.4811 | 0.4426 | | 1.1891 | 130.9916 | 3897 | 1.2555 | 0.4811 | 0.4423 | | 1.1353 | 132.0 | 3927 | 1.2565 | 0.4953 | 0.4554 | | 1.1717 | 132.9748 | 3956 | 1.2569 | 0.5047 | 0.4643 | | 1.1536 | 133.9832 | 3986 | 1.2556 | 0.5 | 0.4574 | | 1.1667 | 134.9916 | 4016 | 1.2555 | 0.5 | 0.4594 | | 1.1633 | 136.0 | 4046 | 1.2550 | 0.4953 | 0.4551 | | 1.1646 | 136.9748 | 4075 | 1.2539 | 0.4858 | 0.4457 | | 1.1618 | 137.9832 | 4105 | 1.2540 | 0.5047 | 0.4594 | | 1.1581 | 138.9916 | 4135 | 1.2545 | 0.4858 | 0.4460 | | 1.117 | 140.0 | 4165 | 1.2549 | 0.4858 | 0.4457 | | 1.184 | 140.9748 | 4194 | 1.2552 | 0.4906 | 0.4504 | | 1.1323 | 141.9832 | 4224 | 1.2553 | 0.4906 | 0.4504 | | 1.1219 | 142.9916 | 4254 | 1.2550 | 0.4953 | 0.4547 | | 1.1478 | 144.0 | 4284 | 1.2550 | 0.4953 | 0.4547 | | 1.1177 | 144.9748 | 4313 | 1.2550 | 0.4953 | 0.4547 | | 1.1326 | 145.9832 | 4343 | 1.2549 | 0.4953 | 0.4547 | | 1.1392 | 146.2185 | 4350 | 1.2549 | 0.4953 | 0.4547 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.1
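The table above tracks both accuracy and F1. As a hedged sketch of how such metrics are typically wired into a Trainer run (the card does not state which F1 averaging was used, so macro averaging is an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Metric hook of the kind passed to Trainer(compute_metrics=...)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),  # averaging mode assumed
    }
```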
[ "lgso", "mwl", "hgso", "mws", "hglo", "lglo", "uglo", "mglo", "mgso" ]
SABR22/ViT-threat-classification-v2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT-threat-classification-v2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. This is a model created as a proof of concept for a Carleton University computer vision event. It is by no means meant to be used in deliverable systems in its current state, and should be used exclusively for research and development. It achieves the following results on the evaluation set: - Loss: 0.0381 - F1: 0.9657 - Precision: 0.9563 - Recall: 0.9752 ## Model description More information needed ## Intended uses & limitations More information needed ## Collaborators [Angus Bailey](https://huggingface.co/boshy) [Thomas Nolasque](https://github.com/thomasnol) ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | |:-------------:|:------:|:----:|:---------------:|:------:|:---------:|:------:| | 0.0744 | 0.9985 | 326 | 0.0576 | 0.9466 | 0.9738 | 0.9208 | | 0.0449 | 2.0 | 653 | 0.0397 | 0.9641 | 0.9747 | 0.9538 | | 0.0207 | 2.9985 | 979 | 0.0409 | 0.9647 | 0.9607 | 0.9686 | | 0.0342 | 4.0 | 1306 | 0.0382 | 0.9650 | 0.9518 | 0.9785 | | 0.0286 | 4.9923 | 1630 | 0.0381 | 0.9657 | 0.9563 | 0.9752 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
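For research use as described above, a minimal inference sketch with the `transformers` pipeline API could look like the following; the image path is a placeholder, and this assumes the checkpoint exposes the standard image-classification head.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="SABR22/ViT-threat-classification-v2")
print(classifier("example_frame.jpg"))  # placeholder image; returns e.g. [{'label': 'threat', 'score': ...}, ...]
```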
[ "non-threat", "threat" ]
cvmil/dinov2-base_rice-disease-02
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-base_rice-disease-02_111824 This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1191 - Accuracy: 0.9654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 1.4008 | 1.0 | 212 | 0.8303 | 0.5579 | | 0.331 | 2.0 | 424 | 0.9128 | 0.2795 | | 0.192 | 3.0 | 636 | 0.9368 | 0.2083 | | 0.1462 | 4.0 | 848 | 0.9488 | 0.1819 | | 0.1224 | 5.0 | 1060 | 0.9534 | 0.1633 | | 0.1067 | 6.0 | 1272 | 0.9521 | 0.1567 | | 0.0954 | 7.0 | 1484 | 0.9574 | 0.1431 | | 0.0879 | 8.0 | 1696 | 0.9594 | 0.1348 | | 0.0809 | 9.0 | 1908 | 0.9594 | 0.1325 | | 0.0759 | 10.0 | 2120 | 0.9634 | 0.1273 | | 0.0721 | 11.0 | 2332 | 0.9607 | 0.1264 | | 0.0688 | 12.0 | 2544 | 0.9621 | 0.1212 | | 0.0662 | 13.0 | 2756 | 0.9654 | 0.1199 | | 0.0645 | 14.0 | 2968 | 0.9601 | 0.1191 | | 0.0631 | 15.0 | 3180 | 0.9654 | 0.1191 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
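A hedged sketch of manual inference with the processor and model classes; it assumes the checkpoint exposes the usual image-classification head and `id2label` mapping, and the leaf image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "cvmil/dinov2-base_rice-disease-02"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("leaf.jpg").convert("RGB")    # placeholder rice-leaf photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "brown_spot"
```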
[ "bacterial_leaf_blight", "brown_spot", "healthy", "leaf_blast", "leaf_scald", "narrow_brown_spot", "neck_blast", "rice_hispa", "sheath_blight", "tungro" ]
nemik/frost-vision-v2-google_vit-base-patch16-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # frost-vision-v2-google_vit-base-patch16-224 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the webdataset dataset. It achieves the following results on the evaluation set: - Loss: 0.1562 - Accuracy: 0.9359 - F1: 0.8381 - Precision: 0.8896 - Recall: 0.7922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.3416 | 1.1494 | 100 | 0.3273 | 0.8771 | 0.6124 | 0.9005 | 0.4640 | | 0.2215 | 2.2989 | 200 | 0.2187 | 0.9183 | 0.7902 | 0.8537 | 0.7355 | | 0.1753 | 3.4483 | 300 | 0.1899 | 0.9238 | 0.8098 | 0.8472 | 0.7756 | | 0.1656 | 4.5977 | 400 | 0.1732 | 0.9272 | 0.8175 | 0.8606 | 0.7784 | | 0.1288 | 5.7471 | 500 | 0.1562 | 0.9359 | 0.8381 | 0.8896 | 0.7922 | | 0.1323 | 6.8966 | 600 | 0.1597 | 0.9322 | 0.8326 | 0.8609 | 0.8061 | | 0.1004 | 8.0460 | 700 | 0.1613 | 0.9316 | 0.8324 | 0.8542 | 0.8116 | | 0.0956 | 9.1954 | 800 | 0.1612 | 0.9336 | 0.8368 | 0.8620 | 0.8130 | | 0.0841 | 10.3448 | 900 | 0.1621 | 0.9345 | 0.8383 | 0.8669 | 0.8116 | | 0.0764 | 11.4943 | 1000 | 0.1586 | 0.9359 | 0.8438 | 0.8615 | 0.8269 | | 0.0726 | 12.6437 | 1100 | 0.1546 | 0.9420 | 0.8594 | 0.8729 | 0.8463 | | 0.0732 | 13.7931 | 1200 | 0.1529 | 0.9409 | 0.8565 | 0.87 | 0.8435 | | 0.0626 | 14.9425 | 1300 | 0.1589 | 0.9377 | 0.8485 | 0.8637 | 0.8338 | | 0.0481 | 16.0920 | 1400 | 0.1612 | 0.9394 | 0.8510 | 0.8767 | 0.8269 | | 0.0507 | 17.2414 | 1500 | 0.1679 | 0.9339 | 0.8394 | 0.8539 | 0.8255 | | 0.0446 | 18.3908 | 1600 | 0.1623 | 0.9417 | 0.8597 | 0.8664 | 0.8532 | | 0.0498 | 19.5402 | 1700 | 0.1625 | 0.9417 | 0.8601 | 0.8643 | 0.8560 | | 0.0458 | 20.6897 | 1800 | 0.1601 | 0.9397 | 0.8533 | 0.8693 | 0.8380 | | 0.0307 | 21.8391 | 1900 | 0.1626 | 0.9432 | 0.8637 | 0.8673 | 0.8601 | | 0.0334 | 22.9885 | 2000 | 0.1621 | 0.9443 | 0.8642 | 0.8829 | 0.8463 | | 0.0339 | 24.1379 | 2100 | 0.1680 | 0.9435 | 0.8645 | 0.8675 | 0.8615 | | 0.0222 | 25.2874 | 2200 | 0.1656 | 0.9394 | 0.8537 | 0.8628 | 0.8449 | | 0.026 | 26.4368 | 2300 | 0.1687 | 0.9386 | 0.8515 | 0.8612 | 0.8421 | | 0.0353 | 27.5862 | 2400 | 0.1666 | 0.9403 | 0.8555 | 0.8665 | 0.8449 | | 0.0294 | 28.7356 | 2500 | 0.1660 | 0.9429 | 0.8614 | 0.8755 | 0.8476 | | 0.0243 | 29.8851 | 2600 | 0.1664 | 0.9423 | 0.8590 | 0.8795 | 0.8393 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
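The hyperparameters listed above map fairly directly onto `TrainingArguments`. The following is a rough, partial reconstruction under stated assumptions: the output directory is a placeholder, and dataset loading, the model, and the metric function are omitted.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="frost-vision-v2",     # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,                        # "mixed_precision_training: Native AMP"
)
```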
[ "snowing", "raining", "sunny", "cloudy", "night", "snow_on_road", "partial_snow_on_road", "clear_pavement", "wet_pavement", "iced_lens" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-012
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-012 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.2851 - Accuracy: 0.94 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0001 | 4.4444 | 10 | 0.3101 | 0.94 | | 0.0 | 8.8889 | 20 | 0.2851 | 0.94 | | 0.0 | 13.3333 | 30 | 0.3880 | 0.96 | | 0.0 | 17.7778 | 40 | 0.3946 | 0.96 | | 0.0 | 22.2222 | 50 | 0.3829 | 0.96 | | 0.0 | 26.6667 | 60 | 0.3797 | 0.96 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-015
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-015 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.4824 - Accuracy: 0.96 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0 | 8.0 | 10 | 0.4951 | 0.96 | | 0.0 | 16.0 | 20 | 0.4893 | 0.96 | | 0.0 | 24.0 | 30 | 0.4824 | 0.96 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
dima806/car_models_image_detection
Returns car brand with about 84% accuracy given on an image. See https://www.kaggle.com/code/dima806/car-models-image-detection-vit for details. ``` Accuracy: 0.8410 F1 Score: 0.8372 Classification report: precision recall f1-score support Acura ILX 0.7004 0.8657 0.7743 216 Acura MDX 0.8211 0.7222 0.7685 216 Acura NSX 0.8434 0.7731 0.8068 216 Acura RDX 0.6456 0.8519 0.7345 216 Acura RLX 0.7159 0.8750 0.7875 216 Acura TLX 0.8125 0.9028 0.8553 216 Alfa Romeo 4C 0.9596 0.8796 0.9179 216 Alfa Romeo 4C Spider 0.9114 1.0000 0.9536 216 Alfa Romeo Giulia 0.9289 0.9676 0.9478 216 Alfa Romeo Stelvio 0.9721 0.9676 0.9698 216 Aston Martin DB11 0.9933 0.6898 0.8142 216 Aston Martin DBS 1.0000 0.6991 0.8229 216 Aston Martin Vanquish 0.9256 0.9213 0.9234 216 Aston Martin Vantage 0.6407 0.8791 0.7412 215 Audi A3 0.6429 0.6698 0.6560 215 Audi A4 0.6598 0.7395 0.6974 215 Audi A5 0.7440 0.7163 0.7299 215 Audi A6 0.6383 0.5556 0.5941 216 Audi A7 0.6611 0.7349 0.6960 215 Audi A8 0.6760 0.7860 0.7269 215 Audi Q3 0.9459 0.9767 0.9611 215 Audi Q5 0.7934 0.7860 0.7897 215 Audi Q7 0.8259 0.8565 0.8409 216 Audi Q8 0.9346 0.9302 0.9324 215 Audi R8 0.7215 0.7315 0.7264 216 Audi TT 0.6949 0.8791 0.7762 215 Audi e-tron 0.9908 1.0000 0.9954 216 BMW 2-Series 0.6548 0.5116 0.5744 215 BMW 3-Series 0.6575 0.6667 0.6621 216 BMW 4-Series 0.6411 0.7361 0.6853 216 BMW 5-Series 0.6224 0.4120 0.4958 216 BMW 6-Series 0.7765 0.6140 0.6857 215 BMW 7-Series 0.7195 0.7361 0.7277 216 BMW 8-Series 1.0000 0.8935 0.9438 216 BMW X1 0.8442 0.9070 0.8744 215 BMW X2 0.9231 1.0000 0.9600 216 BMW X3 0.7445 0.7824 0.7630 216 BMW X4 0.8700 0.8093 0.8386 215 BMW X5 0.7816 0.6326 0.6992 215 BMW X6 0.7137 0.7500 0.7314 216 BMW X7 0.9774 1.0000 0.9886 216 BMW Z4 0.8400 0.6837 0.7538 215 BMW i3 0.8729 0.9581 0.9135 215 BMW i8 0.8629 0.9907 0.9224 216 Bentley Bentayga 0.9591 0.9769 0.9679 216 Bentley Continental GT 0.7621 0.7269 0.7441 216 Bentley Flying Spur 0.7908 0.8750 0.8308 216 Bentley Mulsanne 0.8242 0.9769 0.8941 216 Buick Cascada 0.9770 0.9860 0.9815 215 Buick Enclave 0.7756 0.9120 0.8383 216 Buick Encore 0.8798 0.9491 0.9131 216 Buick Envision 0.8950 0.9861 0.9383 216 Buick Lacrosse 0.7005 0.6419 0.6699 215 Buick Regal 0.7939 0.6065 0.6877 216 Cadillac ATS 0.6867 0.7953 0.7371 215 Cadillac CT4 0.9908 1.0000 0.9954 216 Cadillac CT5 0.9908 1.0000 0.9954 216 Cadillac CT6 0.8472 0.8981 0.8719 216 Cadillac CTS 0.7337 0.6791 0.7053 215 Cadillac Escalade 0.8155 0.7814 0.7981 215 Cadillac XT4 1.0000 1.0000 1.0000 216 Cadillac XT5 0.9231 1.0000 0.9600 216 Cadillac XT6 0.9729 1.0000 0.9862 215 Cadillac XTS 0.8333 0.8565 0.8447 216 Chevrolet Blazer 0.9450 0.9537 0.9493 216 Chevrolet Bolt EV 1.0000 0.9769 0.9883 216 Chevrolet Camaro 0.7423 0.6698 0.7042 215 Chevrolet Colorado 0.7043 0.6093 0.6534 215 Chevrolet Corvette 0.8247 0.7407 0.7805 216 Chevrolet Cruze 0.7000 0.5833 0.6364 216 Chevrolet Equinox 0.7814 0.7814 0.7814 215 Chevrolet Impala 0.6955 0.9306 0.7960 216 Chevrolet Malibu 0.7562 0.5602 0.6436 216 Chevrolet Silverado 1500 0.6000 0.4167 0.4918 216 Chevrolet Silverado 2500HD 0.6494 0.7546 0.6981 216 Chevrolet Sonic 0.8925 0.8843 0.8884 216 Chevrolet Spark 0.8761 0.9209 0.8980 215 Chevrolet Suburban 0.8922 0.8426 0.8667 216 Chevrolet Tahoe 0.8914 0.9163 0.9037 215 Chevrolet TrailBlazer 0.9417 0.9722 0.9567 216 Chevrolet Traverse 0.8462 0.9167 0.8800 216 Chevrolet Trax 0.9381 0.9860 0.9615 215 Chevrolet Volt 0.7650 0.7721 0.7685 215 Chrysler 300 0.7261 0.8140 0.7675 215 Chrysler Pacifica 0.8233 0.8843 0.8527 216 Dodge Challenger 0.6932 0.8056 
0.7452 216 Dodge Charger 0.6435 0.6435 0.6435 216 Dodge Durango 0.8832 0.8750 0.8791 216 Dodge Grand Caravan 0.9676 0.9676 0.9676 216 Dodge Journey 0.8286 0.9442 0.8826 215 FIAT 124 Spider 0.9953 0.9767 0.9859 215 FIAT 500 0.7944 0.7870 0.7907 216 FIAT 500L 0.9725 0.9860 0.9792 215 FIAT 500X 0.9513 0.9954 0.9729 216 FIAT 500e 0.9512 0.9028 0.9264 216 Ferrari 488 GTB 0.9633 0.9722 0.9677 216 Ferrari GTC4Lusso 1.0000 1.0000 1.0000 216 Ferrari Portofino 1.0000 1.0000 1.0000 216 Ford Ecosport 0.9729 1.0000 0.9862 215 Ford Edge 0.8621 0.4630 0.6024 216 Ford Escape 0.8973 0.6065 0.7238 216 Ford Expedition 0.8646 0.7685 0.8137 216 Ford Explorer 0.8048 0.7860 0.7953 215 Ford F-150 0.6718 0.6093 0.6390 215 Ford Fiesta 0.7425 0.5741 0.6475 216 Ford Flex 0.8688 0.8889 0.8787 216 Ford Fusion 0.7571 0.7395 0.7482 215 Ford Mustang 0.6471 0.5093 0.5699 216 Ford Ranger 0.8861 0.8287 0.8565 216 Ford Super Duty F-250 0.7540 0.8698 0.8078 215 Ford Taurus 0.7108 0.8233 0.7629 215 Ford Transit Connect Wagon 0.9809 0.9535 0.9670 215 GMC Acadia 0.9272 0.8884 0.9074 215 GMC Canyon 0.7717 0.9074 0.8340 216 GMC Sierra 1500 0.5957 0.3889 0.4706 216 GMC Sierra 2500HD 0.7056 0.6435 0.6731 216 GMC Terrain 0.8878 0.8426 0.8646 216 GMC Yukon 0.9224 0.9395 0.9309 215 Genesis G70 0.9904 0.9628 0.9764 215 Genesis G80 0.9474 1.0000 0.9730 216 Genesis G90 0.8777 0.9349 0.9054 215 Honda Accord 0.8019 0.3935 0.5280 216 Honda CR-V 0.7714 0.7535 0.7624 215 Honda Civic 0.6837 0.3102 0.4268 216 Honda Clarity 0.7886 0.8981 0.8398 216 Honda Fit 0.7865 0.7023 0.7420 215 Honda HR-V 0.9244 0.9630 0.9433 216 Honda Insight 0.7238 0.8047 0.7621 215 Honda Odyssey 0.8643 0.8843 0.8741 216 Honda Passport 0.8898 0.9767 0.9313 215 Honda Pilot 0.8009 0.7860 0.7934 215 Honda Ridgeline 0.7760 0.8981 0.8326 216 Hyundai Accent 0.7577 0.7963 0.7765 216 Hyundai Elantra 0.6067 0.5023 0.5496 215 Hyundai Ioniq 0.8361 0.9256 0.8786 215 Hyundai Kona 0.9899 0.9120 0.9494 216 Hyundai Kona Electric 0.9188 1.0000 0.9577 215 Hyundai NEXO 1.0000 1.0000 1.0000 215 Hyundai Palisade 0.9515 1.0000 0.9752 216 Hyundai Santa Fe 0.8392 0.5581 0.6704 215 Hyundai Sonata 0.5817 0.5628 0.5721 215 Hyundai Tucson 0.9249 0.7442 0.8247 215 Hyundai Veloster 0.8249 0.8287 0.8268 216 Hyundai Venue 0.9774 1.0000 0.9886 216 INFINITI Q50 0.8725 0.8279 0.8496 215 INFINITI Q60 0.8565 0.9398 0.8962 216 INFINITI Q70 0.9450 0.9537 0.9493 216 INFINITI QX30 0.9908 1.0000 0.9954 216 INFINITI QX50 0.8445 0.9349 0.8874 215 INFINITI QX60 0.8919 0.9167 0.9041 216 INFINITI QX80 0.9159 0.9628 0.9388 215 Jaguar E-Pace 0.9818 1.0000 0.9908 216 Jaguar F-Pace 0.9798 0.8981 0.9372 216 Jaguar F-Type 0.8768 0.8279 0.8517 215 Jaguar I-Pace 0.8471 0.9535 0.8972 215 Jaguar XE 0.7984 0.9167 0.8534 216 Jaguar XF 0.7467 0.5209 0.6137 215 Jaguar XJ 0.7568 0.7778 0.7671 216 Jeep Cherokee 0.9122 0.8698 0.8905 215 Jeep Compass 0.8756 0.8837 0.8796 215 Jeep Gladiator 1.0000 1.0000 1.0000 216 Jeep Grand Cherokee 0.8950 0.8287 0.8606 216 Jeep Renegade 0.9816 0.9861 0.9838 216 Jeep Wrangler 0.9810 0.9583 0.9696 216 Kia Cadenza 0.8164 0.9721 0.8875 215 Kia Forte 0.5972 0.5860 0.5915 215 Kia K900 0.9149 1.0000 0.9556 215 Kia Niro 0.8077 0.9722 0.8824 216 Kia Optima 0.7009 0.7269 0.7136 216 Kia Rio 0.7089 0.6991 0.7040 216 Kia Sedona 0.8475 0.9259 0.8850 216 Kia Sorento 0.7299 0.7163 0.7230 215 Kia Soul 0.7432 0.8884 0.8093 215 Kia Soul EV 0.9498 0.9674 0.9585 215 Kia Sportage 0.9100 0.8889 0.8993 216 Kia Stinger 0.9862 1.0000 0.9931 215 Kia Telluride 0.9163 0.9674 0.9412 215 Lamborghini Aventador 1.0000 1.0000 1.0000 
215 Lamborghini Huracan 0.9488 0.9488 0.9488 215 Lamborghini Urus 0.9954 1.0000 0.9977 215 Land Rover Defender 0.9954 1.0000 0.9977 215 Land Rover Discovery 0.8793 0.9488 0.9128 215 Land Rover Discovery Sport 0.8723 0.9535 0.9111 215 Land Rover Range Rover 0.6016 0.7130 0.6525 216 Land Rover Range Rover Evoque 0.8807 0.8930 0.8868 215 Land Rover Range Rover Sport 0.7353 0.6944 0.7143 216 Land Rover Range Rover Velar 0.9770 0.9815 0.9792 216 Lexus ES 0.7277 0.7917 0.7583 216 Lexus GS 0.8247 0.7407 0.7805 216 Lexus GX 0.9177 0.9860 0.9507 215 Lexus IS 0.8095 0.7907 0.8000 215 Lexus LC 0.9685 1.0000 0.9840 215 Lexus LS 0.8419 0.8419 0.8419 215 Lexus LX 0.8750 0.8102 0.8413 216 Lexus NX 0.8846 0.9628 0.9220 215 Lexus RC 0.8211 0.8287 0.8249 216 Lexus RX 0.7611 0.7963 0.7783 216 Lexus UX 0.9513 1.0000 0.9751 215 Lincoln Aviator 0.9183 0.8884 0.9031 215 Lincoln Continental 0.7711 0.8889 0.8258 216 Lincoln Corsair 0.9191 1.0000 0.9579 216 Lincoln MKC 0.9635 0.9814 0.9724 215 Lincoln MKT 0.8814 0.9630 0.9204 216 Lincoln MKZ 0.7788 0.7824 0.7806 216 Lincoln Nautilus 0.9452 0.9628 0.9539 215 Lincoln Navigator 0.8767 0.8889 0.8828 216 MINI Clubman 0.8733 0.8935 0.8833 216 MINI Cooper 0.8155 0.7778 0.7962 216 MINI Cooper Countryman 0.8386 0.8698 0.8539 215 Maserati Ghibli 0.9427 0.9907 0.9661 216 Maserati GranTurismo 0.8357 0.8241 0.8298 216 Maserati Levante 0.9773 1.0000 0.9885 215 Maserati Quattroporte 0.9019 0.8977 0.8998 215 Mazda CX-3 0.9378 0.9769 0.9569 216 Mazda CX-30 0.9600 1.0000 0.9796 216 Mazda CX-5 0.8778 0.7315 0.7980 216 Mazda CX-9 0.8718 0.9444 0.9067 216 Mazda MAZDA3 0.7041 0.6389 0.6699 216 Mazda MAZDA6 0.6951 0.7176 0.7062 216 Mazda MX-5 Miata 0.8889 0.7778 0.8296 216 Mazda Mazda3 Hatchback 0.9954 1.0000 0.9977 215 McLaren 570GT 1.0000 1.0000 1.0000 216 McLaren 570S 1.0000 1.0000 1.0000 215 McLaren 720S 0.9774 1.0000 0.9886 216 Mercedes-Benz A Class 0.9474 1.0000 0.9730 216 Mercedes-Benz AMG GT 0.9295 0.9769 0.9526 216 Mercedes-Benz C Class 0.6261 0.3333 0.4350 216 Mercedes-Benz CLA Class 0.7036 0.9120 0.7944 216 Mercedes-Benz CLS Class 0.6714 0.6620 0.6667 216 Mercedes-Benz E Class 0.7026 0.6343 0.6667 216 Mercedes-Benz EQC 0.9862 1.0000 0.9931 215 Mercedes-Benz G Class 0.8390 0.9209 0.8780 215 Mercedes-Benz GLA Class 0.7935 0.9116 0.8485 215 Mercedes-Benz GLB Class 0.9389 1.0000 0.9685 215 Mercedes-Benz GLC Class 0.7989 0.6465 0.7147 215 Mercedes-Benz GLE Class 0.9103 0.6605 0.7655 215 Mercedes-Benz GLS Class 0.8471 1.0000 0.9172 216 Mercedes-Benz Metris 0.9774 1.0000 0.9886 216 Mercedes-Benz S Class 0.6364 0.5509 0.5906 216 Mercedes-Benz SL Class 0.7160 0.8326 0.7699 215 Mercedes-Benz SLC Class 0.9381 0.9815 0.9593 216 Mitsubishi Eclipse Cross 0.9908 1.0000 0.9954 216 Mitsubishi Mirage 0.8481 0.9349 0.8894 215 Mitsubishi Outlander 0.8554 0.6574 0.7435 216 Mitsubishi Outlander Sport 0.7600 0.8796 0.8155 216 Nissan 370Z 0.9742 0.8750 0.9220 216 Nissan Altima 0.8353 0.6605 0.7377 215 Nissan Armada 0.9193 0.9491 0.9339 216 Nissan Frontier 0.8738 0.8698 0.8718 215 Nissan GT-R 0.6301 0.7176 0.6710 216 Nissan Kicks 0.9474 1.0000 0.9730 216 Nissan Leaf 0.7673 0.7176 0.7416 216 Nissan Maxima 0.8479 0.8558 0.8519 215 Nissan Murano 0.8726 0.8605 0.8665 215 Nissan NV200 1.0000 1.0000 1.0000 215 Nissan Pathfinder 0.8028 0.8102 0.8065 216 Nissan Rogue 0.7822 0.8148 0.7982 216 Nissan Rogue Sport 0.9773 1.0000 0.9885 215 Nissan Sentra 0.6009 0.6343 0.6171 216 Nissan Titan 0.8042 0.7037 0.7506 216 Nissan Versa 0.7770 0.5023 0.6102 215 Porsche 718 0.9106 0.9907 0.9490 216 Porsche 718 Spyder 1.0000 
1.0000 1.0000 216 Porsche 911 0.7701 0.6667 0.7146 216 Porsche Cayenne 0.7701 0.6667 0.7146 216 Porsche Macan 0.8432 0.9256 0.8825 215 Porsche Panamera 0.7018 0.7407 0.7207 216 Porsche Taycan 0.9336 0.9769 0.9548 216 Ram 1500 0.7523 0.7767 0.7643 215 Ram 2500 0.8287 0.8287 0.8287 216 Rolls-Royce Cullinan 0.9903 0.9491 0.9693 216 Rolls-Royce Dawn 1.0000 1.0000 1.0000 216 Rolls-Royce Ghost 0.9279 0.9581 0.9428 215 Rolls-Royce Phantom 0.9641 0.9954 0.9795 216 Rolls-Royce Wraith 1.0000 1.0000 1.0000 216 Subaru Ascent 0.8458 0.9907 0.9126 216 Subaru BRZ 0.8272 0.9306 0.8758 216 Subaru Crosstrek 0.8599 0.8279 0.8436 215 Subaru Forester 0.7889 0.7269 0.7566 216 Subaru Impreza 0.6215 0.6186 0.6200 215 Subaru Legacy 0.5024 0.4791 0.4905 215 Subaru Outback 0.7438 0.8333 0.7860 216 Subaru STI S209 1.0000 1.0000 1.0000 215 Subaru WRX 0.6816 0.7767 0.7261 215 Tesla Model 3 0.9310 1.0000 0.9643 216 Tesla Model S 0.7881 0.8611 0.8230 216 Tesla Model X 0.9908 1.0000 0.9954 216 Tesla Model Y 1.0000 1.0000 1.0000 216 Toyota 4Runner 0.9167 0.9167 0.9167 216 Toyota 86 1.0000 1.0000 1.0000 216 Toyota Avalon 0.7880 0.6713 0.7250 216 Toyota C-HR 0.9515 1.0000 0.9752 216 Toyota Camry 0.6745 0.6620 0.6682 216 Toyota Corolla 0.7586 0.6140 0.6787 215 Toyota Highlander 0.8539 0.7037 0.7716 216 Toyota Land Cruiser 0.9147 0.8935 0.9040 216 Toyota Mirai 0.9127 0.9676 0.9393 216 Toyota Prius 0.6484 0.7721 0.7049 215 Toyota Prius C 0.7092 0.9302 0.8048 215 Toyota RAV4 0.7403 0.6233 0.6768 215 Toyota Sequoia 0.9217 0.9259 0.9238 216 Toyota Sienna 0.9703 0.9074 0.9378 216 Toyota Supra 0.9505 0.9769 0.9635 216 Toyota Tacoma 0.6969 0.8233 0.7548 215 Toyota Tundra 0.7376 0.6930 0.7146 215 Toyota Yaris 0.6806 0.4537 0.5444 216 Toyota Yaris Hatchback 1.0000 1.0000 1.0000 216 Volkswagen Arteon 0.9471 1.0000 0.9729 215 Volkswagen Atlas 0.8921 1.0000 0.9430 215 Volkswagen Beetle 0.7839 0.8565 0.8186 216 Volkswagen Golf 0.7040 0.7269 0.7153 216 Volkswagen Jetta 0.5907 0.7083 0.6442 216 Volkswagen Passat 0.6947 0.4233 0.5260 215 Volkswagen Tiguan 0.7926 0.8000 0.7963 215 Volkswagen e-Golf 0.8584 0.9259 0.8909 216 Volvo S60 0.6640 0.3843 0.4868 216 Volvo S90 0.7878 0.8935 0.8373 216 Volvo V60 0.6966 0.7546 0.7244 216 Volvo V90 0.8833 0.9860 0.9319 215 Volvo XC40 0.9729 1.0000 0.9862 215 Volvo XC60 0.7841 0.8241 0.8036 216 Volvo XC90 0.8528 0.7778 0.8136 216 smart fortwo 0.8418 0.7639 0.8010 216 accuracy 0.8410 69639 macro avg 0.8406 0.8410 0.8372 69639 weighted avg 0.8406 0.8410 0.8372 69639 ```
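A short usage sketch for getting the top-5 predicted car models via the pipeline API; the photo path is a placeholder and the scores will depend on the input image.

```python
from transformers import pipeline

clf = pipeline("image-classification", model="dima806/car_models_image_detection")
for pred in clf("car_photo.jpg", top_k=5):      # placeholder image path
    print(f"{pred['label']}: {pred['score']:.3f}")
```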
[ "acura ilx", "acura mdx", "acura nsx", "acura rdx", "acura rlx", "acura tlx", "alfa romeo 4c", "alfa romeo 4c spider", "alfa romeo giulia", "alfa romeo stelvio", "aston martin db11", "aston martin dbs", "aston martin vanquish", "aston martin vantage", "audi a3", "audi a4", "audi a5", "audi a6", "audi a7", "audi a8", "audi q3", "audi q5", "audi q7", "audi q8", "audi r8", "audi tt", "audi e-tron", "bmw 2-series", "bmw 3-series", "bmw 4-series", "bmw 5-series", "bmw 6-series", "bmw 7-series", "bmw 8-series", "bmw x1", "bmw x2", "bmw x3", "bmw x4", "bmw x5", "bmw x6", "bmw x7", "bmw z4", "bmw i3", "bmw i8", "bentley bentayga", "bentley continental gt", "bentley flying spur", "bentley mulsanne", "buick cascada", "buick enclave", "buick encore", "buick envision", "buick lacrosse", "buick regal", "cadillac ats", "cadillac ct4", "cadillac ct5", "cadillac ct6", "cadillac cts", "cadillac escalade", "cadillac xt4", "cadillac xt5", "cadillac xt6", "cadillac xts", "chevrolet blazer", "chevrolet bolt ev", "chevrolet camaro", "chevrolet colorado", "chevrolet corvette", "chevrolet cruze", "chevrolet equinox", "chevrolet impala", "chevrolet malibu", "chevrolet silverado 1500", "chevrolet silverado 2500hd", "chevrolet sonic", "chevrolet spark", "chevrolet suburban", "chevrolet tahoe", "chevrolet trailblazer", "chevrolet traverse", "chevrolet trax", "chevrolet volt", "chrysler 300", "chrysler pacifica", "dodge challenger", "dodge charger", "dodge durango", "dodge grand caravan", "dodge journey", "fiat 124 spider", "fiat 500", "fiat 500l", "fiat 500x", "fiat 500e", "ferrari 488 gtb", "ferrari gtc4lusso", "ferrari portofino", "ford ecosport", "ford edge", "ford escape", "ford expedition", "ford explorer", "ford f-150", "ford fiesta", "ford flex", "ford fusion", "ford mustang", "ford ranger", "ford super duty f-250", "ford taurus", "ford transit connect wagon", "gmc acadia", "gmc canyon", "gmc sierra 1500", "gmc sierra 2500hd", "gmc terrain", "gmc yukon", "genesis g70", "genesis g80", "genesis g90", "honda accord", "honda cr-v", "honda civic", "honda clarity", "honda fit", "honda hr-v", "honda insight", "honda odyssey", "honda passport", "honda pilot", "honda ridgeline", "hyundai accent", "hyundai elantra", "hyundai ioniq", "hyundai kona", "hyundai kona electric", "hyundai nexo", "hyundai palisade", "hyundai santa fe", "hyundai sonata", "hyundai tucson", "hyundai veloster", "hyundai venue", "infiniti q50", "infiniti q60", "infiniti q70", "infiniti qx30", "infiniti qx50", "infiniti qx60", "infiniti qx80", "jaguar e-pace", "jaguar f-pace", "jaguar f-type", "jaguar i-pace", "jaguar xe", "jaguar xf", "jaguar xj", "jeep cherokee", "jeep compass", "jeep gladiator", "jeep grand cherokee", "jeep renegade", "jeep wrangler", "kia cadenza", "kia forte", "kia k900", "kia niro", "kia optima", "kia rio", "kia sedona", "kia sorento", "kia soul", "kia soul ev", "kia sportage", "kia stinger", "kia telluride", "lamborghini aventador", "lamborghini huracan", "lamborghini urus", "land rover defender", "land rover discovery", "land rover discovery sport", "land rover range rover", "land rover range rover evoque", "land rover range rover sport", "land rover range rover velar", "lexus es", "lexus gs", "lexus gx", "lexus is", "lexus lc", "lexus ls", "lexus lx", "lexus nx", "lexus rc", "lexus rx", "lexus ux", "lincoln aviator", "lincoln continental", "lincoln corsair", "lincoln mkc", "lincoln mkt", "lincoln mkz", "lincoln nautilus", "lincoln navigator", "mini clubman", "mini cooper", "mini cooper countryman", "maserati ghibli", 
"maserati granturismo", "maserati levante", "maserati quattroporte", "mazda cx-3", "mazda cx-30", "mazda cx-5", "mazda cx-9", "mazda mazda3", "mazda mazda6", "mazda mx-5 miata", "mazda mazda3 hatchback", "mclaren 570gt", "mclaren 570s", "mclaren 720s", "mercedes-benz a class", "mercedes-benz amg gt", "mercedes-benz c class", "mercedes-benz cla class", "mercedes-benz cls class", "mercedes-benz e class", "mercedes-benz eqc", "mercedes-benz g class", "mercedes-benz gla class", "mercedes-benz glb class", "mercedes-benz glc class", "mercedes-benz gle class", "mercedes-benz gls class", "mercedes-benz metris", "mercedes-benz s class", "mercedes-benz sl class", "mercedes-benz slc class", "mitsubishi eclipse cross", "mitsubishi mirage", "mitsubishi outlander", "mitsubishi outlander sport", "nissan 370z", "nissan altima", "nissan armada", "nissan frontier", "nissan gt-r", "nissan kicks", "nissan leaf", "nissan maxima", "nissan murano", "nissan nv200", "nissan pathfinder", "nissan rogue", "nissan rogue sport", "nissan sentra", "nissan titan", "nissan versa", "porsche 718", "porsche 718 spyder", "porsche 911", "porsche cayenne", "porsche macan", "porsche panamera", "porsche taycan", "ram 1500", "ram 2500", "rolls-royce cullinan", "rolls-royce dawn", "rolls-royce ghost", "rolls-royce phantom", "rolls-royce wraith", "subaru ascent", "subaru brz", "subaru crosstrek", "subaru forester", "subaru impreza", "subaru legacy", "subaru outback", "subaru sti s209", "subaru wrx", "tesla model 3", "tesla model s", "tesla model x", "tesla model y", "toyota 4runner", "toyota 86", "toyota avalon", "toyota c-hr", "toyota camry", "toyota corolla", "toyota highlander", "toyota land cruiser", "toyota mirai", "toyota prius", "toyota prius c", "toyota rav4", "toyota sequoia", "toyota sienna", "toyota supra", "toyota tacoma", "toyota tundra", "toyota yaris", "toyota yaris hatchback", "volkswagen arteon", "volkswagen atlas", "volkswagen beetle", "volkswagen golf", "volkswagen jetta", "volkswagen passat", "volkswagen tiguan", "volkswagen e-golf", "volvo s60", "volvo s90", "volvo v60", "volvo v90", "volvo xc40", "volvo xc60", "volvo xc90", "smart fortwo" ]
TuyenTrungLe/finetuned-vietnamese-food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-vietnamese-food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_vietnam_images dataset. It achieves the following results on the evaluation set: - Loss: 0.3760 - Accuracy: 0.8958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.1058 | 0.0910 | 100 | 1.9974 | 0.5694 | | 1.4012 | 0.1820 | 200 | 1.4076 | 0.6855 | | 1.3551 | 0.2730 | 300 | 1.1650 | 0.7264 | | 1.1111 | 0.3640 | 400 | 1.0998 | 0.7062 | | 1.0038 | 0.4550 | 500 | 0.9087 | 0.7483 | | 0.9599 | 0.5460 | 600 | 0.8278 | 0.7682 | | 1.0932 | 0.6369 | 700 | 0.9115 | 0.7360 | | 0.7807 | 0.7279 | 800 | 0.8011 | 0.7730 | | 0.8237 | 0.8189 | 900 | 0.8345 | 0.7726 | | 0.7288 | 0.9099 | 1000 | 0.6427 | 0.8258 | | 0.7982 | 1.0009 | 1100 | 0.6427 | 0.8215 | | 0.7331 | 1.0919 | 1200 | 0.6423 | 0.8183 | | 0.6849 | 1.1829 | 1300 | 0.6820 | 0.8151 | | 0.671 | 1.2739 | 1400 | 0.6325 | 0.8191 | | 0.7307 | 1.3649 | 1500 | 0.6079 | 0.8286 | | 0.7499 | 1.4559 | 1600 | 0.5832 | 0.8346 | | 0.7004 | 1.5469 | 1700 | 0.6048 | 0.8342 | | 0.7543 | 1.6379 | 1800 | 0.5612 | 0.8394 | | 0.5557 | 1.7288 | 1900 | 0.5740 | 0.8318 | | 0.5019 | 1.8198 | 2000 | 0.5064 | 0.8561 | | 0.7043 | 1.9108 | 2100 | 0.5513 | 0.8441 | | 0.519 | 2.0018 | 2200 | 0.5862 | 0.8350 | | 0.3366 | 2.0928 | 2300 | 0.5159 | 0.8517 | | 0.4167 | 2.1838 | 2400 | 0.5386 | 0.8469 | | 0.402 | 2.2748 | 2500 | 0.5614 | 0.8374 | | 0.4133 | 2.3658 | 2600 | 0.4756 | 0.8652 | | 0.4751 | 2.4568 | 2700 | 0.4882 | 0.8612 | | 0.3108 | 2.5478 | 2800 | 0.4946 | 0.8648 | | 0.3218 | 2.6388 | 2900 | 0.4707 | 0.8680 | | 0.282 | 2.7298 | 3000 | 0.4407 | 0.8712 | | 0.2823 | 2.8207 | 3100 | 0.4843 | 0.8712 | | 0.3498 | 2.9117 | 3200 | 0.4609 | 0.8744 | | 0.3196 | 3.0027 | 3300 | 0.4369 | 0.8763 | | 0.2822 | 3.0937 | 3400 | 0.4662 | 0.8748 | | 0.4166 | 3.1847 | 3500 | 0.4539 | 0.8779 | | 0.1904 | 3.2757 | 3600 | 0.4205 | 0.8887 | | 0.388 | 3.3667 | 3700 | 0.4163 | 0.8863 | | 0.2851 | 3.4577 | 3800 | 0.4168 | 0.8891 | | 0.2455 | 3.5487 | 3900 | 0.4004 | 0.8930 | | 0.2804 | 3.6397 | 4000 | 0.4044 | 0.8938 | | 0.2008 | 3.7307 | 4100 | 0.3833 | 0.8950 | | 0.2487 | 3.8217 | 4200 | 0.3812 | 0.8958 | | 0.2077 | 3.9126 | 4300 | 0.3760 | 0.8958 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "banh beo", "banh bot loc", "banh pia", "banh tet", "banh trang nuong", "banh xeo", "bun bo hue", "bun dau mam tom", "bun mam", "bun rieu", "bun thit nuong", "ca kho to", "banh can", "canh chua", "cao lau", "chao long", "com tam", "goi cuon", "hu tieu", "mi quang", "nem chua", "pho", "xoi xeo", "banh canh", "banh chung", "banh cuon", "banh duc", "banh gio", "banh khot", "banh mi" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-016
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-016 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1175 - Accuracy: 0.96 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0003 | 8.0 | 10 | 0.1202 | 0.96 | | 0.0002 | 16.0 | 20 | 0.1240 | 0.96 | | 0.0001 | 24.0 | 30 | 0.1175 | 0.96 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
kdrianm/emotion_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5599 - Accuracy: 0.475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 5 | 2.0884 | 0.1125 | | 2.08 | 2.0 | 10 | 2.0750 | 0.1437 | | 2.08 | 3.0 | 15 | 2.0519 | 0.2125 | | 2.0091 | 4.0 | 20 | 2.0177 | 0.225 | | 2.0091 | 5.0 | 25 | 1.9777 | 0.2625 | | 1.8779 | 6.0 | 30 | 1.9381 | 0.3125 | | 1.8779 | 7.0 | 35 | 1.8990 | 0.3438 | | 1.7355 | 8.0 | 40 | 1.8592 | 0.3688 | | 1.7355 | 9.0 | 45 | 1.8217 | 0.3812 | | 1.598 | 10.0 | 50 | 1.7844 | 0.4 | | 1.598 | 11.0 | 55 | 1.7536 | 0.4062 | | 1.4689 | 12.0 | 60 | 1.7217 | 0.4188 | | 1.4689 | 13.0 | 65 | 1.7019 | 0.4188 | | 1.3534 | 14.0 | 70 | 1.6773 | 0.4188 | | 1.3534 | 15.0 | 75 | 1.6614 | 0.425 | | 1.2526 | 16.0 | 80 | 1.6448 | 0.4562 | | 1.2526 | 17.0 | 85 | 1.6306 | 0.45 | | 1.1657 | 18.0 | 90 | 1.6201 | 0.4562 | | 1.1657 | 19.0 | 95 | 1.6067 | 0.4562 | | 1.0918 | 20.0 | 100 | 1.5992 | 0.45 | | 1.0918 | 21.0 | 105 | 1.5889 | 0.4562 | | 1.0311 | 22.0 | 110 | 1.5852 | 0.4562 | | 1.0311 | 23.0 | 115 | 1.5767 | 0.4625 | | 0.9814 | 24.0 | 120 | 1.5733 | 0.45 | | 0.9814 | 25.0 | 125 | 1.5688 | 0.4625 | | 0.9439 | 26.0 | 130 | 1.5643 | 0.4562 | | 0.9439 | 27.0 | 135 | 1.5620 | 0.4625 | | 0.918 | 28.0 | 140 | 1.5599 | 0.475 | | 0.918 | 29.0 | 145 | 1.5586 | 0.4625 | | 0.9044 | 30.0 | 150 | 1.5582 | 0.4562 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
FA24-CS462-Group-26/convnext_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext_model This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0945 | 1.0 | 100 | 0.1009 | | 0.0233 | 2.0 | 200 | 0.0851 | | 0.0041 | 3.0 | 300 | 0.0755 | | 0.0026 | 4.0 | 400 | 0.0715 | | 0.0024 | 5.0 | 500 | 0.0712 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "cyclone", "earthquake", "flood", "wildfire" ]
kiranshivaraju/convnext-large-classify-diode
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "negative", "positive" ]
deyakovleva/vit-base-oxford-iiit-pets
# vit-base-oxford-iiit-pets This model was trained to classify cats and dogs and identify their breed using transfer learning. It is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2068 - Accuracy: 0.9350 ## Model description Since [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) was used as the base model, the final classification layer was modified to predict the 37 cat and dog breed classes in the dataset. ## Intended uses & limitations This model is designed for educational purposes, enabling the classification of cats and dogs and the identification of their breeds. It currently supports 37 distinct breeds, offering a starting point for various learning and experimentation scenarios. Beyond its educational use, the model can serve as a foundation for further development, such as expanding its classification capabilities to include additional breeds, other animal species, or even entirely different tasks. With fine-tuning, this model could be adapted to broader applications in animal recognition, wildlife monitoring, and pet identification systems. ## Training and evaluation data ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3625 | 1.0 | 370 | 0.2933 | 0.9269 | | 0.2002 | 2.0 | 740 | 0.2221 | 0.9432 | | 0.1511 | 3.0 | 1110 | 0.2057 | 0.9418 | | 0.1253 | 4.0 | 1480 | 0.1876 | 0.9418 | | 0.1236 | 5.0 | 1850 | 0.1825 | 0.9432 | | 0.1078 | 6.0 | 2220 | 0.1785 | 0.9418 | | 0.078 | 7.0 | 2590 | 0.1809 | 0.9364 | | 0.0798 | 8.0 | 2960 | 0.1785 | 0.9378 | | 0.0811 | 9.0 | 3330 | 0.1774 | 0.9364 | | 0.0736 | 10.0 | 3700 | 0.1769 | 0.9391 | ### Evaluation results | Metric | Value | |--------------------------|----------------------| | Evaluation Loss | 0.2202 | | Evaluation Accuracy | 92.56% | | Evaluation Runtime (s) | 7.39 | | Samples Per Second | 100.04 | | Steps Per Second | 12.59 | | Epoch | 10 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.0.1+cu117 - Datasets 3.1.0 - Tokenizers 0.20.3
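A quick-start sketch, assuming the checkpoint is published under the repository name shown for this entry; the image filename is a placeholder.

```python
from transformers import pipeline

# Repo id taken from this entry; replace with a local path if you trained your own copy.
classifier = pipeline("image-classification", model="deyakovleva/vit-base-oxford-iiit-pets")

predictions = classifier("my_pet.jpg")  # hypothetical image file
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```

The pipeline returns the top predicted breeds with their scores, using the label names stored in the model config.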
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-018
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-018 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1702 - Accuracy: 0.9623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.9831 | 6.6667 | 10 | 0.4326 | 0.8868 | | 0.112 | 13.3333 | 20 | 0.1814 | 0.9434 | | 0.0087 | 20.0 | 30 | 0.1702 | 0.9623 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
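As a rough illustration of how the hyperparameters listed above map onto the Trainer API, here is a `TrainingArguments` sketch; the output directory is a placeholder, and the model, dataset, and `Trainer` wiring are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-large-patch16-224-new-dungeon-geo-morphs-018",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 x 4 = total train batch size of 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=35,
    fp16=True,                       # "Native AMP" mixed-precision training
    seed=42,
)
```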
[ "four", "three", "two", "zero" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-019
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-019 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1377 - Accuracy: 0.9623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.9688 | 6.6667 | 10 | 0.4105 | 0.8868 | | 0.114 | 13.3333 | 20 | 0.1491 | 0.9623 | | 0.0082 | 20.0 | 30 | 0.1377 | 0.9623 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-020
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-020 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1267 - Accuracy: 0.9811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.1059 | 6.6667 | 10 | 0.5386 | 0.8868 | | 0.3385 | 13.3333 | 20 | 0.1848 | 0.9434 | | 0.1115 | 20.0 | 30 | 0.1267 | 0.9811 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-025
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-025 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.5643 - Accuracy: 0.9434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.0 | 3.6364 | 10 | 0.5643 | 0.9434 | | 0.0 | 7.2727 | 20 | 0.6440 | 0.9434 | | 0.0 | 10.9091 | 30 | 0.6484 | 0.9434 | | 0.0 | 14.5455 | 40 | 0.6491 | 0.9434 | | 0.0 | 18.1818 | 50 | 0.6515 | 0.9434 | | 0.0 | 21.8182 | 60 | 0.6539 | 0.9434 | | 0.0 | 25.4545 | 70 | 0.6547 | 0.9434 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
qubvel-hf/my_awesome_model
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9" ]
kdrianm/vit-emotion_classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-emotion_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4782 - Accuracy: 0.525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0776 | 1.0 | 10 | 2.0731 | 0.1437 | | 2.0526 | 2.0 | 20 | 2.0567 | 0.1688 | | 1.9975 | 3.0 | 30 | 2.0160 | 0.2 | | 1.8977 | 4.0 | 40 | 1.9550 | 0.3 | | 1.778 | 5.0 | 50 | 1.8805 | 0.3625 | | 1.6549 | 6.0 | 60 | 1.8073 | 0.375 | | 1.5379 | 7.0 | 70 | 1.7428 | 0.4125 | | 1.4241 | 8.0 | 80 | 1.6957 | 0.4062 | | 1.3212 | 9.0 | 90 | 1.6550 | 0.45 | | 1.2245 | 10.0 | 100 | 1.6271 | 0.4437 | | 1.1336 | 11.0 | 110 | 1.5928 | 0.4562 | | 1.0483 | 12.0 | 120 | 1.5695 | 0.4688 | | 0.9669 | 13.0 | 130 | 1.5452 | 0.4875 | | 0.8889 | 14.0 | 140 | 1.5248 | 0.4875 | | 0.815 | 15.0 | 150 | 1.5063 | 0.5062 | | 0.7466 | 16.0 | 160 | 1.4909 | 0.4938 | | 0.6852 | 17.0 | 170 | 1.4782 | 0.525 | | 0.6308 | 18.0 | 180 | 1.4615 | 0.5 | | 0.5819 | 19.0 | 190 | 1.4541 | 0.5 | | 0.5392 | 20.0 | 200 | 1.4458 | 0.5125 | | 0.503 | 21.0 | 210 | 1.4393 | 0.5 | | 0.4718 | 22.0 | 220 | 1.4289 | 0.5188 | | 0.4458 | 23.0 | 230 | 1.4238 | 0.5188 | | 0.4234 | 24.0 | 240 | 1.4211 | 0.5125 | | 0.405 | 25.0 | 250 | 1.4182 | 0.5 | | 0.3905 | 26.0 | 260 | 1.4157 | 0.5062 | | 0.379 | 27.0 | 270 | 1.4125 | 0.5062 | | 0.3706 | 28.0 | 280 | 1.4119 | 0.5062 | | 0.3649 | 29.0 | 290 | 1.4115 | 0.5062 | | 0.3618 | 30.0 | 300 | 1.4111 | 0.5062 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
quangtuyennguyen/food_classify_viT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # food_classify_viT This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8683 - Accuracy: 0.8948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.9970 | 83 | 1.6473 | 0.8236 | | No log | 1.9940 | 166 | 1.1061 | 0.8863 | | No log | 2.9910 | 249 | 0.9208 | 0.8820 | | No log | 3.9880 | 332 | 0.8683 | 0.8948 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
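The accuracy column above is the kind of metric a `compute_metrics` callback produces during evaluation; the sketch below shows one common way to implement it with the `evaluate` library (the library choice is an assumption, not something stated in the card).

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```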
[ "burger", "butter_naan", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa" ]
quangtuyennguyen/mri_classification_alzheimer_disease
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mri_classification_alzheimer_disease This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7795 - Accuracy: 0.6453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 80 | 0.8764 | 0.5859 | | No log | 2.0 | 160 | 0.8594 | 0.5703 | | No log | 3.0 | 240 | 0.8095 | 0.6391 | | No log | 4.0 | 320 | 0.7795 | 0.6453 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "mild_demented", "moderate_demented", "non_demented", "very_mild_demented" ]
cvmil/deit-base-patch16-224_rice-disease-02
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-base-patch16-224_rice-disease-02_112024 This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3063 - Accuracy: 0.9148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.8862 | 1.0 | 212 | 1.2580 | 0.7092 | | 0.8631 | 2.0 | 424 | 0.6676 | 0.8190 | | 0.5449 | 3.0 | 636 | 0.5124 | 0.8523 | | 0.4396 | 4.0 | 848 | 0.4459 | 0.8736 | | 0.3852 | 5.0 | 1060 | 0.4026 | 0.8816 | | 0.3488 | 6.0 | 1272 | 0.3763 | 0.8902 | | 0.324 | 7.0 | 1484 | 0.3588 | 0.8942 | | 0.3072 | 8.0 | 1696 | 0.3420 | 0.9062 | | 0.2928 | 9.0 | 1908 | 0.3330 | 0.9055 | | 0.2826 | 10.0 | 2120 | 0.3231 | 0.9082 | | 0.2732 | 11.0 | 2332 | 0.3172 | 0.9115 | | 0.2669 | 12.0 | 2544 | 0.3119 | 0.9128 | | 0.2619 | 13.0 | 2756 | 0.3086 | 0.9155 | | 0.258 | 14.0 | 2968 | 0.3068 | 0.9155 | | 0.2566 | 15.0 | 3180 | 0.3063 | 0.9148 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "bacterial_leaf_blight", "brown_spot", "healthy", "leaf_blast", "leaf_scald", "narrow_brown_spot", "neck_blast", "rice_hispa", "sheath_blight", "tungro" ]
cvmil/swin-base-patch4-window7-224_rice-disease-02
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224_rice-disease-02_112024 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2147 - Accuracy: 0.9281 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7761 | 1.0 | 212 | 0.9638 | 0.7405 | | 0.6771 | 2.0 | 424 | 0.4818 | 0.8476 | | 0.4223 | 3.0 | 636 | 0.3695 | 0.8756 | | 0.3403 | 4.0 | 848 | 0.3168 | 0.8922 | | 0.2958 | 5.0 | 1060 | 0.2835 | 0.9082 | | 0.2709 | 6.0 | 1272 | 0.2664 | 0.9075 | | 0.2494 | 7.0 | 1484 | 0.2498 | 0.9168 | | 0.2395 | 8.0 | 1696 | 0.2420 | 0.9182 | | 0.2286 | 9.0 | 1908 | 0.2365 | 0.9215 | | 0.22 | 10.0 | 2120 | 0.2296 | 0.9202 | | 0.2137 | 11.0 | 2332 | 0.2230 | 0.9242 | | 0.2093 | 12.0 | 2544 | 0.2178 | 0.9281 | | 0.202 | 13.0 | 2756 | 0.2162 | 0.9295 | | 0.2017 | 14.0 | 2968 | 0.2151 | 0.9275 | | 0.1986 | 15.0 | 3180 | 0.2147 | 0.9281 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
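For fine-tunes like this one, the base checkpoint's 1000-class ImageNet head has to be replaced with a head sized to the task; the sketch below shows the usual pattern, with the label order taken from the class list that follows this card (the exact order used in the original run is an assumption).

```python
from transformers import AutoModelForImageClassification

labels = [
    "bacterial_leaf_blight", "brown_spot", "healthy", "leaf_blast", "leaf_scald",
    "narrow_brown_spot", "neck_blast", "rice_hispa", "sheath_blight", "tungro",
]
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-base-patch4-window7-224",
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # drop the pretrained 1000-class head
)
```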
[ "bacterial_leaf_blight", "brown_spot", "healthy", "leaf_blast", "leaf_scald", "narrow_brown_spot", "neck_blast", "rice_hispa", "sheath_blight", "tungro" ]
kiranshivaraju/convnext-large-v1
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
AmadFR/Emotion_Classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Emotion_Classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3727 - Accuracy: 0.55 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.083 | 1.0 | 10 | 2.0798 | 0.1625 | | 2.0591 | 2.0 | 20 | 2.0464 | 0.2812 | | 2.0043 | 3.0 | 30 | 1.9889 | 0.325 | | 1.9174 | 4.0 | 40 | 1.9087 | 0.3375 | | 1.819 | 5.0 | 50 | 1.8037 | 0.3875 | | 1.7161 | 6.0 | 60 | 1.6875 | 0.4125 | | 1.6253 | 7.0 | 70 | 1.6207 | 0.4437 | | 1.549 | 8.0 | 80 | 1.5978 | 0.4437 | | 1.4946 | 9.0 | 90 | 1.5430 | 0.4688 | | 1.4426 | 10.0 | 100 | 1.4995 | 0.5125 | | 1.4061 | 11.0 | 110 | 1.4919 | 0.4938 | | 1.3648 | 12.0 | 120 | 1.4628 | 0.525 | | 1.3306 | 13.0 | 130 | 1.4207 | 0.5437 | | 1.3071 | 14.0 | 140 | 1.4340 | 0.5188 | | 1.2791 | 15.0 | 150 | 1.4126 | 0.5188 | | 1.2589 | 16.0 | 160 | 1.4119 | 0.5375 | | 1.2199 | 17.0 | 170 | 1.4168 | 0.4938 | | 1.2189 | 18.0 | 180 | 1.3957 | 0.525 | | 1.2096 | 19.0 | 190 | 1.4015 | 0.5625 | | 1.2114 | 20.0 | 200 | 1.3932 | 0.5188 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
kiranshivaraju/convnext-large-v2
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
nergizinal/vit-base-nationality
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-nationality This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 1.2289 - Precision: 0.5992 - Recall: 0.6005 - Accuracy: 0.6005 - F1: 0.5861 - Score: 0.6005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Score | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:------:|:------:| | 1.2527 | 1.0 | 105 | 1.2744 | 0.5925 | 0.5820 | 0.5820 | 0.5631 | 0.5820 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "italian", "russian", "mexican", "french", "belgian", "spanish", "dutch", "austrian", "flemish", "spanish,greek", "german", "french,british", "french,jewish,belarusian", "british", "norwegian", "german,swiss", "american" ]
kiranshivaraju/convnext-large-v3
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
kiranshivaraju/convnext-large-v4
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
MBARI-org/mbari-uav-vit-b-16
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "batray", "bird", "boat", "buoy", "egregia", "foam", "jelly", "kelp", "mola", "mooring", "otter", "person", "pinniped", "poop", "rib", "reflectance", "secci_disc", "shark", "surfboard", "wave", "whale", "wood" ]
tdhcuong/swin-tiny-patch4-window7-224-finetuned-azure-poc-img-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-azure-poc-img-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2119 - Accuracy: 0.9122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5888 | 1.0 | 41 | 0.4436 | 0.8348 | | 0.3118 | 2.0 | 82 | 0.3028 | 0.8692 | | 0.2284 | 3.0 | 123 | 0.2879 | 0.8795 | | 0.203 | 4.0 | 164 | 0.2368 | 0.8950 | | 0.2254 | 5.0 | 205 | 0.2276 | 0.8985 | | 0.1976 | 6.0 | 246 | 0.2339 | 0.8967 | | 0.1603 | 7.0 | 287 | 0.2191 | 0.9036 | | 0.1556 | 8.0 | 328 | 0.2249 | 0.9036 | | 0.1488 | 9.0 | 369 | 0.2018 | 0.9071 | | 0.158 | 10.0 | 410 | 0.2119 | 0.9122 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
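The card says the model was fine-tuned on an `imagefolder` dataset; a loading and preprocessing sketch under that assumption is shown below. The data directory is a placeholder, and applying the image processor via `with_transform` is one common choice, not necessarily what was done here.

```python
from datasets import load_dataset
from transformers import AutoImageProcessor

dataset = load_dataset("imagefolder", data_dir="data/azure_poc_images")  # placeholder path
processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

def transform(batch):
    # Convert each PIL image into a normalized 3x224x224 pixel tensor
    batch["pixel_values"] = [
        processor(images=img.convert("RGB"), return_tensors="pt")["pixel_values"][0]
        for img in batch["image"]
    ]
    return batch

dataset = dataset.with_transform(transform)
```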
[ "concrete_anchors", "steel_connectors", "steel_fasteners", "wood_connectors", "wood_fasteners" ]
kiranshivaraju/convnext-large-224-v5
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
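The "How to Get Started with the Model" section above is still a placeholder. A minimal, hedged sketch for running the checkpoint named in this listing is given below; it assumes the repository exposes the standard 🤗 image-classification head and that `part.jpg` is a local test image (both assumptions, not stated in the card):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "kiranshivaraju/convnext-large-224-v5"  # repository id from this listing
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("part.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = model.config.id2label[logits.argmax(-1).item()]
print(predicted)  # expected to be one of the labels listed below, i.e. "bad" or "good"
```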
[ "bad", "good" ]
kiranshivaraju/convnext-large-v6
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
krisnadwipayanap/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [nateraw/vit-age-classifier](https://huggingface.co/nateraw/vit-age-classifier) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "0-2", "3-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "more than 70" ]
kiranshivaraju/convnext-xlarge-v7
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
initial01/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_food_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6465
- Accuracy: 0.899

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7306 | 0.992 | 62 | 2.5309 | 0.848 |
| 1.8719 | 2.0 | 125 | 1.7966 | 0.896 |
| 1.609 | 2.976 | 186 | 1.6465 | 0.899 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
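For readers who want to reproduce this configuration, the hyperparameters listed above map directly onto `TrainingArguments`. The sketch below is an approximation: `output_dir` and any arguments not listed in the card are assumptions.

```python
from transformers import TrainingArguments

# Sketch of the configuration listed in the card above; output_dir is a guess.
training_args = TrainingArguments(
    output_dir="my_awesome_food_model",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 * 4 = effective train batch size of 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```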
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
kiranshivaraju/convnext-large-v8
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
kiranshivaraju/convnext-xlarge-v9
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
ljttw/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0169
- F1: 0.9932

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0337 | 0.9994 | 830 | 0.0303 | 0.9776 |
| 0.0272 | 1.9991 | 1660 | 0.0286 | 0.9819 |
| 0.0071 | 2.9988 | 2490 | 0.0169 | 0.9932 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
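The eurosat-style cards in this listing report F1 rather than accuracy. The card does not say how the metric was computed; a typical `compute_metrics` hook for `Trainer` would look roughly like the sketch below, where the choice of `average="weighted"` is an assumption on my part:

```python
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Averaging strategy is an assumption; the card only reports a single F1 value.
    return f1_metric.compute(predictions=predictions, references=labels, average="weighted")
```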
[ "type0", "type1", "type2", "type3" ]
ljttw/convnext-base-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# convnext-base-224-finetuned-eurosat

This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0081
- F1: 0.9949

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0242 | 0.9994 | 830 | 0.0168 | 0.9903 |
| 0.0127 | 1.9991 | 1660 | 0.0091 | 0.9941 |
| 0.0075 | 2.9988 | 2490 | 0.0081 | 0.9949 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "type0", "type1", "type2", "type3" ]
ljttw/convnext-tiny-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# convnext-tiny-224-finetuned-eurosat

This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0116
- F1: 0.9917

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0369 | 0.9994 | 830 | 0.0378 | 0.9770 |
| 0.0152 | 1.9991 | 1660 | 0.0202 | 0.9903 |
| 0.0003 | 2.9988 | 2490 | 0.0116 | 0.9917 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "type0", "type1", "type2", "type3" ]
kiranshivaraju/convnext-large-v10
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
ljttw/resnet-50-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# resnet-50-finetuned-eurosat

This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0498
- F1: 0.9645

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.1303 | 0.9994 | 830 | 0.1197 | 0.7228 |
| 0.0878 | 1.9991 | 1660 | 0.0625 | 0.9522 |
| 0.0542 | 2.9988 | 2490 | 0.0498 | 0.9645 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "type0", "type1", "type2", "type3" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-21Nov24-003
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-large-patch16-224-new-dungeon-geo-morphs-21Nov24-003

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1531
- Accuracy: 0.9583

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 35
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.0456 | 6.6667 | 10 | 0.5274 | 0.875 |
| 0.1182 | 13.3333 | 20 | 0.2556 | 0.9167 |
| 0.0095 | 20.0 | 30 | 0.1531 | 0.9583 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
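This card fine-tunes a 1000-class ImageNet checkpoint down to the four classes listed below, which requires replacing the classification head. A hedged sketch of that setup is shown here; the label order and the use of `ignore_mismatched_sizes` are assumptions, not details taken from the card:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["four", "three", "two", "zero"]  # taken from the label list below; order assumed
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}

processor = AutoImageProcessor.from_pretrained("google/vit-large-patch16-224")
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-large-patch16-224",
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,  # discard the 1000-class ImageNet head, initialize a 4-class head
)
```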
[ "four", "three", "two", "zero" ]
ljttw/swin-base-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-base-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0099
- F1: 0.9952

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0376 | 0.9994 | 830 | 0.0223 | 0.9759 |
| 0.0088 | 1.9991 | 1660 | 0.0148 | 0.9920 |
| 0.0042 | 2.9988 | 2490 | 0.0099 | 0.9952 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "type0", "type1", "type2", "type3" ]