Dataset columns:
- model_id — string (7 to 105 characters)
- model_card — string (1 to 130k characters)
- model_labels — list (2 to 80k items)
kiranshivaraju/convnext-xlarge-v11
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
initial01/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0657 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.284 | 1.0 | 130 | 0.2165 | 0.9624 | | 0.1316 | 2.0 | 260 | 0.1331 | 0.9699 | | 0.1429 | 3.0 | 390 | 0.0992 | 0.9699 | | 0.0775 | 4.0 | 520 | 0.0657 | 0.9925 | | 0.1142 | 5.0 | 650 | 0.0783 | 0.9774 | ### Framework versions - Transformers 4.47.0.dev0 - Pytorch 2.5.1+cpu - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
kiranshivaraju/convnext-xlarge-v12
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
m1keM/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6130 - Accuracy: 0.899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.749 | 0.992 | 62 | 2.5466 | 0.858 | | 1.8327 | 2.0 | 125 | 1.7843 | 0.884 | | 1.589 | 2.976 | 186 | 1.6130 | 0.899 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
mwildana/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4246 - Accuracy: 0.5062 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2192 | 1.0 | 10 | 1.5404 | 0.4688 | | 1.1105 | 2.0 | 20 | 1.5094 | 0.4313 | | 0.9413 | 3.0 | 30 | 1.4630 | 0.4813 | | 0.7833 | 4.0 | 40 | 1.4246 | 0.5062 | | 0.6455 | 5.0 | 50 | 1.4159 | 0.5 | | 0.535 | 6.0 | 60 | 1.4147 | 0.4875 | | 0.446 | 7.0 | 70 | 1.3981 | 0.4875 | | 0.3777 | 8.0 | 80 | 1.4239 | 0.4625 | | 0.3258 | 9.0 | 90 | 1.4240 | 0.4813 | | 0.2865 | 10.0 | 100 | 1.4302 | 0.475 | | 0.2579 | 11.0 | 110 | 1.4488 | 0.4688 | | 0.2371 | 12.0 | 120 | 1.4653 | 0.4688 | | 0.2228 | 13.0 | 130 | 1.4644 | 0.4875 | | 0.2135 | 14.0 | 140 | 1.4743 | 0.4688 | | 0.2083 | 15.0 | 150 | 1.4733 | 0.475 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
fassabilf/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - eval_loss: 1.3919 - eval_accuracy: 0.4688 - eval_runtime: 22.7841 - eval_samples_per_second: 7.022 - eval_steps_per_second: 0.219 - epoch: 12.65 - step: 253 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
stnleyyg/image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6535 - Accuracy: 0.878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7065 | 1.0 | 63 | 2.5465 | 0.799 | | 1.8582 | 2.0 | 126 | 1.8365 | 0.848 | | 1.6103 | 2.96 | 186 | 1.6695 | 0.863 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
chuun17/image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4879 - Accuracy: 0.5563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0789 | 1.0 | 10 | 2.0612 | 0.2 | | 1.9841 | 2.0 | 20 | 1.9284 | 0.3125 | | 1.7615 | 3.0 | 30 | 1.6163 | 0.375 | | 1.4914 | 4.0 | 40 | 1.4871 | 0.4188 | | 1.3023 | 5.0 | 50 | 1.3431 | 0.4875 | | 1.1635 | 6.0 | 60 | 1.3240 | 0.4813 | | 1.0184 | 7.0 | 70 | 1.2126 | 0.5312 | | 0.8538 | 8.0 | 80 | 1.2680 | 0.525 | | 0.6981 | 9.0 | 90 | 1.3068 | 0.525 | | 0.6156 | 10.0 | 100 | 1.4091 | 0.4875 | | 0.6205 | 11.0 | 110 | 1.3336 | 0.4813 | | 0.5423 | 12.0 | 120 | 1.4549 | 0.4875 | | 0.44 | 13.0 | 130 | 1.4772 | 0.5 | | 0.4233 | 14.0 | 140 | 1.5430 | 0.4625 | | 0.391 | 15.0 | 150 | 1.3734 | 0.5563 | | 0.3735 | 16.0 | 160 | 1.5240 | 0.4875 | | 0.3431 | 17.0 | 170 | 1.5552 | 0.5 | | 0.3399 | 18.0 | 180 | 1.4532 | 0.5125 | | 0.3632 | 19.0 | 190 | 1.5218 | 0.5 | | 0.3171 | 20.0 | 200 | 1.6937 | 0.4813 | | 0.2326 | 21.0 | 210 | 1.4180 | 0.5625 | | 0.27 | 22.0 | 220 | 1.6422 | 0.5062 | | 0.2207 | 23.0 | 230 | 1.7011 | 0.4562 | | 0.2428 | 24.0 | 240 | 1.8067 | 0.4813 | | 0.2248 | 25.0 | 250 | 1.6980 | 0.5188 | | 0.2502 | 26.0 | 260 | 1.6963 | 0.5 | | 0.1878 | 27.0 | 270 | 1.7788 | 0.5125 | | 0.2659 | 28.0 | 280 | 1.8155 | 0.4875 | | 0.1456 | 29.0 | 290 | 1.8315 | 0.475 | | 0.2087 | 30.0 | 300 | 1.7292 | 0.4938 | | 0.1779 | 31.0 | 310 | 1.6672 | 0.55 | | 0.2008 | 32.0 | 320 | 1.7537 | 0.5062 | | 0.1441 | 33.0 | 330 | 1.7741 | 0.5062 | | 0.1799 | 34.0 | 340 | 1.8359 | 0.4875 | | 0.1333 | 35.0 | 350 | 1.9234 | 0.4813 | | 0.1442 | 36.0 | 360 | 1.9067 | 0.5062 | | 0.1682 | 37.0 | 370 | 1.8590 | 0.475 | | 0.1378 | 38.0 | 380 | 1.7157 | 0.4813 | | 0.1435 | 39.0 | 390 | 1.7980 | 0.5125 | | 0.1117 | 40.0 | 400 | 1.8570 | 0.5312 | | 0.1123 | 41.0 | 410 | 1.9124 | 0.4938 | | 0.0965 | 42.0 | 420 | 1.8322 | 0.5188 | | 0.1054 | 43.0 | 430 | 1.8154 | 0.5125 | | 0.1231 | 44.0 | 440 | 1.9575 | 0.5188 | | 0.098 | 45.0 | 450 | 1.8973 | 0.4938 | | 0.0769 | 46.0 | 460 | 1.8108 | 0.5563 | | 0.0862 | 47.0 | 470 | 1.6361 | 0.5563 | | 0.0904 | 48.0 | 480 | 1.8813 | 0.5188 | | 0.0871 | 49.0 | 490 | 1.7737 | 0.55 | | 0.1053 | 50.0 | 500 | 1.8230 | 0.5062 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
dragonities/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [nateraw/vit-age-classifier](https://huggingface.co/nateraw/vit-age-classifier) on the imagefolder dataset. It achieves the following results on the evaluation set: - Accuracy: 0.55 - Loss: 1.6263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 2.1123 | 1.0 | 50 | 0.2412 | 2.0343 | | 1.8449 | 2.0 | 100 | 0.4113 | 1.7485 | | 1.7374 | 3.0 | 150 | 0.55 | 1.6263 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "0-2", "3-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "more than 70" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-22Nov24-005
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-22Nov24-005 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0634 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.8777 | 6.6667 | 10 | 0.3644 | 0.9259 | | 0.0406 | 13.3333 | 20 | 0.0942 | 1.0 | | 0.0021 | 20.0 | 30 | 0.0634 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
griffio/vit-large-patch16-224-new-dungeon-geo-morphs-22Nov24-007
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-new-dungeon-geo-morphs-22Nov24-007 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0240 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.0851 | 6.6667 | 10 | 0.4245 | 0.9259 | | 0.121 | 13.3333 | 20 | 0.0704 | 1.0 | | 0.0094 | 20.0 | 30 | 0.0240 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "three", "two", "zero" ]
brigettesegovia/plant_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # plant_classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0372 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1008 | 1.5385 | 100 | 0.0372 | 0.9850 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
kiranshivaraju/convnext-xlarge-224-22k-1k-v13
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-xlarge-224-22k-1k-v13 This model is a fine-tuned version of [facebook/convnext-xlarge-224-22k-1k](https://huggingface.co/facebook/convnext-xlarge-224-22k-1k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0104 - Recall: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Recall | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 9 | 0.0104 | 1.0 | | 0.0652 | 2.0 | 18 | 0.0278 | 0.9857 | | 0.03 | 3.0 | 27 | 0.0007 | 1.0 | | 0.0156 | 4.0 | 36 | 0.0002 | 1.0 | | 0.0042 | 5.0 | 45 | 0.0001 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "bad", "good" ]
joyjitm/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 2.14.4 - Tokenizers 0.20.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
kiranshivaraju/convnext-xlarge-224-22k-1k-v14
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-xlarge-224-22k-1k-v14 This model is a fine-tuned version of [facebook/convnext-xlarge-224-22k-1k](https://huggingface.co/facebook/convnext-xlarge-224-22k-1k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0354 - Accuracy: 0.9921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 9 | 0.1334 | 0.9606 | | 0.2411 | 2.0 | 18 | 0.0350 | 0.9921 | | 0.0722 | 3.0 | 27 | 0.0165 | 0.9921 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "bad", "good" ]
bjbjbj/my-food-model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-food-model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2676 - Accuracy: 0.943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4281 | 1.0 | 125 | 0.4344 | 0.922 | | 0.2177 | 2.0 | 250 | 0.2992 | 0.936 | | 0.132 | 3.0 | 375 | 0.2676 | 0.943 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1 - Datasets 2.16.1 - Tokenizers 0.20.3
[ "beignets", "bruschetta", "chicken_wings", "hamburger", "pork_chop", "prime_rib", "ramen" ]
alex-miller/pogona-vitticeps-gender
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pogona-vitticeps-gender This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5663 - Accuracy: 0.7812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1028 | 1.0 | 2 | 1.1062 | 0.2812 | | 1.0972 | 2.0 | 4 | 1.1082 | 0.3125 | | 1.0793 | 3.0 | 6 | 1.0692 | 0.5312 | | 1.0529 | 4.0 | 8 | 1.0578 | 0.625 | | 1.0178 | 5.0 | 10 | 1.0288 | 0.625 | | 0.9809 | 6.0 | 12 | 0.9988 | 0.6562 | | 0.9422 | 7.0 | 14 | 0.9936 | 0.6562 | | 0.8692 | 8.0 | 16 | 0.9761 | 0.625 | | 0.8503 | 9.0 | 18 | 0.9326 | 0.5938 | | 0.8128 | 10.0 | 20 | 0.9236 | 0.6562 | | 0.777 | 11.0 | 22 | 0.8541 | 0.75 | | 0.7407 | 12.0 | 24 | 0.8744 | 0.6562 | | 0.692 | 13.0 | 26 | 0.8412 | 0.6875 | | 0.6779 | 14.0 | 28 | 0.8611 | 0.6562 | | 0.6261 | 15.0 | 30 | 0.8213 | 0.625 | | 0.609 | 16.0 | 32 | 0.7389 | 0.7188 | | 0.5905 | 17.0 | 34 | 0.7421 | 0.7188 | | 0.5337 | 18.0 | 36 | 0.7651 | 0.6875 | | 0.5091 | 19.0 | 38 | 0.7201 | 0.75 | | 0.5178 | 20.0 | 40 | 0.7424 | 0.7188 | | 0.4757 | 21.0 | 42 | 0.7573 | 0.6562 | | 0.4548 | 22.0 | 44 | 0.7531 | 0.6562 | | 0.4494 | 23.0 | 46 | 0.7185 | 0.7188 | | 0.4627 | 24.0 | 48 | 0.6587 | 0.7188 | | 0.423 | 25.0 | 50 | 0.6426 | 0.75 | | 0.403 | 26.0 | 52 | 0.6525 | 0.75 | | 0.3734 | 27.0 | 54 | 0.6733 | 0.75 | | 0.38 | 28.0 | 56 | 0.6736 | 0.75 | | 0.3702 | 29.0 | 58 | 0.7211 | 0.6875 | | 0.3563 | 30.0 | 60 | 0.7263 | 0.6562 | | 0.336 | 31.0 | 62 | 0.6676 | 0.6875 | | 0.3131 | 32.0 | 64 | 0.6923 | 0.6875 | | 0.3214 | 33.0 | 66 | 0.6137 | 0.75 | | 0.3271 | 34.0 | 68 | 0.6708 | 0.8125 | | 0.3253 | 35.0 | 70 | 0.5912 | 0.75 | | 0.283 | 36.0 | 72 | 0.6332 | 0.7188 | | 0.2874 | 37.0 | 74 | 0.6345 | 0.7188 | | 0.2818 | 38.0 | 76 | 0.7593 | 0.6875 | | 0.2774 | 39.0 | 78 | 0.6817 | 0.7188 | | 0.2482 | 40.0 | 80 | 0.6784 | 0.6875 | | 0.261 | 41.0 | 82 | 0.6631 | 0.7188 | | 0.2945 | 42.0 | 84 | 0.6438 | 0.75 | | 0.2734 | 43.0 | 86 | 0.7086 | 0.75 | | 0.2536 | 44.0 | 88 | 0.6380 | 0.7188 | | 0.2643 | 45.0 | 90 | 0.6723 | 0.6562 | | 0.2273 | 46.0 | 92 | 0.6775 | 0.7188 | | 0.235 | 47.0 | 94 | 0.6876 | 0.7188 | | 0.2642 | 48.0 | 96 | 0.6382 | 0.7188 | | 0.2467 | 49.0 | 98 | 0.6701 | 0.7188 | | 0.2382 | 50.0 | 100 | 0.5663 | 0.7812 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "female", "indeterminate", "male" ]
caiban123bo/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
aaryan317/finetuned-indian-food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-indian-food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset. It achieves the following results on the evaluation set: - Loss: 0.1966 - Accuracy: 0.9458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.0792 | 0.3003 | 100 | 0.9595 | 0.8310 | | 0.7392 | 0.6006 | 200 | 0.6229 | 0.8735 | | 0.5819 | 0.9009 | 300 | 0.4570 | 0.8969 | | 0.3794 | 1.2012 | 400 | 0.3989 | 0.9012 | | 0.325 | 1.5015 | 500 | 0.3898 | 0.8937 | | 0.4622 | 1.8018 | 600 | 0.3269 | 0.9086 | | 0.2743 | 2.1021 | 700 | 0.2421 | 0.9437 | | 0.3452 | 2.4024 | 800 | 0.2907 | 0.9160 | | 0.2029 | 2.7027 | 900 | 0.2620 | 0.9309 | | 0.2746 | 3.0030 | 1000 | 0.2221 | 0.9437 | | 0.1373 | 3.3033 | 1100 | 0.2311 | 0.9330 | | 0.1558 | 3.6036 | 1200 | 0.1966 | 0.9458 | | 0.1272 | 3.9039 | 1300 | 0.2092 | 0.9426 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Tokenizers 0.20.3
[ "burger", "butter_naan", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi" ]
MahimaTayal123/DR-Classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MahimaTayal123/DR-Classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2187 - Validation Loss: 0.2654 - Train Accuracy: 0.9420 - Epoch: 5 ## Model description This model leverages the Vision Transformer (ViT) architecture to classify retinal images for early detection of Diabetic Retinopathy (DR). The fine-tuned model improves accuracy and generalization on medical imaging datasets. ## Intended uses & limitations ### Intended Uses: - Medical diagnosis support for Diabetic Retinopathy - Research applications in ophthalmology and AI-based healthcare ### Limitations: - Requires high-quality retinal images for accurate predictions - Not a substitute for professional medical advice; should be used as an assistive tool ## Training and evaluation data The model was trained on a curated dataset containing labeled retinal images. The dataset includes various severity levels of Diabetic Retinopathy, ensuring robustness in classification. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 146985, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Epoch | Train Loss | Validation Loss | Train Accuracy | |:-----:|:---------:|:---------------:|:--------------:| | 1 | 0.4513 | 0.5234 | 0.8270 | | 2 | 0.3124 | 0.4102 | 0.8930 | | 3 | 0.2751 | 0.3856 | 0.9150 | | 4 | 0.2376 | 0.3012 | 0.9320 | | 5 | 0.2187 | 0.2654 | 0.9420 | ### Framework versions - Transformers 4.46.2 - TensorFlow 2.17.1 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "0", "1", "2", "3", "4" ]
alexissaavedra/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1733 - Accuracy: 0.9432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3859 | 1.0 | 370 | 0.3105 | 0.9202 | | 0.2087 | 2.0 | 740 | 0.2503 | 0.9242 | | 0.1453 | 3.0 | 1110 | 0.2378 | 0.9269 | | 0.1714 | 4.0 | 1480 | 0.2260 | 0.9323 | | 0.1266 | 5.0 | 1850 | 0.2236 | 0.9323 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
platzi/platzi-vit-model-omar-espejel22
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-omar-espejel22 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0721 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1428 | 3.8462 | 500 | 0.0721 | 0.9850 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
pyb-camag/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0802 - Accuracy: 0.9752 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2641 | 1.0 | 190 | 0.1140 | 0.9648 | | 0.1767 | 2.0 | 380 | 0.0927 | 0.9707 | | 0.1124 | 3.0 | 570 | 0.0802 | 0.9752 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
platzi/platzi-vit-model-Daniel-Sarmiento
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-Daniel-Sarmiento This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0243 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1296 | 3.8462 | 500 | 0.0243 | 0.9850 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
amaye15/aimv2-large-patch14-native-image-classification
# AIMv2-Large-Patch14-Native Image Classification [Original AIMv2 Paper](https://arxiv.org/abs/2411.14402) | [BibTeX](#citation) This repository contains an adapted version of the original AIMv2 model, modified to be compatible with the `AutoModelForImageClassification` class from Hugging Face Transformers. This adaptation enables seamless use of the model for image classification tasks. **This model has not been trained/fine-tuned.** ## Introduction We have adapted the original `apple/aimv2-large-patch14-native` model to work with `AutoModelForImageClassification`. The AIMv2 family consists of vision models pre-trained with a multimodal autoregressive objective, offering robust performance across various benchmarks. Some highlights of the AIMv2 models include: 1. Outperforming OAI CLIP and SigLIP on the majority of multimodal understanding benchmarks. 2. Surpassing DINOv2 in open-vocabulary object detection and referring expression comprehension. 3. Demonstrating strong recognition performance, with AIMv2-3B achieving **89.5% on ImageNet using a frozen trunk**. ## Usage ### PyTorch ```python import requests from PIL import Image from transformers import AutoImageProcessor, AutoModelForImageClassification url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained( "amaye15/aimv2-large-patch14-native-image-classification", ) model = AutoModelForImageClassification.from_pretrained( "amaye15/aimv2-large-patch14-native-image-classification", trust_remote_code=True, ) inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # Get predicted class predictions = outputs.logits.softmax(dim=-1) predicted_class = predictions.argmax(-1).item() print(f"Predicted class: {model.config.id2label[predicted_class]}") ``` ## Model Details - **Model Name**: `amaye15/aimv2-large-patch14-native-image-classification` - **Original Model**: `apple/aimv2-large-patch14-native` - **Adaptation**: Modified to be compatible with `AutoModelForImageClassification` for direct use in image classification tasks. - **Framework**: PyTorch ## Citation If you use this model or find it helpful, please consider citing the original AIMv2 paper: ```bibtex @article{fini2024aimv2, title={Multimodal Autoregressive Pre-training of Large Vision Encoders}, author={Fini, Enrico and others}, journal={arXiv preprint arXiv:2411.14402}, year={2024} } ```
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10", "label_11", "label_12", "label_13", "label_14", "label_15", "label_16", "label_17", "label_18", "label_19", "label_20", "label_21", "label_22", "label_23", "label_24", "label_25", "label_26", "label_27", "label_28", "label_29", "label_30", "label_31", "label_32", "label_33", "label_34", "label_35", "label_36", "label_37", "label_38", "label_39", "label_40", "label_41", "label_42", "label_43", "label_44", "label_45", "label_46", "label_47", "label_48", "label_49", "label_50", "label_51", "label_52", "label_53", "label_54", "label_55", "label_56", "label_57", "label_58", "label_59", "label_60", "label_61", "label_62", "label_63", "label_64", "label_65", "label_66", "label_67", "label_68", "label_69", "label_70", "label_71", "label_72", "label_73", "label_74", "label_75", "label_76", "label_77", "label_78", "label_79", "label_80", "label_81", "label_82", "label_83", "label_84", "label_85", "label_86", "label_87", "label_88", "label_89", "label_90", "label_91", "label_92", "label_93", "label_94", "label_95", "label_96", "label_97", "label_98", "label_99", "label_100", "label_101", "label_102", "label_103", "label_104", "label_105", "label_106", "label_107", "label_108", "label_109", "label_110", "label_111", "label_112", "label_113", "label_114", "label_115", "label_116", "label_117", "label_118", "label_119", "label_120", "label_121", "label_122", "label_123", "label_124", "label_125", "label_126", "label_127", "label_128", "label_129", "label_130", "label_131", "label_132", "label_133", "label_134", "label_135", "label_136", "label_137", "label_138", "label_139", "label_140", "label_141", "label_142", "label_143", "label_144", "label_145", "label_146", "label_147", "label_148", "label_149", "label_150", "label_151", "label_152", "label_153", "label_154", "label_155", "label_156", "label_157", "label_158", "label_159", "label_160", "label_161", "label_162", "label_163", "label_164", "label_165", "label_166", "label_167", "label_168", "label_169", "label_170", "label_171", "label_172", "label_173", "label_174", "label_175", "label_176", "label_177", "label_178", "label_179", "label_180", "label_181", "label_182", "label_183", "label_184", "label_185", "label_186", "label_187", "label_188", "label_189", "label_190", "label_191", "label_192", "label_193", "label_194", "label_195", "label_196", "label_197", "label_198", "label_199", "label_200", "label_201", "label_202", "label_203", "label_204", "label_205", "label_206", "label_207", "label_208", "label_209", "label_210", "label_211", "label_212", "label_213", "label_214", "label_215", "label_216", "label_217", "label_218", "label_219", "label_220", "label_221", "label_222", "label_223", "label_224", "label_225", "label_226", "label_227", "label_228", "label_229", "label_230", "label_231", "label_232", "label_233", "label_234", "label_235", "label_236", "label_237", "label_238", "label_239", "label_240", "label_241", "label_242", "label_243", "label_244", "label_245", "label_246", "label_247", "label_248", "label_249", "label_250", "label_251", "label_252", "label_253", "label_254", "label_255", "label_256", "label_257", "label_258", "label_259", "label_260", "label_261", "label_262", "label_263", "label_264", "label_265", "label_266", "label_267", "label_268", "label_269", "label_270", "label_271", "label_272", "label_273", "label_274", "label_275", "label_276", "label_277", "label_278", "label_279", "label_280", 
"label_281", "label_282", "label_283", "label_284", "label_285", "label_286", "label_287", "label_288", "label_289", "label_290", "label_291", "label_292", "label_293", "label_294", "label_295", "label_296", "label_297", "label_298", "label_299", "label_300", "label_301", "label_302", "label_303", "label_304", "label_305", "label_306", "label_307", "label_308", "label_309", "label_310", "label_311", "label_312", "label_313", "label_314", "label_315", "label_316", "label_317", "label_318", "label_319", "label_320", "label_321", "label_322", "label_323", "label_324", "label_325", "label_326", "label_327", "label_328", "label_329", "label_330", "label_331", "label_332", "label_333", "label_334", "label_335", "label_336", "label_337", "label_338", "label_339", "label_340", "label_341", "label_342", "label_343", "label_344", "label_345", "label_346", "label_347", "label_348", "label_349", "label_350", "label_351", "label_352", "label_353", "label_354", "label_355", "label_356", "label_357", "label_358", "label_359", "label_360", "label_361", "label_362", "label_363", "label_364", "label_365", "label_366", "label_367", "label_368", "label_369", "label_370", "label_371", "label_372", "label_373", "label_374", "label_375", "label_376", "label_377", "label_378", "label_379", "label_380", "label_381", "label_382", "label_383", "label_384", "label_385", "label_386", "label_387", "label_388", "label_389", "label_390", "label_391", "label_392", "label_393", "label_394", "label_395", "label_396", "label_397", "label_398", "label_399", "label_400", "label_401", "label_402", "label_403", "label_404", "label_405", "label_406", "label_407", "label_408", "label_409", "label_410", "label_411", "label_412", "label_413", "label_414", "label_415", "label_416", "label_417", "label_418", "label_419", "label_420", "label_421", "label_422", "label_423", "label_424", "label_425", "label_426", "label_427", "label_428", "label_429", "label_430", "label_431", "label_432", "label_433", "label_434", "label_435", "label_436", "label_437", "label_438", "label_439", "label_440", "label_441", "label_442", "label_443", "label_444", "label_445", "label_446", "label_447", "label_448", "label_449", "label_450", "label_451", "label_452", "label_453", "label_454", "label_455", "label_456", "label_457", "label_458", "label_459", "label_460", "label_461", "label_462", "label_463", "label_464", "label_465", "label_466", "label_467", "label_468", "label_469", "label_470", "label_471", "label_472", "label_473", "label_474", "label_475", "label_476", "label_477", "label_478", "label_479", "label_480", "label_481", "label_482", "label_483", "label_484", "label_485", "label_486", "label_487", "label_488", "label_489", "label_490", "label_491", "label_492", "label_493", "label_494", "label_495", "label_496", "label_497", "label_498", "label_499", "label_500", "label_501", "label_502", "label_503", "label_504", "label_505", "label_506", "label_507", "label_508", "label_509", "label_510", "label_511", "label_512", "label_513", "label_514", "label_515", "label_516", "label_517", "label_518", "label_519", "label_520", "label_521", "label_522", "label_523", "label_524", "label_525", "label_526", "label_527", "label_528", "label_529", "label_530", "label_531", "label_532", "label_533", "label_534", "label_535", "label_536", "label_537", "label_538", "label_539", "label_540", "label_541", "label_542", "label_543", "label_544", "label_545", "label_546", "label_547", "label_548", "label_549", "label_550", "label_551", "label_552", "label_553", 
"label_554", "label_555", "label_556", "label_557", "label_558", "label_559", "label_560", "label_561", "label_562", "label_563", "label_564", "label_565", "label_566", "label_567", "label_568", "label_569", "label_570", "label_571", "label_572", "label_573", "label_574", "label_575", "label_576", "label_577", "label_578", "label_579", "label_580", "label_581", "label_582", "label_583", "label_584", "label_585", "label_586", "label_587", "label_588", "label_589", "label_590", "label_591", "label_592", "label_593", "label_594", "label_595", "label_596", "label_597", "label_598", "label_599", "label_600", "label_601", "label_602", "label_603", "label_604", "label_605", "label_606", "label_607", "label_608", "label_609", "label_610", "label_611", "label_612", "label_613", "label_614", "label_615", "label_616", "label_617", "label_618", "label_619", "label_620", "label_621", "label_622", "label_623", "label_624", "label_625", "label_626", "label_627", "label_628", "label_629", "label_630", "label_631", "label_632", "label_633", "label_634", "label_635", "label_636", "label_637", "label_638", "label_639", "label_640", "label_641", "label_642", "label_643", "label_644", "label_645", "label_646", "label_647", "label_648", "label_649", "label_650", "label_651", "label_652", "label_653", "label_654", "label_655", "label_656", "label_657", "label_658", "label_659", "label_660", "label_661", "label_662", "label_663", "label_664", "label_665", "label_666", "label_667", "label_668", "label_669", "label_670", "label_671", "label_672", "label_673", "label_674", "label_675", "label_676", "label_677", "label_678", "label_679", "label_680", "label_681", "label_682", "label_683", "label_684", "label_685", "label_686", "label_687", "label_688", "label_689", "label_690", "label_691", "label_692", "label_693", "label_694", "label_695", "label_696", "label_697", "label_698", "label_699", "label_700", "label_701", "label_702", "label_703", "label_704", "label_705", "label_706", "label_707", "label_708", "label_709", "label_710", "label_711", "label_712", "label_713", "label_714", "label_715", "label_716", "label_717", "label_718", "label_719", "label_720", "label_721", "label_722", "label_723", "label_724", "label_725", "label_726", "label_727", "label_728", "label_729", "label_730", "label_731", "label_732", "label_733", "label_734", "label_735", "label_736", "label_737", "label_738", "label_739", "label_740", "label_741", "label_742", "label_743", "label_744", "label_745", "label_746", "label_747", "label_748", "label_749", "label_750", "label_751", "label_752", "label_753", "label_754", "label_755", "label_756", "label_757", "label_758", "label_759", "label_760", "label_761", "label_762", "label_763", "label_764", "label_765", "label_766", "label_767", "label_768", "label_769", "label_770", "label_771", "label_772", "label_773", "label_774", "label_775", "label_776", "label_777", "label_778", "label_779", "label_780", "label_781", "label_782", "label_783", "label_784", "label_785", "label_786", "label_787", "label_788", "label_789", "label_790", "label_791", "label_792", "label_793", "label_794", "label_795", "label_796", "label_797", "label_798", "label_799", "label_800", "label_801", "label_802", "label_803", "label_804", "label_805", "label_806", "label_807", "label_808", "label_809", "label_810", "label_811", "label_812", "label_813", "label_814", "label_815", "label_816", "label_817", "label_818", "label_819", "label_820", "label_821", "label_822", "label_823", "label_824", "label_825", "label_826", 
"label_827", "label_828", "label_829", "label_830", "label_831", "label_832", "label_833", "label_834", "label_835", "label_836", "label_837", "label_838", "label_839", "label_840", "label_841", "label_842", "label_843", "label_844", "label_845", "label_846", "label_847", "label_848", "label_849", "label_850", "label_851", "label_852", "label_853", "label_854", "label_855", "label_856", "label_857", "label_858", "label_859", "label_860", "label_861", "label_862", "label_863", "label_864", "label_865", "label_866", "label_867", "label_868", "label_869", "label_870", "label_871", "label_872", "label_873", "label_874", "label_875", "label_876", "label_877", "label_878", "label_879", "label_880", "label_881", "label_882", "label_883", "label_884", "label_885", "label_886", "label_887", "label_888", "label_889", "label_890", "label_891", "label_892", "label_893", "label_894", "label_895", "label_896", "label_897", "label_898", "label_899", "label_900", "label_901", "label_902", "label_903", "label_904", "label_905", "label_906", "label_907", "label_908", "label_909", "label_910", "label_911", "label_912", "label_913", "label_914", "label_915", "label_916", "label_917", "label_918", "label_919", "label_920", "label_921", "label_922", "label_923", "label_924", "label_925", "label_926", "label_927", "label_928", "label_929", "label_930", "label_931", "label_932", "label_933", "label_934", "label_935", "label_936", "label_937", "label_938", "label_939", "label_940", "label_941", "label_942", "label_943", "label_944", "label_945", "label_946", "label_947", "label_948", "label_949", "label_950", "label_951", "label_952", "label_953", "label_954", "label_955", "label_956", "label_957", "label_958", "label_959", "label_960", "label_961", "label_962", "label_963", "label_964", "label_965", "label_966", "label_967", "label_968", "label_969", "label_970", "label_971", "label_972", "label_973", "label_974", "label_975", "label_976", "label_977", "label_978", "label_979", "label_980", "label_981", "label_982", "label_983", "label_984", "label_985", "label_986", "label_987", "label_988", "label_989", "label_990", "label_991", "label_992", "label_993", "label_994", "label_995", "label_996", "label_997", "label_998", "label_999" ]
n1hal/swinv2-plantclef-1k
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swinv2-plantclef-1k

This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.2482
- Accuracy: 0.7565
- F1: 0.7560

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 1.6253        | 1.0   | 11441  | 1.4319          | 0.6419   | 0.6402 |
| 1.1142        | 2.0   | 22882  | 1.1573          | 0.7031   | 0.7021 |
| 0.8357        | 3.0   | 34323  | 1.0814          | 0.7228   | 0.7235 |
| 0.604         | 4.0   | 45764  | 1.1028          | 0.7323   | 0.7312 |
| 0.3991        | 5.0   | 57205  | 1.1261          | 0.7373   | 0.7371 |
| 0.2405        | 6.0   | 68646  | 1.1802          | 0.7373   | 0.7377 |
| 0.1543        | 7.0   | 80087  | 1.2014          | 0.7453   | 0.7450 |
| 0.097         | 8.0   | 91528  | 1.2337          | 0.7474   | 0.7472 |
| 0.054         | 9.0   | 102969 | 1.2416          | 0.7532   | 0.7529 |
| 0.0414        | 10.0  | 114410 | 1.2482          | 0.7565   | 0.7560 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.0
- Datasets 3.1.0
- Tokenizers 0.20.1
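The card above omits a usage snippet. A minimal inference sketch is given below; it assumes the checkpoint loads with the standard Auto classes (as its SwinV2 base model does) and uses a placeholder image path, with predictions mapping to the PlantCLEF species ids listed after this card.

```python
# Minimal inference sketch (assumes standard Auto-class support; image path is a placeholder).
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "n1hal/swinv2-plantclef-1k"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("plant_photo.jpg")               # placeholder path to a local image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])             # a PlantCLEF species id such as "1355868"
```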
[ "1355868", "1355869", "1355901", "1356304", "1356305", "1356308", "1356310", "1356314", "1356320", "1356323", "1356328", "1356330", "1356333", "1355907", "1356355", "1356357", "1356361", "1356364", "1356382", "1356394", "1356395", "1356396", "1356397", "1356398", "1355908", "1356422", "1356428", "1356433", "1356448", "1356455", "1356457", "1356458", "1356460", "1356464", "1356469", "1355932", "1356472", "1356475", "1356476", "1356488", "1356538", "1356567", "1356570", "1356571", "1356575", "1356576", "1355935", "1356583", "1356588", "1356595", "1356597", "1356603", "1356608", "1356609", "1356634", "1356663", "1356672", "1355936", "1356674", "1356692", "1356697", "1356709", "1356729", "1356732", "1356752", "1356757", "1356766", "1356780", "1355937", "1356781", "1356784", "1356785", "1356786", "1356790", "1356792", "1356793", "1356803", "1356804", "1356817", "1355964", "1356843", "1356857", "1356905", "1356939", "1356942", "1356979", "1356985", "1357004", "1357009", "1357026", "1355968", "1357059", "1357096", "1357099", "1357103", "1357104", "1357105", "1357111", "1357114", "1357126", "1357148", "1355969", "1357176", "1357207", "1357227", "1357244", "1357268", "1357275", "1357291", "1357314", "1357317", "1357320", "1355870", "1355971", "1357330", "1357350", "1357351", "1357379", "1357392", "1357401", "1357413", "1357416", "1357438", "1357492", "1355972", "1357495", "1357526", "1357546", "1357549", "1357560", "1357568", "1357598", "1357599", "1357603", "1357608", "1355977", "1357617", "1357630", "1357635", "1357652", "1357697", "1357705", "1357711", "1357723", "1357742", "1357743", "1355978", "1357770", "1357771", "1357799", "1357815", "1357848", "1357867", "1357868", "1357872", "1357875", "1357911", "1355984", "1357950", "1357953", "1357968", "1357998", "1358003", "1358024", "1358026", "1358094", "1358095", "1358105", "1355990", "1358113", "1358126", "1358134", "1358140", "1358147", "1358151", "1358171", "1358203", "1358204", "1358213", "1355994", "1358257", "1358289", "1358318", "1358322", "1358335", "1358340", "1358347", "1358349", "1358386", "1358413", "1355995", "1358416", "1358446", "1358454", "1358469", "1358491", "1358494", "1358500", "1358501", "1358503", "1358505", "1356007", "1358507", "1358509", "1358511", "1358515", "1358517", "1358525", "1358543", "1358548", "1358552", "1358572", "1356008", "1358590", "1358592", "1358595", "1358605", "1358609", "1358610", "1358613", "1358614", "1358616", "1358617", "1355872", "1356012", "1358622", "1358624", "1358643", "1358654", "1358671", "1358674", "1358682", "1358684", "1358689", "1358706", "1356013", "1358710", "1358713", "1358716", "1358718", "1358722", "1358742", "1358745", "1358746", "1358747", "1358751", "1356017", "1358752", "1358766", "1358768", "1358770", "1358784", "1358785", "1358788", "1358789", "1358827", "1358846", "1356022", "1358851", "1358852", "1358854", "1358876", "1358950", "1358969", "1358986", "1359006", "1359009", "1359020", "1356023", "1359029", "1359079", "1359081", "1359086", "1359124", "1359138", "1359160", "1359161", "1359162", "1359168", "1356040", "1359169", "1359172", "1359181", "1359182", "1359195", "1359197", "1359205", "1359216", "1359266", "1359268", "1356045", "1359274", "1359284", "1359296", "1359297", "1359321", "1359322", "1359333", "1359344", "1359410", "1359413", "1356052", "1359431", "1359442", "1359450", "1359452", "1359483", "1359485", "1359488", "1359498", "1359517", "1359519", "1356062", "1359525", "1359545", "1359548", "1359562", "1359575", "1359578", "1359581", "1359596", "1359601", "1359616", 
"1356063", "1359620", "1359622", "1359625", "1359627", "1359635", "1359639", "1359649", "1359658", "1359659", "1359662", "1355881", "1356064", "1359668", "1359669", "1359673", "1359675", "1359676", "1359677", "1359678", "1359680", "1359681", "1359686", "1356065", "1359688", "1359750", "1359752", "1359757", "1359758", "1359760", "1359761", "1359772", "1359792", "1359804", "1356066", "1359835", "1359845", "1359860", "1359871", "1359873", "1359943", "1359959", "1359965", "1359973", "1359978", "1356067", "1359981", "1360006", "1360075", "1360101", "1360116", "1360120", "1360132", "1360135", "1360141", "1360145", "1356075", "1360153", "1360155", "1360159", "1360161", "1360162", "1360164", "1360182", "1360185", "1360187", "1360193", "1356078", "1360203", "1360222", "1360231", "1360247", "1360257", "1360259", "1360260", "1360261", "1360262", "1360267", "1356079", "1360271", "1360275", "1360290", "1360293", "1360297", "1360300", "1360307", "1360313", "1360316", "1360317", "1356082", "1360318", "1360319", "1360322", "1360323", "1360324", "1360326", "1360328", "1360330", "1360334", "1360338", "1356084", "1360345", "1360348", "1360350", "1360354", "1360358", "1360385", "1360399", "1360404", "1360406", "1360422", "1356086", "1360433", "1360434", "1360443", "1360449", "1360450", "1360451", "1360457", "1360459", "1360468", "1360476", "1355882", "1356091", "1360489", "1360495", "1360497", "1360507", "1360520", "1360523", "1360531", "1360539", "1360542", "1360547", "1356094", "1360548", "1360549", "1360550", "1360555", "1360557", "1360562", "1360566", "1360570", "1360588", "1360590", "1356095", "1360595", "1360607", "1360608", "1360613", "1360614", "1360638", "1360671", "1360674", "1360676", "1360701", "1356105", "1360709", "1360711", "1360719", "1360739", "1360744", "1360759", "1360761", "1360771", "1360790", "1360793", "1356106", "1360798", "1360801", "1360811", "1360814", "1360816", "1360825", "1360831", "1360841", "1360860", "1360861", "1356107", "1360872", "1360878", "1360897", "1360898", "1360925", "1360926", "1360931", "1360945", "1360968", "1360971", "1356108", "1360978", "1360982", "1360985", "1360989", "1360995", "1360998", "1360999", "1361000", "1361001", "1361012", "1356109", "1361017", "1361019", "1361020", "1361050", "1361055", "1361058", "1361074", "1361075", "1361078", "1361081", "1356111", "1361087", "1361089", "1361104", "1361106", "1361109", "1361117", "1361133", "1361195", "1361245", "1361307", "1356115", "1361329", "1361335", "1361487", "1361492", "1361497", "1361501", "1361510", "1361514", "1361516", "1361519", "1355884", "1356117", "1361533", "1361535", "1361540", "1361543", "1361569", "1361572", "1361574", "1361592", "1361626", "1361634", "1356118", "1361635", "1361636", "1361637", "1361650", "1361652", "1361653", "1361654", "1361656", "1361669", "1361673", "1356123", "1361682", "1361685", "1361687", "1361691", "1361701", "1361702", "1361710", "1361711", "1361720", "1361721", "1356126", "1361742", "1361743", "1361747", "1361759", "1361760", "1361771", "1361772", "1361790", "1361792", "1361800", "1356133", "1361801", "1361819", "1361823", "1361824", "1361829", "1361847", "1361863", "1361864", "1361883", "1361898", "1356134", "1361900", "1361926", "1361947", "1361953", "1361960", "1361993", "1361996", "1362009", "1362019", "1362038", "1356136", "1362040", "1362100", "1362194", "1362246", "1362251", "1362283", "1362294", "1362327", "1362332", "1362350", "1356144", "1362353", "1362380", "1362399", "1362406", "1362417", "1362422", "1362427", "1362429", "1362438", "1362441", "1356149", 
"1362443", "1362447", "1362454", "1362462", "1362465", "1362473", "1362485", "1362486", "1362492", "1362499", "1356153", "1362533", "1362626", "1362645", "1362648", "1362653", "1362711", "1362746", "1362842", "1362856", "1362892", "1355886", "1356154", "1362899", "1362912", "1362946", "1362951", "1362952", "1362963", "1362982", "1362985", "1362987", "1362990", "1356158", "1363021", "1363046", "1363068", "1363101", "1363125", "1363128", "1363129", "1363130", "1363142", "1363148", "1356159", "1363165", "1363173", "1363179", "1363180", "1363199", "1363212", "1363213", "1363214", "1363216", "1363217", "1356162", "1363218", "1363224", "1363226", "1363227", "1363229", "1363232", "1363236", "1363248", "1363250", "1363261", "1356167", "1363264", "1363267", "1363268", "1363275", "1363281", "1363283", "1363293", "1363324", "1363326", "1363328", "1356168", "1363329", "1363332", "1363336", "1363337", "1363339", "1363341", "1363358", "1363362", "1363386", "1363402", "1356171", "1363410", "1363424", "1363433", "1363440", "1363442", "1363446", "1363451", "1363455", "1363456", "1363458", "1356180", "1363459", "1363460", "1363461", "1363467", "1363468", "1363469", "1363475", "1363481", "1363483", "1363487", "1356200", "1363490", "1363496", "1363500", "1363501", "1363508", "1363526", "1363575", "1363593", "1363595", "1363598", "1356201", "1363600", "1363603", "1363610", "1363615", "1363624", "1363638", "1363642", "1363647", "1363651", "1363660", "1355897", "1356208", "1363671", "1363676", "1363687", "1363698", "1363699", "1363705", "1363706", "1363707", "1363728", "1363729", "1356209", "1363730", "1363731", "1363732", "1363733", "1363734", "1363737", "1363740", "1363749", "1363764", "1363769", "1356214", "1363770", "1363778", "1363791", "1363793", "1363799", "1363801", "1363802", "1363803", "1363813", "1363814", "1356215", "1363815", "1363822", "1363832", "1363835", "1363842", "1363849", "1363850", "1363858", "1363878", "1363879", "1356217", "1363882", "1363883", "1363884", "1363885", "1363886", "1363887", "1363889", "1363892", "1363893", "1363896", "1356229", "1363897", "1363901", "1363906", "1363908", "1363909", "1363910", "1363911", "1363915", "1363920", "1363944", "1356240", "1363947", "1363953", "1363960", "1363972", "1363974", "1363978", "1363987", "1363988", "1363991", "1363993", "1356257", "1363994", "1363996", "1363997", "1363999", "1364006", "1364016", "1364019", "1364028", "1364029", "1364031", "1356268", "1364032", "1364039", "1364040", "1364046", "1364048", "1364049", "1364058", "1364059", "1364060", "1364062", "1356276", "1364063", "1364064", "1364065", "1364066", "1364067", "1364068", "1364079", "1364085", "1364090", "1364093", "1355898", "1356277", "1364100", "1364114", "1364120", "1364122", "1364127", "1364138", "1364143", "1364145", "1364151", "1364152", "1356278", "1364154", "1364157", "1364160", "1364164", "1364167", "1364168", "1364170", "1364171", "1364172", "1364173", "1356282", "1365253", "1367432", "1369068", "1369309", "1370072", "1370637", "1370859", "1374048", "1374114", "1379532", "1356284", "1380238", "1380273", "1382106", "1383635", "1384485", "1386159", "1388692", "1388727", "1388787", "1388801", "1356286", "1388802", "1388812", "1388913", "1389018", "1389022", "1389059", "1389228", "1389311", "1389369", "1389405", "1356290", "1389495", "1389510", "1389550", "1389553", "1389570", "1389576", "1389581", "1389589", "1389627", "1389635", "1356294", "1389833", "1389966", "1389974", "1389976", "1390068", "1390095", "1390300", "1390584", "1390603", "1390637", "1356295", "1390653", 
"1390659", "1390663", "1390666", "1390669", "1390671", "1390674", "1390680", "1390687", "1390691", "1356300", "1390699", "1390725", "1390740", "1390943", "1390944", "1390949", "1390952", "1390954", "1390972", "1391028", "1356303", "1391037", "1391092", "1391099", "1391103", "1391104", "1391110", "1391112", "1391161", "1391192", "1391226" ]
Towen/vit-base-patch16-224-in21k-finetuned
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-in21k-finetuned

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.1228
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.5651        | 0.9816  | 40   | 0.7021          | 0.5      |
| 0.3002        | 1.9877  | 81   | 0.7162          | 0.625    |
| 0.251         | 2.9939  | 122  | 0.8250          | 0.625    |
| 0.1628        | 4.0     | 163  | 0.8735          | 0.625    |
| 0.1763        | 4.9816  | 203  | 0.7803          | 0.625    |
| 0.1694        | 5.9877  | 244  | 0.3916          | 0.6875   |
| 0.1572        | 6.9939  | 285  | 0.6275          | 0.8125   |
| 0.1343        | 8.0     | 326  | 1.3112          | 0.625    |
| 0.1629        | 8.9816  | 366  | 0.5798          | 0.625    |
| 0.1675        | 9.9877  | 407  | 0.4662          | 0.8125   |
| 0.1254        | 10.9939 | 448  | 0.4484          | 0.8125   |
| 0.136         | 12.0    | 489  | 0.3055          | 0.8125   |
| 0.1303        | 12.9816 | 529  | 0.2235          | 0.875    |
| 0.177         | 13.9877 | 570  | 0.4362          | 0.8125   |
| 0.125         | 14.9939 | 611  | 0.5964          | 0.625    |
| 0.1059        | 16.0    | 652  | 0.5711          | 0.6875   |
| 0.1012        | 16.9816 | 692  | 0.1228          | 1.0      |
| 0.0945        | 17.9877 | 733  | 0.1478          | 1.0      |
| 0.1169        | 18.9939 | 774  | 0.2164          | 0.9375   |
| 0.0968        | 19.6319 | 800  | 0.2333          | 0.875    |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
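No usage example is given above. A minimal sketch using the high-level `pipeline` API follows; the file path is a placeholder, and the two class names (`normal`, `pneumonia`) come from the model's label list shown after this card.

```python
# Minimal inference sketch using the image-classification pipeline.
# "chest_xray.jpg" is a placeholder path to a local image.
from transformers import pipeline

classifier = pipeline("image-classification", model="Towen/vit-base-patch16-224-in21k-finetuned")
print(classifier("chest_xray.jpg"))
# e.g. [{'label': 'pneumonia', 'score': ...}, {'label': 'normal', 'score': ...}]
```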
[ "normal", "pneumonia" ]
yarak001/resnet-18
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
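The "How to Get Started with the Model" section above is left as a placeholder. A generic loading sketch is given below; it assumes the checkpoint loads with the standard Auto classes (as `microsoft/resnet-18` does) and that its label set is the ImageNet-1k list shown after this card. The image URL is purely illustrative.

```python
# Sketch for the empty "How to Get Started" section; assumes standard Auto-class support.
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "yarak001/resnet-18"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"   # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=image, return_tensors="pt")
predicted = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[predicted])   # an ImageNet-1k class such as "tabby, tabby cat"
```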
[ "tench, tinca tinca", "goldfish, carassius auratus", "great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias", "tiger shark, galeocerdo cuvieri", "hammerhead, hammerhead shark", "electric ray, crampfish, numbfish, torpedo", "stingray", "cock", "hen", "ostrich, struthio camelus", "brambling, fringilla montifringilla", "goldfinch, carduelis carduelis", "house finch, linnet, carpodacus mexicanus", "junco, snowbird", "indigo bunting, indigo finch, indigo bird, passerina cyanea", "robin, american robin, turdus migratorius", "bulbul", "jay", "magpie", "chickadee", "water ouzel, dipper", "kite", "bald eagle, american eagle, haliaeetus leucocephalus", "vulture", "great grey owl, great gray owl, strix nebulosa", "european fire salamander, salamandra salamandra", "common newt, triturus vulgaris", "eft", "spotted salamander, ambystoma maculatum", "axolotl, mud puppy, ambystoma mexicanum", "bullfrog, rana catesbeiana", "tree frog, tree-frog", "tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui", "loggerhead, loggerhead turtle, caretta caretta", "leatherback turtle, leatherback, leathery turtle, dermochelys coriacea", "mud turtle", "terrapin", "box turtle, box tortoise", "banded gecko", "common iguana, iguana, iguana iguana", "american chameleon, anole, anolis carolinensis", "whiptail, whiptail lizard", "agama", "frilled lizard, chlamydosaurus kingi", "alligator lizard", "gila monster, heloderma suspectum", "green lizard, lacerta viridis", "african chameleon, chamaeleo chamaeleon", "komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis", "african crocodile, nile crocodile, crocodylus niloticus", "american alligator, alligator mississipiensis", "triceratops", "thunder snake, worm snake, carphophis amoenus", "ringneck snake, ring-necked snake, ring snake", "hognose snake, puff adder, sand viper", "green snake, grass snake", "king snake, kingsnake", "garter snake, grass snake", "water snake", "vine snake", "night snake, hypsiglena torquata", "boa constrictor, constrictor constrictor", "rock python, rock snake, python sebae", "indian cobra, naja naja", "green mamba", "sea snake", "horned viper, cerastes, sand viper, horned asp, cerastes cornutus", "diamondback, diamondback rattlesnake, crotalus adamanteus", "sidewinder, horned rattlesnake, crotalus cerastes", "trilobite", "harvestman, daddy longlegs, phalangium opilio", "scorpion", "black and gold garden spider, argiope aurantia", "barn spider, araneus cavaticus", "garden spider, aranea diademata", "black widow, latrodectus mactans", "tarantula", "wolf spider, hunting spider", "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse, partridge, bonasa umbellus", "prairie chicken, prairie grouse, prairie fowl", "peacock", "quail", "partridge", "african grey, african gray, psittacus erithacus", "macaw", "sulphur-crested cockatoo, kakatoe galerita, cacatua galerita", "lorikeet", "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "drake", "red-breasted merganser, mergus serrator", "goose", "black swan, cygnus atratus", "tusker", "echidna, spiny anteater, anteater", "platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus", "wallaby, brush kangaroo", "koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus", "wombat", "jellyfish", "sea anemone, anemone", "brain coral", "flatworm, platyhelminth", "nematode, nematode worm, roundworm", "conch", "snail", "slug", "sea slug, nudibranch", "chiton, coat-of-mail shell, sea 
cradle, polyplacophore", "chambered nautilus, pearly nautilus, nautilus", "dungeness crab, cancer magister", "rock crab, cancer irroratus", "fiddler crab", "king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica", "american lobster, northern lobster, maine lobster, homarus americanus", "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "crayfish, crawfish, crawdad, crawdaddy", "hermit crab", "isopod", "white stork, ciconia ciconia", "black stork, ciconia nigra", "spoonbill", "flamingo", "little blue heron, egretta caerulea", "american egret, great white heron, egretta albus", "bittern", "crane", "limpkin, aramus pictus", "european gallinule, porphyrio porphyrio", "american coot, marsh hen, mud hen, water hen, fulica americana", "bustard", "ruddy turnstone, arenaria interpres", "red-backed sandpiper, dunlin, erolia alpina", "redshank, tringa totanus", "dowitcher", "oystercatcher, oyster catcher", "pelican", "king penguin, aptenodytes patagonica", "albatross, mollymawk", "grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus", "killer whale, killer, orca, grampus, sea wolf, orcinus orca", "dugong, dugong dugon", "sea lion", "chihuahua", "japanese spaniel", "maltese dog, maltese terrier, maltese", "pekinese, pekingese, peke", "shih-tzu", "blenheim spaniel", "papillon", "toy terrier", "rhodesian ridgeback", "afghan hound, afghan", "basset, basset hound", "beagle", "bloodhound, sleuthhound", "bluetick", "black-and-tan coonhound", "walker hound, walker foxhound", "english foxhound", "redbone", "borzoi, russian wolfhound", "irish wolfhound", "italian greyhound", "whippet", "ibizan hound, ibizan podenco", "norwegian elkhound, elkhound", "otterhound, otter hound", "saluki, gazelle hound", "scottish deerhound, deerhound", "weimaraner", "staffordshire bullterrier, staffordshire bull terrier", "american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier", "bedlington terrier", "border terrier", "kerry blue terrier", "irish terrier", "norfolk terrier", "norwich terrier", "yorkshire terrier", "wire-haired fox terrier", "lakeland terrier", "sealyham terrier, sealyham", "airedale, airedale terrier", "cairn, cairn terrier", "australian terrier", "dandie dinmont, dandie dinmont terrier", "boston bull, boston terrier", "miniature schnauzer", "giant schnauzer", "standard schnauzer", "scotch terrier, scottish terrier, scottie", "tibetan terrier, chrysanthemum dog", "silky terrier, sydney silky", "soft-coated wheaten terrier", "west highland white terrier", "lhasa, lhasa apso", "flat-coated retriever", "curly-coated retriever", "golden retriever", "labrador retriever", "chesapeake bay retriever", "german short-haired pointer", "vizsla, hungarian pointer", "english setter", "irish setter, red setter", "gordon setter", "brittany spaniel", "clumber, clumber spaniel", "english springer, english springer spaniel", "welsh springer spaniel", "cocker spaniel, english cocker spaniel, cocker", "sussex spaniel", "irish water spaniel", "kuvasz", "schipperke", "groenendael", "malinois", "briard", "kelpie", "komondor", "old english sheepdog, bobtail", "shetland sheepdog, shetland sheep dog, shetland", "collie", "border collie", "bouvier des flandres, bouviers des flandres", "rottweiler", "german shepherd, german shepherd dog, german police dog, alsatian", "doberman, doberman pinscher", "miniature pinscher", "greater swiss mountain dog", "bernese mountain dog", "appenzeller", "entlebucher", "boxer", "bull 
mastiff", "tibetan mastiff", "french bulldog", "great dane", "saint bernard, st bernard", "eskimo dog, husky", "malamute, malemute, alaskan malamute", "siberian husky", "dalmatian, coach dog, carriage dog", "affenpinscher, monkey pinscher, monkey dog", "basenji", "pug, pug-dog", "leonberg", "newfoundland, newfoundland dog", "great pyrenees", "samoyed, samoyede", "pomeranian", "chow, chow chow", "keeshond", "brabancon griffon", "pembroke, pembroke welsh corgi", "cardigan, cardigan welsh corgi", "toy poodle", "miniature poodle", "standard poodle", "mexican hairless", "timber wolf, grey wolf, gray wolf, canis lupus", "white wolf, arctic wolf, canis lupus tundrarum", "red wolf, maned wolf, canis rufus, canis niger", "coyote, prairie wolf, brush wolf, canis latrans", "dingo, warrigal, warragal, canis dingo", "dhole, cuon alpinus", "african hunting dog, hyena dog, cape hunting dog, lycaon pictus", "hyena, hyaena", "red fox, vulpes vulpes", "kit fox, vulpes macrotis", "arctic fox, white fox, alopex lagopus", "grey fox, gray fox, urocyon cinereoargenteus", "tabby, tabby cat", "tiger cat", "persian cat", "siamese cat, siamese", "egyptian cat", "cougar, puma, catamount, mountain lion, painter, panther, felis concolor", "lynx, catamount", "leopard, panthera pardus", "snow leopard, ounce, panthera uncia", "jaguar, panther, panthera onca, felis onca", "lion, king of beasts, panthera leo", "tiger, panthera tigris", "cheetah, chetah, acinonyx jubatus", "brown bear, bruin, ursus arctos", "american black bear, black bear, ursus americanus, euarctos americanus", "ice bear, polar bear, ursus maritimus, thalarctos maritimus", "sloth bear, melursus ursinus, ursus ursinus", "mongoose", "meerkat, mierkat", "tiger beetle", "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "ground beetle, carabid beetle", "long-horned beetle, longicorn, longicorn beetle", "leaf beetle, chrysomelid", "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant, emmet, pismire", "grasshopper, hopper", "cricket", "walking stick, walkingstick, stick insect", "cockroach, roach", "mantis, mantid", "cicada, cicala", "leafhopper", "lacewing, lacewing fly", "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "damselfly", "admiral", "ringlet, ringlet butterfly", "monarch, monarch butterfly, milkweed butterfly, danaus plexippus", "cabbage butterfly", "sulphur butterfly, sulfur butterfly", "lycaenid, lycaenid butterfly", "starfish, sea star", "sea urchin", "sea cucumber, holothurian", "wood rabbit, cottontail, cottontail rabbit", "hare", "angora, angora rabbit", "hamster", "porcupine, hedgehog", "fox squirrel, eastern fox squirrel, sciurus niger", "marmot", "beaver", "guinea pig, cavia cobaya", "sorrel", "zebra", "hog, pig, grunter, squealer, sus scrofa", "wild boar, boar, sus scrofa", "warthog", "hippopotamus, hippo, river horse, hippopotamus amphibius", "ox", "water buffalo, water ox, asiatic buffalo, bubalus bubalis", "bison", "ram, tup", "bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis", "ibex, capra ibex", "hartebeest", "impala, aepyceros melampus", "gazelle", "arabian camel, dromedary, camelus dromedarius", "llama", "weasel", "mink", "polecat, fitch, foulmart, foumart, mustela putorius", "black-footed ferret, ferret, mustela nigripes", "otter", "skunk, polecat, wood pussy", "badger", "armadillo", "three-toed sloth, ai, bradypus tridactylus", "orangutan, orang, orangutang, pongo pygmaeus", "gorilla, 
gorilla gorilla", "chimpanzee, chimp, pan troglodytes", "gibbon, hylobates lar", "siamang, hylobates syndactylus, symphalangus syndactylus", "guenon, guenon monkey", "patas, hussar monkey, erythrocebus patas", "baboon", "macaque", "langur", "colobus, colobus monkey", "proboscis monkey, nasalis larvatus", "marmoset", "capuchin, ringtail, cebus capucinus", "howler monkey, howler", "titi, titi monkey", "spider monkey, ateles geoffroyi", "squirrel monkey, saimiri sciureus", "madagascar cat, ring-tailed lemur, lemur catta", "indri, indris, indri indri, indri brevicaudatus", "indian elephant, elephas maximus", "african elephant, loxodonta africana", "lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens", "giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca", "barracouta, snoek", "eel", "coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch", "rock beauty, holocanthus tricolor", "anemone fish", "sturgeon", "gar, garfish, garpike, billfish, lepisosteus osseus", "lionfish", "puffer, pufferfish, blowfish, globefish", "abacus", "abaya", "academic gown, academic robe, judge's robe", "accordion, piano accordion, squeeze box", "acoustic guitar", "aircraft carrier, carrier, flattop, attack aircraft carrier", "airliner", "airship, dirigible", "altar", "ambulance", "amphibian, amphibious vehicle", "analog clock", "apiary, bee house", "apron", "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "assault rifle, assault gun", "backpack, back pack, knapsack, packsack, rucksack, haversack", "bakery, bakeshop, bakehouse", "balance beam, beam", "balloon", "ballpoint, ballpoint pen, ballpen, biro", "band aid", "banjo", "bannister, banister, balustrade, balusters, handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel, cask", "barrow, garden cart, lawn cart, wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "bathing cap, swimming cap", "bath towel", "bathtub, bathing tub, bath, tub", "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "beacon, lighthouse, beacon light, pharos", "beaker", "bearskin, busby, shako", "beer bottle", "beer glass", "bell cote, bell cot", "bib", "bicycle-built-for-two, tandem bicycle, tandem", "bikini, two-piece", "binder, ring-binder", "binoculars, field glasses, opera glasses", "birdhouse", "boathouse", "bobsled, bobsleigh, bob", "bolo tie, bolo, bola tie, bola", "bonnet, poke bonnet", "bookcase", "bookshop, bookstore, bookstall", "bottlecap", "bow", "bow tie, bow-tie, bowtie", "brass, memorial tablet, plaque", "brassiere, bra, bandeau", "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "breastplate, aegis, egis", "broom", "bucket, pail", "buckle", "bulletproof vest", "bullet train, bullet", "butcher shop, meat market", "cab, hack, taxi, taxicab", "caldron, cauldron", "candle, taper, wax light", "cannon", "canoe", "can opener, tin opener", "cardigan", "car mirror", "carousel, carrousel, merry-go-round, roundabout, whirligig", "carpenter's kit, tool kit", "carton", "car wheel", "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm", "cassette", "cassette player", "castle", "catamaran", "cd player", "cello, violoncello", "cellular telephone, cellular phone, cellphone, cell, mobile phone", "chain", "chainlink fence", "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "chain saw, chainsaw", "chest", "chiffonier, 
commode", "chime, bell, gong", "china cabinet, china closet", "christmas stocking", "church, church building", "cinema, movie theater, movie theatre, movie house, picture palace", "cleaver, meat cleaver, chopper", "cliff dwelling", "cloak", "clog, geta, patten, sabot", "cocktail shaker", "coffee mug", "coffeepot", "coil, spiral, volute, whorl, helix", "combination lock", "computer keyboard, keypad", "confectionery, confectionary, candy store", "container ship, containership, container vessel", "convertible", "corkscrew, bottle screw", "cornet, horn, trumpet, trump", "cowboy boot", "cowboy hat, ten-gallon hat", "cradle", "crane", "crash helmet", "crate", "crib, cot", "crock pot", "croquet ball", "crutch", "cuirass", "dam, dike, dyke", "desk", "desktop computer", "dial telephone, dial phone", "diaper, nappy, napkin", "digital clock", "digital watch", "dining table, board", "dishrag, dishcloth", "dishwasher, dish washer, dishwashing machine", "disk brake, disc brake", "dock, dockage, docking facility", "dogsled, dog sled, dog sleigh", "dome", "doormat, welcome mat", "drilling platform, offshore rig", "drum, membranophone, tympan", "drumstick", "dumbbell", "dutch oven", "electric fan, blower", "electric guitar", "electric locomotive", "entertainment center", "envelope", "espresso maker", "face powder", "feather boa, boa", "file, file cabinet, filing cabinet", "fireboat", "fire engine, fire truck", "fire screen, fireguard", "flagpole, flagstaff", "flute, transverse flute", "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster", "freight car", "french horn, horn", "frying pan, frypan, skillet", "fur coat", "garbage truck, dustcart", "gasmask, respirator, gas helmet", "gas pump, gasoline pump, petrol pump, island dispenser", "goblet", "go-kart", "golf ball", "golfcart, golf cart", "gondola", "gong, tam-tam", "gown", "grand piano, grand", "greenhouse, nursery, glasshouse", "grille, radiator grille", "grocery store, grocery, food market, market", "guillotine", "hair slide", "hair spray", "half track", "hammer", "hamper", "hand blower, blow dryer, blow drier, hair dryer, hair drier", "hand-held computer, hand-held microcomputer", "handkerchief, hankie, hanky, hankey", "hard disc, hard disk, fixed disk", "harmonica, mouth organ, harp, mouth harp", "harp", "harvester, reaper", "hatchet", "holster", "home theater, home theatre", "honeycomb", "hook, claw", "hoopskirt, crinoline", "horizontal bar, high bar", "horse cart, horse-cart", "hourglass", "ipod", "iron, smoothing iron", "jack-o'-lantern", "jean, blue jean, denim", "jeep, landrover", "jersey, t-shirt, tee shirt", "jigsaw puzzle", "jinrikisha, ricksha, rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat, laboratory coat", "ladle", "lampshade, lamp shade", "laptop, laptop computer", "lawn mower, mower", "lens cap, lens cover", "letter opener, paper knife, paperknife", "library", "lifeboat", "lighter, light, igniter, ignitor", "limousine, limo", "liner, ocean liner", "lipstick, lip rouge", "loafer", "lotion", "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "loupe, jeweler's loupe", "lumbermill, sawmill", "magnetic compass", "mailbag, postbag", "mailbox, letter box", "maillot", "maillot, tank suit", "manhole cover", "maraca", "marimba, xylophone", "mask", "matchstick", "maypole", "maze, labyrinth", "measuring cup", "medicine chest, medicine cabinet", "megalith, megalithic structure", "microphone, mike", "microwave, microwave oven", "military uniform", "milk can", "minibus", 
"miniskirt, mini", "minivan", "missile", "mitten", "mixing bowl", "mobile home, manufactured home", "model t", "modem", "monastery", "monitor", "moped", "mortar", "mortarboard", "mosque", "mosquito net", "motor scooter, scooter", "mountain bike, all-terrain bike, off-roader", "mountain tent", "mouse, computer mouse", "mousetrap", "moving van", "muzzle", "nail", "neck brace", "necklace", "nipple", "notebook, notebook computer", "obelisk", "oboe, hautboy, hautbois", "ocarina, sweet potato", "odometer, hodometer, mileometer, milometer", "oil filter", "organ, pipe organ", "oscilloscope, scope, cathode-ray oscilloscope, cro", "overskirt", "oxcart", "oxygen mask", "packet", "paddle, boat paddle", "paddlewheel, paddle wheel", "padlock", "paintbrush", "pajama, pyjama, pj's, jammies", "palace", "panpipe, pandean pipe, syrinx", "paper towel", "parachute, chute", "parallel bars, bars", "park bench", "parking meter", "passenger car, coach, carriage", "patio, terrace", "pay-phone, pay-station", "pedestal, plinth, footstall", "pencil box, pencil case", "pencil sharpener", "perfume, essence", "petri dish", "photocopier", "pick, plectrum, plectron", "pickelhaube", "picket fence, paling", "pickup, pickup truck", "pier", "piggy bank, penny bank", "pill bottle", "pillow", "ping-pong ball", "pinwheel", "pirate, pirate ship", "pitcher, ewer", "plane, carpenter's plane, woodworking plane", "planetarium", "plastic bag", "plate rack", "plow, plough", "plunger, plumber's helper", "polaroid camera, polaroid land camera", "pole", "police van, police wagon, paddy wagon, patrol wagon, wagon, black maria", "poncho", "pool table, billiard table, snooker table", "pop bottle, soda bottle", "pot, flowerpot", "potter's wheel", "power drill", "prayer rug, prayer mat", "printer", "prison, prison house", "projectile, missile", "projector", "puck, hockey puck", "punching bag, punch bag, punching ball, punchball", "purse", "quill, quill pen", "quilt, comforter, comfort, puff", "racer, race car, racing car", "racket, racquet", "radiator", "radio, wireless", "radio telescope, radio reflector", "rain barrel", "recreational vehicle, rv, r.v.", "reel", "reflex camera", "refrigerator, icebox", "remote control, remote", "restaurant, eating house, eating place, eatery", "revolver, six-gun, six-shooter", "rifle", "rocking chair, rocker", "rotisserie", "rubber eraser, rubber, pencil eraser", "rugby ball", "rule, ruler", "running shoe", "safe", "safety pin", "saltshaker, salt shaker", "sandal", "sarong", "sax, saxophone", "scabbard", "scale, weighing machine", "school bus", "schooner", "scoreboard", "screen, crt screen", "screw", "screwdriver", "seat belt, seatbelt", "sewing machine", "shield, buckler", "shoe shop, shoe-shop, shoe store", "shoji", "shopping basket", "shopping cart", "shovel", "shower cap", "shower curtain", "ski", "ski mask", "sleeping bag", "slide rule, slipstick", "sliding door", "slot, one-armed bandit", "snorkel", "snowmobile", "snowplow, snowplough", "soap dispenser", "soccer ball", "sock", "solar dish, solar collector, solar furnace", "sombrero", "soup bowl", "space bar", "space heater", "space shuttle", "spatula", "speedboat", "spider web, spider's web", "spindle", "sports car, sport car", "spotlight, spot", "stage", "steam locomotive", "steel arch bridge", "steel drum", "stethoscope", "stole", "stone wall", "stopwatch, stop watch", "stove", "strainer", "streetcar, tram, tramcar, trolley, trolley car", "stretcher", "studio couch, day bed", "stupa, tope", "submarine, pigboat, sub, u-boat", "suit, suit of clothes", 
"sundial", "sunglass", "sunglasses, dark glasses, shades", "sunscreen, sunblock, sun blocker", "suspension bridge", "swab, swob, mop", "sweatshirt", "swimming trunks, bathing trunks", "swing", "switch, electric switch, electrical switch", "syringe", "table lamp", "tank, army tank, armored combat vehicle, armoured combat vehicle", "tape player", "teapot", "teddy, teddy bear", "television, television system", "tennis ball", "thatch, thatched roof", "theater curtain, theatre curtain", "thimble", "thresher, thrasher, threshing machine", "throne", "tile roof", "toaster", "tobacco shop, tobacconist shop, tobacconist", "toilet seat", "torch", "totem pole", "tow truck, tow car, wrecker", "toyshop", "tractor", "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "tray", "trench coat", "tricycle, trike, velocipede", "trimaran", "tripod", "triumphal arch", "trolleybus, trolley coach, trackless trolley", "trombone", "tub, vat", "turnstile", "typewriter keyboard", "umbrella", "unicycle, monocycle", "upright, upright piano", "vacuum, vacuum cleaner", "vase", "vault", "velvet", "vending machine", "vestment", "viaduct", "violin, fiddle", "volleyball", "waffle iron", "wall clock", "wallet, billfold, notecase, pocketbook", "wardrobe, closet, press", "warplane, military plane", "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "washer, automatic washer, washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", "wig", "window screen", "window shade", "windsor tie", "wine bottle", "wing", "wok", "wooden spoon", "wool, woolen, woollen", "worm fence, snake fence, snake-rail fence, virginia fence", "wreck", "yawl", "yurt", "web site, website, internet site, site", "comic book", "crossword puzzle, crossword", "street sign", "traffic light, traffic signal, stoplight", "book jacket, dust cover, dust jacket, dust wrapper", "menu", "plate", "guacamole", "consomme", "hot pot, hotpot", "trifle", "ice cream, icecream", "ice lolly, lolly, lollipop, popsicle", "french loaf", "bagel, beigel", "pretzel", "cheeseburger", "hotdog, hot dog, red hot", "mashed potato", "head cabbage", "broccoli", "cauliflower", "zucchini, courgette", "spaghetti squash", "acorn squash", "butternut squash", "cucumber, cuke", "artichoke, globe artichoke", "bell pepper", "cardoon", "mushroom", "granny smith", "strawberry", "orange", "lemon", "fig", "pineapple, ananas", "banana", "jackfruit, jak, jack", "custard apple", "pomegranate", "hay", "carbonara", "chocolate sauce, chocolate syrup", "dough", "meat loaf, meatloaf", "pizza, pizza pie", "potpie", "burrito", "red wine", "espresso", "cup", "eggnog", "alp", "bubble", "cliff, drop, drop-off", "coral reef", "geyser", "lakeside, lakeshore", "promontory, headland, head, foreland", "sandbar, sand bar", "seashore, coast, seacoast, sea-coast", "valley, vale", "volcano", "ballplayer, baseball player", "groom, bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum", "corn", "acorn", "hip, rose hip, rosehip", "buckeye, horse chestnut, conker", "coral fungus", "agaric", "gyromitra", "stinkhorn, carrion fungus", "earthstar", "hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa", "bolete", "ear, spike, capitulum", "toilet tissue, toilet paper, bathroom tissue" ]
markytools/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# markytools/mtools_classifier

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.6181
- Accuracy: 0.898

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7006        | 0.992 | 62   | 2.5412          | 0.817    |
| 1.8683        | 2.0   | 125  | 1.7993          | 0.865    |
| 1.6044        | 2.976 | 186  | 1.6181          | 0.898    |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
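As with the other auto-generated cards, no usage snippet is included. A minimal sketch with the `pipeline` API follows; the photo path is a placeholder, and predictions are drawn from the food-category labels listed after this card.

```python
# Minimal inference sketch; "lunch_photo.jpg" is a placeholder path to a local image.
from transformers import pipeline

food_classifier = pipeline("image-classification", model="markytools/my_awesome_food_model")
print(food_classifier("lunch_photo.jpg"))   # top predictions come from the food labels below, e.g. "sushi"
```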
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
crocutacrocuto/dinov2-base-MEGter-3
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
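The "How to Get Started with the Model" section above is still empty; a minimal sketch, assuming the checkpoint loads through the standard image-classification auto classes and that `camera_trap_frame.jpg` is a placeholder input image:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "crocutacrocuto/dinov2-base-MEGter-3"  # repo id from the entry above
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("camera_trap_frame.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```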
[ "aardvark", "bird", "black-and-white colobus", "blue duiker", "blue monkey", "buffalo", "bushbuck", "bushpig", "chimpanzee", "civet_genet", "elephant", "galago_potto", "golden cat", "gorilla", "guineafowl", "leopard", "lhoests monkey", "mandrill", "mongoose", "monkey", "olive baboon", "pangolin", "porcupine", "red colobus_red-capped mangabey", "red duiker", "rodent", "serval", "spotted hyena", "squirrel", "water chevrotain", "yellow-backed duiker" ]
kiranshivaraju/test
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# test

This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5386
- Accuracy: 0.76

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6505 | 1.0 | 10 | 0.6001 | 0.7333 |
| 0.5898 | 2.0 | 20 | 0.5551 | 0.7467 |
| 0.5592 | 3.0 | 30 | 0.5386 | 0.76 |

### Framework versions

- Transformers 4.47.0.dev0
- Pytorch 2.4.1+cpu
- Datasets 2.20.0
- Tokenizers 0.20.3
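A sketch of `TrainingArguments` matching the hyperparameters listed above (the `output_dir` is a placeholder). The effective batch size is `train_batch_size * gradient_accumulation_steps = 32 * 4 = 128`, which is the reported `total_train_batch_size`:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is illustrative only.
args = TrainingArguments(
    output_dir="convnext-tiny-test",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 * 4 = 128 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```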
[ "bad", "good" ]
marwaALzaabi/plant-disease-detection-vit
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# plant-disease-detection-vit

This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0007 | 1.0 | 45 | 0.0004 | 1.0 |
| 0.0004 | 2.0 | 90 | 0.0317 | 0.9889 |
| 0.0003 | 3.0 | 135 | 0.0003 | 1.0 |
| 0.0002 | 4.0 | 180 | 0.0002 | 1.0 |
| 0.0002 | 5.0 | 225 | 0.0002 | 1.0 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.4.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
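The Accuracy column above is typically produced by a `compute_metrics` hook passed to the Trainer; a minimal sketch, assuming the `evaluate` library is used for the metric:

```python
import numpy as np
import evaluate

# Standard accuracy hook for image classification; passed as
# Trainer(..., compute_metrics=compute_metrics).
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```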
[ "corn_(maize)___common_rust_", "potato___early_blight", "tomato___bacterial_spot" ]
marwaALzaabi/plant-identification-vit
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# plant-identification-vit

This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0315
- Accuracy: 0.8096

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0085 | 1.0 | 953 | 1.0659 | 0.7762 |
| 0.6805 | 2.0 | 1906 | 0.8413 | 0.8029 |
| 0.5039 | 3.0 | 2859 | 0.7920 | 0.8069 |
| 0.3847 | 4.0 | 3812 | 0.7760 | 0.8102 |
| 0.2826 | 5.0 | 4765 | 0.8024 | 0.8049 |
| 0.2229 | 6.0 | 5718 | 0.8382 | 0.8099 |
| 0.1064 | 7.0 | 6671 | 0.8983 | 0.8074 |
| 0.0676 | 8.0 | 7624 | 0.9672 | 0.8072 |
| 0.027 | 9.0 | 8577 | 1.0089 | 0.8099 |
| 0.0209 | 10.0 | 9530 | 1.0315 | 0.8096 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.4.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
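With several hundred species in the label set (listed below), top-k predictions are usually more informative than a single argmax; a minimal sketch, assuming `plant.jpg` is a placeholder photo:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "marwaALzaabi/plant-identification-vit"  # repo id from the entry above
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("plant.jpg"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]

# Report the five most likely species instead of only the top prediction.
top = torch.topk(probs, k=5)
for score, idx in zip(top.values, top.indices):
    print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```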
[ "lactuca virosa l.", "pelargonium capitatum (l.) l'hér.", "phyllanthus tenellus roxb.", "trifolium campestre schreb.", "sedum allantoides rose", "sedum burrito moran", "sedum clavatum r.t. clausen", "sedum compressum rose", "sedum cyaneum j. rudolph", "sedum decumbens r.t. clausen", "sedum furfuraceum moran", "sedum hernandezii j. meyrán", "sedum japonicum siebold ex miq.", "sedum makinoi maxim.", "trifolium cherleri l.", "sedum nussbaumerianum bitter", "sedum praealtum a.dc.", "sedum rubrotinctum r.t. clausen", "uncinia rubra colenso ex boott", "empetrum rubrum vahl ex willd.", "acalypha macrostachya jacq.", "pelargonium echinatum curtis", "pelargonium × hortorum l.h. bailey", "pelargonium sidoides dc.", "pelargonium tomentosum jacq.", "trifolium hirtum all.", "lavandula canariensis (l.) mill.", "acacia auriculiformis benth.", "acacia brevispica harms", "acacia caven (molina) molina", "acacia cognata domin", "acacia confluens maiden & blakeley", "acacia podalyriifolia g.don", "acacia saligna (labill.) wendl.", "dalbergia retusa hemsl.", "lupinus nootkatensis sims", "trifolium hybridum l.", "prosopis pallida (willd.) kunth", "cedrela fissilis vell.", "nepenthes mirabilis (lour.) druce", "nepenthes × neglecta macfarl.", "nothofagus betuloides (mirb.) oerst.", "nothofagus dombeyi (mirb.) oerst.", "nothofagus nitida (phil.) krasser", "nymphaea rubra roxb. ex andrews", "dendrobium anosmum lindl.", "dendrobium kingianum bidwill ex lindl.", "trifolium incarnatum l.", "dendrobium munificum (finet) schltr.", "dendrobium thyrsiflorum b.s.williams", "dendrobium victoriae-reginae loher", "erycina pusilla (l.) n.h.williams & m.w.chase", "adlumia fungosa (aiton) britton, sterns & poggenb.", "peperomia albovittata c. dc.", "peperomia clusiifolia (jacq.) hook.", "peperomia columella rauh & hutchison", "peperomia ferreyrae yunck.", "peperomia graveolens rauh & barthlott", "trifolium lappaceum l.", "peperomia maculosa (l.) hook.", "peperomia pecuniifolia trel. & standl.", "peperomia polybotrya kunth", "peperomia prostrata b.s. williams", "peperomia tetragona ruiz & pav.", "peperomia verticillata (l.) a.dietr.", "hebe andersonii (lindl. & j. paxton) cockayne", "lomatia ferruginea r. br.", "anemone pavoniana boiss.", "anemone tomentosa (maxim.) c.pei", "trifolium michelianum savi", "hepatica nobilis mill.", "fragaria × ananassa (duchesne ex weston) duchesne ex rozier", "margaritopsis haematocarpa (standl.) c.m.taylor", "angostura granulosa (kallunki) kallunki", "smilax excelsa l.", "coussapoa villosa poepp. & endl.", "cyphostemma cyphopetalum (fresen.) desc. ex wild & r.b.drumm.", "cyphostemma juttae (dinter & gilg) desc.", "cyphostemma serpens (hochst. ex a.rich.) desc.", "nepenthes alata blanco", "trifolium micranthum viv.", "nepenthes truncata macfarl.", "lavandula spp.", "cymbalaria muralis p.gaertn.", "aphelandra aurantiaca lindl.", "guatteria dolichopoda donn. sm.", "stenanona costaricensis r.e. fr.", "schefflera morototoni (aubl.) maguire", "bourreria andrieuxii (dc.) hemsl.", "bourreria costaricensis (standl.) a.h. gentry", "alibertia edulis (rich.) a. rich.", "trifolium montanum l.", "acacia senegalensis (houtt.) 
roberty", "trifolium ochroleucon huds.", "hypericum perforatum l.", "trifolium pannonicum jacq.", "trifolium patens schreb.", "trifolium pratense l.", "trifolium purpureum loisel.", "trifolium resupinatum l.", "trifolium scabrum l.", "trifolium spumosum l.", "trifolium squamosum l.", "trifolium stellatum l.", "trifolium striatum l.", "egeria densa planch.", "trifolium subterraneum l.", "trifolium tomentosum l.", "punica granatum l.", "alcea rosea l.", "althaea cannabina l.", "althaea officinalis l.", "nymphaea alba l.", "hunnemannia fumariifolia sweet", "papaver atlanticum (ball) coss.", "papaver rupifragum boiss. & reut.", "ibicella lutea (lindl.) van eselt.", "goniolimon tataricum (l.) boiss.", "adonis aestivalis l.", "adonis annua l.", "adonis flammea jacq.", "adonis microcarpa dc.", "anemone coronaria l.", "anemone palmata l.", "oldenlandia corymbosa l.", "hyoscyamus albus l.", "hyoscyamus niger l.", "tradescantia fluminensis vell.", "daphne gnidium l.", "daphne laureola l.", "daphne oleoides schreb.", "berula erecta (huds.) coville", "chaerophyllum temulum l.", "daucus muricatus (l.) l.", "meum athamanticum jacq.", "thapsia garganica l.", "thapsia villosa l.", "centranthus calcitrapae (l.) dufr.", "tradescantia zebrina heynh. ex bosse", "fedia cornucopiae (l.) gaertn.", "pancratium canariense ker gawl.", "pancratium maritimum l.", "anthericum liliago l.", "butomus umbellatus l.", "danthonia decumbens (l.) dc.", "leersia oryzoides (l.) sw.", "phalaris coerulescens desf.", "oncostema elongata (parl.) speta", "oncostema peruviana (l.) speta", "lamium amplexicaule l.", "elodea canadensis michx.", "neotinea maculata (desf.) stearn", "ophrys apifera huds.", "ophrys bombyliflora link", "ophrys fusca link", "ophrys lutea cav.", "ophrys scolopax cav.", "ophrys speculum link", "ophrys tenthredinifera willd.", "groenlandia densa (l.) fourr.", "lavandula dentata l.", "myosoton aquaticum (l.) moench", "cirsium palustre (l.) scop.", "lamium galeobdolon (l.) l.", "lavandula angustifolia mill.", "casuarina cunninghamiana miq.", "trifolium rubens l.", "falcaria vulgaris bernh.", "hypericum triquetrifolium turra", "wigandia caracasana kunth", "nymphaea lotus l.", "striga asiatica (l.) kuntze", "sedum villosum l.", "epipactis helleborine (l.) crantz", "thesium humifusum dc.", "acacia etbaica schweinf.", "centranthus angustifolius (mill.) dc.", "centranthus lecoqii jord.", "sedum anglicum huds.", "sedum rupestre l.", "ophrys insectifera l.", "freesia refracta (jacq.) klatt", "melilotus albus medik.", "diatelia tuberaria (l.) demoly", "tradescantia pallida (rose) d.r. hunt", "cenchrus longispinus (hack.) fernald", "schefflera actinophylla (endl.) harms", "cenchrus clandestinus (hochst. ex chiov.) morrone", "cenchrus purpureus (schumach.) morrone", "cenchrus setaceus (forssk.) morrone", "anemone hortensis l.", "tradescantia cerinthoides kunth", "papaver argemone l.", "pelargonium graveolens l'hér.", "trifolium fragiferum l.", "papaver hybridum l.", "papaver rhoeas l.", "papaver dubium l.", "papaver somniferum l.", "daucus carota l.", "smilax aspera l.", "aizoon canariense l.", "zannichellia palustris l.", "secale cereale l.", "cenchrus ciliaris l.", "asystasia gangetica (l.) t. anderson", "cenchrus echinatus l.", "phalaris aquatica l.", "phalaris arundinacea l.", "phalaris canariensis l.", "phalaris minor retz.", "phalaris paradoxa l.", "sedum hispanicum l.", "crotalaria juncea l.", "lupinus angustifolius l.", "lupinus luteus l.", "nymphaea nouchali burm. 
f.", "lupinus pilosus l.", "lupinus albus l.", "trifolium alexandrinum l.", "trifolium dubium sibth.", "trifolium glomeratum l.", "trifolium nigrescens viv.", "trifolium repens l.", "dalbergia melanoxylon guill. & perr.", "melilotus indicus (l.) all.", "melilotus officinalis (l.) pall.", "dryopteris aemula (aiton) kuntze", "acacia dealbata link", "acacia longifolia (andrews) willd.", "acacia mearnsii de wild.", "acacia pycnantha benth.", "acacia retinodes schltdl.", "barbarea verna (mill.) asch.", "barbarea intermedia boreau", "myosurus minimus l.", "fragaria vesca l.", "duchesnea indica (jacks.) focke", "dryopteris affinis (lowe) fraser-jenk.", "centranthus ruber (l.) dc.", "guizotia abyssinica (l. f.) cass.", "tagetes minuta l.", "tagetes patula l.", "calendula arvensis (vaill.) l.", "lapsana communis l.", "carthamus tinctorius l.", "lactuca sativa l.", "lactuca serriola l.", "arthraxon hispidus (thunb.) makino", "dryopteris filix-mas (l.) schott", "microchloa kunthii desv.", "harpachne schimperi hochst. ex a. rich.", "cedrela odorata l.", "aeschynomene americana l.", "prosopis alba griseb.", "prosopis juliflora (sw.) dc.", "crotalaria spectabilis roth", "crotalaria chrysochlora baker f. ex harms", "crotalaria deflersii schweinf.", "crotalaria polysperma kotschy", "nephrolepis cordifolia (l.) c. presl", "crotalaria uguenensis taub.", "crotalaria verrucosa l.", "lupinus perennis l.", "lupinus polyphyllus lindl.", "dalbergia latifolia roxb.", "cordyla africana lour.", "acacia xanthophloea benth.", "acacia drepanolobium harms ex y. sjöstedt", "acacia hockii de wild.", "acacia angustissima (mill.) kuntze", "nephrolepis exaltata (l.) schott", "gomphocarpus integer (n.e. br.) bullock", "trachelospermum jasminoides (lindl.) lem.", "tradescantia spathacea sw.", "gymnosporia putterlickioides loes.", "conostomium kenyense bremek.", "conostomium quadrangulare (rendle) cufod.", "mussaenda frondosa l.", "mussaenda erythrophylla schumach. & thonn.", "morinda citrifolia l.", "pilocarpus racemosus vahl", "osmunda regalis l.", "pelargonium alchemilloides (l.) l'hér.", "pelargonium quinquelobatum hochst. ex a. rich.", "pelargonium glechomoides a. rich.", "gynura aurantiaca (blume) dc.", "gynura procumbens (lour.) merr.", "tagetes erecta l.", "aspilia mossambicensis (oliv.) wild", "aspilia pluriseta schweinf.", "schkuhria pinnata (lam.) thell.", "montanoa hibiscifolia benth.", "achyranthes aspera l.", "acalypha crenata hochst. ex a. rich.", "acalypha hispida burm. f.", "phyllanthus fischeri pax", "phyllanthus suffrutescens pax", "phyllanthus acidus (l.) skeels", "phyllanthus amarus schumach. & thonn.", "triadica sebifera (l.) small", "petiveria alliacea l.", "peperomia pellucida (l.) kunth", "guaiacum officinale l.", "cirsium arvense (l.) scop.", "lithodora fruticosa (l.) griseb.", "hypericum annulatum moris", "mussaenda philippica a. rich.", "wigandia urens (ruiz & pav.) kunth", "boscia mossambicensis klotzsch", "boscia coriacea pax", "mecardonia procumbens (mill.) small", "browallia americana l.", "acacia mangium willd.", "casuarina equisetifolia l.", "aristea abyssinica pax", "humulus lupulus l.", "aristea ecklonii baker", "erianthemum dregei (eckl. & zeyh.) tiegh.", "cucurbita ficifolia bouché", "cucurbita pepo l.", "luffa acutangula (l.) roxb.", "lagenaria siceraria (molina) standl.", "lagenaria sphaerica (sond.) naudin", "asystasia riparia lindau", "barringtonia asiatica (l.) kurz", "couroupita guianensis aubl.", "vaccaria hispanica (mill.) rauschert", "asystasia charmian s. 
moore", "raphia farinifera (gaertn.) hyl.", "zamioculcas zamiifolia (lodd.) engl.", "kniphofia linearifolia baker", "smilax anceps willd.", "vanilla planifolia andrews", "ansellia africana lindl.", "hypericum balearicum l.", "ophrys druentica p.delforge & viglione", "macrosyringion longiflorum (vohl) rothm.", "spergularia rubra (l.) j. presl & c. presl", "pyracantha koidzumii (hayata) rehder", "lactuca plumieri (l.) gren. & godr.", "aizoanthemum hispanicum (l.) h.e.k.hartmann", "phedimus aizoon (l.) 't hart", "phedimus spurius (m.bieb) 't hart", "nephrolepis cordifolia (l.) c.presl", "cereus jamacaru dc.", "sedum pachyphyllum rose", "sedum dendroideum moc. & sessé ex dc.", "liriodendron chinensis (hemsl.) sarg.", "moehringia trinervia (l.) clairv.", "atocion rupestre (l.) b.oxelman", "tagetes tenuifolia cav.", "aegopodium podagraria l.", "acacia melanoxylon r.br.", "acacia saligna (labill.) h.l.wendl.", "patzkea paniculata (l.) g.h.loos", "perovskia abrotanoides kar.", "ophrys occidentalis (scappaticci) scappaticci & m.demange", "erechtites hieraciifolius (l.) raf. ex dc.", "cenchrus longisetus m.c.johnst.", "calendula officinalis l.", "nothofagus obliqua (mirb.) oerst.", "calendula arvensis l.", "carthamus carduncellus l.", "carthamus mitissimus l.", "chaerophyllum aureum l.", "chaerophyllum hirsutum l.", "chaerophyllum villarsii w.d.j.koch", "cirsium alsophilum (pollini) greuter", "cirsium canum (l.) all.", "cirsium dissectum (l.) hill", "calendula stellata cav.", "cirsium eriophorum (l.) scop.", "cirsium erisithales (jacq.) scop.", "cirsium ferox (l.) dc.", "cirsium filipendulum lange", "cirsium glabrum dc.", "cirsium heterophyllum (l.) hill", "cirsium morisianum rchb.f.", "cirsium oleraceum (l.) scop.", "cirsium rivulare (jacq.) all.", "cirsium tuberosum (l.) all.", "carthamus caeruleus l.", "collomia grandiflora douglas ex lindl.", "cucurbita maxima duchesne", "cymbalaria aequitriloba (viv.) a.chev.", "cymbalaria hepaticifolia (poir.) wettst.", "cytinus hypocistis (l.) l.", "cytinus ruber fritsch", "daphne alpina l.", "daphne cneorum l.", "daphne mezereum l.", "daphne striata tratt.", "carthamus lanatus l.", "dorotheanthus bellidiformis (burm.f.) n.e.br.", "dryas octopetala l.", "dryopteris carthusiana (vill.) h.p.fuchs", "dryopteris cristata (l.) a.gray", "dryopteris dilatata (hoffm.) a.gray", "dryopteris expansa (c.presl) fraser-jenk. & jermy", "dryopteris villarii (bellardi) woyn. ex schinz & thell.", "empetrum nigrum l.", "epipactis atrorubens (hoffm.) besser", "epipactis leptochila (godfery) godfery", "cirsium monspessulanum (l.) hill", "epipactis microphylla (ehrh.) sw.", "epipactis muelleri godfery", "epipactis palustris (l.) crantz", "epipactis phyllanthes g.e.sm.", "epipactis purpurata sm.", "erucastrum gallicum (willd.) o.e.schulz", "erucastrum incanum (l.) w.d.j.koch", "erucastrum nasturtiifolium (poir.) o.e.schulz", "fedia graciliflora fisch. & c.a.mey.", "fragaria moschata weston", "cirsium vulgare (savi) ten.", "helminthotheca echioides (l.) holub", "freesia x hybrida l.h.bailey", "galega orientalis lam.", "alliaria petiolata (m.bieb.) cavara & grande", "geropogon hybridus (l.) sch.bip.", "gomphocarpus fruticosus (l.) r.br.", "gomphocarpus physocarpus e.mey.", "guizotia abyssinica (l.f.) cass.", "hebe andersonii (lindl. & paxton) cockayne", "hebe brachysiphon summerh.", "hebe ochracea ashwin", "hyoseris radiata l.", "hebe salicifolia (g.forst.) pennell", "helicodiceros muscivorus (l.f.) 
engl.", "hippophae rhamnoides l.", "hypericum calycinum l.", "hypericum coris l.", "hypericum elodes l.", "hypericum x inodorum mill.", "hypericum maculatum crantz", "hypericum nummularium l.", "hypericum pulchrum l.", "lactuca muralis (l.) gaertn.", "hypericum richeri vill.", "dierama pulcherrimum (hook.f.) baker", "iva xanthiifolia nutt.", "kniphofia uvaria (l.) hook.", "lactuca alpina (l.) benth. & hook.f.", "lactuca macrophylla (willd.) a.gray", "lactuca viminea (l.) j.presl & c.presl", "lamium album l.", "lamium maculatum (l.) l.", "lathraea clandestina l.", "lactuca saligna l.", "lathraea squamaria l.", "lavandula x intermedia emeric ex loisel.", "lavandula latifolia medik.", "limnanthes douglasii r.br.", "liriodendron tulipifera l.", "lupinus arboreus sims", "lupinus x regalis bergmans", "maianthemum bifolium (l.) f.w.schmidt", "melilotus altissimus thuill.", "melilotus officinalis (l.) lam.", "lactuca tenerrima pourr.", "melilotus spicatus (sm.) breistr.", "mercurialis ambigua l.f.", "mercurialis tomentosa l.", "moehringia ciliata (scop.) dalla torre", "moehringia muscosa l.", "moehringia pentandra j.gay", "narthecium ossifragum (l.) huds.", "neotinea lactea (poir.) r.m.bateman, pridgeon & m.w.chase", "neotinea ustulata (l.) r.m.bateman, pridgeon & m.w.chase", "noccaea caerulescens (j.presl & c.presl) f.k.mey.", "limbarda crithmoides (l.) dumort.", "noccaea montana (l.) f.k.mey.", "noccaea rotundifolia (l.) moench", "nymphaea candida c.presl", "nymphoides peltata (s.g.gmel.) kuntze", "ophrys arachnitiformis gren. & m.philippe", "ophrys aranifera huds.", "ophrys aveyronensis (j.j.wood) p.delforge", "ophrys aymoninii (breistr.) buttler", "ophrys bertolonii moretti", "ophrys catalaunica o.danesch & e.danesch", "sedum acre l.", "ophrys exaltata ten.", "ophrys fuciflora (f.w.schmidt) moench", "ophrys incubacea bianca", "ophrys lupercalis devillers & devillers-tersch.", "ophrys morisii (martelli) soó", "ophrys passionis sennen", "ophrys provincialis (baumann & künkele) paulus", "ophrys saratoi e.g.camus", "ophrys sulcata devillers & devillers-tersch.", "oreopteris limbosperma (bellardi ex all.) holub", "sedum album l.", "anemone x hybrida paxton", "anemone alpina l.", "pancratium illyricum l.", "anemone apennina l.", "anemone baldensis l.", "papaver alpinum l.", "papaver croceum ledeb.", "papaver orientale l.", "papaver pseudoorientale (fedde) medw.", "papaver rhaeticum leresche", "sedum amplexicaule dc.", "anemone halleri all.", "anemone hepatica l.", "pelargonium x hybridum (l.) aiton", "pelargonium inquinans (l.) aiton", "pelargonium peltatum (l.) aiton", "anemone montana hoppe", "anemone nemorosa l.", "anemone pulsatilla l.", "anemone rubra lam.", "phedimus stellatus (l.) raf.", "sedum andegavense (dc.) desv.", "anemone sylvestris l.", "anemone trifolia l.", "anemone vernalis l.", "angelica heterocarpa j.lloyd", "angelica razulii gouan", "angelica sylvestris l.", "pleurospermum austriacum (l.) hoffm.", "sasa palmata (burb.) camus", "anthericum ramosum l.", "pyracantha coccinea m.roem.", "pelargonium odoratissimum (l.) l'hér.", "sedum brevifolium dc.", "pyracantha rogersiana (a.b.jacks.) coltm.-rog.", "holodiscus discolor (pursh) maxim.", "sagittaria graminea michx.", "sagittaria latifolia willd.", "sagittaria sagittifolia l.", "diascia rigescens e.mey. ex benth.", "diascia vigilis hilliard & burtt", "sedum sarmentosum bunge", "sedum alpestre vill.", "sedum annuum l.", "sedum caeruleum l.", "sedum kamtschaticum fisch. 
& c.a.mey.", "sedum mexicanum britton", "sedum ochroleucum chaix", "sedum sexangulare l.", "metasequoia glyptostroboides hu & w.c.cheng", "telekia speciosa (schreb.) baumg.", "thesium alpinum l.", "thesium linophyllon l.", "tradescantia x andersoniana f.ludw. & rohweder", "tradescantia virginiana l.", "sedum caespitosum (cav.) dc.", "trifolium alpestre l.", "trifolium alpinum l.", "trifolium pallescens schreb.", "trifolium spadiceum l.", "trifolium thalii vill.", "viscaria alpina (l.) g.don", "viscaria vulgaris bernh.", "anemone hupehensis (lemoine) lemoine", "anemone vitifolia buch.-ham. ex dc.", "adenostyles alpina (l.) bluff & fingerh.", "sedum cepaea l.", "alcea biennis winterl", "anemone narcissiflora l.", "anemone ranunculoides l.", "angelica archangelica l.", "chaerophyllum bulbosum l.", "cirsium acaulon (l.) scop.", "cirsium spinosissimum (l.) scop.", "cymbalaria muralis p.gaertn., b.mey. & scherb.", "eritrichium nanum (l.) schrad. ex gaudin", "fragaria viridis weston", "sedum dasyphyllum l.", "honckenya peploides (l.) ehrh.", "hypericum hyssopifolium chaix", "hypericum mutilum l.", "lactuca perennis l.", "sedum atratum l.", "thesium pyrenaicum pourr.", "trifolium medium l.", "trinia glauca (l.) dumort.", "sedum montanum perrier & songeon", "trifolium badium schreb.", "sedum forsterianum sm.", "freesia refracta (jacq.) eckl. ex klatt", "freesia alba (g.l.mey.) grumbleton", "cobaea scandens cav.", "fragaria virginiana duchesne", "anemone pratensis l.", "sagittaria lancifolia l.", "neotinea tridentata (scop.) r.m.bateman, pridgeon & m.w.chase", "adenostyles alliariae (gouan) a.kern.", "ophrys pseudoscolopax (moggr.) paulus & gack", "sedum palmeri s.watson", "sedum hirsutum all.", "hypericum olympicum l.", "tradescantia zebrina bosse", "dryopteris erythrosora (eaton) kuntze", "adenostyles leucophylla (willd.) rchb.", "calycanthus floridus l.", "phyllanthus niruri l.", "ophrys araneola sensu auct.plur.", "anemone blanda schott & kotschy", "aralia elata (miq.) seem.", "ophrys virescens philippe", "sedum litoreum guss.", "clethra alnifolia l.", "corispermum pallasii steven", "hypericum patulum thunb.", "hypericum x hidcoteense hilling ex geerinck", "lupinus nootkatensis donn ex sims", "smilax rotundifolia l.", "perovskia atriplicifolia benth.", "nothofagus antarctica (g.forst.) oerst.", "iva annua l.", "dryopteris fragrans (l.) schott", "sedum multiceps coss. & durieu", "neotinea conica (willd.) r.m.bateman", "tetraclinis articulata (vahl) mast.", "alcea setosa (boiss.) alef.", "chaerophyllum aromaticum l.", "cucurbita moschata duchesne", "daphne sericea vahl", "lamium orvala l.", "paederota bonarota (l.) l.", "rhodothamnus chamaecistus (l.) rchb.", "ophrys lunulata parl.", "sedum rubens l.", "schefflera actinophylla harms", "astydamia latifolia baill.", "lavandula pinnata moench", "angelica atropurpurea l.", "acacia pravissima f.muell.", "adonis pyrenaica dc.", "adonis vernalis l.", "barbarea rupicola moris", "barbarea vulgaris r.br.", "acacia baileyana f.muell.", "pelargonium peltatum (l.) l'hér.", "sedum sediforme (jacq.) pau", "bellium bellidioides l.", "bocoa prouacensis aubl.", "nephrolepis biserrata (sw.) schott", "vanilla pompona schiede", "guatteria anteridifera scharf & maas", "caraipa densifolia mart.", "caraipa punctulata ducke", "vismia cayennensis (jacq.) pers.", "vismia macrophylla kunth", "vismia sessilifolia (aubl.) choisy", "alliaria petiolata (m. bieb.) cavara & grande", "dryopteris wallichiana (spreng.) 
hyl.", "xylopia frutescens aubl.", "xylopia nitida dunal", "xylopia sericea a.st.-hil.", "phyllanthus urinaria l.", "pogonophora schomburgkiana miers ex benth.", "sagotia racemosa baill.", "bonafousia undulata (vahl) a.dc.", "guarea gomma pulle", "nymphoides indica (l.) kuntze", "boscia angustifolia a. rich.", "anthurium hookeri kunth", "anthurium jenmanii engl.", "anthurium scandens (aubl.) engl.", "dracontium polyphyllum l.", "crotalaria retusa l.", "peperomia magnoliifolia (jacq.) a.dietr.", "peperomia obtusifolia (l.) a.dietr.", "peperomia quadrangularis (j.v.thomps.) a.dietr.", "peperomia rotundifolia (l.) kunth", "peperomia serpens (sw.) loudon", "fibigia clypeata (l.) medik.", "declieuxia fruticosa (willd. ex roem. & schult.) kuntze", "mauritia flexuosa l.f.", "urera baccifera (l.) gaudich. ex wedd.", "pereskia bleo (kunth) dc.", "cereus hexagonus (l.) mill.", "xylopia crinita r.e.fr.", "vanilla planifolia jacks.", "aciotis rubricaulis (mart. ex dc.) triana", "guatteria amplifolia triana & planch.", "luffa cylindrica (l.) m.roem.", "maerua angolensis dc.", "cryptostegia madagascariensis bojer ex decne.", "schefflera decaphylla (seem.) harms", "aphelandra scabra (vahl) sm.", "asystasia gangetica (l.) t.anderson", "piriqueta cistoides (l.) griseb.", "acalypha wilkesiana müll.arg.", "breynia disticha j.r.forst. & g.forst.", "crotalaria incana l.", "crotalaria pallida aiton", "kigelia africana (lam.) benth.", "acalypha virginica l.", "guatteria citriodora ducke", "acalypha indica l.", "guatteria punctata (aubl.) r.a.howard", "anthurium crystallinum linden & andré", "cryptopus elatus (thouars) lindl.", "cryptostegia grandiflora r.br.", "eranthemum pulchellum andrews", "alocasia cucullata (lour.) g.don", "alocasia longiloba miq.", "alocasia macrorrhizos (l.) g.don", "mercurialis perennis l.", "faujasia salicifolia (pers.) c.jeffrey", "fernelia buxifolia lam.", "fragaria x ananassa duchesne ex rozier", "geniostoma borbonicum (lam.) spreng.", "hernandia mascarenensis (meisn.) kubitzki", "hypericum lanceolatum lam.", "acacia auriculiformis a.cunn. ex benth.", "leonitis nepetifolia (l.) r.br.", "mussaenda arcuata poir.", "nephrolepis abrupta (bory) mett.", "phyllanthus reticulatus poir.", "acacia heterophylla (lam.) willd.", "paederia foetida l.", "pelargonium x asperum ehrh. ex willd.", "pereskia grandifolia haw.", "antirhea borbonica j.f.gmel.", "phyllanthus emblica l.", "phyllanthus niruroides müll.arg.", "phyllanthus phillyreifolius poir.", "pongamia pinnata (l.) pierre", "acacia podalyriifolia a.cunn. ex g.don", "hypericum androsaemum l.", "schefflera actinophylla (f.muell.) harms", "acalypha hispida burm.f.", "acalypha integrifolia willd.", "stemodia verticillata (mill.) hassl.", "stoebe passerinoides (lam.) willd.", "strongylodon macrobotrys a.gray", "tradescantia pallida (rose) d.r.hunt", "trimezia martinicensis (jacq.) herb.", "vangueria madagascariensis j.f.gmel.", "vepris lanceolata (lam.) g.don", "hypericum australe ten.", "zaleya pentandra (l.) c.jeffrey", "cereus repandus (l.) mill.", "bismarckia nobilis hildebr. & h.wendl.", "acacia nilotica (l.) delile", "schefflera arboricola (hayata) merr.", "fragaria chiloensis (l.) mill.", "aphelandra squarrosa nees", "gynura aurantiaca (blume) sch.bip. ex dc.", "pereskia aculeata mill.", "peperomia argyreia (miq.) e.morren", "pelargonium quercifolium (l. f.) l'hér.", "hypericum canariense l.", "peperomia caperata yunck.", "peperomia obtusifolia (l.) a. 
dietr.", "anthurium andraeanum linden ex andré", "anthurium scherzerianum schott", "zamia furfuracea l.f.", "dendrobium nobile lindl.", "nandina domestica thunb.", "nymphaea nouchali burm.f.", "falcataria moluccana (miq.) barneby & j.w.grimes", "acacia tortuosa (l.) willd.", "hypericum empetrifolium willd.", "bourreria succulenta jacq.", "crotalaria brevidens benth.", "crotalaria pumila ortega", "dendrobium chrysotoxum lindl.", "dendrobium crumenatum sw.", "dendrobium moschatum (buch.-ham.) sw.", "entada gigas (l.) fawc. & rendle", "fittonia albivenis (lindl. ex veitch) r.k. brummitt", "illicium verum hook.f.", "lactuca floridana (l.) gaertn.", "hypericum hircinum l.", "mussaenda pubescens dryand.", "neolamarckia cadamba (roxb.) bosser", "nymphaea ampla (salisb.) dc.", "phyllanthus epiphyllanthus l.", "phyllanthus mimosoides sw.", "pilosocereus royeni (l.) byles & g. rowley", "selenicereus anthonyanus (alexander) d.hunt", "selenicereus grandiflorus (l.) britton & rose", "zamia pumila l.", "acalypha aristata kunth", "hypericum hirsutum l.", "chamerion latifolium (l.) holub", "cirsium discolor (muhl. ex willd.) spreng.", "cirsium foliosum (hook.) dc.", "cirsium muticum michx.", "cirsium undulatum (nutt.) spreng.", "comptonia peregrina (l.) j.m. coult.", "conoclinium coelestinum (l.) dc.", "dalea purpurea vent.", "daucus pusillus michx.", "diervilla lonicera mill.", "hypericum humifusum l.", "dryopteris carthusiana (vill.) h.p. fuchs", "dryopteris cristata (l.) a. gray", "dryopteris intermedia (muhl. ex willd.) a. gray", "dryopteris marginalis (l.) a. gray", "duchesnea indica (andrews) teschem.", "epipactis gigantea douglas ex hook.", "hypericum kalmianum l.", "hypericum prolificum l.", "hypericum punctatum lam.", "anemone canadensis l.", "hypericum linariifolium vahl", "iva frutescens l.", "anemone multifida poir.", "anemone virginiana l.", "lactuca biennis (moench) fernald", "lactuca canadensis l.", "lamium maculatum l.", "leersia virginica willd.", "angelica lucida l.", "lupinus argenteus pursh", "lupinus bicolor lindl.", "hypericum montanum l.", "maianthemum canadense desf.", "maianthemum racemosum (l.) link", "maianthemum stellatum (l.) link", "maianthemum trifolium (l.) sloboda", "mertensia maritima (l.) gray", "mertensia paniculata (aiton) g. don", "mertensia virginica (l.) pers. ex link", "mitella diphylla l.", "moehringia lateriflora (l.) fenzl", "nymphaea mexicana zucc.", "hypericum perfoliatum l.", "nymphaea odorata aiton", "nymphaea tetragona georgi", "osmunda cinnamomea l.", "osmunda claytoniana l.", "papaver nudicaule l.", "aralia hispida vent.", "aralia nudicaulis l.", "aralia racemosa l.", "pyracantha coccinea m. roem.", "sedum divergens s. watson", "hypericum tetrapterum fr.", "sedum lanceolatum torr.", "sedum oreganum nutt.", "sedum spathulifolium hook.", "sedum ternatum michx.", "smilax herbacea l.", "smilax tamnoides l.", "tradescantia occidentalis (britton) smyth", "tradescantia ohiensis raf.", "balsamorhiza sagittata (pursh) nutt.", "barbarea orthoceras ledeb.", "hypericum tomentosum l.", "barbarea vulgaris w.t. aiton", "calochortus macrocarpus douglas", "entada phaseoloides (l.) merr.", "cerbera manghas l.", "hernandia cordigera vieill.", "acacia simplex (sparman) pedley", "dubouzetia confusa guillaumin & virot", "dubouzetia campanulata pancher ex brongn. & gris", "acacia spirorbis labill.", "atractocarpus platyxylon (vieill. ex pancher & sebert) guillaumin", "pelargonium zonale (l.) l'hér.", "lamium bifidum cirillo", "nepenthes vieillardii hook. 
f.", "dendrobium closterium rchb.f.", "dracophyllum verticillatum labill.", "schefflera spp.", "tagetes lucida cav.", "anisocampium niponicum (mett.) y.c.liu, w.l. chiou & m. kato", "mertensia ciliata (james ex torr.) g. don", "lithops fulviceps n.e.br.", "lithops karasmontana n.e.br.", "lithops marmorata n.e.br.", "lamium flexuosum ten.", "lithops pseudotruncatella n.e.br.", "fittonia albivenis (lindl. ex veitch) brummitt", "lithops spp.", "cereus hildmannianus k.schum.", "neobuxbaumia polylopha (dc.) backeb.", "calycanthus occidentalis hook. & arn.", "tradescantia subaspera ker gawl.", "sedum kamtschaticum fisch.", "sedum lineare thunb.", "sedum morganianum e.walther", "lamium garganicum l.", "cucurbita foetidissima kunth", "eucryphia cordifolia cav.", "dryopteris cycadina (franch. & sav.) c. chr.", "dryopteris erythrosora (d.c. eaton) kuntze", "dryopteris sieboldii (t. moore) kuntze", "rumohra adiantiformis (g. forst.) ching", "oxydendrum arboreum (l.) dc.", "lupinus albifrons benth.", "lycoris radiata (l'hér.) herb.", "garrya elliptica douglas ex lindl.", "lamium hybridum vill.", "lycoris squamigera maxim.", "pelargonium crispum (p.j. bergius) l'hér.", "pelargonium x hortorum l.h. bailey", "loropetalum chinense (r. br.) oliv.", "hypericum frondosum michx.", "liriope muscari (decne.) l.h.bailey", "limnanthes douglasii r. br.", "nothofagus alpina (poepp. & endl.) oerst.", "nothofagus pumilio (poepp. & endl.) krasser", "abeliophyllum distichum nakai", "lamium purpureum l.", "dendrobium aphyllum (roxb.) c.e.c.fisch.", "dendrobium spp.", "vanilla planifolia jacks. ex andrews", "heteromorpha arborescens (spreng.) cham. & schltdl.", "lapageria rosea ruiz & pav.", "peperomia argyreia (hook.f.) e.morren", "peperomia dolabriformis kunth", "hebe franciscana (eastw.) souster", "maurandya barclayana lindl.", "sasa palmata (burb.) e.g.camus", "lavandula canariensis mill.", "anemone patens l.", "fragaria virginiana mill.", "fragaria x ananassa (duchesne ex weston) duchesne ex rozier", "mussaenda philippica a.rich.", "limonia acidissima groff", "illicium floridanum j. ellis", "browallia speciosa hook.", "alocasia cuprea k.koch", "alocasia micholitziana sander", "alocasia odora (lindl.) k.koch", "lavandula minutolii bolle", "daphne odora thunb.", "alocasia sanderiana w.bull", "daphne tangutica maxim.", "kniphofia uvaria (l.) oken", "zamia furfuracea l.f. ex aiton", "guaiacum sanctum l.", "nepenthes spp.", "pelargonium spp.", "pancratium maritimum l. l.", "sedum palmeri s. watson", "lavandula multifida l.", "aphelandra sinclairiana nees", "alocasia macrorrhizos (l.) g. don", "acacia confusa merr.", "acacia koaia hillebr.", "nephrolepis falcata (cav.) c. chr.", "tradescantia zebrina hort. ex bosse", "calodendrum capense (l.f.) thunb.", "pelargonium grandiflorum willd.", "hypericum revolutum vahl", "schkuhria pinnata (lam.) kuntze ex thell.", "lavandula stoechas l.", "maytenus boaria molina", "melampodium perfoliatum (cav.) kunth", "hernandia nymphaeifolia (j.presl) kubitzki", "mazus pumilus (burm.f.) steenis", "melampodium divaricatum (rich.) dc.", "breynia vitis-idaea (burm.f.) c.e.c.fisch.", "prosopis farcta (banks & sol.) j.f.macbr.", "acacia farnesiana willd.", "freycinetia cumingiana gaudich.", "crotalaria laburnifolia l.", "acacia mellifera (vahl) benth.", "dalbergia sissoo dc.", "dischidia nummularia r.br.", "schefflera heptaphylla (l.) frodin", "schinopsis balansae engl.", "smilax china l.", "barringtonia acutangula (l.) gaertn.", "maerua triphylla a. 
rich.", "asystasia mysorensis (roth) t.anderson", "lactuca virosa habl.", "calendula arvensis m.bieb.", "mercurialis annua l.", "acacia seyal delile", "herbertia lahue (molina) goldblatt", "hypericum hypericoides (l.) crantz", "hypericum tetrapetalum lam.", "keckiella cordifolia (benth.) straw", "liriope muscari (decne.) l.h. bailey", "lupinus chamissonis eschsch.", "lupinus diffusus nutt.", "lupinus formosus greene", "lupinus hirsutissimus benth.", "lupinus subcarnosus hook.", "acacia tortilis (forssk.) hayne", "lupinus texensis hook.", "lyonothamnus floribundus a. gray", "mazus pumilus (burm. f.) steenis", "melampodium leucanthum torr. & a. gray", "metasequoia glyptostroboides hu & w.c. cheng", "aralia californica s. watson", "nyctaginia capitata choisy", "aralia spinosa l.", "pelargonium panduriforme eckl. & zeyh.", "pelargonium zonale (l.) l'hér. ex aiton", "galega officinalis l.", "prosopis glandulosa torr.", "prosopis pubescens benth.", "sagittaria montevidensis cham. & schltdl.", "sedum albomarginatum r.t. clausen", "sedum glaucophyllum r.t. clausen", "sedum laxum (britton) a. berger", "sedum moranense kunth", "sedum niveum davidson", "sedum obtusatum a. gray", "sedum pulchellum michx.", "lupinus cosentinii guss.", "smilax bona-nox l.", "smilax glauca walter", "smilax laurifolia l.", "stanleya pinnata (pursh) britton", "tagetes lemmonii a. gray", "tradescantia crassifolia cav.", "xylococcus bicolor nutt.", "brodiaea elegans hoover", "calochortus eurycarpus s. watson", "calochortus gunnisonii s. watson", "lupinus micranthus guss.", "calochortus leichtlinii hook. f.", "calochortus luteus douglas ex lindl.", "calochortus splendens douglas ex benth.", "calochortus tolmiei hook. & arn.", "calylophus hartwegii (benth.) p.h. raven", "chaerophyllum tainturieri hook.", "cirsium altissimum (l.) hill", "cirsium horridulum michx.", "cirsium texanum buckley", "diervilla sessilifolia buckley", "melilotus italicus (l.) lam.", "acacia berlandieri benth.", "elephantopus elatus bertol.", "alocasia cucullata (lour.) g. don", "acacia redolens maslin", "acacia rigidula benth.", "hackelia velutina (piper) i.m. johnst.", "haplophyton crooksii (l.d. benson) l.d. benson", "aextoxicon punctatum ruiz & pav.", "lithops aucampiae l. bolus", "lithops olivacea l. bolus", "melilotus sulcatus desf.", "cyrtanthus elatus (jacq.) traub", "guatteria aeruginosa standl.", "dischidia ovata benth.", "trachelospermum asiaticum (siebold & zucc.) nakai", "alocasia baginda kurniawan & p.c.boyce", "alocasia lauterbachiana (engl.) a.hay", "alocasia reginula a.hay", "alocasia wentii engl. & k.krause", "alocasia zebrina schott ex van houtte", "anthurium clarinervium matuda", "trifolium angustifolium l.", "anthurium faustomirandae pérez-farr. & croat", "anthurium salvinii hemsl.", "anthurium schlechtendalii kunth", "anthurium veitchii mast.", "anthurium warocqueanum t.moore", "wodyetia bifurcata a.k.irvine", "cereus forbesii c.f.först.", "cereus uruguayanus r. kiesling", "coryphantha elephantidens (lem.) lem.", "pilosocereus chrysostele (vaupel) byles & g.d.rowley", "trifolium arvense l.", "pilosocereus pachycladus f. ritter", "selenicereus anthonyanus (alexander) d.r. hunt", "stenocactus multicostatus (hildm.) a. berger ex a.w. hill", "tephrocactus geometricus (a. cast.) backeb.", "pterocephalus perennis coult.", "maytenus ilicifolia mart. ex reissek", "cyanotis somaliensis c.b.clarke", "tradescantia × andersoniana w.ludw. & rohweder", "tradescantia sillamontana matuda", "cirsium palustre (l.) coss. 
ex scop.", "trifolium aureum pollich", "erechtites minima (poir.) dc.", "lactuca alpina (l.) a.gray", "leptinella potentillina f.muell.", "leucophyta brownii cass.", "melampodium divaricatum (rich. ex rich.) dc.", "othonna capensis l.h.bailey", "rhodanthe chlorocephala (turcz.) paul g.wilson", "tagetes lemmonii a.gray", "tagetes lunulata ortega", "sedum adolphii raym.-hamet" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-001
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-001

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0282
- Accuracy: 0.9939

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.2849 | 4.4444 | 10 | 0.6545 | 0.8837 |
| 0.2089 | 8.8889 | 20 | 0.1889 | 0.9694 |
| 0.0278 | 13.3333 | 30 | 0.0619 | 0.9878 |
| 0.0034 | 17.7778 | 40 | 0.0349 | 0.9918 |
| 0.0012 | 22.2222 | 50 | 0.0282 | 0.9918 |
| 0.0008 | 26.6667 | 60 | 0.0282 | 0.9939 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
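The `lr_scheduler_warmup_ratio: 0.1` entry above means the linear schedule warms up over roughly 10% of the optimizer steps (about 6, going by the table, which ends at step 60; the true total may be slightly higher). A minimal sketch with a stand-in parameter list:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Stand-in parameter so the optimizer can be constructed; in practice this is the model's parameters.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=2e-5, betas=(0.9, 0.999), eps=1e-8)

# Warmup steps taken as 0.1 * 60 = 6, using the table's last logged step as the total.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=6, num_training_steps=60
)
```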
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-002
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-002

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set:
- Loss: 0.0879
- Accuracy: 0.9796

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2513 | 8.0 | 10 | 0.6332 | 0.8837 |
| 0.2079 | 16.0 | 20 | 0.1661 | 0.9735 |
| 0.0259 | 24.0 | 30 | 0.0879 | 0.9796 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
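"Native AMP" above refers to PyTorch automatic mixed precision; with the Trainer this is usually enabled via `fp16=True`, although the exact flag used for this particular run is an assumption. A sketch of the listed arguments with a placeholder `output_dir`:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-large-dungeon-geo-morphs",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 * 4 = 64 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,                       # assumed flag behind "Native AMP"
    seed=42,
)
```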
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-003
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-003

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set:
- Loss: 0.0833
- Accuracy: 0.9816

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2511 | 8.0 | 10 | 0.6333 | 0.8837 |
| 0.2044 | 16.0 | 20 | 0.1589 | 0.9776 |
| 0.0239 | 24.0 | 30 | 0.0833 | 0.9816 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-005
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-005

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set:
- Loss: 0.0274
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.4638 | 4.4444 | 10 | 0.8922 | 0.7714 |
| 0.6108 | 8.8889 | 20 | 0.3243 | 0.9347 |
| 0.234 | 13.3333 | 30 | 0.1423 | 0.9735 |
| 0.1045 | 17.7778 | 40 | 0.0655 | 0.9980 |
| 0.0578 | 22.2222 | 50 | 0.0395 | 0.9939 |
| 0.0332 | 26.6667 | 60 | 0.0274 | 1.0 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
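The label mapping for these dungeon-geo-morphs checkpoints (listed just below) is also stored in the hosted config and can be read back directly; a small sketch, assuming the repository id shown above:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-005"
)
# id2label / label2id should correspond to the five class names listed below,
# though the exact index order is whatever the training run assigned.
print(config.id2label)
print(config.label2id)
```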
[ "four", "one", "three", "two", "zero" ]
platzi/platzi-vit-model-Jaime-Bermudez
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# platzi-vit-model-Jaime-Bermudez

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0241
- Accuracy: 0.9925

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1392 | 3.8462 | 500 | 0.0241 | 0.9925 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
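A hedged fine-tuning scaffold matching the hyperparameters above; the label list is the one shown just below this entry, while `output_dir` and the train/eval datasets (omitted here) are placeholders the user must supply:

```python
from transformers import AutoModelForImageClassification, TrainingArguments

labels = ["angular_leaf_spot", "bean_rust", "healthy"]  # label set listed below
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
)
args = TrainingArguments(
    output_dir="platzi-vit-model",   # placeholder
    learning_rate=2e-4,              # i.e. 0.0002 as listed above
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
)
# A Trainer would then be built with these pieces plus the (user-supplied) datasets.
```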
[ "angular_leaf_spot", "bean_rust", "healthy" ]
platzi/platzi-vit-model-Nicolas
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# platzi-vit-model-Nicolas

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1528
- Accuracy: 0.9624

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0662 | 3.8462 | 500 | 0.1528 | 0.9624 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
platzi/platzi-vit-model-jonnathan
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# platzi-vit-model-jonnathan

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0462
- Accuracy: 0.9925

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0563 | 3.8462 | 500 | 0.0462 | 0.9925 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
nguyenthethang1995/finetuned-bank-images
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-bank-images

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.4629
- Accuracy: 0.9125

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 160
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.8836 | 1.1765 | 100 | 0.6595 | 0.8818 |
| 0.681 | 2.3529 | 200 | 0.5422 | 0.8965 |
| 0.5669 | 3.5294 | 300 | 0.4629 | 0.9125 |

### Framework versions

- Transformers 4.48.0.dev0
- Pytorch 2.4.1
- Datasets 3.2.0
- Tokenizers 0.21.0
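The card says training used the `imagefolder` loader; a minimal sketch, assuming a hypothetical local directory laid out as `bank_images/<label>/<image>.jpg`:

```python
from datasets import load_dataset

# "bank_images" is an illustrative directory name; imagefolder infers labels
# from the subdirectory names.
dataset = load_dataset("imagefolder", data_dir="bank_images")
print(dataset["train"].features["label"].names[:5])
```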
[ "bank-1", "bank-11", "bank-142", "bank-346", "bank-347", "bank-348", "bank-349", "bank-350", "bank-351", "bank-352", "bank-353", "bank-354", "bank-355", "bank-143", "bank-356", "bank-358", "bank-359", "bank-360", "bank-361", "bank-362", "bank-363", "bank-364", "bank-369", "bank-370", "bank-147", "bank-371", "bank-372", "bank-373", "bank-374", "bank-375", "bank-376", "bank-377", "bank-378", "bank-379", "bank-380", "bank-148", "bank-381", "bank-383", "bank-387", "bank-388", "bank-48", "bank-5", "bank-51", "bank-53", "bank-55", "bank-56", "bank-149", "bank-57", "bank-58", "bank-63", "bank-69", "bank-72", "bank-79", "bank-8", "bank-81", "bank-99", "bank-15", "bank-150", "bank-152", "bank-153", "bank-154", "bank-111", "bank-155", "bank-156", "bank-158", "bank-161", "bank-162", "bank-163", "bank-166", "bank-168", "bank-170", "bank-172", "bank-114", "bank-175", "bank-176", "bank-177", "bank-181", "bank-19", "bank-190", "bank-192", "bank-193", "bank-2", "bank-20", "bank-12", "bank-200", "bank-201", "bank-202", "bank-203", "bank-206", "bank-209", "bank-210", "bank-211", "bank-217", "bank-219", "bank-120", "bank-221", "bank-226", "bank-229", "bank-230", "bank-235", "bank-236", "bank-24", "bank-249", "bank-253", "bank-257", "bank-134", "bank-259", "bank-261", "bank-276", "bank-277", "bank-279", "bank-28", "bank-280", "bank-281", "bank-282", "bank-286", "bank-135", "bank-287", "bank-288", "bank-290", "bank-291", "bank-296", "bank-298", "bank-300", "bank-301", "bank-302", "bank-303", "bank-136", "bank-304", "bank-305", "bank-306", "bank-307", "bank-308", "bank-317", "bank-318", "bank-319", "bank-32", "bank-320", "bank-138", "bank-326", "bank-328", "bank-329", "bank-332", "bank-339", "bank-340", "bank-341", "bank-342", "bank-343", "bank-344" ]
skohli01/finetuned-parkinson-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-parkinson-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4110 - Accuracy: 0.9091 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.8891 | 0.3636 | | No log | 2.0 | 3 | 0.5901 | 0.6364 | | No log | 3.0 | 5 | 0.5270 | 0.6364 | | No log | 4.0 | 6 | 0.4946 | 0.7273 | | No log | 5.0 | 7 | 0.4724 | 0.8182 | | No log | 6.0 | 9 | 0.4406 | 0.8182 | | 0.3043 | 7.0 | 11 | 0.4110 | 0.9091 | | 0.3043 | 8.0 | 12 | 0.4048 | 0.9091 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "healthy", "parkinson" ]
kiranshivaraju/convnext-xlarge-v12-v2
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
kiranshivaraju/convnext-xlarge-v12-v4
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "bad", "good" ]
pyb-camag/swin-tiny-patch4-window7-224-finetuned-ginger
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-ginger This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6161 - Accuracy: 0.7112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.2795 | 0.9973 | 275 | 0.8344 | 0.7773 | | 0.1696 | 1.9982 | 551 | 1.2580 | 0.6245 | | 0.1425 | 2.9991 | 827 | 1.0643 | 0.5959 | | 0.1329 | 4.0 | 1103 | 0.8342 | 0.6281 | | 0.1237 | 4.9973 | 1378 | 1.4786 | 0.6110 | | 0.0743 | 5.9982 | 1654 | 1.1068 | 0.6283 | | 0.1083 | 6.9991 | 1930 | 0.8262 | 0.8321 | | 0.0667 | 8.0 | 2206 | 0.6214 | 0.7564 | | 0.0743 | 8.9973 | 2481 | 0.7777 | 0.7342 | | 0.0527 | 9.9982 | 2757 | 0.6794 | 0.6985 | | 0.076 | 10.9991 | 3033 | 0.7436 | 0.6429 | | 0.0423 | 11.9674 | 3300 | 0.6161 | 0.7112 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "alpinia galanga", "alpinia officinarum", "boesenbergia rotunda", "kaempferia galanga", "kaempferia parviflora", "zingiber montanum", "zingiber officinale", "zingiber zerumbet" ]
jtgraham/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2031 - Accuracy: 0.9459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3727 | 1.0 | 370 | 0.2756 | 0.9337 | | 0.2145 | 2.0 | 740 | 0.2168 | 0.9378 | | 0.1835 | 3.0 | 1110 | 0.1918 | 0.9459 | | 0.147 | 4.0 | 1480 | 0.1857 | 0.9472 | | 0.1315 | 5.0 | 1850 | 0.1818 | 0.9472 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Chamoda8298/vit-Facial-Expression-Recognition
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-Facial-Expression-Recognition This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3629 - Accuracy: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 2048 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.5354 | 2.1633 | 500 | 0.3629 | 0.8758 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angry", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
artucathur/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0890 - Accuracy: 0.9706 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.5186 | 0.9979 | 351 | 0.1378 | 0.9544 | | 0.3586 | 1.9986 | 703 | 0.0987 | 0.9662 | | 0.3499 | 2.9936 | 1053 | 0.0890 | 0.9706 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
artucathur/swin-tiny-patch4-window7-224-swinnn
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-swinnn This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0883 - Accuracy: 0.8232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:| | 0.2802 | 0.9979 | 351 | 0.2783 | 0.3222 | | 0.2702 | 1.9986 | 703 | 0.2652 | 0.376 | | 0.2565 | 2.9993 | 1055 | 0.2474 | 0.431 | | 0.2448 | 4.0 | 1407 | 0.2358 | 0.4558 | | 0.2433 | 4.9979 | 1758 | 0.2223 | 0.4994 | | 0.2095 | 5.9986 | 2110 | 0.2058 | 0.5434 | | 0.2197 | 6.9993 | 2462 | 0.1963 | 0.568 | | 0.2093 | 8.0 | 2814 | 0.1906 | 0.5764 | | 0.2047 | 8.9979 | 3165 | 0.1888 | 0.5874 | | 0.1952 | 9.9986 | 3517 | 0.1743 | 0.6192 | | 0.1926 | 10.9993 | 3869 | 0.1740 | 0.6234 | | 0.1838 | 12.0 | 4221 | 0.1667 | 0.6448 | | 0.1822 | 12.9979 | 4572 | 0.1629 | 0.6468 | | 0.1838 | 13.9986 | 4924 | 0.1587 | 0.6638 | | 0.1689 | 14.9993 | 5276 | 0.1563 | 0.675 | | 0.1697 | 16.0 | 5628 | 0.1472 | 0.6916 | | 0.1643 | 16.9979 | 5979 | 0.1435 | 0.6912 | | 0.1655 | 17.9986 | 6331 | 0.1395 | 0.706 | | 0.1555 | 18.9993 | 6683 | 0.1371 | 0.714 | | 0.1577 | 20.0 | 7035 | 0.1321 | 0.7258 | | 0.1575 | 20.9979 | 7386 | 0.1318 | 0.7284 | | 0.141 | 21.9986 | 7738 | 0.1228 | 0.7438 | | 0.151 | 22.9993 | 8090 | 0.1260 | 0.7392 | | 0.1403 | 24.0 | 8442 | 0.1178 | 0.7558 | | 0.1434 | 24.9979 | 8793 | 0.1185 | 0.7534 | | 0.1465 | 25.9986 | 9145 | 0.1162 | 0.759 | | 0.1362 | 26.9993 | 9497 | 0.1121 | 0.769 | | 0.138 | 28.0 | 9849 | 0.1099 | 0.769 | | 0.1293 | 28.9979 | 10200 | 0.1094 | 0.7754 | | 0.1273 | 29.9986 | 10552 | 0.1091 | 0.7768 | | 0.1363 | 30.9993 | 10904 | 0.1078 | 0.7766 | | 0.1293 | 32.0 | 11256 | 0.1091 | 0.7736 | | 0.1275 | 32.9979 | 11607 | 0.1068 | 0.7806 | | 0.1263 | 33.9986 | 11959 | 0.1040 | 0.7888 | | 0.1243 | 34.9993 | 12311 | 0.1019 | 0.7954 | | 0.1237 | 36.0 | 12663 | 0.1016 | 0.7958 | | 0.1243 | 36.9979 | 13014 | 0.0993 | 0.7988 | | 0.1194 | 37.9986 | 13366 | 0.1011 | 0.7986 | | 0.1213 | 38.9993 | 13718 | 0.0959 | 0.8064 | | 0.1155 | 40.0 | 14070 | 0.0942 | 0.8108 | | 0.1179 | 40.9979 | 14421 | 0.0950 | 0.8072 | | 0.1057 | 41.9986 | 14773 | 0.0924 | 0.8166 | | 0.1042 | 42.9993 | 15125 | 0.0924 | 0.8152 | | 0.1151 | 44.0 | 15477 | 0.0928 | 0.8132 | | 0.1122 | 44.9979 | 15828 | 0.0920 | 0.8146 | | 0.11 | 45.9986 | 16180 | 0.0906 | 0.8152 | | 0.1096 | 46.9993 | 16532 | 0.0894 | 0.82 | | 0.1082 | 48.0 | 16884 | 0.0885 | 0.821 | | 0.108 | 48.9979 | 17235 | 0.0886 | 0.8204 | | 0.112 | 49.8934 | 17550 | 0.0883 | 0.8232 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.1.0a0+32f93b1 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9" ]
SudeepM27/apple-leaf-disease-detection
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> MobileVITV2 based Image Classification model to classify apple leaf diseases ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> <!-- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. --> - **Developed by:** Sudeep Mungara <!-- ### Model Sources [optional] <!-- Provide the basic links for the model. --> <!-- - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] --> ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> To classify if the apple leaf is healthy, rust, scab or has multiple diseases <!-- ### Direct Use --> <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> <!-- [More Information Needed] ### Downstream Use [optional] --> <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> <!-- [More Information Needed] ### Out-of-Scope Use --> <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> <!-- [More Information Needed] --> <!-- ## Bias, Risks, and Limitations --> <!-- This section is meant to convey both technical and sociotechnical limitations. --> <!-- [More Information Needed] --> <!-- ### Recommendations --> <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> <!-- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. --> ## How to Get Started with the Model ```python from PIL import Image import torch from transformers import AutoImageProcessor, AutoModelForImageClassification processor = AutoImageProcessor.from_pretrained("SudeepM27/apple-leaf-disease-detection") model = AutoModelForImageClassification.from_pretrained("SudeepM27/apple-leaf-disease-detection") model.eval() image_path = "path to image" # Replace with your test image path image = Image.open(image_path) inputs = processor(images=image, return_tensors="pt") # Perform inference with torch.no_grad(): outputs = model(**inputs) # Get the predicted class logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() predicted_label = model.config.id2label[predicted_class_idx] print(f"Predicted class index: {predicted_class_idx}") print(f"Predicted label: {predicted_label}") ``` <!-- ## Training Details --> <!-- ### Training Data --> <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> <!-- [More Information Needed] --> <!-- ### Training Procedure --> <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> <!-- #### Preprocessing [optional] --> <!-- [More Information Needed] --> <!-- #### Training Hyperparameters --> <!-- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> --> <!-- #### Speeds, Sizes, Times [optional] --> <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> <!-- [More Information Needed] --> <!-- ## Evaluation --> <!-- This section describes the evaluation protocols and provides the results. --> <!-- ### Testing Data, Factors & Metrics --> <!-- #### Testing Data --> <!-- This should link to a Dataset Card if possible. --> <!-- [More Information Needed] --> <!-- #### Factors --> <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> <!-- [More Information Needed] --> <!-- #### Metrics --> <!-- These are the evaluation metrics being used, ideally with a description of why. --> <!-- [More Information Needed] --> <!-- ### Results --> <!-- [More Information Needed] --> <!-- #### Summary --> <!-- ## Model Examination [optional] --> <!-- Relevant interpretability work for the model goes here --> <!-- [More Information Needed] --> <!-- ## Environmental Impact --> <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> <!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). --> <!-- - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] --> <!-- ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] --> <!-- ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> <!-- **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> <!-- [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] -->
[ "healthy", "multiple_disease", "rust", "scab" ]
nadahh/APTOS2019LLM
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "mild", "moderate", "no dr", "proliferative dr", "severe" ]
Ayushij074/swinv2-tiny-patch4-window8-256-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-finetuned-eurosat This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0776 - Accuracy: 0.9686 - Precision Overall: 0.9697 - Recall Overall: 0.9686 - F1 Overall: 0.9679 - Precision T0: 0.9333 - Recall T0: 0.7368 - F1 T0: 0.8235 - Precision T1: 0.8893 - Recall T1: 0.9765 - F1 T1: 0.9308 - Precision T2: 0.9939 - Recall T2: 0.9889 - F1 T2: 0.9914 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Overall | Recall Overall | F1 Overall | Precision T0 | Recall T0 | F1 T0 | Precision T1 | Recall T1 | F1 T1 | Precision T2 | Recall T2 | F1 T2 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:-----------------:|:--------------:|:----------:|:------------:|:---------:|:------:|:------------:|:---------:|:------:|:------------:|:---------:|:------:| | 1.4522 | 0.9524 | 10 | 1.1098 | 0.2801 | 0.7200 | 0.2801 | 0.3433 | 0.1199 | 0.9053 | 0.2118 | 0.0303 | 0.0392 | 0.0342 | 0.9555 | 0.2821 | 0.4356 | | 0.9137 | 1.9286 | 20 | 0.6271 | 0.7386 | 0.5455 | 0.7386 | 0.6276 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7386 | 1.0 | 0.8497 | | 0.6332 | 2.9048 | 30 | 0.5053 | 0.7498 | 0.6819 | 0.7498 | 0.6553 | 0.0 | 0.0 | 0.0 | 0.6667 | 0.0627 | 0.1147 | 0.7513 | 0.9990 | 0.8576 | | 0.4977 | 3.9762 | 41 | 0.3550 | 0.8656 | 0.8133 | 0.8656 | 0.8374 | 0.0 | 0.0 | 0.0 | 0.6511 | 0.8196 | 0.7257 | 0.9332 | 0.9606 | 0.9467 | | 0.396 | 4.9524 | 51 | 0.3010 | 0.8820 | 0.8191 | 0.8820 | 0.8494 | 0.0 | 0.0 | 0.0 | 0.7279 | 0.7765 | 0.7514 | 0.9213 | 0.9939 | 0.9562 | | 0.3438 | 5.9286 | 61 | 0.2886 | 0.8730 | 0.8732 | 0.8730 | 0.8659 | 0.4182 | 0.4842 | 0.4488 | 0.8424 | 0.5451 | 0.6619 | 0.9248 | 0.9949 | 0.9586 | | 0.3112 | 6.9048 | 71 | 0.2835 | 0.8574 | 0.8846 | 0.8574 | 0.8492 | 0.3626 | 0.6526 | 0.4662 | 0.8850 | 0.3922 | 0.5435 | 0.9346 | 0.9970 | 0.9648 | | 0.2826 | 7.9762 | 82 | 0.2100 | 0.9149 | 0.9115 | 0.9149 | 0.9103 | 0.6437 | 0.5895 | 0.6154 | 0.9005 | 0.7098 | 0.7939 | 0.9401 | 0.9990 | 0.9686 | | 0.2627 | 8.9524 | 92 | 0.2164 | 0.9081 | 0.9144 | 0.9081 | 0.9060 | 0.5484 | 0.7158 | 0.6210 | 0.9066 | 0.6471 | 0.7551 | 0.9516 | 0.9939 | 0.9723 | | 0.2385 | 9.9286 | 102 | 0.2145 | 0.9089 | 0.9217 | 0.9089 | 0.9065 | 0.5515 | 0.7895 | 0.6494 | 0.9576 | 0.6196 | 0.7524 | 0.9480 | 0.9949 | 0.9709 | | 0.2448 | 10.9048 | 112 | 0.2168 | 0.9111 | 0.9216 | 0.9111 | 0.9093 | 0.5746 | 0.8105 | 0.6725 | 0.9425 | 0.6431 | 0.7646 | 0.9496 | 0.9899 | 0.9693 | | 0.2353 | 11.9762 | 123 | 0.1721 | 0.9276 | 0.9302 | 0.9276 | 0.9252 | 0.6729 | 0.7579 | 0.7129 | 0.9534 | 0.7216 | 
0.8214 | 0.9490 | 0.9970 | 0.9724 | | 0.2207 | 12.9524 | 133 | 0.1434 | 0.9447 | 0.9431 | 0.9447 | 0.9418 | 0.8986 | 0.6526 | 0.7561 | 0.9185 | 0.8392 | 0.8770 | 0.9537 | 1.0 | 0.9763 | | 0.1997 | 13.9286 | 143 | 0.1606 | 0.9283 | 0.9290 | 0.9283 | 0.9264 | 0.6635 | 0.7263 | 0.6935 | 0.9220 | 0.7412 | 0.8217 | 0.9563 | 0.9960 | 0.9757 | | 0.2022 | 14.9048 | 153 | 0.1486 | 0.9380 | 0.9392 | 0.9380 | 0.9329 | 0.9796 | 0.5053 | 0.6667 | 0.8544 | 0.8745 | 0.8643 | 0.9572 | 0.9960 | 0.9762 | | 0.1899 | 15.9762 | 164 | 0.1402 | 0.9425 | 0.9420 | 0.9425 | 0.9409 | 0.7812 | 0.7895 | 0.7853 | 0.9355 | 0.7961 | 0.8602 | 0.9591 | 0.9949 | 0.9767 | | 0.1925 | 16.9524 | 174 | 0.1338 | 0.9432 | 0.9413 | 0.9432 | 0.9410 | 0.8767 | 0.6737 | 0.7619 | 0.8893 | 0.8510 | 0.8697 | 0.9609 | 0.9929 | 0.9766 | | 0.177 | 17.9286 | 184 | 0.1521 | 0.9395 | 0.9408 | 0.9395 | 0.9351 | 0.9608 | 0.5158 | 0.6712 | 0.8407 | 0.8902 | 0.8648 | 0.9646 | 0.9929 | 0.9786 | | 0.1889 | 18.9048 | 194 | 0.1357 | 0.9447 | 0.9442 | 0.9447 | 0.9415 | 0.9508 | 0.6105 | 0.7436 | 0.8871 | 0.8627 | 0.8748 | 0.9583 | 0.9980 | 0.9777 | | 0.1972 | 19.9762 | 205 | 0.1303 | 0.9462 | 0.9444 | 0.9462 | 0.9441 | 0.8701 | 0.7053 | 0.7791 | 0.9185 | 0.8392 | 0.8770 | 0.9582 | 0.9970 | 0.9772 | | 0.1923 | 20.9524 | 215 | 0.1344 | 0.9403 | 0.9412 | 0.9403 | 0.9389 | 0.7308 | 0.8 | 0.7638 | 0.9431 | 0.7804 | 0.8541 | 0.9609 | 0.9949 | 0.9776 | | 0.1859 | 21.9286 | 225 | 0.1228 | 0.9455 | 0.9431 | 0.9455 | 0.9434 | 0.8228 | 0.6842 | 0.7471 | 0.9076 | 0.8471 | 0.8763 | 0.9638 | 0.9960 | 0.9796 | | 0.1784 | 22.9048 | 235 | 0.1194 | 0.9470 | 0.9452 | 0.9470 | 0.9448 | 0.8889 | 0.6737 | 0.7665 | 0.8907 | 0.8627 | 0.8765 | 0.9647 | 0.9949 | 0.9796 | | 0.175 | 23.9762 | 246 | 0.1190 | 0.9432 | 0.9426 | 0.9432 | 0.9421 | 0.7449 | 0.7684 | 0.7565 | 0.9156 | 0.8078 | 0.8583 | 0.9685 | 0.9949 | 0.9815 | | 0.1722 | 24.9524 | 256 | 0.1204 | 0.9477 | 0.9461 | 0.9477 | 0.9461 | 0.8068 | 0.7474 | 0.7760 | 0.9254 | 0.8275 | 0.8737 | 0.9648 | 0.9980 | 0.9811 | | 0.1727 | 25.9286 | 266 | 0.1168 | 0.9492 | 0.9473 | 0.9492 | 0.9475 | 0.8519 | 0.7263 | 0.7841 | 0.9114 | 0.8471 | 0.8780 | 0.9657 | 0.9970 | 0.9811 | | 0.1774 | 26.9048 | 276 | 0.1190 | 0.9507 | 0.9513 | 0.9507 | 0.9497 | 0.7822 | 0.8316 | 0.8061 | 0.9585 | 0.8157 | 0.8814 | 0.9657 | 0.9970 | 0.9811 | | 0.1769 | 27.9762 | 287 | 0.1093 | 0.9500 | 0.9484 | 0.9500 | 0.9481 | 0.8904 | 0.6842 | 0.7738 | 0.892 | 0.8745 | 0.8832 | 0.9685 | 0.9949 | 0.9815 | | 0.1536 | 28.9524 | 297 | 0.1069 | 0.9507 | 0.9496 | 0.9507 | 0.9484 | 0.9275 | 0.6737 | 0.7805 | 0.9024 | 0.8706 | 0.8862 | 0.9639 | 0.9980 | 0.9806 | | 0.1625 | 29.9286 | 307 | 0.1022 | 0.9559 | 0.9549 | 0.9559 | 0.9544 | 0.9211 | 0.7368 | 0.8187 | 0.9253 | 0.8745 | 0.8992 | 0.9658 | 0.9980 | 0.9816 | | 0.1596 | 30.9048 | 317 | 0.1000 | 0.9552 | 0.9539 | 0.9552 | 0.9542 | 0.8409 | 0.7789 | 0.8087 | 0.9205 | 0.8627 | 0.8907 | 0.9733 | 0.9960 | 0.9845 | | 0.166 | 31.9762 | 328 | 0.1028 | 0.9537 | 0.9522 | 0.9537 | 0.9526 | 0.8372 | 0.7579 | 0.7956 | 0.9095 | 0.8667 | 0.8876 | 0.9743 | 0.9949 | 0.9845 | | 0.1507 | 32.9524 | 338 | 0.1034 | 0.9500 | 0.9498 | 0.9500 | 0.9495 | 0.77 | 0.8105 | 0.7897 | 0.9149 | 0.8431 | 0.8776 | 0.9761 | 0.9909 | 0.9834 | | 0.1603 | 33.9286 | 348 | 0.0991 | 0.9515 | 0.9499 | 0.9515 | 0.9502 | 0.8554 | 0.7474 | 0.7978 | 0.8980 | 0.8627 | 0.8800 | 0.9723 | 0.9939 | 0.9830 | | 0.1431 | 34.9048 | 358 | 0.1069 | 0.9507 | 0.9542 | 0.9507 | 0.9492 | 0.9836 | 0.6316 | 0.7692 | 0.8362 | 0.9412 | 0.8856 | 0.9818 | 0.9838 | 0.9828 | | 0.1504 | 
35.9762 | 369 | 0.1035 | 0.9544 | 0.9562 | 0.9544 | 0.9522 | 0.9833 | 0.6211 | 0.7613 | 0.8613 | 0.9255 | 0.8922 | 0.9781 | 0.9939 | 0.9860 | | 0.1429 | 36.9524 | 379 | 0.0987 | 0.9567 | 0.9574 | 0.9567 | 0.9547 | 0.9531 | 0.6421 | 0.7673 | 0.8713 | 0.9294 | 0.8994 | 0.9801 | 0.9939 | 0.9869 | | 0.1471 | 37.9286 | 389 | 0.1180 | 0.9507 | 0.9542 | 0.9507 | 0.9470 | 0.9804 | 0.5263 | 0.6849 | 0.8368 | 0.9451 | 0.8877 | 0.982 | 0.9929 | 0.9874 | | 0.1421 | 38.9048 | 399 | 0.1058 | 0.9507 | 0.9562 | 0.9507 | 0.9492 | 0.9831 | 0.6105 | 0.7532 | 0.82 | 0.9647 | 0.8865 | 0.9888 | 0.9798 | 0.9843 | | 0.1342 | 39.9762 | 410 | 0.0953 | 0.9567 | 0.9586 | 0.9567 | 0.9546 | 0.9836 | 0.6316 | 0.7692 | 0.8664 | 0.9412 | 0.9023 | 0.9800 | 0.9919 | 0.9859 | | 0.1434 | 40.9524 | 420 | 0.0937 | 0.9612 | 0.9648 | 0.9612 | 0.9596 | 0.9839 | 0.6421 | 0.7771 | 0.8527 | 0.9765 | 0.9104 | 0.9919 | 0.9879 | 0.9899 | | 0.1401 | 41.9286 | 430 | 0.0875 | 0.9619 | 0.9609 | 0.9619 | 0.9612 | 0.8523 | 0.7895 | 0.8197 | 0.9268 | 0.8941 | 0.9102 | 0.9801 | 0.9960 | 0.9880 | | 0.1342 | 42.9048 | 440 | 0.0875 | 0.9597 | 0.9586 | 0.9597 | 0.9587 | 0.8765 | 0.7474 | 0.8068 | 0.9059 | 0.9059 | 0.9059 | 0.9801 | 0.9939 | 0.9869 | | 0.1363 | 43.9762 | 451 | 0.1002 | 0.9597 | 0.9611 | 0.9597 | 0.9574 | 0.9836 | 0.6316 | 0.7692 | 0.8791 | 0.9412 | 0.9091 | 0.9801 | 0.9960 | 0.9880 | | 0.1375 | 44.9524 | 461 | 0.1123 | 0.9552 | 0.9560 | 0.9552 | 0.9523 | 0.9831 | 0.6105 | 0.7532 | 0.8893 | 0.9137 | 0.9014 | 0.9705 | 0.9990 | 0.9846 | | 0.1373 | 45.9286 | 471 | 0.1053 | 0.9567 | 0.9579 | 0.9567 | 0.9544 | 0.9836 | 0.6316 | 0.7692 | 0.8773 | 0.9255 | 0.9008 | 0.9762 | 0.9960 | 0.9860 | | 0.135 | 46.9048 | 481 | 0.0947 | 0.9589 | 0.9615 | 0.9589 | 0.9573 | 0.9683 | 0.6421 | 0.7722 | 0.8566 | 0.9608 | 0.9057 | 0.9879 | 0.9889 | 0.9884 | | 0.1319 | 47.9762 | 492 | 0.0995 | 0.9567 | 0.9601 | 0.9567 | 0.9548 | 0.9833 | 0.6211 | 0.7613 | 0.8478 | 0.9608 | 0.9007 | 0.9869 | 0.9879 | 0.9874 | | 0.1318 | 48.9524 | 502 | 0.0882 | 0.9604 | 0.9604 | 0.9604 | 0.9599 | 0.8875 | 0.7474 | 0.8114 | 0.8810 | 0.9294 | 0.9046 | 0.9879 | 0.9889 | 0.9884 | | 0.134 | 49.9286 | 512 | 0.0872 | 0.9589 | 0.9587 | 0.9589 | 0.9583 | 0.8765 | 0.7474 | 0.8068 | 0.8797 | 0.9176 | 0.8983 | 0.9869 | 0.9899 | 0.9884 | | 0.1233 | 50.9048 | 522 | 0.0858 | 0.9627 | 0.9651 | 0.9627 | 0.9618 | 0.9565 | 0.6947 | 0.8049 | 0.8606 | 0.9686 | 0.9114 | 0.9929 | 0.9869 | 0.9899 | | 0.1277 | 51.9762 | 533 | 0.0905 | 0.9612 | 0.9630 | 0.9612 | 0.9605 | 0.9178 | 0.7053 | 0.7976 | 0.8601 | 0.9647 | 0.9094 | 0.9939 | 0.9848 | 0.9893 | | 0.1301 | 52.9524 | 543 | 0.0870 | 0.9634 | 0.9645 | 0.9634 | 0.9628 | 0.92 | 0.7263 | 0.8118 | 0.875 | 0.9608 | 0.9159 | 0.9919 | 0.9869 | 0.9894 | | 0.1237 | 53.9286 | 553 | 0.0947 | 0.9619 | 0.9626 | 0.9619 | 0.9605 | 0.9420 | 0.6842 | 0.7927 | 0.8836 | 0.9529 | 0.9170 | 0.9849 | 0.9909 | 0.9879 | | 0.1244 | 54.9048 | 563 | 0.0936 | 0.9604 | 0.9611 | 0.9604 | 0.9590 | 0.9559 | 0.6842 | 0.7975 | 0.8856 | 0.9412 | 0.9125 | 0.981 | 0.9919 | 0.9864 | | 0.1247 | 55.9762 | 574 | 0.0946 | 0.9634 | 0.9648 | 0.9634 | 0.9629 | 0.9211 | 0.7368 | 0.8187 | 0.8723 | 0.9647 | 0.9162 | 0.9929 | 0.9848 | 0.9888 | | 0.1179 | 56.9524 | 584 | 0.0930 | 0.9656 | 0.9677 | 0.9656 | 0.9644 | 0.9701 | 0.6842 | 0.8025 | 0.8768 | 0.9765 | 0.9239 | 0.9909 | 0.9899 | 0.9904 | | 0.1249 | 57.9286 | 594 | 0.0906 | 0.9634 | 0.9647 | 0.9634 | 0.9623 | 0.9429 | 0.6947 | 0.8000 | 0.875 | 0.9608 | 0.9159 | 0.9899 | 0.9899 | 0.9899 | | 0.1258 | 58.9048 | 604 | 0.0866 | 0.9627 | 0.9628 | 
0.9627 | 0.9621 | 0.8875 | 0.7474 | 0.8114 | 0.8856 | 0.9412 | 0.9125 | 0.9899 | 0.9889 | 0.9894 | | 0.1168 | 59.9762 | 615 | 0.0886 | 0.9574 | 0.9586 | 0.9574 | 0.9574 | 0.8452 | 0.7474 | 0.7933 | 0.8602 | 0.9412 | 0.8989 | 0.9949 | 0.9818 | 0.9883 | | 0.1267 | 60.9524 | 625 | 0.0951 | 0.9619 | 0.9624 | 0.9619 | 0.9605 | 0.9420 | 0.6842 | 0.7927 | 0.8864 | 0.9490 | 0.9167 | 0.9840 | 0.9919 | 0.9879 | | 0.1211 | 61.9286 | 635 | 0.0914 | 0.9612 | 0.9622 | 0.9612 | 0.9597 | 0.9552 | 0.6737 | 0.7901 | 0.8804 | 0.9529 | 0.9153 | 0.9839 | 0.9909 | 0.9874 | | 0.1258 | 62.9048 | 645 | 0.0857 | 0.9649 | 0.9659 | 0.9649 | 0.9640 | 0.9444 | 0.7158 | 0.8144 | 0.8845 | 0.9608 | 0.9211 | 0.9889 | 0.9899 | 0.9894 | | 0.1165 | 63.9762 | 656 | 0.0831 | 0.9679 | 0.9681 | 0.9679 | 0.9675 | 0.9012 | 0.7684 | 0.8295 | 0.8971 | 0.9569 | 0.9260 | 0.9929 | 0.9899 | 0.9914 | | 0.1166 | 64.9524 | 666 | 0.0841 | 0.9634 | 0.9634 | 0.9634 | 0.9634 | 0.8065 | 0.7895 | 0.7979 | 0.9073 | 0.9216 | 0.9144 | 0.9929 | 0.9909 | 0.9919 | | 0.117 | 65.9286 | 676 | 0.0885 | 0.9627 | 0.9643 | 0.9627 | 0.9615 | 0.9420 | 0.6842 | 0.7927 | 0.8693 | 0.9647 | 0.9145 | 0.9909 | 0.9889 | 0.9899 | | 0.1173 | 66.9048 | 686 | 0.0879 | 0.9656 | 0.9657 | 0.9656 | 0.9646 | 0.9452 | 0.7263 | 0.8214 | 0.9064 | 0.9490 | 0.9272 | 0.9830 | 0.9929 | 0.9879 | | 0.1133 | 67.9762 | 697 | 0.0850 | 0.9664 | 0.9681 | 0.9664 | 0.9656 | 0.9444 | 0.7158 | 0.8144 | 0.8768 | 0.9765 | 0.9239 | 0.9939 | 0.9879 | 0.9909 | | 0.1186 | 68.9524 | 707 | 0.0872 | 0.9642 | 0.9657 | 0.9642 | 0.9634 | 0.9315 | 0.7158 | 0.8095 | 0.8728 | 0.9686 | 0.9182 | 0.9929 | 0.9869 | 0.9899 | | 0.1181 | 69.9286 | 717 | 0.0842 | 0.9694 | 0.9704 | 0.9694 | 0.9687 | 0.9333 | 0.7368 | 0.8235 | 0.8893 | 0.9765 | 0.9308 | 0.9949 | 0.9899 | 0.9924 | | 0.1062 | 70.9048 | 727 | 0.0856 | 0.9686 | 0.9693 | 0.9686 | 0.9680 | 0.9342 | 0.7474 | 0.8304 | 0.8945 | 0.9647 | 0.9283 | 0.9919 | 0.9909 | 0.9914 | | 0.1159 | 71.9762 | 738 | 0.0878 | 0.9656 | 0.9682 | 0.9656 | 0.9648 | 0.9565 | 0.6947 | 0.8049 | 0.8651 | 0.9804 | 0.9191 | 0.9959 | 0.9879 | 0.9919 | | 0.1162 | 72.9524 | 748 | 0.0851 | 0.9642 | 0.9644 | 0.9642 | 0.9643 | 0.8021 | 0.8105 | 0.8063 | 0.9141 | 0.9176 | 0.9159 | 0.9929 | 0.9909 | 0.9919 | | 0.1201 | 73.9286 | 758 | 0.0828 | 0.9694 | 0.9703 | 0.9694 | 0.9685 | 0.9452 | 0.7263 | 0.8214 | 0.8957 | 0.9765 | 0.9343 | 0.9919 | 0.9909 | 0.9914 | | 0.1145 | 74.9048 | 768 | 0.0865 | 0.9656 | 0.9670 | 0.9656 | 0.9642 | 0.9552 | 0.6737 | 0.7901 | 0.8826 | 0.9725 | 0.9254 | 0.9899 | 0.9919 | 0.9909 | | 0.1172 | 75.9762 | 779 | 0.0835 | 0.9694 | 0.9693 | 0.9694 | 0.9692 | 0.8539 | 0.8 | 0.8261 | 0.9129 | 0.9451 | 0.9287 | 0.9949 | 0.9919 | 0.9934 | | 0.1077 | 76.9524 | 789 | 0.0896 | 0.9679 | 0.9695 | 0.9679 | 0.9669 | 0.9571 | 0.7053 | 0.8121 | 0.8834 | 0.9804 | 0.9294 | 0.9929 | 0.9899 | 0.9914 | | 0.1093 | 77.9286 | 799 | 0.0808 | 0.9686 | 0.9696 | 0.9686 | 0.9681 | 0.9221 | 0.7474 | 0.8256 | 0.8889 | 0.9725 | 0.9288 | 0.9949 | 0.9889 | 0.9919 | | 0.1114 | 78.9048 | 809 | 0.0823 | 0.9642 | 0.9651 | 0.9642 | 0.9639 | 0.8889 | 0.7579 | 0.8182 | 0.8781 | 0.9608 | 0.9176 | 0.9949 | 0.9848 | 0.9898 | | 0.1144 | 79.9762 | 820 | 0.0867 | 0.9686 | 0.9705 | 0.9686 | 0.9675 | 0.9706 | 0.6947 | 0.8098 | 0.8838 | 0.9843 | 0.9314 | 0.9929 | 0.9909 | 0.9919 | | 0.1101 | 80.9524 | 830 | 0.0786 | 0.9679 | 0.9675 | 0.9679 | 0.9673 | 0.8889 | 0.7579 | 0.8182 | 0.9101 | 0.9529 | 0.9310 | 0.9899 | 0.9919 | 0.9909 | | 0.1053 | 81.9286 | 840 | 0.0874 | 0.9679 | 0.9697 | 0.9679 | 0.9669 | 0.9571 | 0.7053 | 
0.8121 | 0.8807 | 0.9843 | 0.9296 | 0.9939 | 0.9889 | 0.9914 | | 0.1019 | 82.9048 | 850 | 0.0842 | 0.9642 | 0.9660 | 0.9642 | 0.9640 | 0.9114 | 0.7579 | 0.8276 | 0.8702 | 0.9725 | 0.9185 | 0.9959 | 0.9818 | 0.9888 | | 0.1086 | 83.9762 | 861 | 0.0864 | 0.9694 | 0.9708 | 0.9694 | 0.9684 | 0.9577 | 0.7158 | 0.8193 | 0.8901 | 0.9843 | 0.9348 | 0.9929 | 0.9899 | 0.9914 | | 0.1108 | 84.9524 | 871 | 0.0808 | 0.9634 | 0.9646 | 0.9634 | 0.9630 | 0.8974 | 0.7368 | 0.8092 | 0.8723 | 0.9647 | 0.9162 | 0.9949 | 0.9848 | 0.9898 | | 0.1017 | 85.9286 | 881 | 0.0825 | 0.9686 | 0.9704 | 0.9686 | 0.9677 | 0.9577 | 0.7158 | 0.8193 | 0.8838 | 0.9843 | 0.9314 | 0.9939 | 0.9889 | 0.9914 | | 0.1108 | 86.9048 | 891 | 0.0800 | 0.9694 | 0.9700 | 0.9694 | 0.9688 | 0.9221 | 0.7474 | 0.8256 | 0.8953 | 0.9725 | 0.9323 | 0.9939 | 0.9899 | 0.9919 | | 0.1081 | 87.9762 | 902 | 0.0867 | 0.9679 | 0.9702 | 0.9679 | 0.9669 | 0.9710 | 0.7053 | 0.8171 | 0.8780 | 0.9882 | 0.9299 | 0.9939 | 0.9879 | 0.9909 | | 0.1076 | 88.9524 | 912 | 0.0789 | 0.9716 | 0.9720 | 0.9716 | 0.9708 | 0.9333 | 0.7368 | 0.8235 | 0.9055 | 0.9765 | 0.9396 | 0.9929 | 0.9929 | 0.9929 | | 0.1059 | 89.9286 | 922 | 0.0829 | 0.9694 | 0.9708 | 0.9694 | 0.9685 | 0.9583 | 0.7263 | 0.8263 | 0.8897 | 0.9804 | 0.9328 | 0.9929 | 0.9899 | 0.9914 | | 0.1048 | 90.9048 | 932 | 0.0771 | 0.9716 | 0.9718 | 0.9716 | 0.9709 | 0.9221 | 0.7474 | 0.8256 | 0.9084 | 0.9725 | 0.9394 | 0.9929 | 0.9929 | 0.9929 | | 0.098 | 91.9762 | 943 | 0.0796 | 0.9709 | 0.9715 | 0.9709 | 0.9701 | 0.9333 | 0.7368 | 0.8235 | 0.8986 | 0.9725 | 0.9341 | 0.9939 | 0.9929 | 0.9934 | | 0.1075 | 92.9524 | 953 | 0.0798 | 0.9679 | 0.9689 | 0.9679 | 0.9673 | 0.9211 | 0.7368 | 0.8187 | 0.8857 | 0.9725 | 0.9271 | 0.9949 | 0.9889 | 0.9919 | | 0.0937 | 93.9286 | 963 | 0.0777 | 0.9701 | 0.9709 | 0.9701 | 0.9694 | 0.9333 | 0.7368 | 0.8235 | 0.8957 | 0.9765 | 0.9343 | 0.9939 | 0.9909 | 0.9924 | | 0.1099 | 94.9048 | 973 | 0.0760 | 0.9679 | 0.9685 | 0.9679 | 0.9673 | 0.9103 | 0.7474 | 0.8208 | 0.8917 | 0.9686 | 0.9286 | 0.9939 | 0.9889 | 0.9914 | | 0.1043 | 95.9762 | 984 | 0.0763 | 0.9701 | 0.9710 | 0.9701 | 0.9695 | 0.9342 | 0.7474 | 0.8304 | 0.8957 | 0.9765 | 0.9343 | 0.9939 | 0.9899 | 0.9919 | | 0.1013 | 96.9524 | 994 | 0.0774 | 0.9686 | 0.9697 | 0.9686 | 0.9679 | 0.9333 | 0.7368 | 0.8235 | 0.8893 | 0.9765 | 0.9308 | 0.9939 | 0.9889 | 0.9914 | | 0.1097 | 97.5476 | 1000 | 0.0776 | 0.9686 | 0.9697 | 0.9686 | 0.9679 | 0.9333 | 0.7368 | 0.8235 | 0.8893 | 0.9765 | 0.9308 | 0.9939 | 0.9889 | 0.9914 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "t0", "t1", "t2" ]
alem-147/poisoned-baseline
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poisoned-baseline This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1656 - Accuracy: 0.5940 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.1079 | 1.0 | 130 | 1.0555 | 0.5188 | | 1.0487 | 2.0 | 260 | 2.1006 | 0.3910 | | 1.0065 | 3.0 | 390 | 4.1404 | 0.3008 | | 0.9758 | 4.0 | 520 | 2.0769 | 0.5489 | | 0.9558 | 5.0 | 650 | 1.4474 | 0.5113 | | 0.9116 | 6.0 | 780 | 1.6002 | 0.6466 | | 0.8887 | 7.0 | 910 | 2.6059 | 0.5789 | | 0.8736 | 8.0 | 1040 | 1.5122 | 0.4662 | | 0.8478 | 9.0 | 1170 | 1.7094 | 0.3910 | | 0.8845 | 10.0 | 1300 | 2.4116 | 0.5714 | | 0.8223 | 11.0 | 1430 | 2.1748 | 0.5263 | | 0.8169 | 12.0 | 1560 | 2.7392 | 0.5865 | | 0.8053 | 13.0 | 1690 | 1.9351 | 0.4286 | | 0.7562 | 14.0 | 1820 | 1.6459 | 0.5263 | | 0.7715 | 15.0 | 1950 | 0.9730 | 0.5714 | | 0.8031 | 16.0 | 2080 | 1.8118 | 0.5940 | | 0.797 | 17.0 | 2210 | 2.0251 | 0.5639 | | 0.7489 | 18.0 | 2340 | 1.6305 | 0.4662 | | 0.7661 | 19.0 | 2470 | 0.9456 | 0.6165 | | 0.6743 | 20.0 | 2600 | 1.1777 | 0.5789 | | 0.7162 | 21.0 | 2730 | 1.9899 | 0.5489 | | 0.6952 | 22.0 | 2860 | 2.1572 | 0.5188 | | 0.6998 | 23.0 | 2990 | 3.6954 | 0.4962 | | 0.7048 | 24.0 | 3120 | 1.4983 | 0.5489 | | 0.668 | 25.0 | 3250 | 1.4684 | 0.6090 | | 0.6539 | 26.0 | 3380 | 1.5490 | 0.6015 | | 0.6404 | 27.0 | 3510 | 1.0373 | 0.6090 | | 0.6337 | 28.0 | 3640 | 0.8090 | 0.6767 | | 0.6422 | 29.0 | 3770 | 2.0051 | 0.5263 | | 0.6487 | 30.0 | 3900 | 1.0576 | 0.5714 | | 0.5979 | 31.0 | 4030 | 2.6454 | 0.5414 | | 0.629 | 32.0 | 4160 | 1.6747 | 0.4962 | | 0.6262 | 33.0 | 4290 | 2.3917 | 0.5188 | | 0.6286 | 34.0 | 4420 | 1.1679 | 0.5113 | | 0.6048 | 35.0 | 4550 | 1.8266 | 0.6391 | | 0.603 | 36.0 | 4680 | 0.7241 | 0.6842 | | 0.5939 | 37.0 | 4810 | 3.3023 | 0.5338 | | 0.5756 | 38.0 | 4940 | 1.7101 | 0.6316 | | 0.558 | 39.0 | 5070 | 2.0204 | 0.3835 | | 0.5721 | 40.0 | 5200 | 1.5391 | 0.6316 | | 0.5838 | 41.0 | 5330 | 2.9189 | 0.4887 | | 0.563 | 42.0 | 5460 | 2.1778 | 0.6241 | | 0.5788 | 43.0 | 5590 | 3.7351 | 0.4135 | | 0.5361 | 44.0 | 5720 | 0.8738 | 0.6541 | | 0.5897 | 45.0 | 5850 | 1.7730 | 0.5865 | | 0.5299 | 46.0 | 5980 | 1.2070 | 0.6316 | | 0.5215 | 47.0 | 6110 | 1.1173 | 0.6316 | | 0.5385 | 48.0 | 6240 | 1.5332 | 0.6241 | | 0.5397 | 49.0 | 6370 | 2.5272 | 0.5714 | | 0.5233 | 50.0 | 6500 | 1.8423 | 0.6165 | | 0.5571 | 51.0 | 6630 | 1.4039 | 0.6391 | | 0.5377 | 52.0 | 6760 | 1.5045 | 0.5338 | | 0.4985 | 53.0 | 6890 | 3.8733 | 0.4962 | | 0.476 | 54.0 | 7020 | 1.3020 | 0.5113 | | 0.5115 | 55.0 | 7150 | 2.1457 | 0.5865 | | 0.5097 | 56.0 | 7280 | 3.9787 | 0.5414 | | 0.5148 | 57.0 | 7410 | 0.9982 | 0.6466 | | 0.4669 | 58.0 | 7540 | 8.1125 | 0.3308 | | 
0.5279 | 59.0 | 7670 | 5.7709 | 0.5263 | | 0.4673 | 60.0 | 7800 | 4.8501 | 0.5414 | | 0.4956 | 61.0 | 7930 | 1.4053 | 0.5940 | | 0.4959 | 62.0 | 8060 | 0.9127 | 0.5865 | | 0.4881 | 63.0 | 8190 | 5.8092 | 0.5038 | | 0.4928 | 64.0 | 8320 | 0.8439 | 0.6090 | | 0.4519 | 65.0 | 8450 | 1.4800 | 0.5489 | | 0.4833 | 66.0 | 8580 | 2.2109 | 0.5639 | | 0.4582 | 67.0 | 8710 | 1.2669 | 0.5940 | | 0.4616 | 68.0 | 8840 | 1.0607 | 0.6316 | | 0.4803 | 69.0 | 8970 | 2.4072 | 0.4436 | | 0.521 | 70.0 | 9100 | 6.1593 | 0.4812 | | 0.4558 | 71.0 | 9230 | 1.0987 | 0.6391 | | 0.4408 | 72.0 | 9360 | 1.2993 | 0.6466 | | 0.4813 | 73.0 | 9490 | 0.9748 | 0.5714 | | 0.4842 | 74.0 | 9620 | 4.6767 | 0.4812 | | 0.4388 | 75.0 | 9750 | 4.1866 | 0.4662 | | 0.4701 | 76.0 | 9880 | 2.3781 | 0.5564 | | 0.4382 | 77.0 | 10010 | 1.8863 | 0.6165 | | 0.4433 | 78.0 | 10140 | 3.5844 | 0.5789 | | 0.4586 | 79.0 | 10270 | 3.0186 | 0.5940 | | 0.4295 | 80.0 | 10400 | 3.8892 | 0.4662 | | 0.5058 | 81.0 | 10530 | 12.1759 | 0.4962 | | 0.435 | 82.0 | 10660 | 5.5538 | 0.6090 | | 0.4462 | 83.0 | 10790 | 2.1082 | 0.5865 | | 0.4602 | 84.0 | 10920 | 3.4000 | 0.6241 | | 0.4575 | 85.0 | 11050 | 9.2871 | 0.5038 | | 0.4461 | 86.0 | 11180 | 4.2447 | 0.5113 | | 0.5138 | 87.0 | 11310 | 4.6263 | 0.5789 | | 0.4321 | 88.0 | 11440 | 3.6092 | 0.4135 | | 0.4572 | 89.0 | 11570 | 1.6996 | 0.6391 | | 0.4329 | 90.0 | 11700 | 4.1432 | 0.5639 | | 0.4427 | 91.0 | 11830 | 2.6578 | 0.4286 | | 0.4536 | 92.0 | 11960 | 3.0237 | 0.5489 | | 0.4072 | 93.0 | 12090 | 1.6931 | 0.4586 | | 0.4225 | 94.0 | 12220 | 2.9963 | 0.4812 | | 0.4277 | 95.0 | 12350 | 1.2454 | 0.5865 | | 0.4753 | 96.0 | 12480 | 5.3971 | 0.5940 | | 0.4367 | 97.0 | 12610 | 3.2193 | 0.6015 | | 0.4375 | 98.0 | 12740 | 1.1401 | 0.6541 | | 0.4197 | 99.0 | 12870 | 1.6494 | 0.5714 | | 0.4517 | 100.0 | 13000 | 3.1656 | 0.5940 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-001
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-001 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0576 - Accuracy: 0.975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4212 | 4.0 | 10 | 0.8054 | 0.8429 | | 0.3561 | 8.0 | 20 | 0.2331 | 0.9518 | | 0.0373 | 12.0 | 30 | 0.0961 | 0.9804 | | 0.0042 | 16.0 | 40 | 0.0669 | 0.9768 | | 0.0013 | 20.0 | 50 | 0.0580 | 0.975 | | 0.0008 | 24.0 | 60 | 0.0576 | 0.975 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-002
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-002 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0274 - Accuracy: 0.9946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.386 | 4.0 | 10 | 0.8288 | 0.7839 | | 0.3797 | 8.0 | 20 | 0.2532 | 0.9321 | | 0.0532 | 12.0 | 30 | 0.0854 | 0.9786 | | 0.0063 | 16.0 | 40 | 0.0412 | 0.9911 | | 0.0018 | 20.0 | 50 | 0.0274 | 0.9946 | | 0.0011 | 24.0 | 60 | 0.0275 | 0.9929 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-003
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-003 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1086 - Accuracy: 0.9875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4912 | 4.0 | 10 | 1.1069 | 0.6786 | | 0.6967 | 8.0 | 20 | 0.5143 | 0.9232 | | 0.2492 | 12.0 | 30 | 0.2546 | 0.9768 | | 0.0819 | 16.0 | 40 | 0.1649 | 0.975 | | 0.0326 | 20.0 | 50 | 0.1086 | 0.9875 | | 0.0167 | 24.0 | 60 | 0.0943 | 0.9857 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-004
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-004 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.3106 - Accuracy: 0.9696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.283 | 8.0 | 10 | 0.8580 | 0.7679 | | 0.4696 | 16.0 | 20 | 0.4239 | 0.9446 | | 0.2003 | 24.0 | 30 | 0.3106 | 0.9696 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-005
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-005 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1168 - Accuracy: 0.9804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.5698 | 4.0 | 10 | 1.2316 | 0.6214 | | 0.7717 | 8.0 | 20 | 0.6293 | 0.8696 | | 0.2702 | 12.0 | 30 | 0.3256 | 0.9571 | | 0.0817 | 16.0 | 40 | 0.2082 | 0.9679 | | 0.0301 | 20.0 | 50 | 0.1580 | 0.9661 | | 0.0123 | 24.0 | 60 | 0.1168 | 0.9804 | | 0.0074 | 28.0 | 70 | 0.1133 | 0.9786 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-006
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-28Nov24-006 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0747 - Accuracy: 0.9821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6194 | 4.0 | 10 | 1.2322 | 0.6214 | | 0.7978 | 8.0 | 20 | 0.5919 | 0.925 | | 0.2576 | 12.0 | 30 | 0.2721 | 0.9679 | | 0.0723 | 16.0 | 40 | 0.1548 | 0.9786 | | 0.0202 | 20.0 | 50 | 0.1066 | 0.9768 | | 0.0067 | 24.0 | 60 | 0.0747 | 0.9821 | | 0.0035 | 28.0 | 70 | 0.0754 | 0.9768 | | 0.0027 | 32.0 | 80 | 0.0730 | 0.9786 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
nadahh/APTOS2019LLMMULTINumerical
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "0", "1", "2", "3", "4" ]
illusion002/food-image-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # food-image-classification This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.2130 - eval_accuracy: 0.8071 - eval_runtime: 222.1725 - eval_samples_per_second: 68.19 - eval_steps_per_second: 4.262 - epoch: 21.1193 - step: 20000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 500 ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Sisigoks/Food_Classifer_NoviceMK-I
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Food_Classifer_NoviceMK-I This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 4.2894 - eval_accuracy: 0.3546 - eval_runtime: 65.1366 - eval_samples_per_second: 33.898 - eval_steps_per_second: 2.119 - epoch: 10.0 - step: 1380 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "abiyuch", "acerola", "allium", "cherry tomato", "chervil", "chestnut", "chia", "chickpea", "chicory", "chicory leaves", "chicory roots", "chineese plum", "chinese bayberry", "allspice", "chinese broccoli", "chinese cabbage", "chinese chestnut", "chinese chives", "chinese cinnamon", "chinese mustard", "chinese water chestnut", "chives", "cinnamon", "citrus", "almond", "clementine", "climbing bean", "cloud ear fungus", "cloudberry", "cloves", "coconut", "coconut oil", "colorado pinyon", "common bean", "common beet", "alpine sweetvetch", "common buckwheat", "common cabbage", "common chokecherry", "common grape", "common hazelnut", "common mushroom", "common oregano", "common pea", "common persimmon", "common sage", "amaranth", "common salsify", "common thyme", "common verbena", "common walnut", "common wheat", "coriander", "corn", "corn grits", "corn oil", "corn salad", "american cranberry", "cornbread", "cornmint", "cottonseed", "cottonseed oil", "cowpea", "crisp bread", "crosne", "cubanelle pepper", "cucumber", "cucurbita", "american pokeweed", "cumin", "cupuaçu", "curry powder", "custard apple", "daikon radish", "dandelion", "date", "deerberry", "dill", "dock", "andean blackberry", "dough", "durian", "eddoe", "eggplant", "elderberry", "elliott's blueberry", "endive", "enokitake", "epazote", "european chestnut", "angelica", "european cranberry", "european plum", "evening primrose", "evergreen blackberry", "evergreen huckleberry", "feijoa", "fennel", "fenugreek", "fig", "fireweed", "anise", "flaxseed", "flour", "focaccia", "fox grape", "french plantain", "french toast", "fruit preserve", "fruit salad", "fruits", "garden cress", "acorn", "annual wild rice", "garden onion", "garden onion (var.)", "garden rhubarb", "garden tomato", "garden tomato (var.)", "garland chrysanthemum", "garlic", "gentiana lutea", "german camomile", "giant butterbur", "apple", "ginger", "ginkgo nuts", "ginseng", "globe artichoke", "goji", "gooseberry", "gram bean", "grape", "grapefruit", "grapeseed oil", "apricot", "grass pea", "green apple", "green bean", "green bell pepper", "green cabbage", "green grape", "green lentil", "green onion", "green plum", "green vegetables", "arctic blackberry", "green zucchini", "groundcherry", "guarana", "guava", "half-highbush blueberry", "hard wheat", "hawthorn", "hazelnut", "heart of palm", "hedge mustard", "arrowhead", "herbs and spices", "hickory nut", "highbush blueberry", "horned melon", "horseradish", "horseradish tree", "hyacinth bean", "hyssop", "iceberg lettuce", "italian oregano", "arrowroot", "italian sweet red pepper", "jackfruit", "jalapeno pepper", "japanese chestnut", "japanese persimmon", "japanese pumpkin", "japanese walnut", "java plum", "jerusalem artichoke", "jew's ear", "asian pear", "jicama", "jostaberry", "jujube", "juniperus communis", "jute", "kai-lan", "kale", "kiwi", "kohlrabi", "komatsuna", "asparagus", "kumquat", "lambsquarters", "lantern fruit", "leek", "lemon", "lemon balm", "lemon grass", "lemon thyme", "lemon verbena", "lentils", "asparagus fern", "lettuce", "lichee", "lima bean", "lime", "linden", "lingonberry", "loganberry", "longan", "loquat", "lotus", "asparagus racemosus", "lovage", "lowbush blueberry", "lupine", "macadamia nut", "macadamia nut (m. 
tetraphylla)", "maitake", "malabar plum", "malabar spinach", "malus (crab apple)", "mamey sapote", "acorn squash", "avocado", "mammee apple", "mandarin orange (clementine, tangerine)", "mango", "mate", "medlar", "mentha", "mexican groundcherry", "mexican oregano", "mikan", "millet", "avocado oil", "mixed nuts", "monk fruit", "morchella (morel)", "moth bean", "mountain yam", "mugwort", "mulberry", "multigrain bread", "mundu", "mung bean", "babassu palm", "muscadine grape", "mushrooms", "muskmelon", "mustard spinach", "nance", "nanking cherry", "napa cabbage", "naranjilla", "narrowleaf cattail", "natal plum", "bagel", "nectarine", "new zealand spinach", "nopal", "nutmeg", "nuts", "oat", "oat bread", "ohelo berry", "oil palm", "oil-seed camellia", "bamboo shoots", "okra", "olive", "olive oil", "onion-family vegetables", "opium poppy", "orange bell pepper", "orange mint", "oregon yampah", "oriental wheat", "ostrich fern", "banana", "other bread", "other bread product", "other cereal product", "other fruit product", "other vegetable product", "oval-leaf huckleberry", "oxheart cabbage", "oyster mushroom", "pak choy", "pan dulce", "barley", "papaya", "parsley", "parsnip", "partridge berry", "passion fruit", "pasta", "pea shoots", "peach", "peach (var.)", "peanut", "bayberry", "peanut oil", "pear", "pecan nut", "pepper", "pepper (c. baccatum)", "pepper (c. chinense)", "pepper (c. frutescens)", "pepper (c. pubescens)", "pepper (capsicum)", "pepper (spice)", "bean", "peppermint", "persian lime", "persimmon", "pigeon pea", "piki bread", "pili nut", "pine nut", "pineapple", "pineappple sage", "pistachio", "beech nut", "pita bread", "pitanga", "pitaya", "plains prickly pear", "plantain", "pomegranate", "pomes", "poppy", "pot marjoram", "potato", "adzuki bean", "bilberry", "potato bread", "prairie turnip", "prickly pear", "prunus (cherry, plum)", "pulses", "pummelo", "purple mangosteen", "purslane", "quince", "quinoa", "biscuit", "rabbiteye blueberry", "radish", "radish (var.)", "raisin bread", "rambutan", "rape", "rapeseed oil", "rapini", "red beetroot", "red bell pepper", "bitter gourd", "red clover", "red grape", "red huckleberry", "red onion", "red raspberry", "red rice", "redcurrant", "rice", "rice bread", "rocket salad", "black cabbage", "rocket salad (ssp.)", "romaine lettuce", "roman camomile", "root vegetables", "rose hip", "roselle", "rosemary", "rowal", "rowanberry", "rubus (blackberry, raspberry)", "black chokeberry", "rye", "rye bread", "sacred lotus", "safflower", "saffron", "sago palm", "salmonberry", "sapodilla", "saskatoon berry", "savoy cabbage", "black crowberry", "scarlet bean", "sea-buckthornberry", "semolina", "sesame", "sesame oil", "sesbania flower", "shallot", "shea tree", "shiitake", "silver linden", "black elderberry", "skunk currant", "small-leaf linden", "soft-necked garlic", "sorghum", "sorrel", "sour cherry", "sour orange", "sourdock", "sourdough", "soursop", "black huckleberry", "soybean oil", "sparkleberry", "spearmint", "spelt", "spinach", "squashberry", "star anise", "star fruit", "strawberry", "strawberry guava", "black mulberry", "sugar apple", "summer grape", "summer savory", "sunburst squash (pattypan squash)", "sunflower", "sunflower oil", "swamp cabbage", "swede", "sweet basil", "sweet bay", "black plum", "sweet cherry", "sweet marjoram", "sweet orange", "sweet potato", "sweet rowanberry", "swiss chard", "tamarind", "taro", "tarragon", "tartary buckwheat", "agave", "black radish", "tea leaf willow", "teff", "thistle", "thornless blackberry", "tinda", "tortilla", 
"towel gourd", "tree fern", "triticale", "tronchuda cabbage", "black raisin", "tropical highland blackberry", "turmeric", "turnip", "ucuhuba", "vaccinium (blueberry, cranberry, huckleberry)", "vanilla", "walnut", "wampee", "wasabi", "water spinach", "black raspberry", "watercress", "watermelon", "wax apple", "wax gourd", "welsh onion", "wheat", "wheat bread", "white bread", "white cabbage", "white lupine", "black salsify", "white mulberry", "white mustard", "white onion", "whole wheat bread", "wild carrot", "wild celery", "wild leek", "wild rice", "winged bean", "winter savory", "black walnut", "winter squash", "yali pear", "yam", "yardlong bean", "yau choy", "yautia", "yellow bell pepper", "yellow pond-lily", "yellow wax bean", "yellow zucchini", "black-eyed pea", "zwieback", "linseed oil", "blackberry", "blackcurrant", "bog bilberry", "borage", "alaska blueberry", "boysenberry", "brassicas", "brazil nut", "breadfruit", "breadnut tree seed", "breakfast cereal", "broad bean", "broccoli", "brussel sprouts", "buffalo currant", "alaska wild rhubarb", "bulgur", "burdock", "butternut", "butternut squash", "cabbage", "calabash", "canada blueberry", "cannellini bean", "canola oil", "cantaloupe melon", "albizia gummifera", "capers", "caraway", "cardamom", "cardoon", "carob", "carrot", "cascade huckleberry", "cashew nut", "cassava", "castanospermum australe", "alfalfa", "catjang pea", "cauliflower", "celeriac", "celery leaves", "celery stalks", "cereals and cereal products", "ceylon cinnamon", "chanterelle", "chayote", "cherimoya" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-29Nov24-001
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-29Nov24-001 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1522 - Accuracy: 0.9946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6512 | 4.0 | 10 | 1.3078 | 0.4643 | | 1.0709 | 8.0 | 20 | 0.7778 | 0.8429 | | 0.6487 | 12.0 | 30 | 0.4804 | 0.9089 | | 0.4013 | 16.0 | 40 | 0.3228 | 0.9554 | | 0.264 | 20.0 | 50 | 0.2140 | 0.9839 | | 0.1821 | 24.0 | 60 | 0.1688 | 0.9857 | | 0.1711 | 28.0 | 70 | 0.1522 | 0.9946 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-29Nov24-002
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-29Nov24-002 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0885 - Accuracy: 0.9964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.576 | 4.0 | 10 | 1.2765 | 0.5804 | | 1.0281 | 8.0 | 20 | 0.7736 | 0.8339 | | 0.594 | 12.0 | 30 | 0.4290 | 0.9196 | | 0.3375 | 16.0 | 40 | 0.2649 | 0.9661 | | 0.2094 | 20.0 | 50 | 0.1590 | 0.9857 | | 0.1342 | 24.0 | 60 | 0.1123 | 0.9929 | | 0.1041 | 28.0 | 70 | 0.0998 | 0.9929 | | 0.0832 | 32.0 | 80 | 0.0885 | 0.9964 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-29Nov24-003
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-29Nov24-003 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0773 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 45 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.679 | 4.0 | 10 | 1.3217 | 0.5661 | | 1.0782 | 8.0 | 20 | 0.7879 | 0.8054 | | 0.6196 | 12.0 | 30 | 0.4259 | 0.9232 | | 0.3697 | 16.0 | 40 | 0.2647 | 0.9554 | | 0.2093 | 20.0 | 50 | 0.1669 | 0.9804 | | 0.1362 | 24.0 | 60 | 0.1141 | 0.9875 | | 0.1049 | 28.0 | 70 | 0.0937 | 0.9929 | | 0.0809 | 32.0 | 80 | 0.0773 | 1.0 | | 0.0731 | 36.0 | 90 | 0.0713 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
flavioferlin/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.8388 - Accuracy: 0.2857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.6667 | 1 | 1.8176 | 0.0952 | | No log | 2.0 | 3 | 1.8961 | 0.1429 | | No log | 2.6667 | 4 | 1.9159 | 0.1429 | | No log | 4.0 | 6 | 1.8906 | 0.1905 | | No log | 4.6667 | 7 | 1.8720 | 0.1905 | | No log | 6.0 | 9 | 1.8452 | 0.1905 | | 1.8626 | 6.6667 | 10 | 1.8388 | 0.2857 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "anger", "surprise", "contempt", "happy", "neutral", "fear", "sad", "disgust" ]
nadahh/APTOS2019DetectionViaLLMM
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "0", "1" ]
hayatkhan/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6614 - Accuracy: 0.877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7472 | 0.992 | 62 | 2.6182 | 0.813 | | 1.8769 | 2.0 | 125 | 1.8375 | 0.87 | | 1.616 | 2.976 | 186 | 1.6614 | 0.877 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
nadahh/APTOS2019DetectionMultiLabelNumericalviaLVM
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "0", "1", "2", "3", "4" ]
ayatsuri/waste_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ayatsuri/waste_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1349 - Validation Loss: 0.2197 - Train Accuracy: 0.9571 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 13045, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.3075 | 0.7344 | 0.8988 | 0 | | 0.5513 | 0.4531 | 0.9141 | 1 | | 0.3134 | 0.3091 | 0.9448 | 2 | | 0.2058 | 0.2620 | 0.9356 | 3 | | 0.1349 | 0.2197 | 0.9571 | 4 | ### Framework versions - Transformers 4.46.3 - TensorFlow 2.17.1 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "cardboard", "compost", "glass", "metal", "paper", "plastic", "trash" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-001
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-001 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1078 - Accuracy: 0.9821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.5783 | 4.3636 | 10 | 1.2305 | 0.65 | | 0.6956 | 8.7273 | 20 | 0.5841 | 0.8946 | | 0.1959 | 13.2727 | 30 | 0.3096 | 0.9554 | | 0.0443 | 17.6364 | 40 | 0.1980 | 0.9589 | | 0.0116 | 22.1818 | 50 | 0.1434 | 0.9679 | | 0.005 | 26.5455 | 60 | 0.1169 | 0.9786 | | 0.0026 | 31.0909 | 70 | 0.1078 | 0.9821 | | 0.0022 | 35.4545 | 80 | 0.1072 | 0.9804 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-005
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-005 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1439 - Accuracy: 0.9875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.5645 | 4.3636 | 10 | 1.2976 | 0.525 | | 1.0147 | 8.7273 | 20 | 0.7672 | 0.8393 | | 0.5992 | 13.2727 | 30 | 0.4470 | 0.9446 | | 0.343 | 17.6364 | 40 | 0.2926 | 0.9589 | | 0.2065 | 22.1818 | 50 | 0.1980 | 0.9786 | | 0.1286 | 26.5455 | 60 | 0.1439 | 0.9875 | | 0.082 | 31.0909 | 70 | 0.1141 | 0.9857 | | 0.0649 | 35.4545 | 80 | 0.1032 | 0.9839 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-006
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-006 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1037 - Accuracy: 0.9982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.5574 | 4.3636 | 10 | 1.2378 | 0.6018 | | 0.9431 | 8.7273 | 20 | 0.6702 | 0.8893 | | 0.4807 | 13.2727 | 30 | 0.3803 | 0.95 | | 0.2673 | 17.6364 | 40 | 0.2246 | 0.9732 | | 0.1501 | 22.1818 | 50 | 0.1573 | 0.975 | | 0.0962 | 26.5455 | 60 | 0.1037 | 0.9982 | | 0.0629 | 31.0909 | 70 | 0.0842 | 0.9964 | | 0.049 | 35.4545 | 80 | 0.0726 | 0.9964 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
Pointer0111/medical_finetuned_vit
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "benign", "malignant", "normal" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-007
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-007 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1039 - Accuracy: 0.9804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.5474 | 3.9091 | 10 | 1.2455 | 0.5696 | | 0.8254 | 7.9091 | 20 | 0.6165 | 0.8768 | | 0.2878 | 11.9091 | 30 | 0.2867 | 0.9464 | | 0.0848 | 15.9091 | 40 | 0.1632 | 0.9732 | | 0.0235 | 19.9091 | 50 | 0.1039 | 0.9804 | | 0.0077 | 23.9091 | 60 | 0.0878 | 0.9714 | | 0.0038 | 27.9091 | 70 | 0.0884 | 0.9696 | | 0.0027 | 31.9091 | 80 | 0.0813 | 0.9714 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-008
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-008 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1019 - Accuracy: 0.9732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.538 | 3.9091 | 10 | 1.2297 | 0.5982 | | 0.7675 | 7.9091 | 20 | 0.5668 | 0.9321 | | 0.2383 | 11.9091 | 30 | 0.2734 | 0.95 | | 0.0619 | 15.9091 | 40 | 0.1570 | 0.9643 | | 0.0172 | 19.9091 | 50 | 0.1019 | 0.9732 | | 0.0058 | 23.9091 | 60 | 0.0944 | 0.9696 | | 0.0031 | 27.9091 | 70 | 0.0848 | 0.9714 | | 0.0024 | 31.9091 | 80 | 0.0830 | 0.9714 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-009
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-009 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0555 - Accuracy: 0.9893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.3451 | 3.9091 | 10 | 0.7372 | 0.8768 | | 0.3505 | 7.9091 | 20 | 0.2424 | 0.975 | | 0.0589 | 11.9091 | 30 | 0.0942 | 0.9821 | | 0.0102 | 15.9091 | 40 | 0.0555 | 0.9893 | | 0.0037 | 19.9091 | 50 | 0.0479 | 0.9875 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-010
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-010 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1486 - Accuracy: 0.9661 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.4393 | 3.9091 | 10 | 0.9144 | 0.7089 | | 0.2882 | 7.9091 | 20 | 0.3346 | 0.9268 | | 0.0231 | 11.9091 | 30 | 0.1486 | 0.9661 | | 0.0024 | 15.9091 | 40 | 0.1199 | 0.9607 | | 0.001 | 19.9091 | 50 | 0.1071 | 0.9643 | | 0.0007 | 23.9091 | 60 | 0.1058 | 0.9625 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-011
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-011 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0929 - Accuracy: 0.9911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.5512 | 3.9091 | 10 | 1.0872 | 0.6946 | | 0.6275 | 7.9091 | 20 | 0.4373 | 0.9143 | | 0.2596 | 11.9091 | 30 | 0.2554 | 0.9411 | | 0.1207 | 15.9091 | 40 | 0.1586 | 0.9679 | | 0.0673 | 19.9091 | 50 | 0.0929 | 0.9911 | | 0.0273 | 23.9091 | 60 | 0.0858 | 0.9893 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-012
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-012 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0763 - Accuracy: 0.9929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.4699 | 3.9091 | 10 | 0.9471 | 0.7821 | | 0.5263 | 7.9091 | 20 | 0.3796 | 0.9214 | | 0.1867 | 11.9091 | 30 | 0.2458 | 0.9357 | | 0.0908 | 15.9091 | 40 | 0.1267 | 0.9857 | | 0.0436 | 19.9091 | 50 | 0.0763 | 0.9929 | | 0.0256 | 23.9091 | 60 | 0.0681 | 0.9929 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-013
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-013 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1053 - Accuracy: 0.9982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.4584 | 3.9091 | 10 | 1.0501 | 0.6804 | | 0.5812 | 7.9091 | 20 | 0.4168 | 0.9339 | | 0.2353 | 11.9091 | 30 | 0.2736 | 0.9214 | | 0.1102 | 15.9091 | 40 | 0.1728 | 0.9821 | | 0.0623 | 19.9091 | 50 | 0.1053 | 0.9982 | | 0.0285 | 23.9091 | 60 | 0.1018 | 0.9964 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-014
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-014 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1032 - Accuracy: 0.9946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.4585 | 3.9091 | 10 | 1.0507 | 0.6786 | | 0.5887 | 7.9091 | 20 | 0.4741 | 0.9018 | | 0.2577 | 11.9091 | 30 | 0.2663 | 0.9357 | | 0.1095 | 15.9091 | 40 | 0.1619 | 0.9821 | | 0.0579 | 19.9091 | 50 | 0.1032 | 0.9946 | | 0.0271 | 23.9091 | 60 | 0.0949 | 0.9929 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-015
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-30Nov24-015 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1005 - Accuracy: 0.9964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.4584 | 3.9091 | 10 | 1.0525 | 0.675 | | 0.5841 | 7.9091 | 20 | 0.4200 | 0.9375 | | 0.2335 | 11.9091 | 30 | 0.2722 | 0.9339 | | 0.1077 | 15.9091 | 40 | 0.1627 | 0.9768 | | 0.0607 | 19.9091 | 50 | 0.1005 | 0.9964 | | 0.0273 | 23.9091 | 60 | 0.0968 | 0.9946 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
Sohaibsoussi/vit-beans_leaves_disease
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-beans_leaves_disease This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.1107 - Accuracy: 0.9766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1672 | 1.5385 | 100 | 0.1842 | 0.9474 | | 0.03 | 3.0769 | 200 | 0.0464 | 0.9925 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.0 - Datasets 2.17.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
alem-147/poison-distilled-2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # models This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: -1.0568 - Accuracy: 0.4219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0517 | 1.0 | 130 | -0.1478 | 0.4361 | | -0.2131 | 2.0 | 260 | -0.7916 | 0.4737 | | -0.6796 | 3.0 | 390 | -1.1818 | 0.3910 | | -1.0059 | 4.0 | 520 | -1.5061 | 0.4737 | | -1.4067 | 5.0 | 650 | -2.4894 | 0.3835 | | -1.6574 | 6.0 | 780 | -1.4229 | 0.4211 | | -1.8941 | 7.0 | 910 | -1.9573 | 0.3684 | | -2.2115 | 8.0 | 1040 | -1.1340 | 0.2857 | | -2.4642 | 9.0 | 1170 | -0.4890 | 0.3383 | | -2.5106 | 10.0 | 1300 | -3.0730 | 0.4586 | | -2.8623 | 11.0 | 1430 | -1.3475 | 0.4737 | | -2.8928 | 12.0 | 1560 | -3.6841 | 0.4286 | | -3.2526 | 13.0 | 1690 | -0.3445 | 0.3158 | | -3.3179 | 14.0 | 1820 | -4.3698 | 0.4511 | | -3.5768 | 15.0 | 1950 | -1.2273 | 0.3534 | | -3.4264 | 16.0 | 2080 | -0.9398 | 0.3383 | | -3.7022 | 17.0 | 2210 | -4.5067 | 0.3835 | | -3.7768 | 18.0 | 2340 | -1.5536 | 0.3308 | | -3.7878 | 19.0 | 2470 | -4.1282 | 0.4737 | | -4.0069 | 20.0 | 2600 | -1.5285 | 0.4361 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
amartyasaran/SwinCXR-3
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "covid-19", "non-covid", "normal" ]
NamLe12/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0421 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4416 | 1.0 | 130 | 0.0421 | 0.9774 | | 0.228 | 2.0 | 260 | 0.5107 | 0.8872 | | 0.2485 | 3.0 | 390 | 0.1091 | 0.9549 | | 0.2278 | 4.0 | 520 | 0.1148 | 0.9774 | | 0.3263 | 5.0 | 650 | 0.1082 | 0.9850 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.4
[ "angular_leaf_spot", "bean_rust", "healthy" ]
skyberg11/secret_weapon_1337
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "dodge ram pickup 3500 crew cab 2010", "cadillac cts-v sedan 2012", "audi s5 convertible 2012", "ram c-v cargo van minivan 2012", "smart fortwo convertible 2012", "audi v8 sedan 1994", "suzuki kizashi sedan 2012", "chevrolet express van 2007", "chrysler town and country minivan 2012", "rolls-royce phantom sedan 2012", "buick regal gs 2012", "aston martin v8 vantage convertible 2012", "rolls-royce phantom drophead coupe convertible 2012", "audi tt hatchback 2011", "chevrolet malibu hybrid sedan 2010", "honda odyssey minivan 2012", "bmw x6 suv 2012", "audi 100 wagon 1994", "volvo 240 sedan 1993", "plymouth neon coupe 1999", "ford focus sedan 2007", "ford gt coupe 2006", "mclaren mp4-12c coupe 2012", "dodge caliber wagon 2012", "acura tsx sedan 2012", "chrysler 300 srt-8 2010", "jeep wrangler suv 2012", "chevrolet monte carlo coupe 2007", "chevrolet hhr ss 2010", "dodge dakota crew cab 2010", "jeep compass suv 2012", "hyundai veracruz suv 2012", "buick verano sedan 2012", "isuzu ascender suv 2008", "hyundai accent sedan 2012", "audi tt rs coupe 2012", "geo metro convertible 1993", "buick enclave suv 2012", "mercedes-benz 300-class convertible 1993", "dodge caliber wagon 2007", "hyundai genesis sedan 2012", "jeep liberty suv 2012", "chevrolet traverse suv 2012", "bmw m6 convertible 2010", "bentley mulsanne sedan 2011", "jaguar xk xkr 2012", "dodge challenger srt8 2011", "chevrolet malibu sedan 2007", "hyundai sonata sedan 2012", "land rover lr2 suv 2012", "infiniti g coupe ipl 2012", "bugatti veyron 16.4 coupe 2009", "acura zdx hatchback 2012", "hyundai elantra touring hatchback 2012", "suzuki aerio sedan 2007", "ferrari 458 italia coupe 2012", "bmw z4 convertible 2012", "chevrolet corvette zr1 2012", "honda accord sedan 2012", "rolls-royce ghost sedan 2012", "audi s5 coupe 2012", "audi s4 sedan 2012", "lamborghini reventon coupe 2008", "chevrolet express cargo van 2007", "jeep grand cherokee suv 2012", "lincoln town car sedan 2011", "gmc terrain suv 2012", "toyota corolla sedan 2012", "toyota sequoia suv 2012", "mitsubishi lancer sedan 2012", "bmw activehybrid 5 sedan 2012", "acura tl type-s 2008", "land rover range rover suv 2012", "chevrolet corvette convertible 2012", "gmc yukon hybrid suv 2012", "bmw 3 series sedan 2012", "chevrolet tahoe hybrid suv 2012", "bugatti veyron 16.4 convertible 2009", "dodge durango suv 2012", "aston martin virage coupe 2012", "suzuki sx4 sedan 2012", "chrysler crossfire convertible 2008", "audi 100 sedan 1994", "audi tts coupe 2012", "bentley continental gt coupe 2007", "buick rainier suv 2007", "mazda tribute suv 2011", "bmw m3 coupe 2012", "bmw x5 suv 2007", "fiat 500 abarth 2012", "gmc savana van 2012", "chrysler pt cruiser convertible 2008", "fisker karma sedan 2012", "tesla model s sedan 2012", "bmw 3 series wagon 2012", "mercedes-benz sprinter van 2012", "hyundai azera sedan 2012", "chrysler aspen suv 2009", "acura tl sedan 2012", "audi rs 4 convertible 2008", "daewoo nubira wagon 2002", "mercedes-benz e-class sedan 2012", "dodge caravan minivan 1997", "bentley continental flying spur sedan 2007", "cadillac srx suv 2012", "maybach landaulet convertible 2012", "fiat 500 convertible 2012", "bentley arnage sedan 2009", "hyundai tucson suv 2012", "lamborghini aventador coupe 2012", "chevrolet silverado 1500 classic extended cab 2007", "ford f-450 super duty crew cab 2012", "hyundai veloster hatchback 2012", "dodge magnum wagon 2008", "gmc canyon extended cab 2012", "infiniti qx56 suv 2011", "lamborghini gallardo lp 570-4 superleggera 2012", "chrysler 
sebring convertible 2010", "chevrolet silverado 2500hd regular cab 2012", "chevrolet cobalt ss 2010", "hummer h2 sut crew cab 2009", "scion xd hatchback 2012", "spyker c8 coupe 2009", "ford e-series wagon van 2012", "toyota 4runner suv 2012", "porsche panamera sedan 2012", "nissan nv passenger van 2012", "audi s6 sedan 2011", "nissan 240sx coupe 1998", "toyota camry sedan 2012", "acura rl sedan 2012", "spyker c8 convertible 2009", "bmw m5 sedan 2010", "hyundai sonata hybrid sedan 2012", "mercedes-benz s-class sedan 2012", "hyundai santa fe suv 2012", "bmw 1 series convertible 2012", "ford fiesta sedan 2012", "dodge charger srt-8 2009", "aston martin virage convertible 2012", "ford freestar minivan 2007", "dodge dakota club cab 2007", "am general hummer suv 2000", "aston martin v8 vantage coupe 2012", "ford f-150 regular cab 2012", "volkswagen golf hatchback 1991", "volkswagen golf hatchback 2012", "ferrari 458 italia convertible 2012", "audi a5 coupe 2012", "volvo c30 hatchback 2012", "dodge journey suv 2012", "hummer h3t crew cab 2010", "chevrolet silverado 1500 extended cab 2012", "dodge ram pickup 3500 quad cab 2009", "dodge durango suv 2007", "ford edge suv 2012", "ford expedition el suv 2009", "ferrari ff coupe 2012", "honda odyssey minivan 2007", "hyundai elantra sedan 2007", "ford ranger supercab 2011", "nissan leaf hatchback 2012", "chevrolet silverado 1500 hybrid crew cab 2012", "volkswagen beetle hatchback 2012", "nissan juke hatchback 2012", "dodge sprinter cargo van 2009", "ford f-150 regular cab 2007", "honda accord coupe 2012", "ferrari california convertible 2012", "bmw 6 series convertible 2007", "audi s4 sedan 2007", "jeep patriot suv 2012", "chevrolet avalanche crew cab 2012", "chevrolet trailblazer ss 2009", "audi r8 coupe 2012", "eagle talon hatchback 1998", "bentley continental supersports conv. convertible 2012", "mercedes-benz sl-class coupe 2009", "volvo xc90 suv 2007", "mercedes-benz c-class sedan 2012", "bmw 1 series coupe 2012", "bmw x3 suv 2012", "dodge charger sedan 2012", "chevrolet silverado 1500 regular cab 2012", "acura integra type r 2001", "suzuki sx4 hatchback 2012", "ford mustang convertible 2007", "bentley continental gt coupe 2012", "chevrolet sonic sedan 2012", "lamborghini diablo coupe 2001", "cadillac escalade ext crew cab 2007", "chevrolet corvette ron fellows edition z06 2007", "chevrolet camaro convertible 2012", "mini cooper roadster convertible 2012", "gmc acadia suv 2012", "chevrolet impala sedan 2007" ]
griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-0-4-30Nov24-002
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-in21k-dungeon-geo-morphs-0-4-30Nov24-002 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.2581 - Accuracy: 0.9661 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.517 | 3.9091 | 10 | 1.3386 | 0.6768 | | 0.8959 | 7.9091 | 20 | 0.8879 | 0.9089 | | 0.4053 | 11.9091 | 30 | 0.5939 | 0.9375 | | 0.173 | 15.9091 | 40 | 0.4381 | 0.95 | | 0.0766 | 19.9091 | 50 | 0.3394 | 0.9589 | | 0.0395 | 23.9091 | 60 | 0.2854 | 0.9643 | | 0.0243 | 27.9091 | 70 | 0.2581 | 0.9661 | | 0.0186 | 31.9091 | 80 | 0.2486 | 0.9661 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-0-4-30Nov24-003
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-in21k-dungeon-geo-morphs-0-4-30Nov24-003 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1374 - Accuracy: 0.9589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.3682 | 3.9091 | 10 | 1.0376 | 0.825 | | 0.5281 | 7.9091 | 20 | 0.5309 | 0.9196 | | 0.1424 | 11.9091 | 30 | 0.2757 | 0.9375 | | 0.033 | 15.9091 | 40 | 0.1681 | 0.9482 | | 0.0093 | 19.9091 | 50 | 0.1374 | 0.9589 | | 0.0046 | 23.9091 | 60 | 0.1288 | 0.9589 | | 0.0034 | 27.9091 | 70 | 0.1221 | 0.9571 | | 0.003 | 31.9091 | 80 | 0.1208 | 0.9571 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
griffio/vit-large-patch16-224-in21k-dungeon-geo-morphs-0-4-30Nov24-004
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-in21k-dungeon-geo-morphs-0-4-30Nov24-004 This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.1275 - Accuracy: 0.9625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.3834 | 3.9091 | 10 | 1.1055 | 0.7929 | | 0.5606 | 7.9091 | 20 | 0.5141 | 0.9286 | | 0.13 | 11.9091 | 30 | 0.2629 | 0.9518 | | 0.0283 | 15.9091 | 40 | 0.1654 | 0.9464 | | 0.0082 | 19.9091 | 50 | 0.1352 | 0.9554 | | 0.0043 | 23.9091 | 60 | 0.1337 | 0.9589 | | 0.0033 | 27.9091 | 70 | 0.1257 | 0.9607 | | 0.0029 | 31.9091 | 80 | 0.1275 | 0.9625 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "four", "one", "three", "two", "zero" ]
alfiannajih/trash-classification
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "cardboard", "glass", "metal", "paper", "plastic", "trash" ]
Sanjara/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Smogy/SMOGY-Ai-images-detector
# AI-image-detector The purpose of this model is to classify images as AI-generated or real. ### Dataset This model was created by fine-tuning [Organika/sdxl-detector](https://huggingface.co/Organika/sdxl-detector) on a dataset of AI-generated and real images from Reddit, Kaggle, and public-domain art, each paired with a text description. The dataset was balanced to contain a similar number of real and generated images in each class (e.g. art, photos, ...). Art images from the public domain were paired with generated equivalents created from their text descriptions via style transfer (sdxl with ip-adapter) applied to the original piece. The final dataset consisted of more than 50k images. ### Testing The test set consisted of a 20% split of the base dataset plus out-of-domain images from specific popular (as of 2024) image generation models. Fine-tuning vastly improved performance over Organika/sdxl-detector during testing, especially on images created by newer models. Test split evaluation | Accuracy | Precision | Recall | F1 | |:-------------:|:---------------:|:--------:|:--------:| | 0.9818 | 0.9829 | 0.9810 | 0.9819 | Out-of-domain evaluation | Generative Model Family | Accuracy | |:-------------:|:---------------:| | DALL-E | 0.9076 | | FluxAi | 0.8333 | | Imagen | 0.7563 | | StableDiffusion | 0.8754 | ### License The data used to fine-tune this model was scraped from image-dedicated subreddits, some of which may be copyrighted. For this reason, this model should be considered appropriate for non-commercial use only.
[ "artificial", "human" ]
mnauf/eval-trueface-dinov2-large-5-epochs-nov_19-baseline
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval-trueface-dinov2-large-5-epochs-nov_19-baseline This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.9384 - eval_model_preparation_time: 0.0055 - eval_accuracy: 0.5680 - eval_runtime: 1478.4624 - eval_samples_per_second: 14.204 - eval_steps_per_second: 0.444 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
[ "fake", "real" ]
osmanh/vit-base-patch16-finetuned-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-finetuned-beans This model is a fine-tuned version of [osmanh/vit-base-patch16-finetuned-EuroSAT](https://huggingface.co/osmanh/vit-base-patch16-finetuned-EuroSAT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2010 - Accuracy: 0.9662 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3493 | 1.0 | 13 | 0.2805 | 0.9420 | | 0.2435 | 2.0 | 26 | 0.2215 | 0.9469 | | 0.1721 | 3.0 | 39 | 0.1618 | 0.9758 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Augusto777/swinv2-tiny-patch4-window8-256-DMAE-4e-3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-DMAE-4e-3 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7507 - Accuracy: 0.7391 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8571 | 3 | 1.3959 | 0.3043 | | No log | 1.7857 | 6 | 1.2662 | 0.3913 | | No log | 2.7143 | 9 | 1.1960 | 0.4783 | | 1.3226 | 3.9286 | 13 | 1.1950 | 0.4565 | | 1.3226 | 4.8571 | 16 | 1.1891 | 0.4783 | | 1.3226 | 5.7857 | 19 | 1.1898 | 0.4783 | | 1.1833 | 6.7143 | 22 | 1.1824 | 0.5435 | | 1.1833 | 7.9286 | 26 | 1.1618 | 0.5217 | | 1.1833 | 8.8571 | 29 | 1.1359 | 0.5652 | | 1.1384 | 9.7857 | 32 | 1.0974 | 0.5870 | | 1.1384 | 10.7143 | 35 | 1.0524 | 0.5870 | | 1.1384 | 11.9286 | 39 | 1.0083 | 0.6957 | | 1.0628 | 12.8571 | 42 | 0.9696 | 0.6739 | | 1.0628 | 13.7857 | 45 | 0.9369 | 0.6739 | | 1.0628 | 14.7143 | 48 | 0.8825 | 0.7174 | | 1.0069 | 15.9286 | 52 | 0.8396 | 0.6957 | | 1.0069 | 16.8571 | 55 | 0.8267 | 0.7174 | | 1.0069 | 17.7857 | 58 | 0.8275 | 0.7174 | | 0.9339 | 18.7143 | 61 | 0.8255 | 0.7174 | | 0.9339 | 19.9286 | 65 | 0.7899 | 0.7174 | | 0.9339 | 20.8571 | 68 | 0.7604 | 0.7174 | | 0.905 | 21.7857 | 71 | 0.7442 | 0.6957 | | 0.905 | 22.7143 | 74 | 0.7361 | 0.7391 | | 0.905 | 23.9286 | 78 | 0.7598 | 0.6957 | | 0.8465 | 24.8571 | 81 | 0.7650 | 0.7174 | | 0.8465 | 25.7857 | 84 | 0.7631 | 0.7391 | | 0.8465 | 26.7143 | 87 | 0.7561 | 0.7174 | | 0.8363 | 27.9286 | 91 | 0.7494 | 0.6957 | | 0.8363 | 28.8571 | 94 | 0.7539 | 0.7174 | | 0.8363 | 29.7857 | 97 | 0.7497 | 0.7174 | | 0.7751 | 30.7143 | 100 | 0.7477 | 0.7174 | | 0.7751 | 31.9286 | 104 | 0.7463 | 0.7609 | | 0.7751 | 32.8571 | 107 | 0.7507 | 0.7609 | | 0.7843 | 33.7857 | 110 | 0.7534 | 0.7391 | | 0.7843 | 34.7143 | 113 | 0.7542 | 0.7391 | | 0.7843 | 35.9286 | 117 | 0.7519 | 0.7391 | | 0.7435 | 36.8571 | 120 | 0.7507 | 0.7391 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
[ "avanzada", "leve", "moderada", "no dmae" ]
alem-147/poison-distill
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# poison-distill

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -113.4181
- Accuracy: 0.6917

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| -0.7533 | 1.0 | 130 | -7.9549 | 0.5263 |
| -8.2193 | 2.0 | 260 | -15.5418 | 0.4662 |
| -14.3197 | 3.0 | 390 | -32.2167 | 0.4737 |
| -18.5547 | 4.0 | 520 | -18.9202 | 0.5489 |
| -22.6905 | 5.0 | 650 | -55.1682 | 0.4361 |
| -27.5336 | 6.0 | 780 | -32.4679 | 0.3459 |
| -29.5975 | 7.0 | 910 | -48.1715 | 0.3985 |
| -34.1837 | 8.0 | 1040 | -67.7293 | 0.6165 |
| -37.6123 | 9.0 | 1170 | -52.1341 | 0.4662 |
| -40.7694 | 10.0 | 1300 | -49.0945 | 0.6767 |
| -43.3691 | 11.0 | 1430 | -37.0478 | 0.5489 |
| -47.6433 | 12.0 | 1560 | -73.0523 | 0.4511 |
| -51.0141 | 13.0 | 1690 | -110.8840 | 0.4812 |
| -54.6 | 14.0 | 1820 | -81.2219 | 0.3308 |
| -57.2133 | 15.0 | 1950 | -80.8684 | 0.5113 |
| -58.3442 | 16.0 | 2080 | -66.5341 | 0.4060 |
| -64.7089 | 17.0 | 2210 | -75.7059 | 0.5564 |
| -64.26 | 18.0 | 2340 | -77.7801 | 0.5263 |
| -67.8509 | 19.0 | 2470 | -61.1841 | 0.6316 |
| -71.9371 | 20.0 | 2600 | -118.1544 | 0.5038 |
| -75.9672 | 21.0 | 2730 | -179.2044 | 0.4812 |
| -78.0096 | 22.0 | 2860 | -129.4854 | 0.4436 |
| -80.3581 | 23.0 | 2990 | -100.0687 | 0.4286 |
| -84.623 | 24.0 | 3120 | -82.5292 | 0.3835 |
| -86.5363 | 25.0 | 3250 | -84.6636 | 0.4211 |
| -90.8566 | 26.0 | 3380 | -96.3337 | 0.5489 |
| -92.2054 | 27.0 | 3510 | -110.3293 | 0.4737 |
| -97.6982 | 28.0 | 3640 | -195.6973 | 0.4135 |
| -95.8944 | 29.0 | 3770 | -101.9933 | 0.3609 |
| -99.491 | 30.0 | 3900 | -99.8199 | 0.6541 |
| -103.0877 | 31.0 | 4030 | -94.2175 | 0.6767 |
| -102.7123 | 32.0 | 4160 | -98.6300 | 0.4887 |
| -105.2087 | 33.0 | 4290 | -152.7768 | 0.4962 |
| -105.3795 | 34.0 | 4420 | -198.8245 | 0.5263 |
| -108.9734 | 35.0 | 4550 | -105.7644 | 0.4286 |
| -111.1308 | 36.0 | 4680 | -121.4677 | 0.4962 |
| -115.0085 | 37.0 | 4810 | -75.3733 | 0.3083 |
| -114.714 | 38.0 | 4940 | -115.4598 | 0.6617 |
| -117.5734 | 39.0 | 5070 | -108.3964 | 0.4135 |
| -115.1971 | 40.0 | 5200 | -123.7679 | 0.3835 |
| -117.5617 | 41.0 | 5330 | -69.2224 | 0.2932 |
| -118.2803 | 42.0 | 5460 | -104.5906 | 0.6541 |
| -119.6297 | 43.0 | 5590 | -187.3416 | 0.5188 |
| -121.6325 | 44.0 | 5720 | -221.8878 | 0.5113 |
| -120.9663 | 45.0 | 5850 | -176.6644 | 0.3759 |
| -122.3583 | 46.0 | 5980 | -142.5218 | 0.4361 |
| -126.6614 | 47.0 | 6110 | -271.1018 | 0.4962 |
| -122.1615 | 48.0 | 6240 | -240.8323 | 0.3985 |
| -125.4207 | 49.0 | 6370 | -103.5760 | 0.6466 |
| -127.0661 | 50.0 | 6500 | -113.2718 | 0.6842 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
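For reference, the hyperparameters above map roughly onto the following `TrainingArguments` sketch. This is an assumption-based reconstruction only: the actual training script, model class, and the custom distillation objective that produces the negative loss values in the table are not shown in the card.

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is a placeholder,
# and the distillation loss itself is not represented by TrainingArguments.
training_args = TrainingArguments(
    output_dir="poison-distill",          # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                            # "mixed_precision_training: Native AMP"
)
```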
[ "angular_leaf_spot", "bean_rust", "healthy" ]
alem-147/poisoned-baseline2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# poisoned-baseline2

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.7330
- Accuracy: 0.6466

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1307 | 1.0 | 130 | 1.0262 | 0.4962 |
| 1.0927 | 2.0 | 260 | 1.7590 | 0.2632 |
| 1.0507 | 3.0 | 390 | 2.1327 | 0.5188 |
| 1.0081 | 4.0 | 520 | 1.3239 | 0.5714 |
| 0.9565 | 5.0 | 650 | 1.1102 | 0.5489 |
| 0.7963 | 6.0 | 780 | 1.3624 | 0.6917 |
| 0.6663 | 7.0 | 910 | 5.5153 | 0.5564 |
| 0.6336 | 8.0 | 1040 | 5.0001 | 0.5940 |
| 0.5852 | 9.0 | 1170 | 9.5447 | 0.5489 |
| 0.5467 | 10.0 | 1300 | 6.4452 | 0.5714 |
| 0.5318 | 11.0 | 1430 | 12.3394 | 0.5038 |
| 0.5177 | 12.0 | 1560 | 3.9932 | 0.5940 |
| 0.4431 | 13.0 | 1690 | 1.7703 | 0.6541 |
| 0.4164 | 14.0 | 1820 | 4.0616 | 0.5038 |
| 0.4392 | 15.0 | 1950 | 1.3017 | 0.7744 |
| 0.4356 | 16.0 | 2080 | 1.1080 | 0.7293 |
| 0.3853 | 17.0 | 2210 | 5.3221 | 0.5789 |
| 0.3911 | 18.0 | 2340 | 1.3064 | 0.7143 |
| 0.3493 | 19.0 | 2470 | 8.7354 | 0.5564 |
| 0.3017 | 20.0 | 2600 | 1.4359 | 0.6466 |
| 0.3532 | 21.0 | 2730 | 5.5559 | 0.6241 |
| 0.2868 | 22.0 | 2860 | 3.5486 | 0.5263 |
| 0.3125 | 23.0 | 2990 | 7.4942 | 0.6617 |
| 0.326 | 24.0 | 3120 | 3.8914 | 0.7143 |
| 0.2561 | 25.0 | 3250 | 2.8141 | 0.6692 |
| 0.2923 | 26.0 | 3380 | 6.7704 | 0.6090 |
| 0.2311 | 27.0 | 3510 | 1.8806 | 0.7293 |
| 0.2274 | 28.0 | 3640 | 2.3829 | 0.6316 |
| 0.2481 | 29.0 | 3770 | 3.2873 | 0.5940 |
| 0.2612 | 30.0 | 3900 | 1.7361 | 0.7368 |
| 0.2541 | 31.0 | 4030 | 6.3135 | 0.6241 |
| 0.1998 | 32.0 | 4160 | 4.0907 | 0.6842 |
| 0.2628 | 33.0 | 4290 | 4.6728 | 0.7068 |
| 0.2515 | 34.0 | 4420 | 3.1405 | 0.6617 |
| 0.2352 | 35.0 | 4550 | 3.2859 | 0.7519 |
| 0.242 | 36.0 | 4680 | 0.8856 | 0.7594 |
| 0.2095 | 37.0 | 4810 | 5.4219 | 0.6692 |
| 0.2173 | 38.0 | 4940 | 9.1599 | 0.6842 |
| 0.1865 | 39.0 | 5070 | 2.9133 | 0.7293 |
| 0.2539 | 40.0 | 5200 | 10.3407 | 0.6241 |
| 0.2512 | 41.0 | 5330 | 4.8001 | 0.7218 |
| 0.2304 | 42.0 | 5460 | 8.2643 | 0.6767 |
| 0.1761 | 43.0 | 5590 | 6.4020 | 0.6466 |
| 0.1947 | 44.0 | 5720 | 2.3283 | 0.7293 |
| 0.2011 | 45.0 | 5850 | 3.2135 | 0.6842 |
| 0.1789 | 46.0 | 5980 | 2.5271 | 0.7218 |
| 0.1217 | 47.0 | 6110 | 3.5338 | 0.7218 |
| 0.197 | 48.0 | 6240 | 2.8379 | 0.7669 |
| 0.1378 | 49.0 | 6370 | 6.7036 | 0.6767 |
| 0.1641 | 50.0 | 6500 | 5.3123 | 0.6692 |
| 0.171 | 51.0 | 6630 | 29.0727 | 0.5489 |
| 0.1694 | 52.0 | 6760 | 3.9145 | 0.7669 |
| 0.1694 | 53.0 | 6890 | 20.2058 | 0.6015 |
| 0.0983 | 54.0 | 7020 | 3.0154 | 0.7444 |
| 0.0983 | 55.0 | 7150 | 4.5036 | 0.6992 |
| 0.1116 | 56.0 | 7280 | 15.8594 | 0.5564 |
| 0.1467 | 57.0 | 7410 | 1.9574 | 0.7744 |
| 0.1161 | 58.0 | 7540 | 7.0993 | 0.5940 |
| 0.1424 | 59.0 | 7670 | 5.0006 | 0.7368 |
| 0.0921 | 60.0 | 7800 | 10.6072 | 0.6015 |
| 0.1014 | 61.0 | 7930 | 3.9741 | 0.7368 |
| 0.1456 | 62.0 | 8060 | 2.6188 | 0.7744 |
| 0.2115 | 63.0 | 8190 | 5.3006 | 0.6617 |
| 0.1167 | 64.0 | 8320 | 3.2966 | 0.6992 |
| 0.0746 | 65.0 | 8450 | 2.1400 | 0.7594 |
| 0.0694 | 66.0 | 8580 | 5.7985 | 0.6767 |
| 0.0515 | 67.0 | 8710 | 3.5244 | 0.6767 |
| 0.0513 | 68.0 | 8840 | 4.2358 | 0.6917 |
| 0.1511 | 69.0 | 8970 | 6.8578 | 0.6541 |
| 0.1871 | 70.0 | 9100 | 12.4745 | 0.6617 |
| 0.114 | 71.0 | 9230 | 2.7450 | 0.7594 |
| 0.0438 | 72.0 | 9360 | 5.2159 | 0.6842 |
| 0.054 | 73.0 | 9490 | 3.8337 | 0.7143 |
| 0.1645 | 74.0 | 9620 | 12.4765 | 0.5789 |
| 0.0655 | 75.0 | 9750 | 3.4949 | 0.7143 |
| 0.0676 | 76.0 | 9880 | 3.7470 | 0.7293 |
| 0.1427 | 77.0 | 10010 | 9.8213 | 0.6316 |
| 0.099 | 78.0 | 10140 | 14.3845 | 0.6015 |
| 0.0943 | 79.0 | 10270 | 3.3007 | 0.7895 |
| 0.0971 | 80.0 | 10400 | 4.5807 | 0.6917 |
| 0.1338 | 81.0 | 10530 | 7.8281 | 0.6692 |
| 0.0494 | 82.0 | 10660 | 10.0532 | 0.6617 |
| 0.0384 | 83.0 | 10790 | 3.4354 | 0.7820 |
| 0.0781 | 84.0 | 10920 | 7.8234 | 0.6316 |
| 0.1122 | 85.0 | 11050 | 5.1243 | 0.7068 |
| 0.0965 | 86.0 | 11180 | 7.5119 | 0.6617 |
| 0.1852 | 87.0 | 11310 | 11.2423 | 0.6015 |
| 0.0512 | 88.0 | 11440 | 2.3147 | 0.7744 |
| 0.0456 | 89.0 | 11570 | 2.9752 | 0.7744 |
| 0.0479 | 90.0 | 11700 | 17.1507 | 0.6241 |
| 0.04 | 91.0 | 11830 | 2.8366 | 0.7068 |
| 0.1437 | 92.0 | 11960 | 16.1989 | 0.5789 |
| 0.0256 | 93.0 | 12090 | 3.2687 | 0.6917 |
| 0.0178 | 94.0 | 12220 | 3.8819 | 0.7068 |
| 0.0356 | 95.0 | 12350 | 2.6739 | 0.6992 |
| 0.1282 | 96.0 | 12480 | 8.0099 | 0.6466 |
| 0.0544 | 97.0 | 12610 | 11.1235 | 0.6466 |
| 0.0502 | 98.0 | 12740 | 4.4413 | 0.6241 |
| 0.0398 | 99.0 | 12870 | 26.8311 | 0.5188 |
| 0.1161 | 100.0 | 13000 | 9.7330 | 0.6466 |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
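Assuming this card corresponds to the Hub repo `alem-147/poisoned-baseline2` named above, and that the repo also stores the matching image processor, a minimal inference sketch (not part of the original card) could look like:

```python
from transformers import pipeline

# Assumption: the checkpoint and its image processor are published at this repo ID.
classifier = pipeline("image-classification", model="alem-147/poisoned-baseline2")

# Labels follow the bean-leaf classes listed below: angular_leaf_spot, bean_rust, healthy.
print(classifier("path/to/leaf_image.jpg"))
```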
[ "angular_leaf_spot", "bean_rust", "healthy" ]
bombshelll/swin-brain-abnormality-location-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-brain-abnormality-location-classification

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7054
- Accuracy: 0.5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7132 | 0.9778 | 11 | 0.6919 | 0.5625 |
| 0.6954 | 1.9556 | 22 | 0.7046 | 0.5 |
| 0.6865 | 2.9333 | 33 | 0.7066 | 0.5062 |
| 0.6773 | 3.9111 | 44 | 0.7054 | 0.5 |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
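A minimal inference sketch, assuming this card corresponds to the Hub repo `bombshelll/swin-brain-abnormality-location-classification` named above (the input file name is hypothetical; this is not code from the original card):

```python
from transformers import pipeline

# Binary left/right localization of a brain abnormality, per the label list below.
classifier = pipeline(
    "image-classification",
    model="bombshelll/swin-brain-abnormality-location-classification",
)
print(classifier("path/to/brain_scan.png"))
```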
[ "left", "right" ]