| Column | Type | Length range |
|:-------------|:-------|:---------------|
| model_id | string | 7–105 chars |
| model_card | string | 1–130k chars |
| model_labels | list | 2–80k items |
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles10_seed2_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles10_seed2_q3_DA This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.1887 - Accuracy: 0.9435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1412 | 1.0 | 469 | 0.1887 | 0.9435 | | 0.0277 | 2.0 | 938 | 0.2072 | 0.9408 | | 0.072 | 3.0 | 1407 | 0.2000 | 0.9445 | | 0.0293 | 4.0 | 1876 | 0.1896 | 0.9525 | | 0.0276 | 5.0 | 2345 | 0.2007 | 0.9539 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cat", "dog" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles12_seed2_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles12_seed2_q3_DA This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.2620 - Accuracy: 0.9109 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1033 | 1.0 | 469 | 0.2620 | 0.9109 | | 0.0258 | 2.0 | 938 | 0.3435 | 0.9043 | | 0.0991 | 3.0 | 1407 | 0.2998 | 0.9173 | | 0.0486 | 4.0 | 1876 | 0.2879 | 0.9147 | | 0.0118 | 5.0 | 2345 | 0.3129 | 0.924 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cat", "dog" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles4_seed3_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles4_seed3_q3_DA This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.0626 - Accuracy: 0.9819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.039 | 1.0 | 469 | 0.1028 | 0.9696 | | 0.014 | 2.0 | 938 | 0.0752 | 0.9781 | | 0.0392 | 3.0 | 1407 | 0.0821 | 0.9784 | | 0.0526 | 4.0 | 1876 | 0.0626 | 0.9819 | | 0.0623 | 5.0 | 2345 | 0.0723 | 0.9816 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cat", "dog" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles7_seed3_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles7_seed3_q3_DA This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.1262 - Accuracy: 0.9672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0563 | 1.0 | 469 | 0.1262 | 0.9672 | | 0.0108 | 2.0 | 938 | 0.1464 | 0.9667 | | 0.0361 | 3.0 | 1407 | 0.1436 | 0.9677 | | 0.0313 | 4.0 | 1876 | 0.1284 | 0.9717 | | 0.0389 | 5.0 | 2345 | 0.1320 | 0.9701 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cat", "dog" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles10_seed3_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles10_seed3_q3_DA This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.2169 - Accuracy: 0.9344 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0424 | 1.0 | 469 | 0.2169 | 0.9344 | | 0.0932 | 2.0 | 938 | 0.2174 | 0.9408 | | 0.0517 | 3.0 | 1407 | 0.2282 | 0.9429 | | 0.0457 | 4.0 | 1876 | 0.2489 | 0.9405 | | 0.017 | 5.0 | 2345 | 0.2372 | 0.9469 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cat", "dog" ]
vananhle/vitbase-nmsc-2-classes
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "nmsc", "other" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles12_seed3_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles12_seed3_q3_DA This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.2699 - Accuracy: 0.9208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1309 | 1.0 | 469 | 0.2841 | 0.9173 | | 0.0147 | 2.0 | 938 | 0.3476 | 0.9133 | | 0.0244 | 3.0 | 1407 | 0.2699 | 0.9208 | | 0.1212 | 4.0 | 1876 | 0.2951 | 0.9248 | | 0.0073 | 5.0 | 2345 | 0.2934 | 0.9267 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cat", "dog" ]
shravankumar147/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0660 - Accuracy: 0.9778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2289 | 1.0 | 190 | 0.0940 | 0.9715 | | 0.1339 | 2.0 | 380 | 0.0756 | 0.9741 | | 0.1044 | 3.0 | 570 | 0.0660 | 0.9778 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.0 - Tokenizers 0.13.3
[ "annual crop", "forest", "herbaceous vegetation", "highway", "industrial", "pasture", "permanent crop", "residential", "river", "sea or lake" ]
hchcsuim/batch-size16_DFDC_opencv-1FPS_faces-expand0-aligned_unaugmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_DFDC_opencv-1FPS_faces-expand0-aligned_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0582 - Accuracy: 0.9775 - Precision: 0.9844 - Recall: 0.9889 - F1: 0.9866 - Roc Auc: 0.9958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.0809 | 1.0 | 18733 | 0.0582 | 0.9775 | 0.9844 | 0.9889 | 0.9866 | 0.9958 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "fake", "real" ]
kgyamfi/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5780 - Accuracy: 0.7503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.683 | 0.9990 | 245 | 0.6377 | 0.7175 | | 0.6309 | 1.9980 | 490 | 0.5958 | 0.7406 | | 0.6332 | 2.9969 | 735 | 0.5780 | 0.7503 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cross-intersection", "roundabout", "t-intersection", "y-intersection" ]
hchcsuim/batch-size16_DFDC_opencv-1FPS_faces-expand10-aligned_unaugmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_DFDC_opencv-1FPS_faces-expand10-aligned_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0559 - Accuracy: 0.9783 - Precision: 0.9829 - Recall: 0.9915 - F1: 0.9872 - Roc Auc: 0.9961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.0497 | 1.0000 | 18777 | 0.0559 | 0.9783 | 0.9829 | 0.9915 | 0.9872 | 0.9961 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "fake", "real" ]
hchcsuim/batch-size16_DFDC_opencv-1FPS_faces-expand40-aligned_unaugmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_DFDC_opencv-1FPS_faces-expand40-aligned_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0521 - Accuracy: 0.9798 - Precision: 0.9849 - Recall: 0.9912 - F1: 0.9880 - Roc Auc: 0.9966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.0861 | 1.0000 | 18506 | 0.0521 | 0.9798 | 0.9849 | 0.9912 | 0.9880 | 0.9966 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "fake", "real" ]
hchcsuim/batch-size16_DFDC_opencv-1FPS_faces-expand20-aligned_unaugmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_DFDC_opencv-1FPS_faces-expand20-aligned_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0532 - Accuracy: 0.9795 - Precision: 0.9849 - Recall: 0.9907 - F1: 0.9878 - Roc Auc: 0.9964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.0543 | 1.0000 | 18673 | 0.0532 | 0.9795 | 0.9849 | 0.9907 | 0.9878 | 0.9964 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "fake", "real" ]
hchcsuim/batch-size16_DFDC_opencv-1FPS_faces-expand30-aligned_unaugmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_DFDC_opencv-1FPS_faces-expand30-aligned_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0525 - Accuracy: 0.9797 - Precision: 0.9858 - Recall: 0.9902 - F1: 0.9880 - Roc Auc: 0.9966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.0563 | 1.0000 | 18613 | 0.0525 | 0.9797 | 0.9858 | 0.9902 | 0.9880 | 0.9966 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "fake", "real" ]
mateoluksenberg/dit-base-Classifier_CM05_V2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dit-base-Classifier_CM05_V2 This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8224 - Accuracy: 0.0 - Weighted f1: 0.0 - Micro f1: 0.0 - Macro f1: 0.0 - Weighted recall: 0.0 - Micro recall: 0.0 - Macro recall: 0.0 - Weighted precision: 0.0 - Micro precision: 0.0 - Macro precision: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 18 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:| | 0.236 | 1.0 | 1 | 0.3301 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.236 | 2.0 | 2 | 0.5344 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.236 | 3.0 | 3 | 1.4354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.236 | 4.0 | 4 | 2.9869 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.236 | 5.0 | 5 | 3.6242 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.236 | 6.0 | 6 | 3.4178 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.236 | 7.0 | 7 | 2.9061 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 8.0 | 8 | 2.3784 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 9.0 | 9 | 1.9251 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 10.0 | 10 | 1.5871 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 11.0 | 11 | 1.3348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 12.0 | 12 | 1.1845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 13.0 | 13 | 1.0686 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 14.0 | 14 | 0.9755 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.1227 | 15.0 | 15 | 0.9099 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0376 | 16.0 | 16 | 0.8643 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0376 | 17.0 | 17 | 0.8357 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.0376 | 18.0 | 18 | 0.8224 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "cm05", "facturas" ]
nprasad24/bean_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nprasad24/bean_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the <a href = "https://huggingface.co/datasets/AI-Lab-Makerere/beans">Beans</a> dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1964 - Validation Loss: 0.0917 - Train Accuracy: 0.9925 - Epoch: 4 ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations Can only be used on the beans dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 5170, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.7278 | 0.3480 | 0.9699 | 0 | | 0.3124 | 0.1376 | 0.9925 | 1 | | 0.2559 | 0.1105 | 0.9850 | 2 | | 0.1914 | 0.0796 | 1.0 | 3 | | 0.1964 | 0.0917 | 0.9925 | 4 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
vananhle/efficientnetbo-nmsc-2-classes
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "nmsc", "other" ]
mostafasmart/vit-base-patch16-224-2class_pterygium
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-2class_pterygium This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0044 - Train Accuracy: 0.9515 - Train Top-3-accuracy: 1.0 - Validation Loss: 0.1337 - Validation Accuracy: 0.9550 - Validation Top-3-accuracy: 1.0 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 366, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 0.4319 | 0.6998 | 1.0 | 0.2552 | 0.8013 | 1.0 | 0 | | 0.1258 | 0.8484 | 1.0 | 0.1345 | 0.8810 | 1.0 | 1 | | 0.0223 | 0.9030 | 1.0 | 0.1283 | 0.9190 | 1.0 | 2 | | 0.0079 | 0.9287 | 1.0 | 0.1303 | 0.9367 | 1.0 | 3 | | 0.0054 | 0.9429 | 1.0 | 0.1333 | 0.9479 | 1.0 | 4 | | 0.0044 | 0.9515 | 1.0 | 0.1337 | 0.9550 | 1.0 | 5 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "notptery", "pterygium" ]
mostafasmart/vit-base-patch16-224-2class_normal
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-2class_normal This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0023 - Train Accuracy: 0.9828 - Train Top-3-accuracy: 1.0 - Validation Loss: 0.1087 - Validation Accuracy: 0.9839 - Validation Top-3-accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 170, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 0.1803 | 0.8483 | 1.0 | 0.0796 | 0.9356 | 1.0 | 0 | | 0.0129 | 0.9548 | 1.0 | 0.0797 | 0.9663 | 1.0 | 1 | | 0.0042 | 0.9721 | 1.0 | 0.1078 | 0.9762 | 1.0 | 2 | | 0.0026 | 0.9791 | 1.0 | 0.1077 | 0.9813 | 1.0 | 3 | | 0.0023 | 0.9828 | 1.0 | 0.1087 | 0.9839 | 1.0 | 4 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "normaleyes", "notnormal" ]
habibi26/document-spoof-clip
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # document-spoof-clip This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0225 - Accuracy: 0.9857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 0.5305 | 0.8429 | | No log | 1.8947 | 9 | 0.1707 | 0.9286 | | 0.6268 | 2.9474 | 14 | 0.3507 | 0.8429 | | 0.6268 | 4.0 | 19 | 0.4707 | 0.8 | | 0.2881 | 4.8421 | 23 | 0.1337 | 0.9286 | | 0.2881 | 5.8947 | 28 | 0.1293 | 0.9286 | | 0.2349 | 6.9474 | 33 | 0.0565 | 0.9714 | | 0.2349 | 8.0 | 38 | 0.0676 | 0.9571 | | 0.0907 | 8.8421 | 42 | 0.3071 | 0.9 | | 0.0907 | 9.8947 | 47 | 0.1462 | 0.9714 | | 0.1203 | 10.9474 | 52 | 0.0761 | 0.9714 | | 0.1203 | 12.0 | 57 | 0.0808 | 0.9571 | | 0.0715 | 12.8421 | 61 | 0.0204 | 0.9857 | | 0.0715 | 13.8947 | 66 | 0.0210 | 0.9857 | | 0.031 | 14.9474 | 71 | 0.0274 | 0.9714 | | 0.031 | 16.0 | 76 | 0.0448 | 0.9857 | | 0.0655 | 16.8421 | 80 | 0.0225 | 0.9857 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
[ "attack", "real" ]
djbp/swin-tiny-patch4-window7-224-category-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-category-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1410 - Accuracy: 0.6445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.89 | 0.9948 | 96 | 1.5075 | 0.5441 | | 1.6059 | 2.0 | 193 | 1.3103 | 0.5983 | | 1.4844 | 2.9948 | 289 | 1.2595 | 0.6107 | | 1.4392 | 4.0 | 386 | 1.2414 | 0.6152 | | 1.3431 | 4.9948 | 482 | 1.1954 | 0.6285 | | 1.2897 | 6.0 | 579 | 1.1611 | 0.6384 | | 1.2222 | 6.9948 | 675 | 1.1575 | 0.6417 | | 1.212 | 8.0 | 772 | 1.1474 | 0.6421 | | 1.2087 | 8.9948 | 868 | 1.1410 | 0.6445 | | 1.1897 | 9.9482 | 960 | 1.1434 | 0.6432 | ### Framework versions - Transformers 4.41.2 - Pytorch 1.13.1+cu117 - Datasets 2.19.2 - Tokenizers 0.19.1
[ "automobiles", "beauty_and_wellness", "books_stationary", "business___professional_services", "education", "entertainment", "food___beverages", "fresh_products", "fuel", "grocery_general_store", "home_services", "home_utilities", "liquor_alcohol", "medical_health_care", "office_complex", "others_miscellaneous", "retail_outlets", "supermarket", "travels_transport___hospitality", "wholesaler_distributor_manufacturer_godown" ]
HardlyHumans/Facial-expression-detection
# Facial-Expression-Recognition This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the FER2013 and AffectNet datasets. It achieves the following results on the evaluation set: - Accuracy: 0.922 - Loss: 0.213 ### Model Description The vit-face-expression model is a Vision Transformer fine-tuned for the task of facial emotion recognition. It is trained on the FER2013 and AffectNet datasets, which consist of facial images categorized into eight different emotions: anger, contempt, sad, happy, neutral, disgust, fear, and surprise. ## Model Details The model has been fine-tuned using the following hyperparameters: | Hyperparameter | Value | |-------------------------|------------| | Train Batch Size | 32 | | Eval Batch Size | 64 | | Learning Rate | 2e-4 | | Gradient Accumulation | 2 | | LR Scheduler | Linear | | Warmup Ratio | 0.04 | | Num Epochs | 10 | ## How to Get Started with the Model Example usage ("face.jpg" is a placeholder path to a face image):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification, pipeline

# One-liner via the pipeline API:
pipe = pipeline("image-classification", model="HardlyHumans/Facial-expression-detection")
print(pipe("face.jpg"))  # placeholder image path

# Or step by step:
processor = AutoImageProcessor.from_pretrained("HardlyHumans/Facial-expression-detection")
model = AutoModelForImageClassification.from_pretrained("HardlyHumans/Facial-expression-detection")

image = Image.open("face.jpg")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
predicted_label = model.config.id2label[predicted_class_idx]
```
## Environmental Impact The net estimated CO2 emission, computed with the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute), is around 8.82 kg of CO2. - **Developed by:** Hardly Humans club, IIT Dharwad - **Model type:** Vision transformer - **License:** MIT - **Finetuned from model:** google/vit-base-patch16-224-in21k - **Hardware Type:** T4 - **Hours used:** 8+27 - **Cloud Provider:** Google Colaboratory - **Compute Region:** South Asia-1 - **Carbon Emitted:** 8.82 kg ### Model Architecture and Objective
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
LucasThil/swin-tiny-patch4-window7-224-finetuned-tiny-imagenet
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-tiny-imagenet This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6643 - Accuracy: 0.8234 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.8008 | 0.9996 | 703 | 0.9033 | 0.7728 | | 1.4699 | 1.9993 | 1406 | 0.7138 | 0.8114 | | 1.2742 | 2.9989 | 2109 | 0.6643 | 0.8234 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "n01443537", "n01629819", "n01641577", "n01644900", "n01698640", "n01742172", "n01768244", "n01770393", "n01774384", "n01774750", "n01784675", "n01882714", "n01910747", "n01917289", "n01944390", "n01950731", "n01983481", "n01984695", "n02002724", "n02056570", "n02058221", "n02074367", "n02094433", "n02099601", "n02099712", "n02106662", "n02113799", "n02123045", "n02123394", "n02124075", "n02125311", "n02129165", "n02132136", "n02165456", "n02226429", "n02231487", "n02233338", "n02236044", "n02268443", "n02279972", "n02281406", "n02321529", "n02364673", "n02395406", "n02403003", "n02410509", "n02415577", "n02423022", "n02437312", "n02480495", "n02481823", "n02486410", "n02504458", "n02509815", "n02666347", "n02669723", "n02699494", "n02769748", "n02788148", "n02791270", "n02793495", "n02795169", "n02802426", "n02808440", "n02814533", "n02814860", "n02815834", "n02823428", "n02837789", "n02841315", "n02843684", "n02883205", "n02892201", "n02909870", "n02917067", "n02927161", "n02948072", "n02950826", "n02963159", "n02977058", "n02988304", "n03014705", "n03026506", "n03042490", "n03085013", "n03089624", "n03100240", "n03126707", "n03160309", "n03179701", "n03201208", "n03255030", "n03355925", "n03373237", "n03388043", "n03393912", "n03400231", "n03404251", "n03424325", "n03444034", "n03447447", "n03544143", "n03584254", "n03599486", "n03617480", "n03637318", "n03649909", "n03662601", "n03670208", "n03706229", "n03733131", "n03763968", "n03770439", "n03796401", "n03814639", "n03837869", "n03838899", "n03854065", "n03891332", "n03902125", "n03930313", "n03937543", "n03970156", "n03977966", "n03980874", "n03983396", "n03992509", "n04008634", "n04023962", "n04070727", "n04074963", "n04099969", "n04118538", "n04133789", "n04146614", "n04149813", "n04179913", "n04251144", "n04254777", "n04259630", "n04265275", "n04275548", "n04285008", "n04311004", "n04328186", "n04356056", "n04366367", "n04371430", "n04376876", "n04398044", "n04399382", "n04417672", "n04456115", "n04465666", "n04486054", "n04487081", "n04501370", "n04507155", "n04532106", "n04532670", "n04540053", "n04560804", "n04562935", "n04596742", "n04598010", "n06596364", "n07056680", "n07583066", "n07614500", "n07615774", "n07646821", "n07647870", "n07657664", "n07695742", "n07711569", "n07715103", "n07720875", "n07749582", "n07753592", "n07768694", "n07871810", "n07873807", "n07875152", "n07920052", "n07975909", "n08496334", "n08620881", "n08742578", "n09193705", "n09246464", "n09256479", "n09332890", "n09428293", "n12267677", "n12520864", "n13001041", "n13652335", "n13652994", "n13719102", "n14991210" ]
hchcsuim/batch-size16_FFPP-raw_opencv-originalFPS_unaugmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # batch-size16_FFPP-raw_opencv-originalFPS_unaugmentation This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0100 - Accuracy: 0.9971 - Precision: 0.9969 - Recall: 0.9994 - F1: 0.9981 - Roc Auc: 0.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 0.0089 | 1.0000 | 36557 | 0.0100 | 0.9971 | 0.9969 | 0.9994 | 0.9981 | 0.9999 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "fake", "real" ]
mthandazo/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1802 - Accuracy: 0.9486 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3868 | 1.0 | 370 | 0.3201 | 0.9053 | | 0.2061 | 2.0 | 740 | 0.2514 | 0.9215 | | 0.158 | 3.0 | 1110 | 0.2354 | 0.9323 | | 0.1432 | 4.0 | 1480 | 0.2258 | 0.9310 | | 0.1339 | 5.0 | 1850 | 0.2255 | 0.9296 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
dmartincc/vedt-lg
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vedt-lg This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1817 - F1: 0.93 - Roc Auc: 0.95 - Accuracy: 0.92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|:--------:| | 0.5369 | 1.0 | 122 | 0.5339 | 0.53 | 0.67 | 0.41 | | 0.3995 | 2.0 | 245 | 0.3591 | 0.8 | 0.84 | 0.73 | | 0.2357 | 3.0 | 367 | 0.2492 | 0.89 | 0.92 | 0.88 | | 0.1409 | 4.0 | 490 | 0.2015 | 0.91 | 0.93 | 0.9 | | 0.1137 | 4.98 | 610 | 0.1817 | 0.93 | 0.95 | 0.92 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.1
[ "depression_low", "depression_medium", "depression_high" ]
Qiliang/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0442 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.059 | 1.5385 | 100 | 0.0442 | 0.9925 | | 0.0359 | 3.0769 | 200 | 0.0564 | 0.9850 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles2_seed3_q3_dropout_v2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs5_batch32_lr5e-05_size224_tiles2_seed3_q3_dropout_v2

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0282
- Accuracy: 0.9909

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0327 | 1.0 | 469 | 0.0282 | 0.9909 |
| 0.0059 | 2.0 | 938 | 0.0283 | 0.9925 |
| 0.0013 | 3.0 | 1407 | 0.0678 | 0.9861 |
| 0.0009 | 4.0 | 1876 | 0.0482 | 0.9899 |
| 0.0008 | 5.0 | 2345 | 0.0443 | 0.9915 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
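### Example: dropout override

The "dropout" suffix in the run name suggests a dropout ablation; the card does not give the value, so the 0.1 below is an assumed example of how such an override is passed when loading the base checkpoint.

```python
from transformers import ViTForImageClassification

# Sketch of a dropout ablation; 0.1 is an assumed value, not from the card.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=2,                      # cat vs dog
    hidden_dropout_prob=0.1,           # assumed override
    attention_probs_dropout_prob=0.1,  # assumed override
)
```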
[ "cat", "dog" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles3_seed3_q3_dropout_v2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs5_batch32_lr5e-05_size224_tiles3_seed3_q3_dropout_v2

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0479
- Accuracy: 0.9872

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0297 | 1.0 | 469 | 0.0479 | 0.9872 |
| 0.0321 | 2.0 | 938 | 0.0577 | 0.9848 |
| 0.0026 | 3.0 | 1407 | 0.0619 | 0.9867 |
| 0.0009 | 4.0 | 1876 | 0.0685 | 0.9864 |
| 0.0009 | 5.0 | 2345 | 0.0736 | 0.9853 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "cat", "dog" ]
mohamedsaeed823/ARSL_letters_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ARSL_letters_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3695
- Accuracy: 0.7804

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0992 | 1.0 | 35 | 2.9555 | 0.5036 |
| 2.5809 | 2.0 | 70 | 2.5300 | 0.7054 |
| 2.357 | 3.0 | 105 | 2.3695 | 0.7804 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
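### Example: reproducing the training arguments

A sketch of `TrainingArguments` mirroring the hyperparameters listed above; with a per-device batch of 16 and 4 accumulation steps, the effective train batch size is 16 × 4 = 64, matching the card. The `output_dir` is an assumed path.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ARSL_letters_model",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,    # 16 * 4 = effective batch of 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```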
[ "ain", "aleff", "haa", "jeem", "kaaf", "khaa", "laam", "meem", "nun", "ra", "saad", "seen", "bb", "sheen", "ta", "taa", "thaa", "thal", "waw", "ya", "zay", "dal", "dha", "dhad", "fa", "gaaf", "ghain", "ha" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles2_seed3_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs5_batch32_lr5e-05_size224_tiles2_seed3_q3_DA

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0265
- Accuracy: 0.9915

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0829 | 1.0 | 469 | 0.0389 | 0.9867 |
| 0.1276 | 2.0 | 938 | 0.0277 | 0.9907 |
| 0.048 | 3.0 | 1407 | 0.0272 | 0.9907 |
| 0.0332 | 4.0 | 1876 | 0.0281 | 0.9915 |
| 0.0733 | 5.0 | 2345 | 0.0265 | 0.9915 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "cat", "dog" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles3_seed3_q3_DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs5_batch32_lr5e-05_size224_tiles3_seed3_q3_DA

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0311
- Accuracy: 0.9904

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0448 | 1.0 | 469 | 0.0406 | 0.9859 |
| 0.1108 | 2.0 | 938 | 0.0393 | 0.9869 |
| 0.1152 | 3.0 | 1407 | 0.0360 | 0.988 |
| 0.0174 | 4.0 | 1876 | 0.0311 | 0.9904 |
| 0.0873 | 5.0 | 2345 | 0.0333 | 0.9899 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "cat", "dog" ]
ahmedesmail16/Psoriasis-500-100aug-224-swinv2-base-patch4-window12-192-22k
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Psoriasis-500-100aug-224-swinv2-base-patch4-window12-192-22k

This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-base-patch4-window12-192-22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1589
- Accuracy: 0.8201

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5248 | 0.9973 | 92 | 0.7503 | 0.7694 |
| 0.2461 | 1.9946 | 184 | 0.8202 | 0.7764 |
| 0.1164 | 2.9919 | 276 | 0.8260 | 0.8052 |
| 0.0656 | 4.0 | 369 | 1.0366 | 0.7860 |
| 0.0525 | 4.9973 | 461 | 1.0025 | 0.8148 |
| 0.0223 | 5.9946 | 553 | 1.1363 | 0.7965 |
| 0.0022 | 6.9919 | 645 | 1.1911 | 0.8061 |
| 0.009 | 8.0 | 738 | 1.2139 | 0.7965 |
| 0.0073 | 8.9973 | 830 | 1.2066 | 0.8166 |
| 0.0014 | 9.9729 | 920 | 1.1589 | 0.8201 |

# Classification Report

| Class | Precision (%) | Recall (%) | F1-Score (%) | Support |
|---------------------|---------------|------------|--------------|---------|
| Abnormal | 68 | 67 | 67 | 108 |
| Erythrodermic | 99 | 75 | 85 | 100 |
| Guttate | 94 | 84 | 89 | 114 |
| Inverse | 88 | 93 | 90 | 108 |
| Nail | 88 | 86 | 87 | 99 |
| Normal | 84 | 87 | 85 | 82 |
| Not Define | 98 | 99 | 98 | 92 |
| Palm Soles | 80 | 80 | 80 | 102 |
| Plaque | 73 | 92 | 81 | 84 |
| Psoriatic Arthritis | 88 | 75 | 81 | 104 |
| Pustular | 76 | 86 | 80 | 112 |
| Scalp | 86 | 94 | 90 | 80 |
| **Accuracy** | | | **84** | 1185 |
| **Macro Avg** | **85** | **85** | **84** | 1185 |
| **Weighted Avg** | **85** | **84** | **84** | 1185 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
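### Example: generating the classification report

A sketch of how a per-class report like the one above can be produced from `Trainer` predictions; `trainer` and `test_ds` are assumed to exist from the fine-tuning run.

```python
import numpy as np
from sklearn.metrics import classification_report

# `trainer` and `test_ds` are assumed from the fine-tuning run.
preds = trainer.predict(test_ds)
y_pred = np.argmax(preds.predictions, axis=-1)
print(classification_report(
    preds.label_ids, y_pred,
    target_names=[trainer.model.config.id2label[i] for i in range(12)],
))
```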
[ "abnormal", "erythrodermic", "guttate", "inverse", "nail", "normal", "not define", "palm soles", "plaque", "psoriatic arthritis", "pustular", "scalp" ]
mohamedsaeed823/ARSL_letters_model-7epochs
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ARSL_letters_model-7epochs

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8704
- Accuracy: 0.8821

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2553 | 1.0 | 35 | 2.2824 | 0.7679 |
| 2.1368 | 2.0 | 70 | 2.1504 | 0.8393 |
| 2.0462 | 3.0 | 105 | 2.0528 | 0.8464 |
| 1.9789 | 4.0 | 140 | 1.9739 | 0.8839 |
| 1.915 | 5.0 | 175 | 1.9463 | 0.8375 |
| 1.8912 | 6.0 | 210 | 1.9037 | 0.85 |
| 1.8794 | 7.0 | 245 | 1.8704 | 0.8821 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "ain", "aleff", "haa", "jeem", "kaaf", "khaa", "laam", "meem", "nun", "ra", "saad", "seen", "bb", "sheen", "ta", "taa", "thaa", "thal", "waw", "ya", "zay", "dal", "dha", "dhad", "fa", "gaaf", "ghain", "ha" ]
ahmedesmail16/Psoriasis-500-100aug-224-swin-large
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Psoriasis-500-100aug-224-swin-large

This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7503
- Accuracy: 0.8454

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4942 | 0.9973 | 92 | 0.6791 | 0.7825 |
| 0.2458 | 1.9946 | 184 | 0.6565 | 0.8087 |
| 0.0935 | 2.9919 | 276 | 0.6838 | 0.8140 |
| 0.056 | 4.0 | 369 | 0.8758 | 0.7913 |
| 0.0267 | 4.9973 | 461 | 0.7926 | 0.8245 |
| 0.0074 | 5.9946 | 553 | 0.7328 | 0.8437 |
| 0.0056 | 6.9919 | 645 | 0.7332 | 0.8480 |
| 0.0019 | 8.0 | 738 | 0.7667 | 0.8524 |
| 0.0013 | 8.9973 | 830 | 0.7548 | 0.8437 |
| 0.0006 | 9.9729 | 920 | 0.7503 | 0.8454 |

# Classification Report

| Class | Precision (%) | Recall (%) | F1-Score (%) | Support |
|---------------------|---------------|------------|--------------|---------|
| Abnormal | 68 | 81 | 74 | 108 |
| Erythrodermic | 94 | 76 | 84 | 100 |
| Guttate | 92 | 87 | 89 | 114 |
| Inverse | 92 | 93 | 92 | 108 |
| Nail | 86 | 84 | 85 | 99 |
| Normal | 85 | 87 | 86 | 82 |
| Not Define | 99 | 99 | 99 | 92 |
| Palm Soles | 79 | 80 | 80 | 102 |
| Plaque | 88 | 75 | 81 | 84 |
| Psoriatic Arthritis | 83 | 82 | 83 | 104 |
| Pustular | 77 | 84 | 80 | 112 |
| Scalp | 88 | 94 | 91 | 80 |
| **Accuracy** | | | **85** | 1185 |
| **Macro Avg** | **86** | **85** | **85** | 1185 |
| **Weighted Avg** | **86** | **85** | **85** | 1185 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
[ "abnormal", "erythrodermic", "guttate", "inverse", "nail", "normal", "not define", "palm soles", "plaque", "psoriatic arthritis", "pustular", "scalp" ]
ahmedesmail16/Psoriasis-500-100aug-224-beit-large
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Psoriasis-500-100aug-224-beit-large

This model is a fine-tuned version of [microsoft/beit-large-patch16-224-pt22k](https://huggingface.co/microsoft/beit-large-patch16-224-pt22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1823
- Accuracy: 0.7991

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.8236 | 0.9973 | 92 | 1.1536 | 0.6358 |
| 0.4282 | 1.9946 | 184 | 0.8848 | 0.7389 |
| 0.2305 | 2.9919 | 276 | 0.9811 | 0.7258 |
| 0.1206 | 4.0 | 369 | 0.8858 | 0.7808 |
| 0.1107 | 4.9973 | 461 | 1.1129 | 0.7397 |
| 0.0319 | 5.9946 | 553 | 1.1625 | 0.7703 |
| 0.0073 | 6.9919 | 645 | 1.1938 | 0.7895 |
| 0.0078 | 8.0 | 738 | 1.3031 | 0.7790 |
| 0.0013 | 8.9973 | 830 | 1.2117 | 0.7974 |
| 0.002 | 9.9729 | 920 | 1.1823 | 0.7991 |

# Classification Report

| Class | Precision (%) | Recall (%) | F1-Score (%) | Support |
|---------------------|---------------|------------|--------------|---------|
| Abnormal | 66 | 62 | 64 | 108 |
| Erythrodermic | 96 | 76 | 85 | 100 |
| Guttate | 95 | 83 | 89 | 114 |
| Inverse | 83 | 91 | 87 | 108 |
| Nail | 81 | 84 | 83 | 99 |
| Normal | 81 | 79 | 80 | 82 |
| Not Define | 98 | 95 | 96 | 92 |
| Palm Soles | 82 | 88 | 85 | 102 |
| Plaque | 70 | 88 | 78 | 84 |
| Psoriatic Arthritis | 78 | 74 | 76 | 104 |
| Pustular | 71 | 76 | 74 | 112 |
| Scalp | 84 | 86 | 85 | 80 |
| **Accuracy** | | | **82** | 1185 |
| **Macro Avg** | **82** | **82** | **82** | 1185 |
| **Weighted Avg** | **82** | **82** | **82** | 1185 |

---

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
[ "abnormal", "erythrodermic", "guttate", "inverse", "nail", "normal", "not define", "palm soles", "plaque", "psoriatic arthritis", "pustular", "scalp" ]
Abhiram4/VitDisease
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# VitDisease

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3736
- Accuracy: 0.9976

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7869 | 1.0 | 192 | 0.7516 | 0.9927 |
| 0.3828 | 2.0 | 384 | 0.3736 | 0.9976 |
| 0.299 | 3.0 | 576 | 0.3079 | 0.9976 |

### Framework versions

- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
[ "apple___apple_scab", "apple___black_rot", "apple___cedar_apple_rust", "apple___healthy", "blueberry___healthy", "cherry_(including_sour)___powdery_mildew", "cherry_(including_sour)___healthy", "corn_(maize)___cercospora_leaf_spot gray_leaf_spot", "corn_(maize)___common_rust_", "corn_(maize)___northern_leaf_blight", "corn_(maize)___healthy", "grape___black_rot", "grape___esca_(black_measles)", "grape___leaf_blight_(isariopsis_leaf_spot)", "grape___healthy", "orange___haunglongbing_(citrus_greening)", "peach___bacterial_spot", "peach___healthy", "pepper,_bell___bacterial_spot", "pepper,_bell___healthy", "potato___early_blight", "potato___late_blight", "potato___healthy", "raspberry___healthy", "soybean___healthy", "squash___powdery_mildew", "strawberry___leaf_scorch", "strawberry___healthy", "tomato___bacterial_spot", "tomato___early_blight", "tomato___late_blight", "tomato___leaf_mold", "tomato___septoria_leaf_spot", "tomato___spider_mites two-spotted_spider_mite", "tomato___target_spot", "tomato___tomato_yellow_leaf_curl_virus", "tomato___tomato_mosaic_virus", "tomato___healthy" ]
ahmedesmail16/Psoriasis-500-100aug-224-swinv2-large
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Psoriasis-500-100aug-224-swinv2-large

This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7383
- Accuracy: 0.8227

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.4126 | 0.9840 | 46 | 0.9408 | 0.6882 |
| 0.3672 | 1.9893 | 93 | 0.6431 | 0.7703 |
| 0.133 | 2.9947 | 140 | 0.5938 | 0.7921 |
| 0.0624 | 4.0 | 187 | 0.6128 | 0.8035 |
| 0.0473 | 4.9840 | 233 | 0.6654 | 0.8114 |
| 0.0276 | 5.9893 | 280 | 0.7090 | 0.8166 |
| 0.0111 | 6.9947 | 327 | 0.7133 | 0.8140 |
| 0.0081 | 8.0 | 374 | 0.7639 | 0.8183 |
| 0.0039 | 8.9840 | 420 | 0.7387 | 0.8236 |
| 0.0065 | 9.8396 | 460 | 0.7383 | 0.8227 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
[ "abnormal", "erythrodermic", "guttate", "inverse", "nail", "normal", "not define", "palm soles", "plaque", "psoriatic arthritis", "pustular", "scalp" ]
manjoslima/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-finetuned-flower

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 2.3.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs5_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2065
- Accuracy: 0.9339

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0804 | 1.0 | 469 | 0.2409 | 0.9141 |
| 0.0616 | 2.0 | 938 | 0.2065 | 0.9339 |
| 0.0176 | 3.0 | 1407 | 0.2520 | 0.9379 |
| 0.002 | 4.0 | 1876 | 0.2771 | 0.9432 |
| 0.0014 | 5.0 | 2345 | 0.2849 | 0.9429 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
[ "cat", "dog" ]
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs5_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test3

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2836
- Accuracy: 0.9411

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0042 | 1.0 | 469 | 0.2944 | 0.9333 |
| 0.0389 | 2.0 | 938 | 0.2836 | 0.9411 |
| 0.0017 | 3.0 | 1407 | 0.2929 | 0.9429 |
| 0.001 | 4.0 | 1876 | 0.3287 | 0.9451 |
| 0.0001 | 5.0 | 2345 | 0.3298 | 0.9469 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
[ "cat", "dog" ]
Omriy123/vit_epochs1_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test10
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs1_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test10

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2319
- Accuracy: 0.944

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0051 | 1.0 | 469 | 0.2319 | 0.944 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
[ "cat", "dog" ]
youssefabdelmottaleb/swin-tiny-patch4-window7-224-SWIN-Transformer-5epochs-test
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-SWIN-Transformer-5epochs-test

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
- Accuracy: 0.9866

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1451 | 0.9973 | 280 | 0.1207 | 0.9602 |
| 0.0673 | 1.9982 | 561 | 0.0732 | 0.9751 |
| 0.0304 | 2.9991 | 842 | 0.0509 | 0.9834 |
| 0.0201 | 4.0 | 1123 | 0.0486 | 0.9845 |
| 0.0105 | 4.9866 | 1400 | 0.0452 | 0.9866 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
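### Example: replacing the classification head

Swin-tiny ships with a 1000-class ImageNet head, so fine-tuning on the four classes listed with this card requires re-initializing the classifier; a minimal sketch of that step, using the labels from this card.

```python
from transformers import AutoModelForImageClassification

labels = ["glass", "metal", "paper", "plastic"]
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
    ignore_mismatched_sizes=True,  # re-initialize the 1000-class head
)
```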
[ "glass", "metal", "paper", "plastic" ]
Omriy123/vit_epochs1_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test11
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_epochs1_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test11

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2731
- Accuracy: 0.9381

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0233 | 1.0 | 469 | 0.2731 | 0.9381 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.19.1
[ "cat", "dog" ]
Herbertg94/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-finetuned-flower

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 2.3.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
youssefabdelmottaleb/Garbage-Classification-SWIN-Transformer
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Garbage-Classification-SWIN-Transformer

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0440
- Accuracy: 0.9900

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1969 | 0.9973 | 280 | 0.1740 | 0.9409 |
| 0.1014 | 1.9982 | 561 | 0.0752 | 0.9755 |
| 0.0333 | 2.9991 | 842 | 0.0551 | 0.9824 |
| 0.0332 | 4.0 | 1123 | 0.0526 | 0.9845 |
| 0.0218 | 4.9973 | 1403 | 0.0511 | 0.9866 |
| 0.0086 | 5.9982 | 1684 | 0.0515 | 0.9873 |
| 0.0057 | 6.9991 | 1965 | 0.0462 | 0.9875 |
| 0.0043 | 8.0 | 2246 | 0.0453 | 0.9891 |
| 0.0012 | 8.9973 | 2526 | 0.0460 | 0.9888 |
| 0.0017 | 9.9733 | 2800 | 0.0440 | 0.9900 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
[ "glass", "metal", "paper", "plastic" ]
anindyady/image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# image_classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5790
- Accuracy: 0.4188

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8299 | 0.35 |
| No log | 2.0 | 80 | 1.6312 | 0.4313 |
| No log | 3.0 | 120 | 1.5657 | 0.45 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
suredream/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_food_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5322
- Accuracy: 0.7453

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.625 | 0.9873 | 39 | 0.6519 | 0.6472 |
| 0.5965 | 2.0 | 79 | 0.5875 | 0.6661 |
| 0.5349 | 2.9620 | 117 | 0.5322 | 0.7453 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "nosec", "tsec" ]
suredream/tsec_vit_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tsec_vit_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2717
- Accuracy: 0.8866

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4387 | 1.0 | 280 | 0.4179 | 0.8151 |
| 0.4239 | 2.0 | 560 | 0.3611 | 0.8399 |
| 0.3148 | 3.0 | 840 | 0.3156 | 0.8600 |
| 0.2988 | 4.0 | 1120 | 0.3002 | 0.8729 |
| 0.2498 | 5.0 | 1400 | 0.3087 | 0.8694 |
| 0.3028 | 6.0 | 1680 | 0.2966 | 0.8716 |
| 0.2179 | 7.0 | 1960 | 0.2742 | 0.8808 |
| 0.2274 | 8.0 | 2240 | 0.2861 | 0.8814 |
| 0.2195 | 9.0 | 2520 | 0.2626 | 0.8895 |
| 0.1886 | 10.0 | 2800 | 0.2717 | 0.8866 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "n", "y" ]
phonghoccode/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2654
- Accuracy: 0.9402

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5724 | 1.0 | 34 | 0.4259 | 0.9163 |
| 0.3558 | 2.0 | 68 | 0.3116 | 0.9363 |
| 0.2732 | 3.0 | 102 | 0.2842 | 0.9363 |
| 0.2286 | 4.0 | 136 | 0.2690 | 0.9402 |
| 0.1984 | 5.0 | 170 | 0.2654 | 0.9402 |

### Framework versions

- Transformers 4.43.0.dev0
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
[ "event", "event grocery", "event restaurant", "grocery", "grocery restaurant", "restaurant" ]
suredream/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
- Accuracy: 0.9756

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2875 | 0.9941 | 126 | 0.1540 | 0.9517 |
| 0.2201 | 1.9961 | 253 | 0.0854 | 0.975 |
| 0.1714 | 2.9822 | 378 | 0.0778 | 0.9756 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
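### Example: the warmup schedule

A sketch of the learning-rate schedule implied by `lr_scheduler_warmup_ratio: 0.1`: the rate ramps linearly to 5e-05 over the first 10% of steps, then decays linearly to zero. The 378 total steps come from the training table; `model` is assumed to exist.

```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8)
total_steps = 378  # from the card's training table
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # ~37 warmup steps
    num_training_steps=total_steps,
)
```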
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
josedonoso/vit-ecg-khan
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-ecg

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1003
- Accuracy: 0.9643

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.596 | 2.4390 | 100 | 0.5431 | 0.8214 |
| 0.0656 | 4.8780 | 200 | 0.1628 | 0.95 |
| 0.0192 | 7.3171 | 300 | 0.1003 | 0.9643 |
| 0.0926 | 9.7561 | 400 | 0.1262 | 0.95 |
| 0.0064 | 12.1951 | 500 | 0.1611 | 0.9643 |
| 0.0049 | 14.6341 | 600 | 0.1539 | 0.9643 |
| 0.0044 | 17.0732 | 700 | 0.1509 | 0.9643 |
| 0.0041 | 19.5122 | 800 | 0.1499 | 0.9643 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
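### Example: enabling Native AMP

"Native AMP" in the card corresponds to PyTorch automatic mixed precision, which the `Trainer` enables via `fp16=True`; a sketch of the matching flags (the `output_dir` is an assumed path).

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-ecg-khan",  # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # Native AMP mixed precision
)
```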
[ "abnormal-heartbeat", "history-of-mi", "myocardial-infarction", "normal" ]
Abhiram4/PlantDiseaseDetector
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# PlantDiseaseDetector

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3197
- Accuracy: 0.9960

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8619 | 1.0 | 192 | 0.8045 | 0.9869 |
| 0.4023 | 2.0 | 384 | 0.3931 | 0.9940 |
| 0.3229 | 3.0 | 576 | 0.3197 | 0.9960 |

### Framework versions

- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
[ "apple___apple_scab", "apple___black_rot", "apple___cedar_apple_rust", "apple___healthy", "blueberry___healthy", "cherry_(including_sour)___powdery_mildew", "cherry_(including_sour)___healthy", "corn_(maize)___cercospora_leaf_spot gray_leaf_spot", "corn_(maize)___common_rust_", "corn_(maize)___northern_leaf_blight", "corn_(maize)___healthy", "grape___black_rot", "grape___esca_(black_measles)", "grape___leaf_blight_(isariopsis_leaf_spot)", "grape___healthy", "orange___haunglongbing_(citrus_greening)", "peach___bacterial_spot", "peach___healthy", "pepper,_bell___bacterial_spot", "pepper,_bell___healthy", "potato___early_blight", "potato___late_blight", "potato___healthy", "raspberry___healthy", "soybean___healthy", "squash___powdery_mildew", "strawberry___leaf_scorch", "strawberry___healthy", "tomato___bacterial_spot", "tomato___early_blight", "tomato___late_blight", "tomato___leaf_mold", "tomato___septoria_leaf_spot", "tomato___spider_mites two-spotted_spider_mite", "tomato___target_spot", "tomato___tomato_yellow_leaf_curl_virus", "tomato___tomato_mosaic_virus", "tomato___healthy" ]
himuraxkenji/vit_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1417 | 3.85 | 500 | 0.0094 | 1.0 |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
fadhfaiz/image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# image_classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4268
- Accuracy: 0.5062

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8704 | 0.4062 |
| No log | 2.0 | 80 | 1.6122 | 0.3625 |
| No log | 3.0 | 120 | 1.4724 | 0.4437 |
| No log | 4.0 | 160 | 1.4352 | 0.5312 |
| No log | 5.0 | 200 | 1.4154 | 0.4375 |
| No log | 6.0 | 240 | 1.3782 | 0.5312 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
shreyasguha/22class_skindiseases_57acc
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10", "label_11", "label_12", "label_13", "label_14", "label_15", "label_16", "label_17", "label_18", "label_19", "label_20", "label_21", "label_22" ]
Iqbaliswinning/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Framework versions

- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
domasin/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7997
- Accuracy: 0.76

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 0.9145 | 0.72 |
| No log | 1.7143 | 3 | 0.7997 | 0.76 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "limulus_polyphemus", "lucanus_cervus", "scorpiones" ]
dmartincc/vet-sm
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vet-sm

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8296
- Accuracy: 0.7440

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08 and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3577 | 1.0 | 375 | 1.3559 | 0.5333 |
| 1.1359 | 2.0 | 750 | 1.0537 | 0.6386 |
| 0.727 | 3.0 | 1125 | 0.8715 | 0.7156 |
| 0.3493 | 4.0 | 1500 | 0.8288 | 0.7355 |
| 0.1978 | 5.0 | 1875 | 0.8296 | 0.7440 |

### Framework versions

- Transformers 4.48.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.21.0
[ "neutral", "calm", "happy", "sad", "angry", "fearful", "disgust", "surprised" ]
Sridhar100/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.3.0+cu121 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
Augusto777/vit-base-patch16-224-ve-U13b-80RX3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-ve-U13b-80RX3 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4344 - Accuracy: 0.9130 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.74e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.33 | 0.99 | 51 | 1.3133 | 0.3478 | | 1.0288 | 2.0 | 103 | 1.0045 | 0.5652 | | 0.7322 | 2.99 | 154 | 0.7309 | 0.8043 | | 0.5476 | 4.0 | 206 | 0.6316 | 0.7826 | | 0.2863 | 4.99 | 257 | 0.5598 | 0.8043 | | 0.3149 | 6.0 | 309 | 0.5428 | 0.8478 | | 0.1489 | 6.99 | 360 | 0.5150 | 0.8696 | | 0.1134 | 8.0 | 412 | 0.4585 | 0.8043 | | 0.1613 | 8.99 | 463 | 0.6284 | 0.8478 | | 0.1855 | 10.0 | 515 | 0.5985 | 0.8478 | | 0.1908 | 10.99 | 566 | 1.0336 | 0.7391 | | 0.2293 | 12.0 | 618 | 0.7746 | 0.8043 | | 0.1414 | 12.99 | 669 | 0.6517 | 0.8261 | | 0.0877 | 14.0 | 721 | 0.5639 | 0.8261 | | 0.1302 | 14.99 | 772 | 0.7687 | 0.8261 | | 0.047 | 16.0 | 824 | 0.6773 | 0.8696 | | 0.1045 | 16.99 | 875 | 0.4344 | 0.9130 | | 0.0751 | 18.0 | 927 | 1.0160 | 0.7391 | | 0.1141 | 18.99 | 978 | 0.6643 | 0.8696 | | 0.1756 | 20.0 | 1030 | 0.5582 | 0.8913 | | 0.1212 | 20.99 | 1081 | 0.5641 | 0.8913 | | 0.0903 | 22.0 | 1133 | 0.6990 | 0.8261 | | 0.0693 | 22.99 | 1184 | 0.5548 | 0.8913 | | 0.0048 | 24.0 | 1236 | 0.6958 | 0.8478 | | 0.0785 | 24.99 | 1287 | 0.7886 | 0.8043 | | 0.0373 | 26.0 | 1339 | 0.6345 | 0.8478 | | 0.0763 | 26.99 | 1390 | 0.6830 | 0.8696 | | 0.0621 | 28.0 | 1442 | 0.7294 | 0.8478 | | 0.0367 | 28.99 | 1493 | 0.6636 | 0.8696 | | 0.0124 | 30.0 | 1545 | 0.8031 | 0.8478 | | 0.0759 | 30.99 | 1596 | 0.7076 | 0.8696 | | 0.0786 | 32.0 | 1648 | 0.8024 | 0.8261 | | 0.0487 | 32.99 | 1699 | 0.7927 | 0.8696 | | 0.0664 | 34.0 | 1751 | 0.9607 | 0.8261 | | 0.0054 | 34.99 | 1802 | 0.9702 | 0.8261 | | 0.0277 | 36.0 | 1854 | 0.8351 | 0.8261 | | 0.0025 | 36.99 | 1905 | 0.9318 | 0.8261 | | 0.0188 | 38.0 | 1957 | 0.8995 | 0.8478 | | 0.0385 | 38.99 | 2008 | 0.8928 | 0.8478 | | 0.0474 | 39.61 | 2040 | 0.8863 | 0.8478 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
[ "avanzada", "leve", "moderada", "no dmae" ]
shreyasguha/22class_skindiseases_80acc
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10", "label_11", "label_12", "label_13", "label_14", "label_15", "label_16", "label_17", "label_18", "label_19", "label_20", "label_21", "label_22" ]
shreyasguha/22class_skindiseases_76acc_possibleoverfit
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10", "label_11", "label_12", "label_13", "label_14", "label_15", "label_16", "label_17", "label_18", "label_19", "label_20", "label_21", "label_22" ]
sloshywings/my_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6229 - Accuracy: 0.908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7124 | 0.992 | 62 | 2.5371 | 0.807 | | 1.8389 | 2.0 | 125 | 1.8040 | 0.883 | | 1.6124 | 2.976 | 186 | 1.6229 | 0.908 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
haiefff/cartoon-anime-3
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics No validation metrics available
[ "anime", "not_anime" ]
necrobradley/face_predict
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # face_predict This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2322 - Accuracy: 0.5625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 6 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.9 | 3 | 2.0747 | 0.1187 | | No log | 1.8 | 6 | 2.0728 | 0.1375 | | 2.0713 | 3.0 | 10 | 2.0449 | 0.2 | | 2.0713 | 3.9 | 13 | 2.0225 | 0.2562 | | 2.0713 | 4.8 | 16 | 1.9779 | 0.2938 | | 1.9642 | 6.0 | 20 | 1.8985 | 0.3688 | | 1.9642 | 6.9 | 23 | 1.8440 | 0.4188 | | 1.9642 | 7.8 | 26 | 1.7593 | 0.4437 | | 1.7442 | 9.0 | 30 | 1.6551 | 0.4875 | | 1.7442 | 9.9 | 33 | 1.5996 | 0.4875 | | 1.7442 | 10.8 | 36 | 1.5324 | 0.5188 | | 1.5402 | 12.0 | 40 | 1.5053 | 0.525 | | 1.5402 | 12.9 | 43 | 1.4543 | 0.5188 | | 1.5402 | 13.8 | 46 | 1.4335 | 0.5188 | | 1.4064 | 15.0 | 50 | 1.3768 | 0.5938 | | 1.4064 | 15.9 | 53 | 1.3583 | 0.6 | | 1.4064 | 16.8 | 56 | 1.3464 | 0.575 | | 1.2844 | 18.0 | 60 | 1.3245 | 0.6125 | | 1.2844 | 18.9 | 63 | 1.3265 | 0.5563 | | 1.2844 | 19.8 | 66 | 1.2899 | 0.5813 | | 1.1834 | 21.0 | 70 | 1.2863 | 0.5625 | | 1.1834 | 21.9 | 73 | 1.2939 | 0.5687 | | 1.1834 | 22.8 | 76 | 1.2508 | 0.5938 | | 1.1046 | 24.0 | 80 | 1.2604 | 0.5563 | | 1.1046 | 24.9 | 83 | 1.2344 | 0.6062 | | 1.1046 | 25.8 | 86 | 1.2124 | 0.6125 | | 1.0379 | 27.0 | 90 | 1.2053 | 0.6312 | | 1.0379 | 27.9 | 93 | 1.3067 | 0.5375 | | 1.0379 | 28.8 | 96 | 1.2247 | 0.5875 | | 1.0064 | 30.0 | 100 | 1.2060 | 0.625 | | 1.0064 | 30.9 | 103 | 1.2308 | 0.575 | | 1.0064 | 31.8 | 106 | 1.1936 | 0.6188 | | 0.9611 | 33.0 | 110 | 1.2257 | 0.5938 | | 0.9611 | 33.9 | 113 | 1.2302 | 0.5563 | | 0.9611 | 34.8 | 116 | 1.2172 | 0.6 | | 0.9351 | 36.0 | 120 | 1.2355 | 0.55 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
Raidenv/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8360 - Accuracy: 0.7664 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9333 | 7 | 3.8894 | 0.0841 | | 3.897 | 2.0 | 15 | 3.8185 | 0.0841 | | 3.8553 | 2.9333 | 22 | 3.7402 | 0.0748 | | 3.7568 | 4.0 | 30 | 3.6372 | 0.0748 | | 3.7568 | 4.9333 | 37 | 3.5482 | 0.0841 | | 3.5912 | 6.0 | 45 | 3.4069 | 0.1121 | | 3.4342 | 6.9333 | 52 | 3.2939 | 0.1308 | | 3.2601 | 8.0 | 60 | 3.1786 | 0.2150 | | 3.2601 | 8.9333 | 67 | 3.0323 | 0.2336 | | 3.0498 | 10.0 | 75 | 2.8695 | 0.2617 | | 2.849 | 10.9333 | 82 | 2.8505 | 0.2523 | | 2.6452 | 12.0 | 90 | 2.6319 | 0.2804 | | 2.6452 | 12.9333 | 97 | 2.4654 | 0.3271 | | 2.4123 | 14.0 | 105 | 2.3995 | 0.3364 | | 2.2561 | 14.9333 | 112 | 2.2584 | 0.4019 | | 2.0447 | 16.0 | 120 | 2.2000 | 0.4299 | | 2.0447 | 16.9333 | 127 | 2.0806 | 0.4393 | | 1.8569 | 18.0 | 135 | 2.0593 | 0.4393 | | 1.7447 | 18.9333 | 142 | 1.8832 | 0.4673 | | 1.5821 | 20.0 | 150 | 1.8218 | 0.5047 | | 1.5821 | 20.9333 | 157 | 1.7334 | 0.5421 | | 1.3999 | 22.0 | 165 | 1.6213 | 0.5514 | | 1.2901 | 22.9333 | 172 | 1.5932 | 0.5234 | | 1.1569 | 24.0 | 180 | 1.5256 | 0.5701 | | 1.1569 | 24.9333 | 187 | 1.4281 | 0.5888 | | 1.0903 | 26.0 | 195 | 1.3997 | 0.5794 | | 0.9674 | 26.9333 | 202 | 1.4017 | 0.5888 | | 0.98 | 28.0 | 210 | 1.2916 | 0.5981 | | 0.98 | 28.9333 | 217 | 1.3018 | 0.5981 | | 0.8772 | 30.0 | 225 | 1.2552 | 0.6355 | | 0.7842 | 30.9333 | 232 | 1.2372 | 0.6075 | | 0.7438 | 32.0 | 240 | 1.1908 | 0.6168 | | 0.7438 | 32.9333 | 247 | 1.1567 | 0.6636 | | 0.725 | 34.0 | 255 | 1.1542 | 0.6262 | | 0.6709 | 34.9333 | 262 | 1.1377 | 0.6262 | | 0.6898 | 36.0 | 270 | 1.0524 | 0.6636 | | 0.6898 | 36.9333 | 277 | 1.0272 | 0.6729 | | 0.6125 | 38.0 | 285 | 1.0399 | 0.6355 | | 0.6153 | 38.9333 | 292 | 1.0308 | 0.6822 | | 0.5898 | 40.0 | 300 | 1.0151 | 0.7009 | | 0.5898 | 40.9333 | 307 | 1.0483 | 0.6542 | | 0.5881 | 42.0 | 315 | 0.9926 | 0.7009 | | 0.54 | 42.9333 | 322 | 1.0300 | 0.6916 | | 0.4515 | 44.0 | 330 | 0.9262 | 0.7383 | | 0.4515 | 44.9333 | 337 | 0.9486 | 0.7290 | | 0.5057 | 46.0 | 345 | 0.9219 | 0.7103 | | 0.4905 | 46.9333 | 352 | 1.0184 | 0.6822 | | 0.4669 | 48.0 | 360 | 0.9337 | 0.7290 | | 0.4669 | 48.9333 | 367 | 0.9431 | 0.7103 | | 0.4437 | 50.0 | 375 | 0.9312 | 0.7009 | | 0.4754 | 50.9333 | 382 | 0.9245 | 0.7196 | | 0.4119 | 52.0 | 390 | 0.8826 | 0.7383 | | 0.4119 | 52.9333 | 397 | 0.9262 | 0.7196 | | 0.4087 | 54.0 | 405 | 0.8882 | 0.7477 | | 0.3987 | 54.9333 | 412 | 0.9282 | 0.7290 | | 
0.4253 | 56.0 | 420 | 0.9004 | 0.7477 | | 0.4253 | 56.9333 | 427 | 0.8783 | 0.7477 | | 0.4134 | 58.0 | 435 | 0.8360 | 0.7664 | | 0.4024 | 58.9333 | 442 | 0.9016 | 0.7196 | | 0.3688 | 60.0 | 450 | 0.9251 | 0.6822 | | 0.3688 | 60.9333 | 457 | 0.9086 | 0.7103 | | 0.3833 | 62.0 | 465 | 0.8494 | 0.7383 | | 0.3614 | 62.9333 | 472 | 0.8299 | 0.7290 | | 0.3792 | 64.0 | 480 | 0.9015 | 0.7383 | | 0.3792 | 64.9333 | 487 | 0.8802 | 0.7196 | | 0.3632 | 66.0 | 495 | 0.8881 | 0.7009 | | 0.3405 | 66.9333 | 502 | 0.8578 | 0.7383 | | 0.3673 | 68.0 | 510 | 0.8540 | 0.7570 | | 0.3673 | 68.9333 | 517 | 0.8345 | 0.7383 | | 0.3379 | 70.0 | 525 | 0.7919 | 0.7383 | | 0.3389 | 70.9333 | 532 | 0.8384 | 0.7290 | | 0.3363 | 72.0 | 540 | 0.8306 | 0.7383 | | 0.3363 | 72.9333 | 547 | 0.8875 | 0.7477 | | 0.3494 | 74.0 | 555 | 0.9151 | 0.7009 | | 0.2989 | 74.9333 | 562 | 0.8606 | 0.7103 | | 0.3157 | 76.0 | 570 | 0.8640 | 0.7383 | | 0.3157 | 76.9333 | 577 | 0.8532 | 0.7290 | | 0.3013 | 78.0 | 585 | 0.8479 | 0.7103 | | 0.2968 | 78.9333 | 592 | 0.8839 | 0.7383 | | 0.3013 | 80.0 | 600 | 0.8837 | 0.7196 | | 0.3013 | 80.9333 | 607 | 0.8694 | 0.7103 | | 0.3247 | 82.0 | 615 | 0.8721 | 0.7290 | | 0.2515 | 82.9333 | 622 | 0.8605 | 0.7290 | | 0.3175 | 84.0 | 630 | 0.8505 | 0.7290 | | 0.3175 | 84.9333 | 637 | 0.8488 | 0.7290 | | 0.3015 | 86.0 | 645 | 0.8554 | 0.7383 | | 0.2989 | 86.9333 | 652 | 0.8707 | 0.7290 | | 0.3155 | 88.0 | 660 | 0.8712 | 0.7290 | | 0.3155 | 88.9333 | 667 | 0.8659 | 0.7290 | | 0.2871 | 90.0 | 675 | 0.8573 | 0.7290 | | 0.2872 | 90.9333 | 682 | 0.8530 | 0.7290 | | 0.2587 | 92.0 | 690 | 0.8516 | 0.7383 | | 0.2587 | 92.9333 | 697 | 0.8502 | 0.7383 | | 0.3133 | 93.3333 | 700 | 0.8501 | 0.7383 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "corrugation", "corrugation flaking_multiple squat", "corrugation spalling", "crack", "crack flaking spalling", "crack flaking spalling_multiple", "crack flaking spalling_multiple squat", "crack flaking_multiple", "crack flaking_multiple spalling squat", "crack flaking_multiple spalling_multiple", "crack flaking_multiple spalling_multiple squat", "crack flaking_multiple spalling_multiple squat_multiple", "crack flaking_multiple squat", "crack spalling", "crack spalling squat", "crack spalling_multiple", "crack spalling_multiple squat", "crack spalling_multiple squat_multiple", "crack squat", "crack_multiple", "crack_multiple flaking spalling squat", "crack_multiple flaking spalling squat_multiple", "crack_multiple flaking spalling_multiple", "crack_multiple flaking spalling_multiple squat", "crack_multiple flaking spalling_multiple squat_multiple", "crack_multiple flaking_multiple", "crack_multiple flaking_multiple spalling", "crack_multiple flaking_multiple spalling squat", "crack_multiple flaking_multiple spalling squat_multiple", "crack_multiple flaking_multiple spalling_multiple", "crack_multiple flaking_multiple spalling_multiple squat", "crack_multiple flaking_multiple spalling_multiple squat_multiple", "crack_multiple flaking_multiple squat", "crack_multiple spalling", "crack_multiple spalling squat", "crack_multiple spalling squat_multiple", "crack_multiple spalling_multiple", "crack_multiple spalling_multiple squat", "crack_multiple spalling_multiple squat_multiple", "crack_multiple squat", "crack_multiple squat_multiple", "empty", "flaking", "flaking putus spalling_multiple", "flaking spalling", "flaking spalling squat", "flaking spalling_multiple", "flaking spalling_multiple squat", "flaking squat", "flaking squat_multiple", "flaking_multiple", "flaking_multiple spalling", "flaking_multiple spalling squat", "flaking_multiple spalling squat_multiple", "flaking_multiple spalling_multiple", "flaking_multiple spalling_multiple squat", "flaking_multiple spalling_multiple squat_multiple", "flaking_multiple squat", "flaking_multiple squat_multiple", "putus", "putus spalling_multiple", "spalling", "spalling squat", "spalling squat_multiple", "spalling_multiple", "spalling_multiple squat", "spalling_multiple squat_multiple", "squat", "squat_multiple" ]
BoraErsoy2/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # BoraErsoy2/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3797 - Validation Loss: 0.3267 - Train Accuracy: 0.921 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.8218 | 1.6202 | 0.847 | 0 | | 1.2200 | 0.7952 | 0.906 | 1 | | 0.6871 | 0.4814 | 0.923 | 2 | | 0.4762 | 0.4180 | 0.911 | 3 | | 0.3797 | 0.3267 | 0.921 | 4 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Abhiram4/PlantDiseaseDetectorV2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PlantDiseaseDetectorV2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.0610 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9051 | 1.0 | 219 | 0.8025 | 0.9861 | | 0.2801 | 2.0 | 439 | 0.2606 | 0.9959 | | 0.1455 | 3.0 | 659 | 0.1402 | 0.9973 | | 0.0949 | 4.0 | 879 | 0.0942 | 0.9986 | | 0.0741 | 5.0 | 1098 | 0.0749 | 0.9984 | | 0.0623 | 6.0 | 1318 | 0.0642 | 0.9984 | | 0.0586 | 6.98 | 1533 | 0.0610 | 0.9987 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
[ "apple___apple_scab", "apple___black_rot", "apple___cedar_apple_rust", "apple___healthy", "blueberry___healthy", "cherry_(including_sour)___powdery_mildew", "cherry_(including_sour)___healthy", "corn_(maize)___cercospora_leaf_spot gray_leaf_spot", "corn_(maize)___common_rust_", "corn_(maize)___northern_leaf_blight", "corn_(maize)___healthy", "grape___black_rot", "grape___esca_(black_measles)", "grape___leaf_blight_(isariopsis_leaf_spot)", "grape___healthy", "orange___haunglongbing_(citrus_greening)", "peach___bacterial_spot", "peach___healthy", "pepper,_bell___bacterial_spot", "pepper,_bell___healthy", "potato___early_blight", "potato___late_blight", "potato___healthy", "raspberry___healthy", "soybean___healthy", "squash___powdery_mildew", "strawberry___leaf_scorch", "strawberry___healthy", "tomato___bacterial_spot", "tomato___early_blight", "tomato___late_blight", "tomato___leaf_mold", "tomato___septoria_leaf_spot", "tomato___spider_mites two-spotted_spider_mite", "tomato___target_spot", "tomato___tomato_yellow_leaf_curl_virus", "tomato___tomato_mosaic_virus", "tomato___healthy" ]
nightsornram/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nightsornram/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3745 - Validation Loss: 0.3281 - Train Accuracy: 0.918 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.7892 | 1.6582 | 0.814 | 0 | | 1.2074 | 0.8517 | 0.885 | 1 | | 0.6957 | 0.5030 | 0.918 | 2 | | 0.4869 | 0.4189 | 0.912 | 3 | | 0.3745 | 0.3281 | 0.918 | 4 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.15.0 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
CelDom/vit-base-patch16-224-in21k-cifar-0.9-ep-3
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
habibi26/ktp-crop-clip
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ktp-crop-clip This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1223 - Accuracy: 0.9865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.96 | 6 | 0.8954 | 0.5270 | | 0.7112 | 1.92 | 12 | 0.6729 | 0.5405 | | 0.7112 | 2.88 | 18 | 0.6407 | 0.7297 | | 0.4413 | 4.0 | 25 | 0.1279 | 0.9459 | | 0.0935 | 4.96 | 31 | 0.1436 | 0.9730 | | 0.0935 | 5.92 | 37 | 0.0021 | 1.0 | | 0.0697 | 6.88 | 43 | 0.2862 | 0.9459 | | 0.161 | 8.0 | 50 | 0.0843 | 0.9595 | | 0.161 | 8.96 | 56 | 0.2255 | 0.9459 | | 0.0061 | 9.92 | 62 | 0.4678 | 0.9054 | | 0.0061 | 10.88 | 68 | 0.3299 | 0.9189 | | 0.0309 | 12.0 | 75 | 0.5189 | 0.9189 | | 0.0025 | 12.96 | 81 | 0.0850 | 0.9865 | | 0.0025 | 13.92 | 87 | 0.0720 | 0.9865 | | 0.0042 | 14.88 | 93 | 0.0745 | 0.9865 | | 0.0002 | 16.0 | 100 | 0.0869 | 0.9865 | | 0.0002 | 16.96 | 106 | 0.0895 | 0.9865 | | 0.0001 | 17.92 | 112 | 0.1127 | 0.9865 | | 0.0001 | 18.88 | 118 | 0.1219 | 0.9865 | | 0.0 | 19.2 | 120 | 0.1223 | 0.9865 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
[ "crop", "not_crop" ]
crapthings/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0634 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2823 | 1.0 | 130 | 0.2185 | 0.9624 | | 0.132 | 2.0 | 260 | 0.1255 | 0.9699 | | 0.1448 | 3.0 | 390 | 0.0948 | 0.9699 | | 0.0873 | 4.0 | 520 | 0.0634 | 0.9925 | | 0.1172 | 5.0 | 650 | 0.0809 | 0.9774 | ### Framework versions - Transformers 4.43.0.dev0 - Pytorch 2.1.1+cu118 - Datasets 2.18.0 - Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DA
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DA This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9032 - Accuracy: 0.7692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 1.6342 | 0.1346 | | No log | 1.8947 | 9 | 1.5368 | 0.2692 | | 1.5734 | 2.9474 | 14 | 1.3871 | 0.4808 | | 1.5734 | 4.0 | 19 | 1.2443 | 0.5192 | | 1.3253 | 4.8421 | 23 | 1.1185 | 0.5577 | | 1.3253 | 5.8947 | 28 | 1.0374 | 0.5962 | | 1.044 | 6.9474 | 33 | 0.9798 | 0.6346 | | 1.044 | 8.0 | 38 | 0.9702 | 0.6731 | | 0.8487 | 8.8421 | 42 | 0.9567 | 0.6538 | | 0.8487 | 9.8947 | 47 | 0.9292 | 0.6538 | | 0.7359 | 10.9474 | 52 | 0.9042 | 0.6923 | | 0.7359 | 12.0 | 57 | 0.9032 | 0.7692 | | 0.6592 | 12.8421 | 61 | 0.9060 | 0.6538 | | 0.6592 | 13.8947 | 66 | 0.9208 | 0.6538 | | 0.6257 | 14.9474 | 71 | 0.9272 | 0.6923 | | 0.6257 | 16.0 | 76 | 1.0044 | 0.6731 | | 0.5927 | 16.8421 | 80 | 0.9176 | 0.7308 | | 0.5927 | 17.8947 | 85 | 0.9261 | 0.75 | | 0.5255 | 18.9474 | 90 | 0.9058 | 0.6538 | | 0.5255 | 20.0 | 95 | 0.9338 | 0.75 | | 0.5255 | 20.8421 | 99 | 0.9103 | 0.6923 | | 0.5098 | 21.8947 | 104 | 0.9329 | 0.75 | | 0.5098 | 22.9474 | 109 | 0.9886 | 0.75 | | 0.4347 | 24.0 | 114 | 0.9331 | 0.7308 | | 0.4347 | 24.8421 | 118 | 1.0086 | 0.6923 | | 0.4269 | 25.8947 | 123 | 1.0184 | 0.7308 | | 0.4269 | 26.9474 | 128 | 0.9698 | 0.7115 | | 0.42 | 28.0 | 133 | 0.9873 | 0.6923 | | 0.42 | 28.8421 | 137 | 0.9996 | 0.6923 | | 0.4239 | 29.8947 | 142 | 0.9858 | 0.6923 | | 0.4239 | 30.9474 | 147 | 0.9964 | 0.7308 | | 0.3713 | 32.0 | 152 | 1.0338 | 0.7115 | | 0.3713 | 32.8421 | 156 | 1.0474 | 0.6923 | | 0.4133 | 33.6842 | 160 | 1.0444 | 0.6923 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
dhritic9/vit-base-brain-mri-dementia-detection
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-brain-mri-dementia-detection This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1089 - Accuracy: 0.9789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.8826 | 0.3125 | 100 | 0.9027 | 0.575 | | 0.8908 | 0.625 | 200 | 0.8484 | 0.5984 | | 0.8229 | 0.9375 | 300 | 0.7514 | 0.6695 | | 0.5299 | 1.25 | 400 | 0.6798 | 0.7164 | | 0.5207 | 1.5625 | 500 | 0.6466 | 0.7375 | | 0.4967 | 1.875 | 600 | 0.6303 | 0.7461 | | 0.3977 | 2.1875 | 700 | 0.7240 | 0.7719 | | 0.2744 | 2.5 | 800 | 0.3544 | 0.8734 | | 0.4271 | 2.8125 | 900 | 0.3037 | 0.8938 | | 0.2484 | 3.125 | 1000 | 0.4111 | 0.8602 | | 0.0797 | 3.4375 | 1100 | 0.3782 | 0.8953 | | 0.0662 | 3.75 | 1200 | 0.3096 | 0.9172 | | 0.0894 | 4.0625 | 1300 | 0.2818 | 0.9289 | | 0.1005 | 4.375 | 1400 | 0.2164 | 0.9469 | | 0.0997 | 4.6875 | 1500 | 0.3378 | 0.9109 | | 0.0715 | 5.0 | 1600 | 0.3627 | 0.9133 | | 0.0567 | 5.3125 | 1700 | 0.3061 | 0.9234 | | 0.0558 | 5.625 | 1800 | 0.2393 | 0.9461 | | 0.0061 | 5.9375 | 1900 | 0.1738 | 0.9586 | | 0.0449 | 6.25 | 2000 | 0.2094 | 0.9492 | | 0.0073 | 6.5625 | 2100 | 0.1834 | 0.9539 | | 0.0425 | 6.875 | 2200 | 0.2847 | 0.9266 | | 0.0397 | 7.1875 | 2300 | 0.4031 | 0.9125 | | 0.0284 | 7.5 | 2400 | 0.2995 | 0.9406 | | 0.0158 | 7.8125 | 2500 | 0.1909 | 0.9664 | | 0.006 | 8.125 | 2600 | 0.3524 | 0.9297 | | 0.0017 | 8.4375 | 2700 | 0.1908 | 0.9617 | | 0.0026 | 8.75 | 2800 | 0.1787 | 0.9625 | | 0.001 | 9.0625 | 2900 | 0.1329 | 0.9688 | | 0.0497 | 9.375 | 3000 | 0.1878 | 0.9594 | | 0.09 | 9.6875 | 3100 | 0.1754 | 0.9648 | | 0.0046 | 10.0 | 3200 | 0.1584 | 0.9672 | | 0.0006 | 10.3125 | 3300 | 0.2008 | 0.9648 | | 0.0008 | 10.625 | 3400 | 0.1272 | 0.975 | | 0.028 | 10.9375 | 3500 | 0.1453 | 0.9766 | | 0.0005 | 11.25 | 3600 | 0.1256 | 0.975 | | 0.0005 | 11.5625 | 3700 | 0.1089 | 0.9789 | | 0.0004 | 11.875 | 3800 | 0.1098 | 0.9781 | | 0.0003 | 12.1875 | 3900 | 0.1779 | 0.9625 | | 0.0163 | 12.5 | 4000 | 0.2500 | 0.9539 | | 0.0003 | 12.8125 | 4100 | 0.1556 | 0.9734 | | 0.0003 | 13.125 | 4200 | 0.1205 | 0.9742 | | 0.0002 | 13.4375 | 4300 | 0.1543 | 0.9719 | | 0.0002 | 13.75 | 4400 | 0.1548 | 0.975 | | 0.0003 | 14.0625 | 4500 | 0.1497 | 0.975 | | 0.0002 | 14.375 | 4600 | 0.2317 | 0.9641 | | 0.0003 | 14.6875 | 4700 | 0.1418 | 0.9781 | | 0.0002 | 15.0 | 4800 | 0.1537 | 0.9734 | | 0.0002 | 15.3125 | 4900 | 0.1426 | 0.9781 | | 0.0002 | 15.625 | 5000 | 0.1253 | 0.9820 | | 0.0002 | 15.9375 | 5100 | 0.1128 | 0.9836 | | 0.0002 | 16.25 | 5200 | 0.1246 | 0.9805 | | 0.0002 | 16.5625 | 5300 | 0.1137 | 0.9828 | | 0.0001 | 16.875 | 5400 | 0.1101 | 0.9844 | | 0.0001 | 17.1875 | 5500 | 0.1112 | 0.9844 | | 0.0001 | 
17.5 | 5600 | 0.1121 | 0.9844 | | 0.0001 | 17.8125 | 5700 | 0.1129 | 0.9836 | | 0.0001 | 18.125 | 5800 | 0.1135 | 0.9844 | | 0.0001 | 18.4375 | 5900 | 0.1140 | 0.9844 | | 0.0001 | 18.75 | 6000 | 0.1146 | 0.9844 | | 0.0001 | 19.0625 | 6100 | 0.1150 | 0.9844 | | 0.0001 | 19.375 | 6200 | 0.1153 | 0.9844 | | 0.0001 | 19.6875 | 6300 | 0.1155 | 0.9844 | | 0.0001 | 20.0 | 6400 | 0.1155 | 0.9844 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "mild_demented", "moderate_demented", "non_demented", "very_mild_demented" ]
heado/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0148 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.056 | 1.5385 | 100 | 0.0564 | 0.9850 | | 0.0375 | 3.0769 | 200 | 0.0148 | 1.0 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
jinsuzzzing/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.2189 - Accuracy: 0.9531 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0287 | 1.5385 | 100 | 0.0428 | 0.9925 | | 0.019 | 3.0769 | 200 | 0.0402 | 0.9850 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Ma9pi2/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - Accuracy: 0.9609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0604 | 1.5385 | 100 | 0.2091 | 0.9549 | | 0.0042 | 3.0769 | 200 | 0.0460 | 0.9850 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
yim7595/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2704
- Accuracy: 0.9219

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0429        | 1.5385 | 100  | 0.0087          | 1.0      |
| 0.0027        | 3.0769 | 200  | 0.0387          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
jongho-coder/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1087
- Accuracy: 0.9688

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0046        | 1.5385 | 100  | 0.0247          | 0.9925   |
| 0.0033        | 3.0769 | 200  | 0.0253          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Ain99/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1311
- Accuracy: 0.9453

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0851        | 1.5385 | 100  | 0.1148          | 0.9549   |
| 0.0123        | 3.0769 | 200  | 0.1426          | 0.9624   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
memorygreen/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2010
- Accuracy: 0.9531

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.038         | 1.5385 | 100  | 0.0936          | 0.9774   |
| 0.0037        | 3.0769 | 200  | 0.0201          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
seongsu03/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1056
- Accuracy: 0.9766

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1215        | 1.5385 | 100  | 0.3577          | 0.8947   |
| 0.0054        | 3.0769 | 200  | 0.0904          | 0.9774   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
henwoo/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1257
- Accuracy: 0.9609

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0908        | 1.5385 | 100  | 0.0395          | 0.9925   |
| 0.0405        | 3.0769 | 200  | 0.0881          | 0.9774   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
hbjoo/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2667
- Accuracy: 0.9453

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0045        | 1.5385 | 100  | 0.0047          | 1.0      |
| 0.0032        | 3.0769 | 200  | 0.0034          | 1.0      |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
G9nine/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2231
- Accuracy: 0.9609

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0011        | 1.5385 | 100  | 0.0371          | 0.9925   |
| 0.001         | 3.0769 | 200  | 0.0366          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
ummykk/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Accuracy: 0.9531

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.078         | 1.5385 | 100  | 0.0526          | 0.9850   |
| 0.0038        | 3.0769 | 200  | 0.2246          | 0.9549   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Jieuny/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1681
- Accuracy: 0.9688

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0419        | 1.5385 | 100  | 0.0255          | 0.9925   |
| 0.0119        | 3.0769 | 200  | 0.1605          | 0.9624   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Jiwonnnnnoo/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1684
- Accuracy: 0.9609

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0023        | 1.5385 | 100  | 0.0030          | 1.0      |
| 0.0018        | 3.0769 | 200  | 0.0028          | 1.0      |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
2357095A/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0956
- Accuracy: 0.9766

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0997        | 1.5385 | 100  | 0.0864          | 0.9774   |
| 0.0125        | 3.0769 | 200  | 0.0663          | 0.9774   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
chaeliwon/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1311
- Accuracy: 0.9688

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1108        | 1.5385 | 100  | 0.0287          | 1.0      |
| 0.0128        | 3.0769 | 200  | 0.0235          | 1.0      |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
hannni/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0019
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.001         | 1.5385 | 100  | 0.0021          | 1.0      |
| 0.0008        | 3.0769 | 200  | 0.0019          | 1.0      |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
chrisbum/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1727
- Accuracy: 0.9453

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0935        | 1.5385 | 100  | 0.0560          | 0.9925   |
| 0.012         | 3.0769 | 200  | 0.0153          | 1.0      |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
hwirang/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2010
- Accuracy: 0.9531

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0109        | 1.5385 | 100  | 0.0752          | 0.9850   |
| 0.0046        | 3.0769 | 200  | 0.0230          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
zordi/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1183
- Accuracy: 0.9688

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0043        | 1.5385 | 100  | 0.0047          | 1.0      |
| 0.0031        | 3.0769 | 200  | 0.0035          | 1.0      |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
kangwoosuk/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1514
- Accuracy: 0.9688

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0731        | 1.5385 | 100  | 0.0650          | 0.9850   |
| 0.0128        | 3.0769 | 200  | 0.0418          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
hoony97/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1270
- Accuracy: 0.9688

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1086        | 1.5385 | 100  | 0.0951          | 0.9699   |
| 0.0061        | 3.0769 | 200  | 0.0631          | 0.9850   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
sdhed/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0224
- Accuracy: 0.9925

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0905        | 1.5385 | 100  | 0.0949          | 0.9699   |
| 0.0149        | 3.0769 | 200  | 0.0224          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Sexyguy/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0408
- Accuracy: 0.9925

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.043         | 1.5385 | 100  | 0.0239          | 1.0      |
| 0.0141        | 3.0769 | 200  | 0.0408          | 0.9925   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
gugeun/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0510
- Accuracy: 0.9850

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1132        | 1.5385 | 100  | 0.1659          | 0.9474   |
| 0.0193        | 3.0769 | 200  | 0.0510          | 0.9850   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]