Columns: model_id (string, 7–105 chars) · model_card (string, 1–130k chars) · model_labels (list, 2–80k items)
ketutsatria/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2021 - Accuracy: 0.9472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3533 | 1.0 | 370 | 0.3087 | 0.9215 | | 0.1977 | 2.0 | 740 | 0.2427 | 0.9269 | | 0.173 | 3.0 | 1110 | 0.2222 | 0.9350 | | 0.1609 | 4.0 | 1480 | 0.2152 | 0.9323 | | 0.1345 | 5.0 | 1850 | 0.2116 | 0.9350 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
victorwkey/vit-videogames
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-videogames This model is a fine-tuned version of [Falconsai/nsfw_image_detection](https://huggingface.co/Falconsai/nsfw_image_detection) on the Bingsu/Gameplay_Images dataset. It achieves the following results on the evaluation set: - Loss: 0.0083 - Accuracy: 0.998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0185 | 0.5 | 500 | 0.0242 | 0.995 | | 0.0082 | 1.0 | 1000 | 0.0191 | 0.995 | | 0.0072 | 1.5 | 1500 | 0.0212 | 0.9945 | | 0.0041 | 2.0 | 2000 | 0.0143 | 0.997 | | 0.0055 | 2.5 | 2500 | 0.0154 | 0.9965 | | 0.004 | 3.0 | 3000 | 0.0128 | 0.9975 | | 0.0016 | 3.5 | 3500 | 0.0109 | 0.9975 | | 0.0014 | 4.0 | 4000 | 0.0089 | 0.998 | | 0.0021 | 4.5 | 4500 | 0.0084 | 0.998 | | 0.0005 | 5.0 | 5000 | 0.0083 | 0.998 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "among us", "apex legends", "fortnite", "forza horizon", "free fire", "genshin impact", "god of war", "minecraft", "roblox", "terraria" ]
MoGHenry/cat_dog_classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cat_dog_classifier This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0884 - Accuracy: 0.9688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 293 | 0.1872 | 0.9440 | | 0.3326 | 2.0 | 586 | 0.1120 | 0.9590 | | 0.3326 | 3.0 | 879 | 0.0877 | 0.9677 | | 0.1182 | 4.0 | 1172 | 0.0940 | 0.9641 | | 0.1182 | 5.0 | 1465 | 0.0884 | 0.9688 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "cat", "dog" ]
Fahim18/ovarian-cancer-vit-v1
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "0", "1", "2", "3", "4", "5", "6", "7" ]
Fahim18/ovarian-cancer-vit-v2
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "0", "1", "2", "3", "4", "5", "6", "7" ]
Ahmed-ibn-Harun/BrainHermorrhage-vit-base
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BrainHermorrhage-vit-base This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Ahmed-ibn-Harun/BrainHermorrhage dataset. It achieves the following results on the evaluation set: - Loss: 0.3755 - Accuracy: 0.8261 - Sensitivity: 0.7221 - Specificity: 0.9289 - F1 Score: 0.8050 - Auc: 0.9162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Sensitivity | Specificity | F1 Score | Auc | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:-----------:|:-----------:|:--------:|:------:| | 0.3331 | 0.2188 | 100 | 0.4220 | 0.7891 | 0.6868 | 0.8902 | 0.7641 | 0.8947 | | 0.4904 | 0.4376 | 200 | 0.4409 | 0.8038 | 0.7750 | 0.8324 | 0.7971 | 0.8931 | | 0.4875 | 0.6565 | 300 | 0.5088 | 0.8162 | 0.9009 | 0.7324 | 0.8298 | 0.9057 | | 0.4366 | 0.8753 | 400 | 0.3726 | 0.8314 | 0.7671 | 0.8951 | 0.8190 | 0.9190 | | 0.4663 | 1.0941 | 500 | 0.4225 | 0.8157 | 0.8910 | 0.7412 | 0.8278 | 0.9201 | | 0.2961 | 1.3129 | 600 | 0.3632 | 0.8339 | 0.7929 | 0.8745 | 0.8260 | 0.9244 | | 0.3367 | 1.5317 | 700 | 0.4454 | 0.8117 | 0.8870 | 0.7373 | 0.8241 | 0.9083 | | 0.4084 | 1.7505 | 800 | 0.5469 | 0.7408 | 0.9732 | 0.5108 | 0.7888 | 0.9068 | | 0.3161 | 1.9694 | 900 | 0.3893 | 0.8329 | 0.8672 | 0.7990 | 0.8377 | 0.9282 | | 0.4762 | 2.1882 | 1000 | 0.4871 | 0.7363 | 0.9633 | 0.5118 | 0.7842 | 0.8974 | | 0.4006 | 2.4070 | 1100 | 0.4228 | 0.7900 | 0.9326 | 0.6490 | 0.8154 | 0.9257 | | 0.4541 | 2.6258 | 1200 | 0.3389 | 0.8487 | 0.7641 | 0.9324 | 0.8340 | 0.9349 | | 0.5397 | 2.8446 | 1300 | 0.4587 | 0.7565 | 0.9732 | 0.5422 | 0.7990 | 0.9244 | | 0.2115 | 3.0635 | 1400 | 0.3976 | 0.8344 | 0.8196 | 0.8490 | 0.8312 | 0.9223 | | 0.3588 | 3.2823 | 1500 | 0.3928 | 0.8211 | 0.8949 | 0.7480 | 0.8326 | 0.9208 | | 0.3377 | 3.5011 | 1600 | 0.3943 | 0.8157 | 0.7483 | 0.8824 | 0.8015 | 0.9128 | | 0.3385 | 3.7199 | 1700 | 0.3627 | 0.8428 | 0.8256 | 0.8598 | 0.8393 | 0.9247 | | 0.3793 | 3.9387 | 1800 | 0.4015 | 0.8063 | 0.7592 | 0.8529 | 0.7958 | 0.9007 | | 0.2774 | 4.1575 | 1900 | 0.4174 | 0.8186 | 0.8018 | 0.8353 | 0.8147 | 0.9090 | | 0.2964 | 4.3764 | 2000 | 0.4120 | 0.8245 | 0.8940 | 0.7559 | 0.8352 | 0.9243 | | 0.2042 | 4.5952 | 2100 | 0.3984 | 0.8383 | 0.8414 | 0.8353 | 0.8381 | 0.9214 | | 0.2336 | 4.8140 | 2200 | 0.4263 | 0.8241 | 0.8722 | 0.7765 | 0.8314 | 0.9242 | | 0.2292 | 5.0328 | 2300 | 0.4430 | 0.8339 | 0.8186 | 0.8490 | 0.8306 | 0.9241 | | 0.265 | 5.2516 | 2400 | 0.4647 | 0.8314 | 0.7681 | 0.8941 | 0.8192 | 0.9204 | | 0.2754 | 5.4705 | 2500 | 0.5476 | 0.7886 | 0.9128 | 0.6657 | 0.8111 | 0.9116 | | 0.1859 | 5.6893 | 2600 | 0.4330 | 0.8324 | 0.8414 | 0.8235 | 0.8332 | 0.9218 | | 0.1785 | 5.9081 | 2700 | 0.4180 | 0.8369 | 0.8375 | 0.8363 | 0.8362 | 0.9199 | | 
0.2057 | 6.1269 | 2800 | 0.4660 | 0.8319 | 0.8543 | 0.8098 | 0.8349 | 0.9158 | | 0.289 | 6.3457 | 2900 | 0.4399 | 0.8186 | 0.8196 | 0.8176 | 0.8180 | 0.9086 | | 0.1172 | 6.5646 | 3000 | 0.5597 | 0.8260 | 0.8474 | 0.8049 | 0.8289 | 0.9085 | | 0.1758 | 6.7834 | 3100 | 0.4902 | 0.8201 | 0.8335 | 0.8069 | 0.8217 | 0.9078 | | 0.2185 | 7.0022 | 3200 | 0.4738 | 0.8211 | 0.8295 | 0.8127 | 0.8218 | 0.9108 | | 0.2255 | 7.2210 | 3300 | 0.5072 | 0.8098 | 0.8771 | 0.7431 | 0.8210 | 0.9082 | | 0.213 | 7.4398 | 3400 | 0.4879 | 0.8379 | 0.7919 | 0.8833 | 0.8293 | 0.9126 | | 0.1528 | 7.6586 | 3500 | 0.6054 | 0.8137 | 0.8276 | 0.8 | 0.8154 | 0.9057 | | 0.1834 | 7.8775 | 3600 | 0.5653 | 0.8260 | 0.7532 | 0.8980 | 0.8115 | 0.9141 | | 0.0812 | 8.0963 | 3700 | 0.6640 | 0.8176 | 0.7284 | 0.9059 | 0.7989 | 0.9068 | | 0.1422 | 8.3151 | 3800 | 0.5916 | 0.8255 | 0.7721 | 0.8784 | 0.8149 | 0.9075 | | 0.1116 | 8.5339 | 3900 | 0.5746 | 0.8226 | 0.8583 | 0.7873 | 0.8279 | 0.9057 | | 0.1811 | 8.7527 | 4000 | 0.4679 | 0.8359 | 0.7869 | 0.8843 | 0.8267 | 0.9245 | | 0.1949 | 8.9716 | 4100 | 0.4645 | 0.8201 | 0.7641 | 0.8755 | 0.8086 | 0.9048 | | 0.036 | 9.1904 | 4200 | 0.6486 | 0.8349 | 0.7889 | 0.8804 | 0.8262 | 0.9116 | | 0.1117 | 9.4092 | 4300 | 0.5656 | 0.8236 | 0.7433 | 0.9029 | 0.8073 | 0.9125 | | 0.1101 | 9.6280 | 4400 | 0.5197 | 0.8285 | 0.8731 | 0.7843 | 0.8351 | 0.9226 | | 0.2064 | 9.8468 | 4500 | 0.6169 | 0.8270 | 0.7641 | 0.8892 | 0.8146 | 0.9132 | | 0.0647 | 10.0656 | 4600 | 0.5593 | 0.8255 | 0.7542 | 0.8961 | 0.8113 | 0.9122 | | 0.0566 | 10.2845 | 4700 | 0.5822 | 0.8245 | 0.7384 | 0.9098 | 0.8072 | 0.9182 | | 0.1324 | 10.5033 | 4800 | 0.5593 | 0.8319 | 0.7948 | 0.8686 | 0.8247 | 0.9146 | | 0.0824 | 10.7221 | 4900 | 0.6553 | 0.8117 | 0.7037 | 0.9186 | 0.7880 | 0.9130 | | 0.2134 | 10.9409 | 5000 | 0.5847 | 0.8334 | 0.8880 | 0.7794 | 0.8413 | 0.9271 | | 0.0835 | 11.1597 | 5100 | 0.6585 | 0.8314 | 0.8008 | 0.8618 | 0.8253 | 0.9130 | | 0.0936 | 11.3786 | 5200 | 0.8768 | 0.8191 | 0.7939 | 0.8441 | 0.8136 | 0.9062 | | 0.0325 | 11.5974 | 5300 | 0.6502 | 0.8423 | 0.8315 | 0.8529 | 0.8398 | 0.9209 | | 0.1054 | 11.8162 | 5400 | 0.5742 | 0.8354 | 0.8256 | 0.8451 | 0.833 | 0.9186 | | 0.0157 | 12.0350 | 5500 | 0.7790 | 0.8324 | 0.8256 | 0.8392 | 0.8305 | 0.9150 | | 0.0929 | 12.2538 | 5600 | 0.5779 | 0.8433 | 0.7978 | 0.8882 | 0.8351 | 0.9183 | | 0.0553 | 12.4726 | 5700 | 0.6642 | 0.8369 | 0.8157 | 0.8578 | 0.8326 | 0.9154 | | 0.1012 | 12.6915 | 5800 | 0.6882 | 0.8344 | 0.8494 | 0.8196 | 0.8361 | 0.9218 | | 0.1292 | 12.9103 | 5900 | 0.6949 | 0.8310 | 0.7800 | 0.8814 | 0.8211 | 0.9114 | | 0.103 | 13.1291 | 6000 | 0.7031 | 0.8398 | 0.8216 | 0.8578 | 0.8361 | 0.9162 | | 0.0652 | 13.3479 | 6100 | 0.7927 | 0.8379 | 0.8176 | 0.8578 | 0.8338 | 0.9178 | | 0.1194 | 13.5667 | 6200 | 0.7817 | 0.8211 | 0.7602 | 0.8814 | 0.8086 | 0.9125 | | 0.2684 | 13.7856 | 6300 | 0.7446 | 0.8221 | 0.7017 | 0.9412 | 0.7968 | 0.9164 | | 0.1194 | 14.0044 | 6400 | 0.7420 | 0.8334 | 0.8305 | 0.8363 | 0.8322 | 0.9152 | | 0.0548 | 14.2232 | 6500 | 0.8545 | 0.8295 | 0.8543 | 0.8049 | 0.8329 | 0.8918 | | 0.0681 | 14.4420 | 6600 | 0.8680 | 0.7915 | 0.6373 | 0.9441 | 0.7525 | 0.9094 | | 0.0627 | 14.6608 | 6700 | 0.6181 | 0.8487 | 0.8028 | 0.8941 | 0.8407 | 0.9206 | | 0.0565 | 14.8796 | 6800 | 0.7248 | 0.8241 | 0.8117 | 0.8363 | 0.8211 | 0.9108 | | 0.0879 | 15.0985 | 6900 | 0.6817 | 0.8295 | 0.8484 | 0.8108 | 0.8319 | 0.9208 | | 0.1235 | 15.3173 | 7000 | 0.7360 | 0.8344 | 0.8434 | 0.8255 | 0.8351 | 0.9143 | | 0.1256 | 15.5361 | 7100 | 0.6166 | 0.8300 | 0.7839 | 
0.8755 | 0.8210 | 0.9114 | | 0.0353 | 15.7549 | 7200 | 0.7718 | 0.8339 | 0.8771 | 0.7912 | 0.8401 | 0.9231 | | 0.0838 | 15.9737 | 7300 | 0.7930 | 0.8305 | 0.7919 | 0.8686 | 0.8229 | 0.8963 | | 0.0345 | 16.1926 | 7400 | 0.9201 | 0.8231 | 0.7304 | 0.9147 | 0.8041 | 0.8816 | | 0.0263 | 16.4114 | 7500 | 0.8642 | 0.8310 | 0.7433 | 0.9176 | 0.8139 | 0.9021 | | 0.0471 | 16.6302 | 7600 | 0.8542 | 0.8324 | 0.7631 | 0.9010 | 0.8191 | 0.9031 | | 0.0894 | 16.8490 | 7700 | 0.7756 | 0.8034 | 0.7116 | 0.8941 | 0.7826 | 0.8999 | | 0.0649 | 17.0678 | 7800 | 0.7112 | 0.8344 | 0.8484 | 0.8206 | 0.8359 | 0.9077 | | 0.0567 | 17.2867 | 7900 | 0.7433 | 0.8452 | 0.8394 | 0.8510 | 0.8436 | 0.9106 | | 0.0229 | 17.5055 | 8000 | 0.8775 | 0.8255 | 0.7542 | 0.8961 | 0.8113 | 0.9017 | | 0.009 | 17.7243 | 8100 | 0.8561 | 0.8349 | 0.7958 | 0.8735 | 0.8274 | 0.9062 | | 0.0838 | 17.9431 | 8200 | 0.9441 | 0.8255 | 0.8771 | 0.7745 | 0.8333 | 0.9211 | | 0.0958 | 18.1619 | 8300 | 0.9286 | 0.8255 | 0.7374 | 0.9127 | 0.8078 | 0.8961 | | 0.0422 | 18.3807 | 8400 | 0.8053 | 0.8369 | 0.8186 | 0.8549 | 0.8331 | 0.9114 | | 0.053 | 18.5996 | 8500 | 0.8440 | 0.8388 | 0.8067 | 0.8706 | 0.8327 | 0.8972 | | 0.0462 | 18.8184 | 8600 | 0.7419 | 0.8221 | 0.8137 | 0.8304 | 0.8198 | 0.9108 | | 0.0474 | 19.0372 | 8700 | 0.8702 | 0.8231 | 0.7849 | 0.8608 | 0.8152 | 0.8997 | | 0.0257 | 19.2560 | 8800 | 0.8966 | 0.8157 | 0.7473 | 0.8833 | 0.8013 | 0.9049 | | 0.0214 | 19.4748 | 8900 | 0.9787 | 0.8275 | 0.7839 | 0.8706 | 0.8188 | 0.8877 | | 0.1409 | 19.6937 | 9000 | 0.8695 | 0.8379 | 0.7899 | 0.8853 | 0.8289 | 0.9084 | | 0.0715 | 19.9125 | 9100 | 0.9500 | 0.8245 | 0.8028 | 0.8461 | 0.8198 | 0.8975 | | 0.0331 | 20.1313 | 9200 | 0.9371 | 0.8334 | 0.8375 | 0.8294 | 0.8333 | 0.9042 | | 0.0259 | 20.3501 | 9300 | 0.8587 | 0.8374 | 0.8127 | 0.8618 | 0.8325 | 0.9124 | | 0.0093 | 20.5689 | 9400 | 0.7861 | 0.8393 | 0.8196 | 0.8588 | 0.8354 | 0.9182 | | 0.0103 | 20.7877 | 9500 | 0.7921 | 0.8359 | 0.7800 | 0.8912 | 0.8254 | 0.9119 | | 0.1187 | 21.0066 | 9600 | 0.7618 | 0.8260 | 0.7512 | 0.9 | 0.8111 | 0.9166 | | 0.0024 | 21.2254 | 9700 | 0.9334 | 0.8319 | 0.8632 | 0.8010 | 0.8363 | 0.9123 | | 0.0993 | 21.4442 | 9800 | 0.8067 | 0.8310 | 0.8682 | 0.7941 | 0.8363 | 0.9177 | | 0.145 | 21.6630 | 9900 | 0.7816 | 0.8324 | 0.7770 | 0.8873 | 0.8218 | 0.9108 | | 0.054 | 21.8818 | 10000 | 0.8371 | 0.8413 | 0.8523 | 0.8304 | 0.8423 | 0.9190 | | 0.0446 | 22.1007 | 10100 | 0.8001 | 0.8354 | 0.7899 | 0.8804 | 0.8268 | 0.9084 | | 0.1218 | 22.3195 | 10200 | 0.8164 | 0.8364 | 0.7701 | 0.9020 | 0.8240 | 0.9078 | | 0.032 | 22.5383 | 10300 | 0.8353 | 0.8359 | 0.8256 | 0.8461 | 0.8334 | 0.9157 | | 0.0804 | 22.7571 | 10400 | 0.8301 | 0.8314 | 0.7859 | 0.8765 | 0.8226 | 0.9149 | | 0.0982 | 22.9759 | 10500 | 0.8366 | 0.8339 | 0.8305 | 0.8373 | 0.8326 | 0.9160 | | 0.0153 | 23.1947 | 10600 | 0.8395 | 0.8295 | 0.7948 | 0.8637 | 0.8226 | 0.9150 | | 0.0647 | 23.4136 | 10700 | 0.8342 | 0.8364 | 0.8662 | 0.8069 | 0.8404 | 0.9230 | | 0.0906 | 23.6324 | 10800 | 0.8414 | 0.8078 | 0.8900 | 0.7265 | 0.8216 | 0.9166 | | 0.0071 | 23.8512 | 10900 | 0.8552 | 0.8354 | 0.7889 | 0.8814 | 0.8266 | 0.9053 | | 0.0254 | 24.0700 | 11000 | 0.8612 | 0.8428 | 0.7830 | 0.9020 | 0.8320 | 0.9009 | | 0.0265 | 24.2888 | 11100 | 1.0379 | 0.8245 | 0.7195 | 0.9284 | 0.8031 | 0.8937 | | 0.048 | 24.5077 | 11200 | 1.0143 | 0.8285 | 0.7611 | 0.8951 | 0.8153 | 0.8942 | | 0.0005 | 24.7265 | 11300 | 0.9883 | 0.8310 | 0.8077 | 0.8539 | 0.8262 | 0.9024 | | 0.1702 | 24.9453 | 11400 | 1.0282 | 0.8339 | 0.7512 | 0.9157 | 0.8181 | 
0.9078 | | 0.0006 | 25.1641 | 11500 | 0.9612 | 0.8448 | 0.8712 | 0.8186 | 0.8480 | 0.9151 | | 0.0425 | 25.3829 | 11600 | 1.0040 | 0.8438 | 0.8612 | 0.8265 | 0.8457 | 0.9143 | | 0.0006 | 25.6018 | 11700 | 0.9840 | 0.8305 | 0.7790 | 0.8814 | 0.8205 | 0.9117 | | 0.0029 | 25.8206 | 11800 | 1.0850 | 0.8295 | 0.7294 | 0.9284 | 0.8097 | 0.9039 | | 0.0776 | 26.0394 | 11900 | 0.9524 | 0.8334 | 0.8335 | 0.8333 | 0.8327 | 0.9119 | | 0.0543 | 26.2582 | 12000 | 0.9541 | 0.8329 | 0.7572 | 0.9078 | 0.8184 | 0.9097 | | 0.0018 | 26.4770 | 12100 | 0.8137 | 0.8393 | 0.8712 | 0.8078 | 0.8436 | 0.9225 | | 0.0512 | 26.6958 | 12200 | 1.0741 | 0.8176 | 0.8712 | 0.7647 | 0.8261 | 0.8886 | | 0.0008 | 26.9147 | 12300 | 1.0294 | 0.8393 | 0.8484 | 0.8304 | 0.8400 | 0.8987 | | 0.043 | 27.1335 | 12400 | 0.9720 | 0.8334 | 0.8682 | 0.7990 | 0.8383 | 0.9135 | | 0.0013 | 27.3523 | 12500 | 0.9571 | 0.8374 | 0.7800 | 0.8941 | 0.8267 | 0.9120 | | 0.0163 | 27.5711 | 12600 | 0.9475 | 0.8305 | 0.8167 | 0.8441 | 0.8273 | 0.9102 | | 0.0034 | 27.7899 | 12700 | 0.8116 | 0.8403 | 0.8365 | 0.8441 | 0.8390 | 0.9183 | | 0.0014 | 28.0088 | 12800 | 0.9375 | 0.8305 | 0.8285 | 0.8324 | 0.8294 | 0.9139 | | 0.0008 | 28.2276 | 12900 | 1.0335 | 0.8314 | 0.7602 | 0.9020 | 0.8177 | 0.9072 | | 0.0497 | 28.4464 | 13000 | 1.0562 | 0.8285 | 0.7592 | 0.8971 | 0.8149 | 0.9039 | | 0.0319 | 28.6652 | 13100 | 0.7997 | 0.8364 | 0.8444 | 0.8284 | 0.8369 | 0.9167 | | 0.0932 | 28.8840 | 13200 | 0.8591 | 0.8167 | 0.8474 | 0.7863 | 0.8213 | 0.9142 | | 0.0007 | 29.1028 | 13300 | 0.8555 | 0.8379 | 0.8246 | 0.8510 | 0.8349 | 0.9196 | | 0.0025 | 29.3217 | 13400 | 0.9062 | 0.8359 | 0.8236 | 0.8480 | 0.8331 | 0.9147 | | 0.0117 | 29.5405 | 13500 | 0.8089 | 0.8339 | 0.8345 | 0.8333 | 0.8333 | 0.9181 | | 0.0505 | 29.7593 | 13600 | 0.9048 | 0.8329 | 0.8404 | 0.8255 | 0.8334 | 0.9167 | | 0.0484 | 29.9781 | 13700 | 1.0264 | 0.8265 | 0.8573 | 0.7961 | 0.8309 | 0.9133 | | 0.0004 | 30.1969 | 13800 | 1.0712 | 0.8349 | 0.8087 | 0.8608 | 0.8297 | 0.9053 | | 0.0157 | 30.4158 | 13900 | 1.0159 | 0.8236 | 0.8186 | 0.8284 | 0.8219 | 0.9062 | | 0.0004 | 30.6346 | 14000 | 1.0367 | 0.8305 | 0.8196 | 0.8412 | 0.8278 | 0.9022 | | 0.0003 | 30.8534 | 14100 | 0.9853 | 0.8314 | 0.8345 | 0.8284 | 0.8312 | 0.9123 | | 0.0039 | 31.0722 | 14200 | 0.9839 | 0.8413 | 0.7869 | 0.8951 | 0.8314 | 0.9124 | | 0.0505 | 31.2910 | 14300 | 1.0911 | 0.8339 | 0.8741 | 0.7941 | 0.8396 | 0.9033 | | 0.0007 | 31.5098 | 14400 | 0.8740 | 0.8374 | 0.8246 | 0.85 | 0.8345 | 0.9208 | | 0.0004 | 31.7287 | 14500 | 0.9801 | 0.8398 | 0.8295 | 0.85 | 0.8374 | 0.9208 | | 0.0592 | 31.9475 | 14600 | 1.0447 | 0.8305 | 0.8404 | 0.8206 | 0.8314 | 0.9165 | | 0.0003 | 32.1663 | 14700 | 1.1005 | 0.8245 | 0.8543 | 0.7951 | 0.8288 | 0.9129 | | 0.0002 | 32.3851 | 14800 | 1.1025 | 0.8319 | 0.8176 | 0.8461 | 0.8287 | 0.9108 | | 0.0428 | 32.6039 | 14900 | 1.0779 | 0.8310 | 0.8236 | 0.8382 | 0.8289 | 0.9096 | | 0.049 | 32.8228 | 15000 | 0.9729 | 0.8408 | 0.8295 | 0.8520 | 0.8383 | 0.9208 | | 0.0219 | 33.0416 | 15100 | 0.9851 | 0.8211 | 0.7661 | 0.8755 | 0.8098 | 0.9120 | | 0.001 | 33.2604 | 15200 | 0.9834 | 0.8349 | 0.8256 | 0.8441 | 0.8326 | 0.9166 | | 0.0009 | 33.4792 | 15300 | 1.0128 | 0.8270 | 0.7463 | 0.9069 | 0.8110 | 0.9130 | | 0.0146 | 33.6980 | 15400 | 0.9835 | 0.8300 | 0.7790 | 0.8804 | 0.8200 | 0.9097 | | 0.0184 | 33.9168 | 15500 | 0.8922 | 0.8290 | 0.8276 | 0.8304 | 0.8280 | 0.9183 | | 0.0528 | 34.1357 | 15600 | 0.9727 | 0.8398 | 0.7899 | 0.8892 | 0.8306 | 0.9107 | | 0.0018 | 34.3545 | 15700 | 1.0313 | 0.8413 | 0.8196 | 0.8627 | 
0.8370 | 0.9065 | | 0.0002 | 34.5733 | 15800 | 1.0882 | 0.8374 | 0.7978 | 0.8765 | 0.8299 | 0.9065 | | 0.0002 | 34.7921 | 15900 | 1.0866 | 0.8379 | 0.8236 | 0.8520 | 0.8348 | 0.9045 | | 0.0865 | 35.0109 | 16000 | 1.0595 | 0.8300 | 0.7602 | 0.8990 | 0.8164 | 0.8971 | | 0.0004 | 35.2298 | 16100 | 1.0287 | 0.8344 | 0.7988 | 0.8696 | 0.8275 | 0.9041 | | 0.0003 | 35.4486 | 16200 | 1.0652 | 0.8305 | 0.8176 | 0.8431 | 0.8275 | 0.8877 | | 0.0006 | 35.6674 | 16300 | 1.0627 | 0.8270 | 0.7988 | 0.8549 | 0.8212 | 0.8848 | | 0.0003 | 35.8862 | 16400 | 1.1173 | 0.8339 | 0.7780 | 0.8892 | 0.8233 | 0.8843 | | 0.0002 | 36.1050 | 16500 | 1.1114 | 0.8379 | 0.8048 | 0.8706 | 0.8315 | 0.8948 | | 0.0002 | 36.3239 | 16600 | 1.1165 | 0.8379 | 0.8137 | 0.8618 | 0.8331 | 0.8968 | | 0.0004 | 36.5427 | 16700 | 1.1693 | 0.8369 | 0.8147 | 0.8588 | 0.8324 | 0.8918 | | 0.0002 | 36.7615 | 16800 | 1.1609 | 0.8364 | 0.8325 | 0.8402 | 0.8350 | 0.8856 | | 0.0007 | 36.9803 | 16900 | 1.1993 | 0.8334 | 0.8107 | 0.8559 | 0.8288 | 0.8935 | | 0.0002 | 37.1991 | 17000 | 1.0206 | 0.8374 | 0.8652 | 0.8098 | 0.8410 | 0.9128 | | 0.0024 | 37.4179 | 17100 | 0.9984 | 0.8359 | 0.7899 | 0.8814 | 0.8272 | 0.9094 | | 0.0005 | 37.6368 | 17200 | 1.1162 | 0.8388 | 0.7671 | 0.9098 | 0.8256 | 0.8987 | | 0.0008 | 37.8556 | 17300 | 0.9434 | 0.8433 | 0.8414 | 0.8451 | 0.8423 | 0.9146 | | 0.0003 | 38.0744 | 17400 | 0.9508 | 0.8457 | 0.8523 | 0.8392 | 0.8460 | 0.9200 | | 0.0003 | 38.2932 | 17500 | 1.0299 | 0.8379 | 0.8345 | 0.8412 | 0.8366 | 0.9183 | | 0.0002 | 38.5120 | 17600 | 1.0518 | 0.8438 | 0.8325 | 0.8549 | 0.8413 | 0.9178 | | 0.0015 | 38.7309 | 17700 | 1.0205 | 0.8472 | 0.8464 | 0.8480 | 0.8464 | 0.9210 | | 0.0188 | 38.9497 | 17800 | 1.0644 | 0.8438 | 0.7968 | 0.8902 | 0.8353 | 0.9183 | | 0.0002 | 39.1685 | 17900 | 1.0497 | 0.8443 | 0.8266 | 0.8618 | 0.8407 | 0.9220 | | 0.0003 | 39.3873 | 18000 | 1.0802 | 0.8443 | 0.8236 | 0.8647 | 0.8402 | 0.9210 | | 0.0002 | 39.6061 | 18100 | 1.1465 | 0.8393 | 0.7958 | 0.8824 | 0.8313 | 0.9186 | | 0.0002 | 39.8249 | 18200 | 1.0551 | 0.8467 | 0.8147 | 0.8784 | 0.8409 | 0.9185 | | 0.0002 | 40.0438 | 18300 | 1.0791 | 0.8467 | 0.8147 | 0.8784 | 0.8409 | 0.9171 | | 0.0002 | 40.2626 | 18400 | 1.0902 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9175 | | 0.0002 | 40.4814 | 18500 | 1.1028 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9175 | | 0.0001 | 40.7002 | 18600 | 1.1156 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9165 | | 0.0001 | 40.9190 | 18700 | 1.1266 | 0.8487 | 0.8176 | 0.8794 | 0.8431 | 0.9168 | | 0.0002 | 41.1379 | 18800 | 1.0527 | 0.8472 | 0.8246 | 0.8696 | 0.8430 | 0.9186 | | 0.0002 | 41.3567 | 18900 | 1.0758 | 0.8477 | 0.8226 | 0.8725 | 0.8431 | 0.9190 | | 0.0001 | 41.5755 | 19000 | 1.0940 | 0.8492 | 0.8216 | 0.8765 | 0.8442 | 0.9199 | | 0.0268 | 41.7943 | 19100 | 0.9887 | 0.8374 | 0.8494 | 0.8255 | 0.8386 | 0.9196 | | 0.002 | 42.0131 | 19200 | 1.0890 | 0.8354 | 0.7730 | 0.8971 | 0.8237 | 0.9172 | | 0.0002 | 42.2319 | 19300 | 1.0668 | 0.8418 | 0.8147 | 0.8686 | 0.8366 | 0.9154 | | 0.0001 | 42.4508 | 19400 | 1.1239 | 0.8383 | 0.7899 | 0.8863 | 0.8293 | 0.9150 | | 0.0001 | 42.6696 | 19500 | 1.1372 | 0.8364 | 0.8285 | 0.8441 | 0.8343 | 0.9084 | | 0.0001 | 42.8884 | 19600 | 1.1153 | 0.8393 | 0.7869 | 0.8912 | 0.8297 | 0.9200 | | 0.0001 | 43.1072 | 19700 | 1.1482 | 0.8413 | 0.7790 | 0.9029 | 0.8300 | 0.9184 | | 0.0001 | 43.3260 | 19800 | 1.1535 | 0.8388 | 0.7859 | 0.8912 | 0.8291 | 0.9180 | | 0.0001 | 43.5449 | 19900 | 1.1138 | 0.8393 | 0.8236 | 0.8549 | 0.8360 | 0.9188 | | 0.0001 | 43.7637 | 20000 | 1.1321 | 0.8393 | 
0.8186 | 0.8598 | 0.8352 | 0.9176 | | 0.0001 | 43.9825 | 20100 | 1.1473 | 0.8403 | 0.8147 | 0.8657 | 0.8354 | 0.9163 | | 0.0001 | 44.2013 | 20200 | 1.1550 | 0.8413 | 0.8137 | 0.8686 | 0.8360 | 0.9154 | | 0.0001 | 44.4201 | 20300 | 1.1630 | 0.8428 | 0.8127 | 0.8725 | 0.8372 | 0.9143 | | 0.0001 | 44.6389 | 20400 | 1.1718 | 0.8428 | 0.8117 | 0.8735 | 0.8370 | 0.9133 | | 0.0001 | 44.8578 | 20500 | 1.1793 | 0.8428 | 0.8117 | 0.8735 | 0.8370 | 0.9129 | | 0.0001 | 45.0766 | 20600 | 1.1869 | 0.8418 | 0.8097 | 0.8735 | 0.8358 | 0.9121 | | 0.0001 | 45.2954 | 20700 | 1.1931 | 0.8413 | 0.8087 | 0.8735 | 0.8352 | 0.9115 | | 0.0001 | 45.5142 | 20800 | 1.1990 | 0.8418 | 0.8097 | 0.8735 | 0.8358 | 0.9103 | | 0.0001 | 45.7330 | 20900 | 1.2056 | 0.8418 | 0.8087 | 0.8745 | 0.8356 | 0.9097 | | 0.0001 | 45.9519 | 21000 | 1.2116 | 0.8423 | 0.8087 | 0.8755 | 0.8361 | 0.9092 | | 0.0001 | 46.1707 | 21100 | 1.2176 | 0.8428 | 0.8087 | 0.8765 | 0.8365 | 0.9090 | | 0.0001 | 46.3895 | 21200 | 1.2233 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9082 | | 0.0001 | 46.6083 | 21300 | 1.2281 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9079 | | 0.0001 | 46.8271 | 21400 | 1.2322 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9075 | | 0.0001 | 47.0460 | 21500 | 1.2365 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9075 | | 0.0001 | 47.2648 | 21600 | 1.2402 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9074 | | 0.0001 | 47.4836 | 21700 | 1.2447 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9060 | | 0.0001 | 47.7024 | 21800 | 1.2484 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9068 | | 0.0001 | 47.9212 | 21900 | 1.2516 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9064 | | 0.0 | 48.1400 | 22000 | 1.2546 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9068 | | 0.0 | 48.3589 | 22100 | 1.2572 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9062 | | 0.0 | 48.5777 | 22200 | 1.2603 | 0.8433 | 0.8087 | 0.8775 | 0.8369 | 0.9058 | | 0.0 | 48.7965 | 22300 | 1.2628 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9057 | | 0.0 | 49.0153 | 22400 | 1.2647 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9053 | | 0.0 | 49.2341 | 22500 | 1.2663 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9055 | | 0.0 | 49.4530 | 22600 | 1.2679 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9058 | | 0.0 | 49.6718 | 22700 | 1.2687 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9057 | | 0.0 | 49.8906 | 22800 | 1.2691 | 0.8438 | 0.8087 | 0.8784 | 0.8374 | 0.9061 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 2.20.0 - Tokenizers 0.21.0
[ "0_no_hermorrhage", "1_hermorrhage" ]
kaixkhazaki/vit_doclaynet_base
# Vision Transformer (ViT) for Document Classification (DocLayNet) This model is a fine-tuned Vision Transformer (ViT) for document layout classification based on the DocLayNet dataset. It was trained on document images from the DocLayNet dataset, whose categories (with their indices) are: ```python {'financial_reports': 0, 'government_tenders': 1, 'laws_and_regulations': 2, 'manuals': 3, 'patents': 4, 'scientific_articles': 5} ``` ## Model description This model is built upon the `google/vit-base-patch16-224-in21k` Vision Transformer architecture and fine-tuned specifically for document layout classification. The base ViT model uses a patch size of 16x16 pixels and was pre-trained on ImageNet-21k. The model has been optimized to recognize and classify different types of document layouts from the DocLayNet dataset. ## Training data The model was trained on the DocLayNet-base dataset, which is available on the Hugging Face Hub: [pierreguillou/DocLayNet-base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) DocLayNet is a comprehensive dataset for document layout analysis, containing various document types and their corresponding layout annotations. ## Training procedure Trained for 10 epochs on a single GPU in about 10 minutes. The training hyperparameters: ```python { 'batch_size': 64, 'num_epochs': 20, 'learning_rate': 1e-4, 'weight_decay': 0.05, 'warmup_ratio': 0.2, 'gradient_clip': 0.1, 'dropout_rate': 0.1, 'label_smoothing': 0.1, 'optimizer': 'AdamW' } ``` ## Evaluation results The model achieved the following performance metrics on the test set: - Test Loss: 0.8622 - Test Accuracy: 81.36% ## Usage ```python from transformers import pipeline # Load the model using the image-classification pipeline pipe = pipeline("image-classification", model="kaixkhazaki/vit_doclaynet_base") # Test it with an image result = pipe("path_to_image.jpg") print(result) ```
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5" ]
platzi/platzi-vit_model-johnleandrosalcedorojas
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit_model-johnleandrosalcedorojas This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0518 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.143 | 3.8462 | 500 | 0.0518 | 0.9850 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
MANMEET75/Swin-Transformer-Pro-Passport-Orientation-Classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Swin-Transformer-Pro-Passport-Orientation-Classifier This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0948 - Accuracy: 0.975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 5.0594 | 1.0 | 15 | 0.6024 | 0.745 | | 2.0855 | 2.0 | 30 | 0.3377 | 0.89 | | 1.7551 | 3.0 | 45 | 0.1731 | 0.94 | | 1.0192 | 4.0 | 60 | 0.1030 | 0.97 | | 0.9583 | 4.7018 | 70 | 0.0948 | 0.975 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Tokenizers 0.21.0
[ "correct-orientation", "rotated-left", "rotated-right", "upside-down" ]
kaixkhazaki/deit_doclaynet_base
# Data-efficient Image Transformer (DeiT) for Document Classification (DocLayNet) This model is a fine-tuned Data-efficient Image Transformer (DeiT) for document image classification based on the DocLayNet dataset. It was trained on document images from the DocLayNet dataset, whose categories (with their indices) are: ```python {'financial_reports': 0, 'government_tenders': 1, 'laws_and_regulations': 2, 'manuals': 3, 'patents': 4, 'scientific_articles': 5} ``` ## Model description DeiT (facebook/deit-base-distilled-patch16-224) fine-tuned on document classification. ## Training data DocLayNet-base: https://huggingface.co/datasets/pierreguillou/DocLayNet-base ## Training procedure The training hyperparameters: ```python { 'batch_size': 128, 'num_epochs': 20, 'learning_rate': 1e-4, 'weight_decay': 0.1, 'warmup_ratio': 0.1, 'gradient_clip': 0.1, 'dropout_rate': 0.1, 'label_smoothing': 0.1, 'optimizer': 'AdamW' } ``` ## Evaluation results Test Loss: 0.8134, Test Accuracy: 81.56% ## Usage ```python from transformers import pipeline # Load the model using the image-classification pipeline pipe = pipeline("image-classification", model="kaixkhazaki/deit_doclaynet_base") # Test it with an image result = pipe("path_to_image.jpg") print(result) ```
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5" ]
riandika/image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5394 - Accuracy: 0.4813 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 8.2876 | 1.0 | 10 | 2.0733 | 0.1375 | | 8.2701 | 2.0 | 20 | 2.0678 | 0.15 | | 8.2385 | 3.0 | 30 | 2.0564 | 0.1875 | | 8.1938 | 4.0 | 40 | 2.0484 | 0.2188 | | 8.1243 | 5.0 | 50 | 2.0263 | 0.2437 | | 8.043 | 6.0 | 60 | 2.0065 | 0.2812 | | 7.9327 | 7.0 | 70 | 1.9940 | 0.275 | | 7.7842 | 8.0 | 80 | 1.9588 | 0.3438 | | 7.6389 | 9.0 | 90 | 1.9299 | 0.3125 | | 7.4825 | 10.0 | 100 | 1.8830 | 0.4 | | 7.3337 | 11.0 | 110 | 1.8519 | 0.35 | | 7.1512 | 12.0 | 120 | 1.8171 | 0.4188 | | 7.0169 | 13.0 | 130 | 1.7624 | 0.4188 | | 6.8618 | 14.0 | 140 | 1.7341 | 0.45 | | 6.7244 | 15.0 | 150 | 1.6903 | 0.45 | | 6.5857 | 16.0 | 160 | 1.6709 | 0.4688 | | 6.4774 | 17.0 | 170 | 1.6624 | 0.425 | | 6.3616 | 18.0 | 180 | 1.6314 | 0.4437 | | 6.2635 | 19.0 | 190 | 1.6173 | 0.4437 | | 6.1831 | 20.0 | 200 | 1.5929 | 0.4938 | | 6.1224 | 21.0 | 210 | 1.5841 | 0.45 | | 6.0711 | 22.0 | 220 | 1.5622 | 0.4625 | | 5.9769 | 23.0 | 230 | 1.5617 | 0.5062 | | 5.9176 | 24.0 | 240 | 1.5491 | 0.4813 | | 5.8776 | 25.0 | 250 | 1.5262 | 0.5687 | | 5.8347 | 26.0 | 260 | 1.5287 | 0.4875 | | 5.781 | 27.0 | 270 | 1.5284 | 0.4625 | | 5.7451 | 28.0 | 280 | 1.5018 | 0.4875 | | 5.6745 | 29.0 | 290 | 1.5057 | 0.4875 | | 5.6253 | 30.0 | 300 | 1.5090 | 0.4938 | | 5.6111 | 31.0 | 310 | 1.5275 | 0.4688 | | 5.5742 | 32.0 | 320 | 1.5008 | 0.525 | | 5.5516 | 33.0 | 330 | 1.4795 | 0.5188 | | 5.4796 | 34.0 | 340 | 1.4834 | 0.5062 | | 5.4958 | 35.0 | 350 | 1.4916 | 0.5125 | | 5.4824 | 36.0 | 360 | 1.4925 | 0.4938 | | 5.4659 | 37.0 | 370 | 1.4847 | 0.5062 | | 5.4715 | 38.0 | 380 | 1.4670 | 0.5 | | 5.4735 | 39.0 | 390 | 1.4733 | 0.525 | | 5.4789 | 40.0 | 400 | 1.4881 | 0.4813 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Tokenizers 0.21.0
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
digo-prayudha/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0453 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0573 | 1.5385 | 100 | 0.0518 | 0.9925 | | 0.0122 | 3.0769 | 200 | 0.0453 | 0.9925 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
digo-prayudha/vit-emotion-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-emotion-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [FastJobs/Visual_Emotional_Analysis](https://huggingface.co/datasets/FastJobs/Visual_Emotional_Analysis) dataset. It achieves the following results on the evaluation set: - Loss: 1.3802 - Accuracy: 0.6125 ## Intended uses & limitations ### Intended Uses - Emotion classification from visual inputs (images). ### Limitations - May reflect biases from the training dataset. - Performance may degrade in domains outside the training data. - Not suitable for critical or sensitive decision-making tasks. ## Training and evaluation data This model was trained on the [FastJobs/Visual_Emotional_Analysis](https://huggingface.co/datasets/FastJobs/Visual_Emotional_Analysis) dataset. The dataset contains: - **800 images** annotated with **8 emotion labels**: - Anger - Contempt - Disgust - Fear - Happy - Neutral - Sad - Surprise ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8454 | 2.5 | 100 | 1.4373 | 0.4813 | | 0.2022 | 5.0 | 200 | 1.4067 | 0.55 | | 0.0474 | 7.5 | 300 | 1.3802 | 0.6125 | | 0.0368 | 10.0 | 400 | 1.4388 | 0.5938 | ## How to use this model ```python import torch from PIL import Image from transformers import AutoImageProcessor, ViTForImageClassification image = Image.open("image.jpg").convert("RGB") image_processor = AutoImageProcessor.from_pretrained("digo-prayudha/vit-emotion-classification") model = ViTForImageClassification.from_pretrained("digo-prayudha/vit-emotion-classification") inputs = image_processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise" ]
tinutmap/categorAI_img
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # categorAI_img This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7080 - Accuracy: 0.8378 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9091 | 5 | 1.8872 | 0.3784 | | 7.7979 | 1.9091 | 10 | 1.7777 | 0.6419 | | 7.7979 | 2.9091 | 15 | 1.6224 | 0.6622 | | 6.9519 | 3.9091 | 20 | 1.4667 | 0.6959 | | 6.9519 | 4.9091 | 25 | 1.3353 | 0.7365 | | 5.7562 | 5.9091 | 30 | 1.2522 | 0.7703 | | 5.7562 | 6.9091 | 35 | 1.1617 | 0.7838 | | 4.7446 | 7.9091 | 40 | 1.0967 | 0.7635 | | 4.7446 | 8.9091 | 45 | 1.0362 | 0.7568 | | 4.0655 | 9.9091 | 50 | 0.9349 | 0.8108 | | 4.0655 | 10.9091 | 55 | 0.9393 | 0.7905 | | 3.5041 | 11.9091 | 60 | 0.8859 | 0.7838 | | 3.5041 | 12.9091 | 65 | 0.9039 | 0.7770 | | 3.0788 | 13.9091 | 70 | 0.8123 | 0.8041 | | 3.0788 | 14.9091 | 75 | 0.7946 | 0.8243 | | 2.7461 | 15.9091 | 80 | 0.8003 | 0.8311 | | 2.7461 | 16.9091 | 85 | 0.8101 | 0.7703 | | 2.4988 | 17.9091 | 90 | 0.7111 | 0.8176 | | 2.4988 | 18.9091 | 95 | 0.7439 | 0.8243 | | 2.3122 | 19.9091 | 100 | 0.7542 | 0.7905 | | 2.3122 | 20.9091 | 105 | 0.7323 | 0.8311 | | 2.3408 | 21.9091 | 110 | 0.7175 | 0.8243 | | 2.3408 | 22.9091 | 115 | 0.7652 | 0.8041 | | 2.2846 | 23.9091 | 120 | 0.7211 | 0.8176 | | 2.2846 | 24.9091 | 125 | 0.7080 | 0.8378 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1.post306 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "accessories", "light", "panels", "seat", "storage", "table", "work surfaces" ]
MoGHenry/cat_dog_classifier_with_small_datasest
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cat_dog_classifier_with_small_datasest This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1369 - Accuracy: 0.95 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 70 | 0.5422 | 0.8571 | | No log | 2.0 | 140 | 0.5221 | 0.8786 | | No log | 3.0 | 210 | 0.4977 | 0.8571 | | No log | 4.0 | 280 | 0.4617 | 0.8786 | | No log | 5.0 | 350 | 0.3932 | 0.9143 | | No log | 6.0 | 420 | 0.3411 | 0.9143 | | No log | 7.0 | 490 | 0.2884 | 0.9143 | | 0.4971 | 8.0 | 560 | 0.2429 | 0.9286 | | 0.4971 | 9.0 | 630 | 0.2151 | 0.9429 | | 0.4971 | 10.0 | 700 | 0.1962 | 0.9286 | | 0.4971 | 11.0 | 770 | 0.1727 | 0.9357 | | 0.4971 | 12.0 | 840 | 0.1676 | 0.95 | | 0.4971 | 13.0 | 910 | 0.1764 | 0.9286 | | 0.4971 | 14.0 | 980 | 0.1565 | 0.9429 | | 0.2878 | 15.0 | 1050 | 0.1578 | 0.9429 | | 0.2878 | 16.0 | 1120 | 0.1577 | 0.9429 | | 0.2878 | 17.0 | 1190 | 0.1393 | 0.9429 | | 0.2878 | 18.0 | 1260 | 0.1472 | 0.9429 | | 0.2878 | 19.0 | 1330 | 0.1315 | 0.95 | | 0.2878 | 20.0 | 1400 | 0.1369 | 0.95 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "cats", "dogs" ]
ariG23498/vit_base_patch16_224.augreg2_in21k_ft_in1k.ft_food101
Supervised fine-tuned version of `timm/vit_base_patch16_224.augreg2_in21k_ft_in1k` on the `ethz/food101` dataset. Artifacts: 1. [Hugging Face Space on Food Classification using this model](https://huggingface.co/spaces/ariG23498/food-classification) 2. [Training code for the model](https://github.com/ariG23498/timm-wrapper-examples/blob/main/%2304_sft.ipynb)
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
DUCBUI/swin-tiny-patch4-window7-224-finetuned-azure-poc-img-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-azure-poc-img-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3382 - Accuracy: 0.8562 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 5.3507 | 1.0 | 14 | 0.8673 | 0.6557 | | 3.1899 | 2.0 | 28 | 0.4795 | 0.8184 | | 1.5944 | 3.0 | 42 | 0.3992 | 0.8436 | | 1.3952 | 4.0 | 56 | 0.3741 | 0.8499 | | 1.1452 | 5.0 | 70 | 0.3601 | 0.8575 | | 1.0478 | 6.0 | 84 | 0.3470 | 0.8562 | | 1.0049 | 7.0 | 98 | 0.3360 | 0.8562 | | 0.9361 | 8.0 | 112 | 0.3318 | 0.8525 | | 0.9477 | 9.0 | 126 | 0.3297 | 0.8512 | | 0.8568 | 10.0 | 140 | 0.3382 | 0.8562 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "concrete_anchors", "steel_connectors", "steel_fasteners", "wood_connectors", "wood_fasteners" ]
nguyenkhoa/mobilevitv2_Liveness_detection_v1.0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/uhi1thq6) # mobilevitv2_Liveness_detection_v1.0 This model is a fine-tuned version of [apple/mobilevitv2-1.0-imagenet1k-256](https://huggingface.co/apple/mobilevitv2-1.0-imagenet1k-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0046 - Accuracy: 0.9988 - F1: 0.9988 - Recall: 0.9988 - Precision: 0.9988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.1093 | 0.2048 | 128 | 0.0679 | 0.9929 | 0.9929 | 0.9929 | 0.9929 | | 0.0234 | 0.4096 | 256 | 0.0170 | 0.9962 | 0.9962 | 0.9962 | 0.9962 | | 0.0186 | 0.6144 | 384 | 0.0131 | 0.9973 | 0.9973 | 0.9973 | 0.9973 | | 0.0068 | 0.8192 | 512 | 0.0089 | 0.9980 | 0.9981 | 0.9980 | 0.9980 | | 0.0049 | 1.024 | 640 | 0.0067 | 0.9985 | 0.9985 | 0.9985 | 0.9985 | | 0.0113 | 1.2288 | 768 | 0.0064 | 0.9983 | 0.9984 | 0.9983 | 0.9983 | | 0.0061 | 1.4336 | 896 | 0.0060 | 0.9983 | 0.9983 | 0.9983 | 0.9984 | | 0.0025 | 1.6384 | 1024 | 0.0058 | 0.9983 | 0.9983 | 0.9983 | 0.9984 | | 0.0019 | 1.8432 | 1152 | 0.0053 | 0.9987 | 0.9986 | 0.9987 | 0.9987 | | 0.0056 | 2.048 | 1280 | 0.0051 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0015 | 2.2528 | 1408 | 0.0050 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0055 | 2.4576 | 1536 | 0.0049 | 0.9988 | 0.9987 | 0.9988 | 0.9988 | | 0.0023 | 2.6624 | 1664 | 0.0049 | 0.9989 | 0.9988 | 0.9989 | 0.9989 | | 0.0027 | 2.8672 | 1792 | 0.0046 | 0.9988 | 0.9988 | 0.9988 | 0.9988 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
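A minimal usage sketch (assuming the standard `transformers` image-classification pipeline; resizing to the model's 256×256 input is handled by the bundled image processor, and the image path is a placeholder):

```python
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="nguyenkhoa/mobilevitv2_Liveness_detection_v1.0",
)

# Scores for the two classes, "live" and "spoof".
print(detector("path/to/face_crop.jpg"))
```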
[ "live", "spoof" ]
nguyenkhoa/vit_Liveness_detection_v1.0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/wuepbg02) # vit_Liveness_detection_v1.0 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0047 - Accuracy: 0.9988 - F1: 0.9988 - Recall: 0.9988 - Precision: 0.9988 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.0254 | 0.2048 | 128 | 0.0148 | 0.9946 | 0.9946 | 0.9946 | 0.9946 | | 0.0256 | 0.4096 | 256 | 0.0180 | 0.9945 | 0.9944 | 0.9945 | 0.9945 | | 0.0113 | 0.6144 | 384 | 0.0133 | 0.9955 | 0.9955 | 0.9955 | 0.9955 | | 0.0116 | 0.8192 | 512 | 0.0070 | 0.9976 | 0.9976 | 0.9976 | 0.9976 | | 0.0084 | 1.024 | 640 | 0.0072 | 0.9976 | 0.9976 | 0.9976 | 0.9976 | | 0.0048 | 1.2288 | 768 | 0.0084 | 0.9976 | 0.9976 | 0.9976 | 0.9976 | | 0.0041 | 1.4336 | 896 | 0.0078 | 0.9975 | 0.9975 | 0.9975 | 0.9975 | | 0.0015 | 1.6384 | 1024 | 0.0049 | 0.9983 | 0.9983 | 0.9983 | 0.9983 | | 0.0047 | 1.8432 | 1152 | 0.0068 | 0.9977 | 0.9977 | 0.9977 | 0.9977 | | 0.0012 | 2.048 | 1280 | 0.0075 | 0.9975 | 0.9975 | 0.9975 | 0.9975 | | 0.0025 | 2.2528 | 1408 | 0.0095 | 0.9971 | 0.9971 | 0.9971 | 0.9971 | | 0.0013 | 2.4576 | 1536 | 0.0084 | 0.9976 | 0.9976 | 0.9976 | 0.9976 | | 0.0026 | 2.6624 | 1664 | 0.0056 | 0.9985 | 0.9985 | 0.9985 | 0.9985 | | 0.0001 | 2.8672 | 1792 | 0.0096 | 0.9976 | 0.9976 | 0.9976 | 0.9976 | | 0.0001 | 3.072 | 1920 | 0.0049 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0009 | 3.2768 | 2048 | 0.0085 | 0.9978 | 0.9978 | 0.9978 | 0.9978 | | 0.0003 | 3.4816 | 2176 | 0.0078 | 0.9980 | 0.9980 | 0.9980 | 0.9980 | | 0.0002 | 3.6864 | 2304 | 0.0057 | 0.9985 | 0.9985 | 0.9985 | 0.9985 | | 0.0 | 3.8912 | 2432 | 0.0043 | 0.9988 | 0.9988 | 0.9988 | 0.9988 | | 0.0 | 4.096 | 2560 | 0.0046 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0 | 4.3008 | 2688 | 0.0045 | 0.9988 | 0.9988 | 0.9988 | 0.9988 | | 0.0 | 4.5056 | 2816 | 0.0046 | 0.9988 | 0.9988 | 0.9988 | 0.9988 | | 0.0 | 4.7104 | 2944 | 0.0047 | 0.9988 | 0.9988 | 0.9988 | 0.9988 | | 0.0 | 4.9152 | 3072 | 0.0047 | 0.9988 | 0.9988 | 0.9988 | 0.9988 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "live", "spoof" ]
Aditi3004/resnet-50-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50-finetuned-eurosat This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6309 - Accuracy: 0.5625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 3 | 0.6749 | 0.5625 | | No log | 2.0 | 6 | 0.6746 | 0.5625 | | No log | 3.0 | 9 | 0.6696 | 0.5625 | | 2.1049 | 4.0 | 12 | 0.6614 | 0.5312 | | 2.1049 | 5.0 | 15 | 0.6552 | 0.5625 | | 2.1049 | 6.0 | 18 | 0.6494 | 0.5625 | | 2.0436 | 7.0 | 21 | 0.6427 | 0.5625 | | 2.0436 | 8.0 | 24 | 0.6399 | 0.5625 | | 2.0436 | 9.0 | 27 | 0.6325 | 0.5625 | | 1.7828 | 10.0 | 30 | 0.6314 | 0.5625 | | 1.7828 | 11.0 | 33 | 0.6309 | 0.5625 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
yithh/ViT-DeepfakeDetection
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "real", "fake" ]
ancerlop/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0609 - Accuracy: 0.9804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8097 | 1.0 | 190 | 0.0964 | 0.9681 | | 0.6562 | 2.0 | 380 | 0.0756 | 0.9759 | | 0.4472 | 3.0 | 570 | 0.0609 | 0.9804 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
KuRRe8/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2230 - Accuracy: 0.9378 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4 | 1.0 | 370 | 0.2674 | 0.9310 | | 0.2138 | 2.0 | 740 | 0.1898 | 0.9459 | | 0.1672 | 3.0 | 1110 | 0.1665 | 0.9526 | | 0.1332 | 4.0 | 1480 | 0.1575 | 0.9567 | | 0.1305 | 5.0 | 1850 | 0.1563 | 0.9553 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Umsakwa/vit-beans-classifier
# Umsakwa/Uddayvit-image-classification-model

This Vision Transformer (ViT) model has been fine-tuned for image classification tasks on the [Beans Dataset](https://huggingface.co/datasets/beans), which consists of images of beans categorized into three classes:

- **Angular Leaf Spot**
- **Bean Rust**
- **Healthy**

## Model Details

- **Architecture**: Vision Transformer (ViT)
- **Base Model**: `google/vit-base-patch16-224-in21k`
- **Framework**: PyTorch
- **Task**: Image Classification
- **Labels**: 3 (angular_leaf_spot, bean_rust, healthy)
- **Input Shape**: 224x224 RGB images
- **Training Dataset**: [Beans Dataset](https://huggingface.co/datasets/beans)
- **Fine-Tuning**: The model was fine-tuned on the Beans dataset to classify plant diseases in beans.

### Model Description

The model uses the ViT architecture, which processes image patches using a transformer-based approach. It has been trained to classify bean diseases with high accuracy. This makes it particularly useful for agricultural applications, such as early disease detection and plant health monitoring.

- **Developed by**: Udday (Umsakwa)
- **Language(s)**: N/A (Image-based)
- **License**: Apache-2.0
- **Finetuned from**: `google/vit-base-patch16-224-in21k`

### Model Sources

- **Repository**: [Umsakwa/Uddayvit-image-classification-model](https://huggingface.co/Umsakwa/Uddayvit-image-classification-model)

## Uses

### Direct Use

This model can be directly used for classifying bean leaf images into one of three categories: angular leaf spot, bean rust, or healthy.

### Downstream Use

The model may also be fine-tuned further for similar agricultural image classification tasks or integrated into larger plant health monitoring systems.

### Out-of-Scope Use

- The model is not suitable for non-agricultural image classification tasks without further fine-tuning.
- Not robust to extreme distortions, occlusions, or very low-resolution images.

## Bias, Risks, and Limitations

- **Bias**: The dataset may contain biases due to specific environmental or geographic conditions of the sampled plants.
- **Limitations**: Performance may degrade on datasets significantly different from the training dataset.

### Recommendations

- Users should ensure the model is evaluated on their specific dataset before deployment.
- Additional fine-tuning may be required for domain-specific applications.

## How to Get Started with the Model

To use this model for inference:

```python
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

# Load model and processor
model = ViTForImageClassification.from_pretrained("Umsakwa/Uddayvit-image-classification-model")
processor = ViTImageProcessor.from_pretrained("Umsakwa/Uddayvit-image-classification-model")

# Load and preprocess an image (the processor expects a PIL image or array, not a file path)
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Run inference and map the predicted index to its label
outputs = model(**inputs)
predicted_label = model.config.id2label[outputs.logits.argmax(-1).item()]
print(predicted_label)  # angular_leaf_spot, bean_rust, or healthy
```
[ "angular_leaf_spot", "bean_rust", "healthy" ]
vikas117/finetuned-ai-real-cifake
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-ai-real-cifake This model is a fine-tuned version of [ongtrandong2/ai_vs_real_image_detection](https://huggingface.co/ongtrandong2/ai_vs_real_image_detection) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0622 - Accuracy: 0.9756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.4286 | 0.3617 | 200 | 0.4256 | 0.8549 | | 0.3499 | 0.7233 | 400 | 0.1693 | 0.9353 | | 0.2664 | 1.0850 | 600 | 0.3467 | 0.8690 | | 0.2303 | 1.4467 | 800 | 0.4582 | 0.8398 | | 0.1553 | 1.8083 | 1000 | 0.2135 | 0.9186 | | 0.1751 | 2.1700 | 1200 | 0.0793 | 0.9715 | | 0.1383 | 2.5316 | 1400 | 0.0638 | 0.9753 | | 0.1375 | 2.8933 | 1600 | 0.0622 | 0.9756 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "ai", "real" ]
haywoodsloan/ai-image-detector-dev-deploy
# Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.09541802108287811 f1: 0.9853826686447335 precision: 0.9808886765408504 recall: 0.9899180291938807 auc: 0.9957081876919603 accuracy: 0.9794339738473816
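As a quick consistency check, the reported F1 is the harmonic mean of the reported precision and recall (the defining property of binary F1):

```python
precision = 0.9808886765408504
recall = 0.9899180291938807

# F1 = 2PR / (P + R) -> 0.98538..., matching the reported f1 above
f1 = 2 * precision * recall / (precision + recall)
print(f1)
```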
[ "artificial", "real" ]
kedimestan/swin-base-patch4-window7-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-base-patch4-window7-224

This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on a retinoblastoma dataset. It achieves the following results on the test set:
- Loss: 0.3809
- Accuracy: 1.0

## Model description

A Swin-based model, trained for 10 epochs with a learning rate of 5e-5 and the Adam optimizer. The dataset was split 80%/20% into training and test sets, respectively.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log        | 0.5714 | 1    | 0.8119          | 0.3913   |
| No log        | 1.7143 | 3    | 0.6544          | 0.5217   |
| No log        | 2.8571 | 5    | 0.3809          | 1.0      |
| No log        | 4.0    | 7    | 0.2352          | 1.0      |
| No log        | 4.5714 | 8    | 0.1875          | 1.0      |
| 0.4988        | 5.7143 | 10   | 0.1481          | 1.0      |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
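A minimal inference sketch using the lower-level API (a sketch; it assumes an image processor was saved with the checkpoint, and the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "kedimestan/swin-base-patch4-window7-224"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("path/to/fundus_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Maps the argmax index to "healthy" or "rr" via the saved label mapping.
print(model.config.id2label[logits.argmax(-1).item()])
```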
[ "healthy", "rr" ]
nguyenkhoa/dinov2_Liveness_detection_v2.2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/b39fcrkm)

# dinov2_Liveness_detection_v2.2

This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1307
- Accuracy: 0.9781
- F1: 0.9781
- Recall: 0.9781
- Precision: 0.9783

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Recall | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.3279        | 0.2048 | 128  | 0.2858          | 0.8749   | 0.8772 | 0.8749 | 0.8773    |
| 0.2389        | 0.4096 | 256  | 0.2696          | 0.8881   | 0.8819 | 0.8881 | 0.9196    |
| 0.186         | 0.6144 | 384  | 0.1614          | 0.9383   | 0.9386 | 0.9383 | 0.9381    |
| 0.2048        | 0.8192 | 512  | 0.1568          | 0.9404   | 0.9411 | 0.9404 | 0.9415    |
| 0.1662        | 1.024  | 640  | 0.1474          | 0.9426   | 0.9433 | 0.9426 | 0.9436    |
| 0.1257        | 1.2288 | 768  | 0.1186          | 0.9578   | 0.9573 | 0.9578 | 0.9604    |
| 0.1215        | 1.4336 | 896  | 0.1202          | 0.9556   | 0.9560 | 0.9556 | 0.9561    |
| 0.0917        | 1.6384 | 1024 | 0.1045          | 0.9611   | 0.9611 | 0.9611 | 0.9611    |
| 0.1256        | 1.8432 | 1152 | 0.0971          | 0.9633   | 0.9630 | 0.9633 | 0.9645    |
| 0.0676        | 2.048  | 1280 | 0.1524          | 0.9487   | 0.9477 | 0.9487 | 0.9545    |
| 0.0458        | 2.2528 | 1408 | 0.1149          | 0.9641   | 0.9643 | 0.9641 | 0.9642    |
| 0.0462        | 2.4576 | 1536 | 0.1233          | 0.9630   | 0.9632 | 0.9630 | 0.9631    |
| 0.0453        | 2.6624 | 1664 | 0.1030          | 0.9671   | 0.9670 | 0.9671 | 0.9679    |
| 0.0631        | 2.8672 | 1792 | 0.0896          | 0.967    | 0.9672 | 0.967  | 0.9671    |
| 0.0358        | 3.072  | 1920 | 0.0966          | 0.9735   | 0.9734 | 0.9735 | 0.9738    |
| 0.0229        | 3.2768 | 2048 | 0.1250          | 0.9675   | 0.9676 | 0.9675 | 0.9676    |
| 0.0272        | 3.4816 | 2176 | 0.1148          | 0.9691   | 0.9693 | 0.9691 | 0.9692    |
| 0.0253        | 3.6864 | 2304 | 0.1130          | 0.9757   | 0.9755 | 0.9757 | 0.9761    |
| 0.0249        | 3.8912 | 2432 | 0.1091          | 0.9716   | 0.9717 | 0.9716 | 0.9715    |
| 0.0049        | 4.096  | 2560 | 0.1420          | 0.9756   | 0.9756 | 0.9756 | 0.9755    |
| 0.0159        | 4.3008 | 2688 | 0.1423          | 0.9775   | 0.9774 | 0.9775 | 0.9777    |
| 0.0026        | 4.5056 | 2816 | 0.1454          | 0.9774   | 0.9773 | 0.9774 | 0.9776    |
| 0.0059        | 4.7104 | 2944 | 0.1445          | 0.9785   | 0.9785 | 0.9785 | 0.9785    |
| 0.0011        | 4.9152 | 3072 | 0.1307          | 0.9781   | 0.9781 | 0.9781 | 0.9783    |

### Evaluation results

- Accuracy: 0.81
- F1: 0.86
- Recall: 0.85
- Precision: 0.65
- APCER: 0.2001
- BPCER: 0.1458
- ACER: 0.1729

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
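For reference, ACER in the evaluation above follows the usual presentation-attack-detection convention (ISO/IEC 30107-3): it is the mean of APCER and BPCER, which the reported numbers satisfy:

```python
apcer, bpcer = 0.2001, 0.1458

# ACER = (APCER + BPCER) / 2 -> 0.17295, consistent with the reported 0.1729
acer = (apcer + bpcer) / 2
print(acer)
```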
[ "live", "spoof" ]
nadahh/APTOS2019DetectionViaLLMMConvNextV2
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "0", "1", "2", "3", "4" ]
nguyenkhoa/dinov2_Liveness_detection_v2.2.1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/zdrli5b6) # dinov2_Liveness_detection_v2.2.1 This model is a fine-tuned version of [nguyenkhoa/dinov2_Liveness_detection_v2.1.4](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.1.4) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0301 - Accuracy: 0.9910 - F1: 0.9910 - Recall: 0.9910 - Precision: 0.9910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 768 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.4052 | 0.3232 | 64 | 0.0775 | 0.9712 | 0.9713 | 0.9712 | 0.9712 | | 0.0784 | 0.6465 | 128 | 0.0545 | 0.9803 | 0.9803 | 0.9803 | 0.9804 | | 0.0639 | 0.9697 | 192 | 0.0615 | 0.9771 | 0.9772 | 0.9771 | 0.9772 | | 0.0479 | 1.2929 | 256 | 0.0572 | 0.9795 | 0.9794 | 0.9795 | 0.9800 | | 0.0439 | 1.6162 | 320 | 0.0422 | 0.9844 | 0.9844 | 0.9844 | 0.9844 | | 0.0392 | 1.9394 | 384 | 0.0564 | 0.9803 | 0.9801 | 0.9803 | 0.9810 | | 0.0374 | 2.2626 | 448 | 0.0464 | 0.9837 | 0.9837 | 0.9837 | 0.9837 | | 0.0273 | 2.5859 | 512 | 0.0378 | 0.9861 | 0.9861 | 0.9861 | 0.9861 | | 0.0271 | 2.9091 | 576 | 0.0336 | 0.9883 | 0.9883 | 0.9883 | 0.9884 | | 0.021 | 3.2323 | 640 | 0.0418 | 0.9859 | 0.9859 | 0.9859 | 0.9859 | | 0.019 | 3.5556 | 704 | 0.0454 | 0.9848 | 0.9849 | 0.9848 | 0.9849 | | 0.0177 | 3.8788 | 768 | 0.0359 | 0.9883 | 0.9883 | 0.9883 | 0.9883 | | 0.0134 | 4.2020 | 832 | 0.0410 | 0.9874 | 0.9874 | 0.9874 | 0.9877 | | 0.0102 | 4.5253 | 896 | 0.0314 | 0.9910 | 0.9910 | 0.9910 | 0.9910 | | 0.0103 | 4.8485 | 960 | 0.0301 | 0.9910 | 0.9910 | 0.9910 | 0.9910 | ### Evaluate results - Accuracy: 0.8626 - F1: 0.8909 - Recall: 0.9924 - Precision: 0.6903 - APCER: 0.1940 - BPCER: 0.0076 - ACER: 0.1008 ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "live", "spoof" ]
Melo1512/vit-msn-small-ultralytics_yolo_cropped_lateral_flow_ivalidation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-ultralytics_yolo_cropped_lateral_flow_ivalidation This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1774 - Accuracy: 0.9489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9231 | 3 | 0.8678 | 0.4277 | | No log | 1.8462 | 6 | 0.6171 | 0.7 | | No log | 2.7692 | 9 | 0.4174 | 0.8723 | | 0.6518 | 4.0 | 13 | 0.5366 | 0.7106 | | 0.6518 | 4.9231 | 16 | 0.3255 | 0.8851 | | 0.6518 | 5.8462 | 19 | 0.6159 | 0.6809 | | 0.4119 | 6.7692 | 22 | 0.3017 | 0.9191 | | 0.4119 | 8.0 | 26 | 0.5130 | 0.7128 | | 0.4119 | 8.9231 | 29 | 0.2183 | 0.9255 | | 0.3387 | 9.8462 | 32 | 0.2523 | 0.9149 | | 0.3387 | 10.7692 | 35 | 0.1774 | 0.9489 | | 0.3387 | 12.0 | 39 | 0.2376 | 0.9255 | | 0.3055 | 12.9231 | 42 | 0.3930 | 0.8383 | | 0.3055 | 13.8462 | 45 | 0.2308 | 0.9234 | | 0.3055 | 14.7692 | 48 | 0.1587 | 0.9468 | | 0.2909 | 16.0 | 52 | 0.6113 | 0.6830 | | 0.2909 | 16.9231 | 55 | 0.2910 | 0.8915 | | 0.2909 | 17.8462 | 58 | 0.3612 | 0.8447 | | 0.2227 | 18.7692 | 61 | 0.3117 | 0.8787 | | 0.2227 | 20.0 | 65 | 0.2684 | 0.9170 | | 0.2227 | 20.9231 | 68 | 0.3767 | 0.8404 | | 0.2129 | 21.8462 | 71 | 0.2527 | 0.9234 | | 0.2129 | 22.7692 | 74 | 0.3270 | 0.8745 | | 0.2129 | 24.0 | 78 | 0.4314 | 0.8064 | | 0.213 | 24.9231 | 81 | 0.2874 | 0.9 | | 0.213 | 25.8462 | 84 | 0.4797 | 0.7894 | | 0.213 | 26.7692 | 87 | 0.4896 | 0.7851 | | 0.1758 | 28.0 | 91 | 0.3144 | 0.8723 | | 0.1758 | 28.9231 | 94 | 0.5881 | 0.7213 | | 0.1758 | 29.8462 | 97 | 0.5599 | 0.7298 | | 0.1766 | 30.7692 | 100 | 0.3413 | 0.8702 | | 0.1766 | 32.0 | 104 | 0.3453 | 0.8638 | | 0.1766 | 32.9231 | 107 | 0.3634 | 0.8596 | | 0.1583 | 33.8462 | 110 | 0.3799 | 0.8468 | | 0.1583 | 34.7692 | 113 | 0.3840 | 0.8468 | | 0.1583 | 36.0 | 117 | 0.3890 | 0.8447 | | 0.1969 | 36.9231 | 120 | 0.3950 | 0.8426 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "invalid", "valid" ]
Melo1512/vit-msn-small-corect_dataset_lateral_flow_ivalidation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-corect_dataset_lateral_flow_ivalidation This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2930 - Accuracy: 0.9048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9231 | 3 | 0.6350 | 0.6337 | | No log | 1.8462 | 6 | 0.5047 | 0.8022 | | No log | 2.7692 | 9 | 0.3701 | 0.8791 | | 0.5485 | 4.0 | 13 | 0.5379 | 0.7436 | | 0.5485 | 4.9231 | 16 | 0.2748 | 0.8938 | | 0.5485 | 5.8462 | 19 | 0.3004 | 0.8974 | | 0.3335 | 6.7692 | 22 | 0.3492 | 0.8681 | | 0.3335 | 8.0 | 26 | 0.2497 | 0.8974 | | 0.3335 | 8.9231 | 29 | 0.4304 | 0.8315 | | 0.3087 | 9.8462 | 32 | 0.3479 | 0.8791 | | 0.3087 | 10.7692 | 35 | 0.3796 | 0.8645 | | 0.3087 | 12.0 | 39 | 0.4152 | 0.8352 | | 0.2614 | 12.9231 | 42 | 0.3199 | 0.9011 | | 0.2614 | 13.8462 | 45 | 0.3434 | 0.8718 | | 0.2614 | 14.7692 | 48 | 0.4001 | 0.8462 | | 0.2471 | 16.0 | 52 | 0.3220 | 0.8901 | | 0.2471 | 16.9231 | 55 | 0.3540 | 0.8718 | | 0.2471 | 17.8462 | 58 | 0.4019 | 0.8535 | | 0.2817 | 18.7692 | 61 | 0.3152 | 0.8974 | | 0.2817 | 20.0 | 65 | 0.3978 | 0.8571 | | 0.2817 | 20.9231 | 68 | 0.4289 | 0.8388 | | 0.2353 | 21.8462 | 71 | 0.3146 | 0.8974 | | 0.2353 | 22.7692 | 74 | 0.3206 | 0.8864 | | 0.2353 | 24.0 | 78 | 0.3715 | 0.8828 | | 0.2339 | 24.9231 | 81 | 0.3446 | 0.8938 | | 0.2339 | 25.8462 | 84 | 0.2930 | 0.9048 | | 0.2339 | 26.7692 | 87 | 0.4349 | 0.8205 | | 0.2301 | 28.0 | 91 | 0.3630 | 0.8681 | | 0.2301 | 28.9231 | 94 | 0.3669 | 0.8645 | | 0.2301 | 29.8462 | 97 | 0.5037 | 0.7912 | | 0.2115 | 30.7692 | 100 | 0.3449 | 0.8828 | | 0.2115 | 32.0 | 104 | 0.3280 | 0.9011 | | 0.2115 | 32.9231 | 107 | 0.4031 | 0.8425 | | 0.2033 | 33.8462 | 110 | 0.3612 | 0.8535 | | 0.2033 | 34.7692 | 113 | 0.3163 | 0.8901 | | 0.2033 | 36.0 | 117 | 0.3234 | 0.8864 | | 0.1807 | 36.9231 | 120 | 0.3307 | 0.8791 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "invalid", "valid" ]
hiro123321/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6665 - Accuracy: 0.891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7354 | 1.0 | 63 | 2.5723 | 0.802 | | 1.8922 | 2.0 | 126 | 1.8342 | 0.882 | | 1.6329 | 2.96 | 186 | 1.6665 | 0.891 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Melo1512/vit-msn-small-corect_cleaned_dataset_lateral_flow_ivalidation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-corect_cleaned_dataset_lateral_flow_ivalidation This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2318 - Accuracy: 0.9231 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9231 | 3 | 0.6468 | 0.5604 | | No log | 1.8462 | 6 | 0.4227 | 0.8462 | | No log | 2.7692 | 9 | 0.3390 | 0.8608 | | 0.5336 | 4.0 | 13 | 0.3115 | 0.8864 | | 0.5336 | 4.9231 | 16 | 0.2986 | 0.8938 | | 0.5336 | 5.8462 | 19 | 0.2318 | 0.9231 | | 0.3565 | 6.7692 | 22 | 0.2767 | 0.9121 | | 0.3565 | 8.0 | 26 | 0.2490 | 0.9084 | | 0.3565 | 8.9231 | 29 | 0.3151 | 0.8938 | | 0.3166 | 9.8462 | 32 | 0.2404 | 0.9231 | | 0.3166 | 10.7692 | 35 | 0.2520 | 0.9158 | | 0.3166 | 12.0 | 39 | 0.2515 | 0.9048 | | 0.2657 | 12.9231 | 42 | 0.2344 | 0.9121 | | 0.2657 | 13.8462 | 45 | 0.2187 | 0.9194 | | 0.2657 | 14.7692 | 48 | 0.2289 | 0.9194 | | 0.259 | 16.0 | 52 | 0.2251 | 0.9194 | | 0.259 | 16.9231 | 55 | 0.2238 | 0.9231 | | 0.259 | 17.8462 | 58 | 0.2312 | 0.9121 | | 0.2514 | 18.4615 | 60 | 0.2305 | 0.9084 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "invalid", "valid" ]
Melo1512/vit-msn-small-corect_deepcleaned_dataset_lateral_flow_ivalidation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-msn-small-corect_deepcleaned_dataset_lateral_flow_ivalidation This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2229 - Accuracy: 0.9194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9231 | 3 | 0.6175 | 0.7216 | | No log | 1.8462 | 6 | 0.4141 | 0.8352 | | No log | 2.7692 | 9 | 0.7408 | 0.5788 | | 0.5817 | 4.0 | 13 | 0.2757 | 0.9158 | | 0.5817 | 4.9231 | 16 | 0.2847 | 0.8791 | | 0.5817 | 5.8462 | 19 | 0.2456 | 0.9011 | | 0.3724 | 6.7692 | 22 | 0.2547 | 0.9121 | | 0.3724 | 8.0 | 26 | 0.3007 | 0.8828 | | 0.3724 | 8.9231 | 29 | 0.3043 | 0.9011 | | 0.3155 | 9.8462 | 32 | 0.2603 | 0.9048 | | 0.3155 | 10.7692 | 35 | 0.2481 | 0.9158 | | 0.3155 | 12.0 | 39 | 0.2229 | 0.9194 | | 0.2844 | 12.9231 | 42 | 0.3036 | 0.8791 | | 0.2844 | 13.8462 | 45 | 0.2579 | 0.9084 | | 0.2844 | 14.7692 | 48 | 0.2434 | 0.9158 | | 0.2517 | 16.0 | 52 | 0.2718 | 0.9048 | | 0.2517 | 16.9231 | 55 | 0.2513 | 0.9121 | | 0.2517 | 17.8462 | 58 | 0.2503 | 0.9121 | | 0.2468 | 18.4615 | 60 | 0.2491 | 0.9121 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
[ "invalid", "valid" ]
nguyenkhoa/dinov2_Liveness_detection_v2.2.2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/qnoxkjvk) # dinov2_Liveness_detection_v2.2.2 This model is a fine-tuned version of [nguyenkhoa/dinov2_Liveness_detection_v2.2.1](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.2.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0219 - Accuracy: 0.9934 - F1: 0.9934 - Recall: 0.9934 - Precision: 0.9934 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 768 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.069 | 0.3062 | 64 | 0.0579 | 0.9788 | 0.9790 | 0.9788 | 0.9790 | | 0.0482 | 0.6124 | 128 | 0.0518 | 0.9814 | 0.9815 | 0.9814 | 0.9815 | | 0.0404 | 0.9187 | 192 | 0.0325 | 0.9882 | 0.9882 | 0.9882 | 0.9882 | | 0.0325 | 1.2249 | 256 | 0.0407 | 0.9855 | 0.9855 | 0.9855 | 0.9855 | | 0.0322 | 1.5311 | 320 | 0.0281 | 0.9901 | 0.9901 | 0.9901 | 0.9902 | | 0.0283 | 1.8373 | 384 | 0.0347 | 0.9884 | 0.9884 | 0.9884 | 0.9885 | | 0.0256 | 2.1435 | 448 | 0.0271 | 0.9907 | 0.9907 | 0.9907 | 0.9907 | | 0.0207 | 2.4498 | 512 | 0.0359 | 0.9874 | 0.9874 | 0.9874 | 0.9874 | | 0.0192 | 2.7560 | 576 | 0.0253 | 0.9917 | 0.9916 | 0.9917 | 0.9917 | | 0.017 | 3.0622 | 640 | 0.0272 | 0.9908 | 0.9908 | 0.9908 | 0.9908 | | 0.0134 | 3.3684 | 704 | 0.0255 | 0.9916 | 0.9916 | 0.9916 | 0.9915 | | 0.0132 | 3.6746 | 768 | 0.0232 | 0.9925 | 0.9925 | 0.9925 | 0.9925 | | 0.0114 | 3.9809 | 832 | 0.0260 | 0.9919 | 0.9919 | 0.9919 | 0.9918 | | 0.0074 | 4.2871 | 896 | 0.0242 | 0.9927 | 0.9927 | 0.9927 | 0.9927 | | 0.0079 | 4.5933 | 960 | 0.0219 | 0.9931 | 0.9931 | 0.9931 | 0.9932 | | 0.0072 | 4.8995 | 1024 | 0.0219 | 0.9934 | 0.9934 | 0.9934 | 0.9934 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0 ### Evaluate results - APCER: 0.1804 - BPCER: 0.0123 - ACER: 0.0963 - Accuracy: 0.8706 - F1: 0.8982 - Recall: 0.9877 - Precision: 0.7046
[ "live", "spoof" ]
nguyenkhoa/dinov2_Liveness_detection_v2.1.1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/w4ug6u93)

# dinov2_Liveness_detection_v2.1.1

This model is a fine-tuned version of [nguyenkhoa/dinov2_Liveness_detection_v2.1](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0419
- Accuracy: 0.9920
- F1: 0.9921
- Recall: 0.9920
- Precision: 0.9920

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 768
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Recall | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0487        | 1.2190 | 128  | 0.0476          | 0.9841   | 0.9842 | 0.9841 | 0.9841    |
| 0.0216        | 2.4381 | 256  | 0.0318          | 0.9902   | 0.9902 | 0.9902 | 0.9903    |
| 0.0081        | 3.6571 | 384  | 0.0426          | 0.9896   | 0.9896 | 0.9896 | 0.9896    |
| 0.0012        | 4.8762 | 512  | 0.0419          | 0.9920   | 0.9921 | 0.9920 | 0.9920    |

### Evaluation results

- Accuracy: 0.8675
- F1: 0.8973
- Recall: 0.9506
- Precision: 0.7105
- APCER: 0.1687
- BPCER: 0.0494
- ACER: 0.1091

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "live", "spoof" ]
FeruzaBoynazarovaas/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4531 - Accuracy: 0.8316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 13.0226 | 0.96 | 18 | 3.0353 | 0.5657 | | 11.4628 | 1.96 | 36 | 2.5629 | 0.6397 | | 9.6079 | 2.96 | 54 | 2.2381 | 0.6869 | | 8.2561 | 3.96 | 72 | 1.9910 | 0.7407 | | 7.4298 | 4.96 | 90 | 1.7888 | 0.7744 | | 7.0857 | 5.96 | 108 | 1.6669 | 0.7879 | | 6.3554 | 6.96 | 126 | 1.5553 | 0.8283 | | 5.8062 | 7.96 | 144 | 1.5177 | 0.8283 | | 5.6472 | 8.96 | 162 | 1.4658 | 0.8215 | | 5.5685 | 9.96 | 180 | 1.4531 | 0.8316 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "chang yutgich", "dazmol", "choynak", "dimaxod", "drel", "duxovka", "koffe mashina", "misarovka", "mixer", "naushnik", "pishirish uchun qolib", "planshet", "havo sovutgich", "radio", "smartsoat", "soch fen", "tefal tarozi", "usb", "xavo tozalagich", "zaryadka", "kir yuvish mashinasi", "muzlatgich", "obogrivatel", "smartphone", "televizor", "tikuv mashinasi", "blender" ]
RE-N-Y/aesthetic-shadow-v2
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
[ "hq", "lq" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_green
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_green

This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2917
- Accuracy: 0.9602

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 6 | 0.5034 | 0.9239 |
| 0.704 | 2.0 | 13 | 0.4524 | 0.9410 |
| 0.704 | 2.9231 | 19 | 0.2633 | 0.9594 |
| 0.505 | 4.0 | 26 | 0.4748 | 0.8092 |
| 0.4456 | 4.9231 | 32 | 0.2917 | 0.9602 |
| 0.4456 | 6.0 | 39 | 0.2621 | 0.9222 |
| 0.3908 | 6.9231 | 45 | 0.4519 | 0.8191 |
| 0.3628 | 8.0 | 52 | 0.4093 | 0.8623 |
| 0.3628 | 8.9231 | 58 | 0.2705 | 0.9354 |
| 0.372 | 10.0 | 65 | 0.4137 | 0.8546 |
| 0.36 | 10.9231 | 71 | 0.3493 | 0.8815 |
| 0.36 | 12.0 | 78 | 0.2190 | 0.9457 |
| 0.36 | 12.9231 | 84 | 0.3190 | 0.9033 |
| 0.3363 | 14.0 | 91 | 0.3380 | 0.8948 |
| 0.3363 | 14.9231 | 97 | 0.3342 | 0.8982 |
| 0.327 | 16.0 | 104 | 0.4212 | 0.8328 |
| 0.3257 | 16.9231 | 110 | 0.5167 | 0.7844 |
| 0.3257 | 18.0 | 117 | 0.5848 | 0.7275 |
| 0.3175 | 18.9231 | 123 | 0.4091 | 0.8336 |
| 0.3377 | 20.0 | 130 | 0.2838 | 0.9162 |
| 0.3377 | 20.9231 | 136 | 0.6106 | 0.7263 |
| 0.3129 | 22.0 | 143 | 0.6295 | 0.7164 |
| 0.3129 | 22.9231 | 149 | 0.7898 | 0.5932 |
| 0.3138 | 24.0 | 156 | 0.9408 | 0.4846 |
| 0.3106 | 24.9231 | 162 | 0.3485 | 0.8832 |
| 0.3106 | 26.0 | 169 | 0.5201 | 0.7866 |
| 0.3157 | 26.9231 | 175 | 0.7210 | 0.6672 |
| 0.2896 | 28.0 | 182 | 0.7981 | 0.6330 |
| 0.2896 | 28.9231 | 188 | 0.7667 | 0.6429 |
| 0.2867 | 30.0 | 195 | 0.7687 | 0.6544 |
| 0.2786 | 30.9231 | 201 | 1.1714 | 0.5210 |
| 0.2786 | 32.0 | 208 | 1.1744 | 0.4273 |
| 0.2823 | 32.9231 | 214 | 0.9260 | 0.5445 |
| 0.2864 | 34.0 | 221 | 0.7140 | 0.6920 |
| 0.2864 | 34.9231 | 227 | 0.6098 | 0.7331 |
| 0.2707 | 36.0 | 234 | 0.6993 | 0.6784 |
| 0.2921 | 36.9231 | 240 | 0.8719 | 0.6176 |
| 0.2921 | 38.0 | 247 | 0.8337 | 0.6061 |
| 0.2849 | 38.9231 | 253 | 0.4396 | 0.8255 |
| 0.2657 | 40.0 | 260 | 1.0982 | 0.5017 |
| 0.2657 | 40.9231 | 266 | 1.0934 | 0.5175 |
| 0.2659 | 42.0 | 273 | 0.8629 | 0.6369 |
| 0.2659 | 42.9231 | 279 | 1.4602 | 0.4140 |
| 0.2645 | 44.0 | 286 | 1.9095 | 0.3422 |
| 0.2424 | 44.9231 | 292 | 1.2180 | 0.4397 |
| 0.2424 | 46.0 | 299 | 0.7686 | 0.6424 |
| 0.2495 | 46.9231 | 305 | 0.9899 | 0.5796 |
| 0.2454 | 48.0 | 312 | 1.0291 | 0.5535 |
| 0.2454 | 48.9231 | 318 | 0.7534 | 0.6822 |
| 0.2473 | 50.0 | 325 | 0.6591 | 0.7092 |
| 0.2716 | 50.9231 | 331 | 0.5840 | 0.7455 |
| 0.2716 | 52.0 | 338 | 1.2430 | 0.4765 |
| 0.234 | 52.9231 | 344 | 1.2993 | 0.5145 |
| 0.2482 | 54.0 | 351 | 0.6042 | 0.7173 |
| 0.2482 | 54.9231 | 357 | 0.8892 | 0.6027 |
| 0.2339 | 56.0 | 364 | 1.8546 | 0.3161 |
| 0.2461 | 56.9231 | 370 | 1.0859 | 0.5359 |
| 0.2461 | 58.0 | 377 | 0.8690 | 0.6176 |
| 0.2395 | 58.9231 | 383 | 0.7557 | 0.6694 |
| 0.2159 | 60.0 | 390 | 1.0534 | 0.5701 |
| 0.2159 | 60.9231 | 396 | 0.9856 | 0.5813 |
| 0.2309 | 62.0 | 403 | 1.0000 | 0.5500 |
| 0.2309 | 62.9231 | 409 | 1.1940 | 0.5180 |
| 0.2117 | 64.0 | 416 | 1.1581 | 0.5154 |
| 0.2307 | 64.9231 | 422 | 0.9987 | 0.5338 |
| 0.2307 | 66.0 | 429 | 1.0850 | 0.5415 |
| 0.2068 | 66.9231 | 435 | 0.9428 | 0.6014 |
| 0.2126 | 68.0 | 442 | 1.2380 | 0.5115 |
| 0.2126 | 68.9231 | 448 | 0.9993 | 0.5860 |
| 0.2176 | 70.0 | 455 | 1.1910 | 0.5021 |
| 0.2096 | 70.9231 | 461 | 1.2468 | 0.5120 |
| 0.2096 | 72.0 | 468 | 0.7588 | 0.6920 |
| 0.2092 | 72.9231 | 474 | 0.9003 | 0.6309 |
| 0.1968 | 74.0 | 481 | 1.1697 | 0.5646 |
| 0.1968 | 74.9231 | 487 | 0.8789 | 0.6446 |
| 0.2027 | 76.0 | 494 | 1.1352 | 0.5599 |
| 0.1965 | 76.9231 | 500 | 1.0836 | 0.5599 |
| 0.1965 | 78.0 | 507 | 1.0188 | 0.5902 |
| 0.2267 | 78.9231 | 513 | 1.0287 | 0.5975 |
| 0.1967 | 80.0 | 520 | 0.8465 | 0.6544 |
| 0.1967 | 80.9231 | 526 | 1.1881 | 0.5470 |
| 0.1842 | 82.0 | 533 | 1.2352 | 0.5368 |
| 0.1842 | 82.9231 | 539 | 1.1064 | 0.5701 |
| 0.1952 | 84.0 | 546 | 0.8088 | 0.6608 |
| 0.1873 | 84.9231 | 552 | 0.9342 | 0.6086 |
| 0.1873 | 86.0 | 559 | 0.9807 | 0.6056 |
| 0.185 | 86.9231 | 565 | 1.0165 | 0.5898 |
| 0.1993 | 88.0 | 572 | 1.1511 | 0.5475 |
| 0.1993 | 88.9231 | 578 | 1.1766 | 0.5406 |
| 0.1707 | 90.0 | 585 | 1.1201 | 0.5663 |
| 0.1852 | 90.9231 | 591 | 1.1162 | 0.5701 |
| 0.1852 | 92.0 | 598 | 1.1273 | 0.5680 |
| 0.1904 | 92.3077 | 600 | 1.1280 | 0.5680 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_green_test
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_green_test

This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2823
- Accuracy: 0.8810

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 6 | 0.4106 | 0.7918 |
| 0.5328 | 2.0 | 13 | 0.2823 | 0.8810 |
| 0.5328 | 2.9231 | 19 | 0.3051 | 0.8587 |
| 0.4244 | 4.0 | 26 | 0.2913 | 0.8996 |
| 0.3755 | 4.9231 | 32 | 0.2841 | 0.9052 |
| 0.3755 | 6.0 | 39 | 0.3204 | 0.8829 |
| 0.3569 | 6.9231 | 45 | 0.2982 | 0.8810 |
| 0.3157 | 8.0 | 52 | 0.3317 | 0.8643 |
| 0.3157 | 8.9231 | 58 | 0.5731 | 0.7249 |
| 0.3177 | 9.2308 | 60 | 0.5725 | 0.7305 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
ellabettison/vit-base-beans-demo-v5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ellabettison/logo-matching dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4710
- Accuracy: 0.5788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5174 | 1.0 | 28 | 0.4711 | 0.5886 |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "x", "cci ammunition", "offshore alert", "agence française anticorruption", "ochre", "pain news network", "cardiovascular credentialing international", "pinterest", "tripadvisor", "the chronicle", "microsoft", "embassy of the united states", "cci-legal", "invelopment partners", "ntpc", "cytora", "billner group", "bionicle", "hintons", "los angeles times", "youtube", "creditinfo", "acciaierie d'italia", "g7", "returnloads.net", "v lex", "acquisition international", "lexisnexis", "american association of equine practitioners", "ground news", "flipkart", "the guardian", "mtv", "manga panda", "thompson reuters", "cheq", "info clipper", "instagram", "tata steel", "cbonds", "cutting crime impact", "b2bhint", "capital", "citizens climate international", "eurometal", "opencorporates", "stagedoor", "owler", "pittsburgh business times", "md calc", "center for climate integrity", "oxfam", "dare2compete", "mlb", "times observer", "euromoney", "blockwarts", "avis", "hackmanac", "initiative for responsible mining assurance", "behind the art", "csr europe", "multiplan", "forbes", "hargreaves jones", "coventry city council", "children's cancer institute", "marketscreener.com", "international society of paediatric oncology", "ghanaweb", "construo", "futures forum", "la nuova musica", "handicap international", "world steel association", "tice news", "jus mundi", "euro-mediterranean economists association", "united healthcare", "panalgo", "radiotimes.com", "cnn", "trafalgar releasing", "associated press", "vehicle service pros", "open sanctions", "it brief", "the observer", "thomson reuters", "arcelor mittal mining", "aperam", "victoria university", "facebook", "fishbowl", "aston business school", "contactout", "imdb", "business wire", "metal bulletin", "global witness", "clubhouse", "bloomberg", "reuters", "pappers justice", "lego", "central index", "coutts", "tedx", "morris james", "easyjet", "aip publishing", "mining magazine", "cisco", "chuo chemical industries", "community clinic inc", "goodfirms", "ev hire", "apple", "alrosa", "post office", "westerman hattori", "childhood cancer international", "offshorealert", "appdynamics", "lamidey noury medical", "certificate in climate and investing", "taylor & francis group", "opensanctions", "travel weekly", "air india", "medicaid", "cci global", "competitive capabilities international", "inquisitr", "hindustan times", "unum", "lnm hacks", "european bank", "south african chamber of commerce", "sony music", "azur air", "google", "world benchmarking alliance", "market screener", "economic times", "trafalgar strategy", "ameritas", "sennovate", "any run", "xapien", "medica", "commonwealth cyber initiative", "delta", "marquee magic", "bt", "essar", "daily journal", "chambers of commerce and industry france", "capital consulting international", "pubmed", "zee news", "asian legal business", "mirror", "esa", "krea university", "british association for screen entertainment", "cci training center", "acxpa", "idex", "uniamerica", "market index", "cliffs", "lnm", "baxters", "trafalgar scientific", "fox news", "duplo", "jus connect", "futurefive", "yu-gi-oh", "express explained", "vitacost", "wikileaks", "europe 1", "iowa citizens for community improvement", "link now", "security media group", "women deliver", "k2fly", "broker chooser", "3vb", "drexel university", "ejn", "rfe/rl", "international mining", "the virginian pilot", "optum", "linguee", "twitter", "the zebra network", "crown castle", "trafalgar", "competition commission of india", "intel", "chevron", "easy 
leadz", "cran r", "justia", "united health care workers of ontario", "bbc", "computer & communications innovations" ]
nguyenkhoa/dinov2_Liveness_detection_v2.1.2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/wt5k0v8b)

# dinov2_Liveness_detection_v2.1.2

This model is a fine-tuned version of [nguyenkhoa/dinov2_Liveness_detection_v2.1.1](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0303
- Accuracy: 0.9936
- F1: 0.9936
- Recall: 0.9936
- Precision: 0.9936

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 768
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0746 | 0.6095 | 64 | 0.0390 | 0.9852 | 0.9852 | 0.9852 | 0.9855 |
| 0.034 | 1.2190 | 128 | 0.0360 | 0.9871 | 0.9872 | 0.9871 | 0.9872 |
| 0.0201 | 1.8286 | 192 | 0.0303 | 0.9899 | 0.9899 | 0.9899 | 0.9898 |
| 0.0129 | 2.4381 | 256 | 0.0263 | 0.9912 | 0.9912 | 0.9912 | 0.9913 |
| 0.0105 | 3.0476 | 320 | 0.0232 | 0.9936 | 0.9935 | 0.9936 | 0.9936 |
| 0.0049 | 3.6571 | 384 | 0.0356 | 0.9913 | 0.9913 | 0.9913 | 0.9914 |
| 0.0035 | 4.2667 | 448 | 0.0281 | 0.9933 | 0.9933 | 0.9933 | 0.9933 |
| 0.0015 | 4.8762 | 512 | 0.0303 | 0.9936 | 0.9936 | 0.9936 | 0.9936 |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "live", "spoof" ]
nguyenkhoa/dinov2_Liveness_detection_v2.1.3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/svxcqjbb)

# dinov2_Liveness_detection_v2.1.3

This model is a fine-tuned version of [nguyenkhoa/dinov2_Liveness_detection_v2.1.2](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.1.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0123
- Accuracy: 0.9976
- F1: 0.9976
- Recall: 0.9976
- Precision: 0.9976

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 768
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1276 | 0.3232 | 64 | 0.0239 | 0.992 | 0.9920 | 0.992 | 0.9921 |
| 0.0273 | 0.6465 | 128 | 0.0253 | 0.9908 | 0.9908 | 0.9908 | 0.9908 |
| 0.0236 | 0.9697 | 192 | 0.0257 | 0.9908 | 0.9908 | 0.9908 | 0.9908 |
| 0.015 | 1.2929 | 256 | 0.0223 | 0.9936 | 0.9936 | 0.9936 | 0.9936 |
| 0.0133 | 1.6162 | 320 | 0.0144 | 0.9954 | 0.9954 | 0.9954 | 0.9954 |
| 0.0149 | 1.9394 | 384 | 0.0271 | 0.9913 | 0.9913 | 0.9913 | 0.9914 |
| 0.0097 | 2.2626 | 448 | 0.0234 | 0.9922 | 0.9922 | 0.9922 | 0.9922 |
| 0.009 | 2.5859 | 512 | 0.0149 | 0.9954 | 0.9954 | 0.9954 | 0.9954 |
| 0.0076 | 2.9091 | 576 | 0.0184 | 0.9952 | 0.9952 | 0.9952 | 0.9952 |
| 0.0045 | 3.2323 | 640 | 0.0201 | 0.9951 | 0.9951 | 0.9951 | 0.9951 |
| 0.0032 | 3.5556 | 704 | 0.0169 | 0.9958 | 0.9958 | 0.9958 | 0.9958 |
| 0.0029 | 3.8788 | 768 | 0.0178 | 0.9961 | 0.9960 | 0.9961 | 0.9961 |
| 0.002 | 4.2020 | 832 | 0.0148 | 0.9969 | 0.9969 | 0.9969 | 0.9969 |
| 0.001 | 4.5253 | 896 | 0.0135 | 0.9973 | 0.9973 | 0.9973 | 0.9973 |
| 0.0007 | 4.8485 | 960 | 0.0123 | 0.9976 | 0.9976 | 0.9976 | 0.9976 |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "live", "spoof" ]
kedimestan/mobilevit-x-small
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mobilevit-x-small

This model is a fine-tuned version of [apple/mobilevit-x-small](https://huggingface.co/apple/mobilevit-x-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0196
- Accuracy: 0.9959

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6911 | 1.0 | 34 | 0.6932 | 0.5083 |
| 0.6584 | 2.0 | 68 | 0.6287 | 0.7510 |
| 0.5388 | 3.0 | 102 | 0.4852 | 0.8734 |
| 0.3891 | 4.0 | 136 | 0.3065 | 0.9357 |
| 0.2915 | 5.0 | 170 | 0.2005 | 0.9647 |
| 0.2319 | 6.0 | 204 | 0.1498 | 0.9689 |
| 0.2038 | 7.0 | 238 | 0.1228 | 0.9710 |
| 0.1641 | 8.0 | 272 | 0.0892 | 0.9855 |
| 0.1525 | 9.0 | 306 | 0.0778 | 0.9834 |
| 0.1584 | 10.0 | 340 | 0.0565 | 0.9896 |
| 0.1194 | 11.0 | 374 | 0.0491 | 0.9917 |
| 0.1222 | 12.0 | 408 | 0.0436 | 0.9896 |
| 0.1229 | 13.0 | 442 | 0.0360 | 0.9979 |
| 0.1334 | 14.0 | 476 | 0.0326 | 0.9959 |
| 0.122 | 15.0 | 510 | 0.0425 | 0.9896 |
| 0.096 | 16.0 | 544 | 0.0315 | 0.9959 |
| 0.0989 | 17.0 | 578 | 0.0303 | 0.9938 |
| 0.1085 | 18.0 | 612 | 0.0262 | 0.9959 |
| 0.0957 | 19.0 | 646 | 0.0232 | 0.9959 |
| 0.1129 | 20.0 | 680 | 0.0266 | 0.9959 |
| 0.0843 | 21.0 | 714 | 0.0234 | 0.9959 |
| 0.0868 | 22.0 | 748 | 0.0217 | 0.9959 |
| 0.0867 | 23.0 | 782 | 0.0233 | 0.9959 |
| 0.0947 | 24.0 | 816 | 0.0204 | 0.9959 |
| 0.0786 | 25.0 | 850 | 0.0199 | 0.9959 |
| 0.1009 | 26.0 | 884 | 0.0212 | 0.9959 |
| 0.0785 | 27.0 | 918 | 0.0204 | 0.9959 |
| 0.0811 | 28.0 | 952 | 0.0180 | 0.9959 |
| 0.0883 | 29.0 | 986 | 0.0193 | 0.9959 |
| 0.0988 | 30.0 | 1020 | 0.0196 | 0.9959 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "healthy", "rr" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_green_channel
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_green_channel

This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5924
- Accuracy: 0.7454

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100
- label_smoothing_factor: 0.1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9312 | 0.9231 | 6 | 0.9478 | 0.2788 |
| 0.7936 | 2.0 | 13 | 0.8935 | 0.2491 |
| 0.718 | 2.9231 | 19 | 0.8518 | 0.2249 |
| 0.6787 | 4.0 | 26 | 0.8097 | 0.2119 |
| 0.6538 | 4.9231 | 32 | 0.7776 | 0.2770 |
| 0.6265 | 6.0 | 39 | 0.7428 | 0.3662 |
| 0.5812 | 6.9231 | 45 | 0.7191 | 0.4424 |
| 0.6138 | 8.0 | 52 | 0.6996 | 0.4963 |
| 0.57 | 8.9231 | 58 | 0.6900 | 0.5409 |
| 0.5595 | 10.0 | 65 | 0.6804 | 0.5762 |
| 0.5288 | 10.9231 | 71 | 0.6727 | 0.6004 |
| 0.5094 | 12.0 | 78 | 0.6613 | 0.6320 |
| 0.5073 | 12.9231 | 84 | 0.6490 | 0.6636 |
| 0.4504 | 14.0 | 91 | 0.6386 | 0.6970 |
| 0.4868 | 14.9231 | 97 | 0.6305 | 0.7156 |
| 0.4799 | 16.0 | 104 | 0.6264 | 0.7175 |
| 0.4861 | 16.9231 | 110 | 0.6235 | 0.7175 |
| 0.4975 | 18.0 | 117 | 0.6163 | 0.7286 |
| 0.4712 | 18.9231 | 123 | 0.6127 | 0.7398 |
| 0.468 | 20.0 | 130 | 0.6107 | 0.7416 |
| 0.4562 | 20.9231 | 136 | 0.6070 | 0.7454 |
| 0.5195 | 22.0 | 143 | 0.6056 | 0.7454 |
| 0.4385 | 22.9231 | 149 | 0.6033 | 0.7416 |
| 0.4211 | 24.0 | 156 | 0.6050 | 0.7342 |
| 0.4364 | 24.9231 | 162 | 0.6023 | 0.7361 |
| 0.4327 | 26.0 | 169 | 0.5980 | 0.7416 |
| 0.4757 | 26.9231 | 175 | 0.6000 | 0.7361 |
| 0.4287 | 28.0 | 182 | 0.5924 | 0.7454 |
| 0.4313 | 28.9231 | 188 | 0.5970 | 0.7361 |
| 0.4483 | 30.0 | 195 | 0.5962 | 0.7398 |
| 0.3956 | 30.9231 | 201 | 0.5976 | 0.7305 |
| 0.41 | 32.0 | 208 | 0.6060 | 0.7212 |
| 0.4371 | 32.9231 | 214 | 0.6050 | 0.7193 |
| 0.4169 | 34.0 | 221 | 0.6045 | 0.7212 |
| 0.3882 | 34.9231 | 227 | 0.6020 | 0.7230 |
| 0.5097 | 36.0 | 234 | 0.6011 | 0.7286 |
| 0.476 | 36.9231 | 240 | 0.6027 | 0.7268 |
| 0.387 | 38.0 | 247 | 0.6012 | 0.7249 |
| 0.4744 | 38.9231 | 253 | 0.6017 | 0.7230 |
| 0.4712 | 40.0 | 260 | 0.6025 | 0.7230 |
| 0.4242 | 40.9231 | 266 | 0.6022 | 0.7230 |
| 0.4087 | 42.0 | 273 | 0.6021 | 0.7230 |
| 0.4009 | 42.9231 | 279 | 0.6026 | 0.7230 |
| 0.4219 | 44.0 | 286 | 0.6026 | 0.7230 |
| 0.4208 | 44.9231 | 292 | 0.6024 | 0.7230 |
| 0.3644 | 46.0 | 299 | 0.6013 | 0.7230 |
| 0.4458 | 46.9231 | 305 | 0.5997 | 0.7286 |
| 0.425 | 48.0 | 312 | 0.5991 | 0.7286 |
| 0.3982 | 48.9231 | 318 | 0.5995 | 0.7286 |
| 0.4167 | 50.0 | 325 | 0.5992 | 0.7286 |
| 0.4112 | 50.9231 | 331 | 0.5992 | 0.7286 |
| 0.4073 | 52.0 | 338 | 0.5992 | 0.7286 |
| 0.4413 | 52.9231 | 344 | 0.5991 | 0.7286 |
| 0.4326 | 54.0 | 351 | 0.5991 | 0.7286 |
| 0.4206 | 54.9231 | 357 | 0.5992 | 0.7286 |
| 0.3776 | 56.0 | 364 | 0.5993 | 0.7286 |
| 0.3792 | 56.9231 | 370 | 0.5994 | 0.7286 |
| 0.4075 | 58.0 | 377 | 0.5995 | 0.7286 |
| 0.4412 | 58.9231 | 383 | 0.5995 | 0.7286 |
| 0.4137 | 60.0 | 390 | 0.5995 | 0.7286 |
| 0.424 | 60.9231 | 396 | 0.5995 | 0.7286 |
| 0.3988 | 62.0 | 403 | 0.5997 | 0.7286 |
| 0.4167 | 62.9231 | 409 | 0.5996 | 0.7286 |
| 0.41 | 64.0 | 416 | 0.5997 | 0.7286 |
| 0.4235 | 64.9231 | 422 | 0.5997 | 0.7286 |
| 0.4544 | 66.0 | 429 | 0.5998 | 0.7286 |
| 0.4495 | 66.9231 | 435 | 0.5997 | 0.7286 |
| 0.424 | 68.0 | 442 | 0.5997 | 0.7286 |
| 0.4053 | 68.9231 | 448 | 0.5997 | 0.7286 |
| 0.426 | 70.0 | 455 | 0.5999 | 0.7286 |
| 0.3865 | 70.9231 | 461 | 0.6000 | 0.7286 |
| 0.3732 | 72.0 | 468 | 0.6001 | 0.7286 |
| 0.4289 | 72.9231 | 474 | 0.6002 | 0.7286 |
| 0.4524 | 74.0 | 481 | 0.6002 | 0.7286 |
| 0.4081 | 74.9231 | 487 | 0.6002 | 0.7286 |
| 0.384 | 76.0 | 494 | 0.6001 | 0.7286 |
| 0.4177 | 76.9231 | 500 | 0.6000 | 0.7286 |
| 0.3777 | 78.0 | 507 | 0.6000 | 0.7286 |
| 0.4226 | 78.9231 | 513 | 0.6000 | 0.7286 |
| 0.419 | 80.0 | 520 | 0.6000 | 0.7286 |
| 0.3956 | 80.9231 | 526 | 0.6000 | 0.7286 |
| 0.3669 | 82.0 | 533 | 0.6000 | 0.7286 |
| 0.3902 | 82.9231 | 539 | 0.6000 | 0.7286 |
| 0.4193 | 84.0 | 546 | 0.6001 | 0.7286 |
| 0.4115 | 84.9231 | 552 | 0.6001 | 0.7286 |
| 0.3923 | 86.0 | 559 | 0.6001 | 0.7286 |
| 0.4011 | 86.9231 | 565 | 0.6001 | 0.7286 |
| 0.4765 | 88.0 | 572 | 0.6000 | 0.7286 |
| 0.4034 | 88.9231 | 578 | 0.5999 | 0.7286 |
| 0.3867 | 90.0 | 585 | 0.5998 | 0.7286 |
| 0.4201 | 90.9231 | 591 | 0.5998 | 0.7286 |
| 0.4346 | 92.0 | 598 | 0.5998 | 0.7286 |
| 0.4171 | 92.3077 | 600 | 0.5998 | 0.7286 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_train_test

This model is a fine-tuned version of [Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test](https://huggingface.co/Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3718
- Accuracy: 0.9121

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100
- label_smoothing_factor: 0.1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.4023 | 0.9231 | 3 | 0.4126 | 0.8938 |
| 0.4055 | 1.8462 | 6 | 0.4096 | 0.8974 |
| 0.4112 | 2.7692 | 9 | 0.4275 | 0.8974 |
| 0.4246 | 4.0 | 13 | 0.3958 | 0.9048 |
| 0.376 | 4.9231 | 16 | 0.3980 | 0.9121 |
| 0.4226 | 5.8462 | 19 | 0.4029 | 0.9084 |
| 0.3751 | 6.7692 | 22 | 0.3749 | 0.9048 |
| 0.4135 | 8.0 | 26 | 0.3757 | 0.9121 |
| 0.3673 | 8.9231 | 29 | 0.4174 | 0.8901 |
| 0.3749 | 9.8462 | 32 | 0.4077 | 0.9048 |
| 0.4119 | 10.7692 | 35 | 0.4181 | 0.8901 |
| 0.3946 | 12.0 | 39 | 0.4189 | 0.8901 |
| 0.3335 | 12.9231 | 42 | 0.4029 | 0.9084 |
| 0.3717 | 13.8462 | 45 | 0.3963 | 0.9011 |
| 0.3493 | 14.7692 | 48 | 0.3797 | 0.9158 |
| 0.3686 | 16.0 | 52 | 0.3761 | 0.9121 |
| 0.3999 | 16.9231 | 55 | 0.3774 | 0.9158 |
| 0.3221 | 17.8462 | 58 | 0.3757 | 0.9158 |
| 0.3902 | 18.7692 | 61 | 0.3774 | 0.9121 |
| 0.3649 | 20.0 | 65 | 0.3962 | 0.9011 |
| 0.3553 | 20.9231 | 68 | 0.3718 | 0.9121 |
| 0.3761 | 21.8462 | 71 | 0.3934 | 0.9121 |
| 0.3422 | 22.7692 | 74 | 0.4271 | 0.8828 |
| 0.3247 | 24.0 | 78 | 0.3727 | 0.9194 |
| 0.3417 | 24.9231 | 81 | 0.3793 | 0.9121 |
| 0.3499 | 25.8462 | 84 | 0.4293 | 0.8791 |
| 0.3397 | 26.7692 | 87 | 0.4216 | 0.8901 |
| 0.346 | 28.0 | 91 | 0.4001 | 0.8901 |
| 0.3337 | 28.9231 | 94 | 0.4168 | 0.8864 |
| 0.3268 | 29.8462 | 97 | 0.4123 | 0.8938 |
| 0.3274 | 30.7692 | 100 | 0.4187 | 0.8828 |
| 0.3757 | 32.0 | 104 | 0.4026 | 0.8974 |
| 0.3727 | 32.9231 | 107 | 0.4021 | 0.8938 |
| 0.3431 | 33.8462 | 110 | 0.4024 | 0.9011 |
| 0.3626 | 34.7692 | 113 | 0.4200 | 0.8901 |
| 0.3381 | 36.0 | 117 | 0.4080 | 0.8938 |
| 0.3411 | 36.9231 | 120 | 0.4279 | 0.8791 |
| 0.3229 | 37.8462 | 123 | 0.4422 | 0.8718 |
| 0.3736 | 38.7692 | 126 | 0.4285 | 0.8791 |
| 0.4145 | 40.0 | 130 | 0.4402 | 0.8718 |
| 0.3456 | 40.9231 | 133 | 0.4226 | 0.8828 |
| 0.3567 | 41.8462 | 136 | 0.4113 | 0.8901 |
| 0.339 | 42.7692 | 139 | 0.4445 | 0.8645 |
| 0.3142 | 44.0 | 143 | 0.4204 | 0.8791 |
| 0.3461 | 44.9231 | 146 | 0.4006 | 0.8974 |
| 0.3583 | 45.8462 | 149 | 0.3991 | 0.9011 |
| 0.3651 | 46.7692 | 152 | 0.4293 | 0.8681 |
| 0.3098 | 48.0 | 156 | 0.4082 | 0.8901 |
| 0.375 | 48.9231 | 159 | 0.4095 | 0.8864 |
| 0.3435 | 49.8462 | 162 | 0.4529 | 0.8498 |
| 0.3452 | 50.7692 | 165 | 0.4440 | 0.8608 |
| 0.3316 | 52.0 | 169 | 0.4181 | 0.8791 |
| 0.3344 | 52.9231 | 172 | 0.4609 | 0.8535 |
| 0.3377 | 53.8462 | 175 | 0.4775 | 0.8278 |
| 0.3455 | 54.7692 | 178 | 0.4396 | 0.8681 |
| 0.3202 | 56.0 | 182 | 0.4384 | 0.8755 |
| 0.3119 | 56.9231 | 185 | 0.4573 | 0.8535 |
| 0.3633 | 57.8462 | 188 | 0.4469 | 0.8645 |
| 0.3025 | 58.7692 | 191 | 0.4437 | 0.8608 |
| 0.3094 | 60.0 | 195 | 0.4472 | 0.8571 |
| 0.3306 | 60.9231 | 198 | 0.4396 | 0.8681 |
| 0.3266 | 61.8462 | 201 | 0.4486 | 0.8681 |
| 0.3495 | 62.7692 | 204 | 0.4658 | 0.8352 |
| 0.3066 | 64.0 | 208 | 0.4754 | 0.8315 |
| 0.3384 | 64.9231 | 211 | 0.4518 | 0.8608 |
| 0.3151 | 65.8462 | 214 | 0.4614 | 0.8535 |
| 0.3233 | 66.7692 | 217 | 0.4638 | 0.8425 |
| 0.3416 | 68.0 | 221 | 0.4741 | 0.8315 |
| 0.3326 | 68.9231 | 224 | 0.4679 | 0.8425 |
| 0.331 | 69.8462 | 227 | 0.4754 | 0.8315 |
| 0.3595 | 70.7692 | 230 | 0.4603 | 0.8498 |
| 0.3107 | 72.0 | 234 | 0.4412 | 0.8571 |
| 0.3126 | 72.9231 | 237 | 0.4578 | 0.8571 |
| 0.3205 | 73.8462 | 240 | 0.4820 | 0.8242 |
| 0.3296 | 74.7692 | 243 | 0.5048 | 0.7985 |
| 0.3246 | 76.0 | 247 | 0.4792 | 0.8278 |
| 0.3065 | 76.9231 | 250 | 0.4842 | 0.8242 |
| 0.282 | 77.8462 | 253 | 0.5049 | 0.7912 |
| 0.3272 | 78.7692 | 256 | 0.5088 | 0.7875 |
| 0.325 | 80.0 | 260 | 0.4933 | 0.8132 |
| 0.3524 | 80.9231 | 263 | 0.4893 | 0.8132 |
| 0.3019 | 81.8462 | 266 | 0.4864 | 0.8132 |
| 0.3095 | 82.7692 | 269 | 0.4875 | 0.8132 |
| 0.3254 | 84.0 | 273 | 0.4910 | 0.8059 |
| 0.3158 | 84.9231 | 276 | 0.4918 | 0.8059 |
| 0.3114 | 85.8462 | 279 | 0.4936 | 0.8059 |
| 0.3348 | 86.7692 | 282 | 0.4996 | 0.7985 |
| 0.3078 | 88.0 | 286 | 0.5043 | 0.7949 |
| 0.3096 | 88.9231 | 289 | 0.5047 | 0.7949 |
| 0.2827 | 89.8462 | 292 | 0.5054 | 0.7949 |
| 0.3249 | 90.7692 | 295 | 0.5040 | 0.7949 |
| 0.3277 | 92.0 | 299 | 0.5031 | 0.7985 |
| 0.3522 | 92.3077 | 300 | 0.5030 | 0.7985 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_train_test_1

This model is a fine-tuned version of [Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_1](https://huggingface.co/Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_1) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3874
- Accuracy: 0.9084

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 640
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 100
- label_smoothing_factor: 0.1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.3744 | 0.7692 | 1 | 0.3874 | 0.9084 |
| 0.3574 | 1.5385 | 2 | 0.3874 | 0.9084 |
| 0.3627 | 2.3077 | 3 | 0.3875 | 0.9084 |
| 0.3367 | 3.8462 | 5 | 0.3875 | 0.9084 |
| 0.3752 | 4.6154 | 6 | 0.3875 | 0.9084 |
| 0.3696 | 5.3846 | 7 | 0.3876 | 0.9084 |
| 0.3563 | 6.9231 | 9 | 0.3877 | 0.9084 |
| 0.3244 | 7.6923 | 10 | 0.3877 | 0.9084 |
| 0.3727 | 8.4615 | 11 | 0.3878 | 0.9084 |
| 0.3395 | 10.0 | 13 | 0.3879 | 0.9084 |
| 0.35 | 10.7692 | 14 | 0.3880 | 0.9084 |
| 0.3527 | 11.5385 | 15 | 0.3881 | 0.9084 |
| 0.3539 | 12.3077 | 16 | 0.3881 | 0.9084 |
| 0.348 | 13.8462 | 18 | 0.3883 | 0.9084 |
| 0.3584 | 14.6154 | 19 | 0.3884 | 0.9084 |
| 0.3491 | 15.3846 | 20 | 0.3884 | 0.9084 |
| 0.3603 | 16.9231 | 22 | 0.3886 | 0.9084 |
| 0.382 | 17.6923 | 23 | 0.3887 | 0.9084 |
| 0.3567 | 18.4615 | 24 | 0.3888 | 0.9084 |
| 0.3397 | 20.0 | 26 | 0.3889 | 0.9084 |
| 0.3682 | 20.7692 | 27 | 0.3890 | 0.9084 |
| 0.3336 | 21.5385 | 28 | 0.3891 | 0.9084 |
| 0.3399 | 22.3077 | 29 | 0.3892 | 0.9084 |
| 0.3656 | 23.8462 | 31 | 0.3893 | 0.9084 |
| 0.348 | 24.6154 | 32 | 0.3894 | 0.9084 |
| 0.3466 | 25.3846 | 33 | 0.3895 | 0.9084 |
| 0.3614 | 26.9231 | 35 | 0.3897 | 0.9084 |
| 0.3522 | 27.6923 | 36 | 0.3898 | 0.9084 |
| 0.3506 | 28.4615 | 37 | 0.3898 | 0.9084 |
| 0.3623 | 30.0 | 39 | 0.3900 | 0.9084 |
| 0.3562 | 30.7692 | 40 | 0.3901 | 0.9084 |
| 0.3515 | 31.5385 | 41 | 0.3902 | 0.9084 |
| 0.3744 | 32.3077 | 42 | 0.3903 | 0.9084 |
| 0.3382 | 33.8462 | 44 | 0.3904 | 0.9084 |
| 0.3467 | 34.6154 | 45 | 0.3905 | 0.9084 |
| 0.3713 | 35.3846 | 46 | 0.3906 | 0.9084 |
| 0.3653 | 36.9231 | 48 | 0.3908 | 0.9084 |
| 0.3359 | 37.6923 | 49 | 0.3909 | 0.9084 |
| 0.3745 | 38.4615 | 50 | 0.3910 | 0.9084 |
| 0.3594 | 40.0 | 52 | 0.3911 | 0.9084 |
| 0.3567 | 40.7692 | 53 | 0.3912 | 0.9084 |
| 0.3332 | 41.5385 | 54 | 0.3913 | 0.9084 |
| 0.3424 | 42.3077 | 55 | 0.3914 | 0.9084 |
| 0.3485 | 43.8462 | 57 | 0.3915 | 0.9084 |
| 0.3795 | 44.6154 | 58 | 0.3916 | 0.9084 |
| 0.321 | 45.3846 | 59 | 0.3917 | 0.9084 |
| 0.3314 | 46.9231 | 61 | 0.3918 | 0.9084 |
| 0.3582 | 47.6923 | 62 | 0.3919 | 0.9084 |
| 0.3355 | 48.4615 | 63 | 0.3920 | 0.9084 |
| 0.3688 | 50.0 | 65 | 0.3922 | 0.9084 |
| 0.3414 | 50.7692 | 66 | 0.3923 | 0.9084 |
| 0.3292 | 51.5385 | 67 | 0.3923 | 0.9084 |
| 0.3622 | 52.3077 | 68 | 0.3924 | 0.9048 |
| 0.3446 | 53.8462 | 70 | 0.3926 | 0.9048 |
| 0.3678 | 54.6154 | 71 | 0.3927 | 0.9011 |
| 0.3632 | 55.3846 | 72 | 0.3927 | 0.9011 |
| 0.3459 | 56.9231 | 74 | 0.3929 | 0.9011 |
| 0.3435 | 57.6923 | 75 | 0.3929 | 0.9011 |
| 0.3461 | 58.4615 | 76 | 0.3930 | 0.9011 |
| 0.3423 | 60.0 | 78 | 0.3931 | 0.9011 |
| 0.3632 | 60.7692 | 79 | 0.3932 | 0.8974 |
| 0.3536 | 61.5385 | 80 | 0.3933 | 0.8974 |
| 0.3556 | 62.3077 | 81 | 0.3934 | 0.8974 |
| 0.3391 | 63.8462 | 83 | 0.3935 | 0.8974 |
| 0.3482 | 64.6154 | 84 | 0.3936 | 0.8974 |
| 0.3339 | 65.3846 | 85 | 0.3937 | 0.8974 |
| 0.3438 | 66.9231 | 87 | 0.3938 | 0.8974 |
| 0.3198 | 67.6923 | 88 | 0.3939 | 0.8974 |
| 0.3399 | 68.4615 | 89 | 0.3939 | 0.8974 |
| 0.3388 | 70.0 | 91 | 0.3941 | 0.8974 |
| 0.3519 | 70.7692 | 92 | 0.3942 | 0.8974 |
| 0.3445 | 71.5385 | 93 | 0.3942 | 0.8974 |
| 0.3423 | 72.3077 | 94 | 0.3943 | 0.8974 |
| 0.3392 | 73.8462 | 96 | 0.3945 | 0.8974 |
| 0.3576 | 74.6154 | 97 | 0.3945 | 0.8974 |
| 0.3526 | 75.3846 | 98 | 0.3946 | 0.8974 |
| 0.3714 | 76.9231 | 100 | 0.3947 | 0.8974 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
bandini30/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0063
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0101 | 1.5385 | 100 | 0.0765 | 0.9850 |
| 0.0367 | 3.0769 | 200 | 0.0063 | 1.0 |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_train_test_4

This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3980
- Accuracy: 0.8938

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8038 | 1.0 | 13 | 0.8368 | 0.4029 |
| 0.6874 | 2.0 | 26 | 0.8356 | 0.4029 |
| 0.6487 | 3.0 | 39 | 0.8336 | 0.3810 |
| 0.773 | 4.0 | 52 | 0.8307 | 0.3700 |
| 0.7002 | 5.0 | 65 | 0.8270 | 0.3480 |
| 0.6991 | 6.0 | 78 | 0.8223 | 0.3407 |
| 0.6809 | 7.0 | 91 | 0.8164 | 0.3480 |
| 0.7359 | 8.0 | 104 | 0.8093 | 0.3516 |
| 0.771 | 9.0 | 117 | 0.8017 | 0.3443 |
| 0.6855 | 10.0 | 130 | 0.7934 | 0.3443 |
| 0.6674 | 11.0 | 143 | 0.7851 | 0.3480 |
| 0.6296 | 12.0 | 156 | 0.7746 | 0.3810 |
| 0.5597 | 13.0 | 169 | 0.7643 | 0.3956 |
| 0.5636 | 14.0 | 182 | 0.7519 | 0.4066 |
| 0.5718 | 15.0 | 195 | 0.7382 | 0.4432 |
| 0.5527 | 16.0 | 208 | 0.7256 | 0.4579 |
| 0.5646 | 17.0 | 221 | 0.7115 | 0.5055 |
| 0.4843 | 18.0 | 234 | 0.6966 | 0.5275 |
| 0.492 | 19.0 | 247 | 0.6805 | 0.5788 |
| 0.4865 | 20.0 | 260 | 0.6630 | 0.6117 |
| 0.4198 | 21.0 | 273 | 0.6448 | 0.6410 |
| 0.4203 | 22.0 | 286 | 0.6280 | 0.6740 |
| 0.4547 | 23.0 | 299 | 0.6083 | 0.6923 |
| 0.3916 | 24.0 | 312 | 0.5909 | 0.7143 |
| 0.4329 | 25.0 | 325 | 0.5768 | 0.7289 |
| 0.4645 | 26.0 | 338 | 0.5629 | 0.7399 |
| 0.3376 | 27.0 | 351 | 0.5536 | 0.7436 |
| 0.4417 | 28.0 | 364 | 0.5417 | 0.7729 |
| 0.3908 | 29.0 | 377 | 0.5262 | 0.7619 |
| 0.3715 | 30.0 | 390 | 0.5130 | 0.7729 |
| 0.438 | 31.0 | 403 | 0.5059 | 0.7912 |
| 0.2937 | 32.0 | 416 | 0.4937 | 0.8022 |
| 0.2944 | 33.0 | 429 | 0.4871 | 0.8022 |
| 0.3474 | 34.0 | 442 | 0.4820 | 0.8059 |
| 0.2302 | 35.0 | 455 | 0.4776 | 0.7949 |
| 0.3543 | 36.0 | 468 | 0.4690 | 0.8022 |
| 0.3325 | 37.0 | 481 | 0.4640 | 0.8059 |
| 0.4004 | 38.0 | 494 | 0.4584 | 0.8095 |
| 0.3031 | 39.0 | 507 | 0.4548 | 0.8132 |
| 0.4862 | 40.0 | 520 | 0.4520 | 0.8095 |
| 0.2609 | 41.0 | 533 | 0.4498 | 0.8278 |
| 0.1859 | 42.0 | 546 | 0.4450 | 0.8462 |
| 0.2712 | 43.0 | 559 | 0.4408 | 0.8462 |
| 0.221 | 44.0 | 572 | 0.4387 | 0.8425 |
| 0.2328 | 45.0 | 585 | 0.4371 | 0.8498 |
| 0.3004 | 46.0 | 598 | 0.4339 | 0.8425 |
| 0.2036 | 47.0 | 611 | 0.4318 | 0.8462 |
| 0.1925 | 48.0 | 624 | 0.4299 | 0.8498 |
| 0.4543 | 49.0 | 637 | 0.4266 | 0.8498 |
| 0.4056 | 50.0 | 650 | 0.4251 | 0.8462 |
| 0.2326 | 51.0 | 663 | 0.4247 | 0.8498 |
| 0.327 | 52.0 | 676 | 0.4224 | 0.8571 |
| 0.2385 | 53.0 | 689 | 0.4193 | 0.8571 |
| 0.2876 | 54.0 | 702 | 0.4183 | 0.8571 |
| 0.2257 | 55.0 | 715 | 0.4162 | 0.8718 |
| 0.252 | 56.0 | 728 | 0.4150 | 0.8755 |
| 0.4299 | 57.0 | 741 | 0.4129 | 0.8645 |
| 0.3146 | 58.0 | 754 | 0.4124 | 0.8755 |
| 0.1993 | 59.0 | 767 | 0.4124 | 0.8755 |
| 0.2507 | 60.0 | 780 | 0.4118 | 0.8791 |
| 0.324 | 61.0 | 793 | 0.4101 | 0.8535 |
| 0.2303 | 62.0 | 806 | 0.4090 | 0.8718 |
| 0.2767 | 63.0 | 819 | 0.4072 | 0.8608 |
| 0.3318 | 64.0 | 832 | 0.4071 | 0.8681 |
| 0.1946 | 65.0 | 845 | 0.4064 | 0.8681 |
| 0.4204 | 66.0 | 858 | 0.4055 | 0.8608 |
| 0.3351 | 67.0 | 871 | 0.4031 | 0.8608 |
| 0.2772 | 68.0 | 884 | 0.4013 | 0.8645 |
| 0.2969 | 69.0 | 897 | 0.4000 | 0.8681 |
| 0.2755 | 70.0 | 910 | 0.4021 | 0.8901 |
| 0.2835 | 71.0 | 923 | 0.4005 | 0.8608 |
| 0.2487 | 72.0 | 936 | 0.3998 | 0.8608 |
| 0.2447 | 73.0 | 949 | 0.3987 | 0.8571 |
| 0.3512 | 74.0 | 962 | 0.3970 | 0.8718 |
| 0.2303 | 75.0 | 975 | 0.3975 | 0.8681 |
| 0.2271 | 76.0 | 988 | 0.3976 | 0.8791 |
| 0.2325 | 77.0 | 1001 | 0.3980 | 0.8938 |
| 0.2517 | 78.0 | 1014 | 0.3965 | 0.8901 |
| 0.2839 | 79.0 | 1027 | 0.3956 | 0.8938 |
| 0.1994 | 80.0 | 1040 | 0.3940 | 0.8828 |
| 0.4525 | 81.0 | 1053 | 0.3934 | 0.8864 |
| 0.2178 | 82.0 | 1066 | 0.3930 | 0.8828 |
| 0.2784 | 83.0 | 1079 | 0.3929 | 0.8901 |
| 0.1956 | 84.0 | 1092 | 0.3930 | 0.8901 |
| 0.2713 | 85.0 | 1105 | 0.3922 | 0.8828 |
| 0.2331 | 86.0 | 1118 | 0.3920 | 0.8828 |
| 0.3294 | 87.0 | 1131 | 0.3917 | 0.8864 |
| 0.2998 | 88.0 | 1144 | 0.3911 | 0.8864 |
| 0.3767 | 89.0 | 1157 | 0.3909 | 0.8864 |
| 0.3126 | 90.0 | 1170 | 0.3908 | 0.8828 |
| 0.2427 | 91.0 | 1183 | 0.3903 | 0.8791 |
| 0.2696 | 92.0 | 1196 | 0.3898 | 0.8828 |
| 0.2664 | 93.0 | 1209 | 0.3897 | 0.8828 |
| 0.3718 | 94.0 | 1222 | 0.3898 | 0.8828 |
| 0.2813 | 95.0 | 1235 | 0.3899 | 0.8828 |
| 0.3105 | 96.0 | 1248 | 0.3898 | 0.8828 |
| 0.2452 | 97.0 | 1261 | 0.3901 | 0.8828 |
| 0.2775 | 98.0 | 1274 | 0.3900 | 0.8828 |
| 0.3814 | 99.0 | 1287 | 0.3901 | 0.8828 |
| 0.2861 | 100.0 | 1300 | 0.3901 | 0.8828 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_6
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_train_test_6

This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4371
- Accuracy: 0.8791

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.6672 | 0.9231 | 6 | 0.6980 | 0.4212 |
| 0.6617 | 2.0 | 13 | 0.6965 | 0.4249 |
| 0.6699 | 2.9231 | 19 | 0.6944 | 0.4396 |
| 0.662 | 4.0 | 26 | 0.6910 | 0.4396 |
| 0.6548 | 4.9231 | 32 | 0.6873 | 0.4579 |
| 0.6541 | 6.0 | 39 | 0.6825 | 0.4835 |
| 0.6222 | 6.9231 | 45 | 0.6777 | 0.5311 |
| 0.6555 | 8.0 | 52 | 0.6719 | 0.5421 |
| 0.6226 | 8.9231 | 58 | 0.6665 | 0.5861 |
| 0.5989 | 10.0 | 65 | 0.6603 | 0.6154 |
| 0.5754 | 10.9231 | 71 | 0.6555 | 0.6264 |
| 0.6251 | 12.0 | 78 | 0.6493 | 0.6484 |
| 0.5796 | 12.9231 | 84 | 0.6446 | 0.6667 |
| 0.5763 | 14.0 | 91 | 0.6390 | 0.6667 |
| 0.5952 | 14.9231 | 97 | 0.6333 | 0.6850 |
| 0.5675 | 16.0 | 104 | 0.6269 | 0.7033 |
| 0.5453 | 16.9231 | 110 | 0.6211 | 0.7106 |
| 0.5199 | 18.0 | 117 | 0.6150 | 0.7143 |
| 0.541 | 18.9231 | 123 | 0.6090 | 0.7216 |
| 0.5273 | 20.0 | 130 | 0.6007 | 0.7289 |
| 0.495 | 20.9231 | 136 | 0.5934 | 0.7289 |
| 0.4855 | 22.0 | 143 | 0.5855 | 0.7473 |
| 0.4763 | 22.9231 | 149 | 0.5787 | 0.7363 |
| 0.4287 | 24.0 | 156 | 0.5693 | 0.7509 |
| 0.445 | 24.9231 | 162 | 0.5619 | 0.7692 |
| 0.4343 | 26.0 | 169 | 0.5540 | 0.7802 |
| 0.3748 | 26.9231 | 175 | 0.5467 | 0.7875 |
| 0.4041 | 28.0 | 182 | 0.5421 | 0.8022 |
| 0.3543 | 28.9231 | 188 | 0.5291 | 0.8205 |
| 0.3972 | 30.0 | 195 | 0.5134 | 0.8278 |
| 0.3716 | 30.9231 | 201 | 0.5150 | 0.8242 |
| 0.3871 | 32.0 | 208 | 0.5100 | 0.8315 |
| 0.3729 | 32.9231 | 214 | 0.4986 | 0.8352 |
| 0.3286 | 34.0 | 221 | 0.4946 | 0.8462 |
| 0.4261 | 34.9231 | 227 | 0.4957 | 0.8388 |
| 0.4014 | 36.0 | 234 | 0.4850 | 0.8535 |
| 0.3514 | 36.9231 | 240 | 0.4807 | 0.8535 |
| 0.3883 | 38.0 | 247 | 0.4767 | 0.8535 |
| 0.3219 | 38.9231 | 253 | 0.4763 | 0.8535 |
| 0.4351 | 40.0 | 260 | 0.4738 | 0.8571 |
| 0.3068 | 40.9231 | 266 | 0.4688 | 0.8645 |
| 0.3356 | 42.0 | 273 | 0.4585 | 0.8645 |
| 0.345 | 42.9231 | 279 | 0.4541 | 0.8681 |
| 0.3254 | 44.0 | 286 | 0.4584 | 0.8645 |
| 0.3164 | 44.9231 | 292 | 0.4592 | 0.8571 |
| 0.3657 | 46.0 | 299 | 0.4534 | 0.8608 |
| 0.2655 | 46.9231 | 305 | 0.4502 | 0.8645 |
| 0.2981 | 48.0 | 312 | 0.4452 | 0.8645 |
| 0.3508 | 48.9231 | 318 | 0.4371 | 0.8791 |
| 0.3419 | 50.0 | 325 | 0.4394 | 0.8755 |
| 0.2668 | 50.9231 | 331 | 0.4430 | 0.8755 |
| 0.2972 | 52.0 | 338 | 0.4395 | 0.8718 |
| 0.3514 | 52.9231 | 344 | 0.4371 | 0.8755 |
| 0.3012 | 54.0 | 351 | 0.4330 | 0.8791 |
| 0.2725 | 54.9231 | 357 | 0.4298 | 0.8791 |
| 0.2547 | 56.0 | 364 | 0.4289 | 0.8718 |
| 0.2896 | 56.9231 | 370 | 0.4282 | 0.8718 |
| 0.3469 | 58.0 | 377 | 0.4273 | 0.8718 |
| 0.3528 | 58.9231 | 383 | 0.4269 | 0.8718 |
| 0.2552 | 60.0 | 390 | 0.4324 | 0.8681 |
| 0.239 | 60.9231 | 396 | 0.4319 | 0.8645 |
| 0.3321 | 62.0 | 403 | 0.4270 | 0.8718 |
| 0.3115 | 62.9231 | 409 | 0.4184 | 0.8718 |
| 0.306 | 64.0 | 416 | 0.4169 | 0.8718 |
| 0.3086 | 64.9231 | 422 | 0.4176 | 0.8718 |
| 0.4256 | 66.0 | 429 | 0.4196 | 0.8718 |
| 0.2798 | 66.9231 | 435 | 0.4219 | 0.8718 |
| 0.3016 | 68.0 | 442 | 0.4224 | 0.8718 |
| 0.2791 | 68.9231 | 448 | 0.4207 | 0.8718 |
| 0.2651 | 70.0 | 455 | 0.4189 | 0.8718 |
| 0.2466 | 70.9231 | 461 | 0.4178 | 0.8718 |
| 0.1913 | 72.0 | 468 | 0.4177 | 0.8718 |
| 0.2719 | 72.9231 | 474 | 0.4164 | 0.8718 |
| 0.3364 | 74.0 | 481 | 0.4166 | 0.8718 |
| 0.283 | 74.9231 | 487 | 0.4179 | 0.8755 |
| 0.2891 | 76.0 | 494 | 0.4174 | 0.8755 |
| 0.2625 | 76.9231 | 500 | 0.4180 | 0.8755 |
| 0.2843 | 78.0 | 507 | 0.4184 | 0.8718 |
| 0.375 | 78.9231 | 513 | 0.4167 | 0.8755 |
| 0.3107 | 80.0 | 520 | 0.4150 | 0.8755 |
| 0.3742 | 80.9231 | 526 | 0.4145 | 0.8718 |
| 0.2574 | 82.0 | 533 | 0.4145 | 0.8755 |
| 0.329 | 82.9231 | 539 | 0.4149 | 0.8755 |
| 0.2727 | 84.0 | 546 | 0.4145 | 0.8755 |
| 0.2977 | 84.9231 | 552 | 0.4149 | 0.8755 |
| 0.2611 | 86.0 | 559 | 0.4160 | 0.8718 |
| 0.2542 | 86.9231 | 565 | 0.4170 | 0.8718 |
| 0.2665 | 88.0 | 572 | 0.4171 | 0.8718 |
| 0.2654 | 88.9231 | 578 | 0.4170 | 0.8718 |
| 0.3059 | 90.0 | 585 | 0.4172 | 0.8718 |
| 0.2377 | 90.9231 | 591 | 0.4173 | 0.8718 |
| 0.2896 | 92.0 | 598 | 0.4172 | 0.8718 |
| 0.3133 | 92.3077 | 600 | 0.4172 | 0.8718 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_7
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-msn-small-lateral_flow_ivalidation_train_test_7

This model is a fine-tuned version of [Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_7](https://huggingface.co/Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_7) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4160
- Accuracy: 0.8791

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 100
- label_smoothing_factor: 0.1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 3 | 0.4160 | 0.8791 |
| No log | 1.8462 | 6 | 0.4668 | 0.8388 |
| No log | 2.7692 | 9 | 0.5433 | 0.8022 |
| 0.3869 | 4.0 | 13 | 0.5052 | 0.8168 |
| 0.3869 | 4.9231 | 16 | 0.4591 | 0.8571 |
| 0.3869 | 5.8462 | 19 | 0.4820 | 0.8278 |
| 0.3658 | 6.7692 | 22 | 0.4953 | 0.8095 |
| 0.3658 | 8.0 | 26 | 0.4497 | 0.8608 |
| 0.3658 | 8.9231 | 29 | 0.4686 | 0.8315 |
| 0.3439 | 9.8462 | 32 | 0.4506 | 0.8608 |
| 0.3439 | 10.7692 | 35 | 0.4859 | 0.8168 |
| 0.3439 | 12.0 | 39 | 0.4929 | 0.8168 |
| 0.3416 | 12.9231 | 42 | 0.4957 | 0.8059 |
| 0.3416 | 13.8462 | 45 | 0.5229 | 0.7875 |
| 0.3416 | 14.7692 | 48 | 0.4473 | 0.8535 |
| 0.324 | 16.0 | 52 | 0.5260 | 0.8059 |
| 0.324 | 16.9231 | 55 | 0.4582 | 0.8462 |
| 0.324 | 17.8462 | 58 | 0.5299 | 0.7839 |
| 0.3273 | 18.7692 | 61 | 0.4947 | 0.8205 |
| 0.3273 | 20.0 | 65 | 0.5393 | 0.7692 |
| 0.3273 | 20.9231 | 68 | 0.4916 | 0.8278 |
| 0.3397 | 21.8462 | 71 | 0.5360 | 0.7802 |
| 0.3397 | 22.7692 | 74 | 0.5661 | 0.7656 |
| 0.3397 | 24.0 | 78 | 0.6354 | 0.7216 |
| 0.3344 | 24.9231 | 81 | 0.6782 | 0.7033 |
| 0.3344 | 25.8462 | 84 | 0.5704 | 0.7582 |
| 0.3344 | 26.7692 | 87 | 0.6537 | 0.6777 |
| 0.3325 | 28.0 | 91 | 0.4798 | 0.8425 |
| 0.3325 | 28.9231 | 94 | 0.5158 | 0.8059 |
| 0.3325 | 29.8462 | 97 | 0.5408 | 0.7912 |
| 0.3283 | 30.7692 | 100 | 0.5964 | 0.7399 |
| 0.3283 | 32.0 | 104 | 0.5069 | 0.8205 |
| 0.3283 | 32.9231 | 107 | 0.5396 | 0.7875 |
| 0.3229 | 33.8462 | 110 | 0.5203 | 0.7985 |
| 0.3229 | 34.7692 | 113 | 0.5464 | 0.7875 |
| 0.3229 | 36.0 | 117 | 0.5890 | 0.7509 |
| 0.3207 | 36.9231 | 120 | 0.5080 | 0.8132 |
| 0.3207 | 37.8462 | 123 | 0.4944 | 0.8168 |
| 0.3207 | 38.7692 | 126 | 0.4968 | 0.8095 |
| 0.3286 | 40.0 | 130 | 0.4874 | 0.8132 |
| 0.3286 | 40.9231 | 133 | 0.5013 | 0.8059 |
| 0.3286 | 41.8462 | 136 | 0.5329 | 0.7656 |
| 0.3286 | 42.7692 | 139 | 0.6199 | 0.6996 |
| 0.3154 | 44.0 | 143 | 0.4854 | 0.8059 |
| 0.3154 | 44.9231 | 146 | 0.5545 | 0.7509 |
| 0.3154 | 45.8462 | 149 | 0.5267 | 0.7729 |
| 0.3119 | 46.7692 | 152 | 0.5214 | 0.7802 |
| 0.3119 | 48.0 | 156 | 0.5265 | 0.7839 |
| 0.3119 | 48.9231 | 159 | 0.5137 | 0.7985 |
| 0.3036 | 49.8462 | 162 | 0.5354 | 0.7839 |
| 0.3036 | 50.7692 | 165 | 0.5269 | 0.7875 |
| 0.3036 | 52.0 | 169 | 0.5797 | 0.7399 |
| 0.2995 | 52.9231 | 172 | 0.6258 | 0.7179 |
| 0.2995 | 53.8462 | 175 | 0.5512 | 0.7692 |
| 0.2995 | 54.7692 | 178 | 0.5517 | 0.7619 |
| 0.306 | 56.0 | 182 | 0.5590 | 0.7546 |
| 0.306 | 56.9231 | 185 | 0.5514 | 0.7619 |
| 0.306 | 57.8462 | 188 | 0.5597 | 0.7509 |
| 0.2989 | 58.7692 | 191 | 0.5957 | 0.7326 |
| 0.2989 | 60.0 | 195 | 0.5366 | 0.7766 |
| 0.2989 | 60.9231 | 198 | 0.5465 | 0.7729 |
| 0.2931 | 61.8462 | 201 | 0.6171 | 0.7253 |
| 0.2931 | 62.7692 | 204 | 0.5768 | 0.7509 |
| 0.2931 | 64.0 | 208 | 0.5706 | 0.7509 |
| 0.299 | 64.9231 | 211 | 0.5962 | 0.7363 |
| 0.299 | 65.8462 | 214 | 0.6220 | 0.7216 |
| 0.299 | 66.7692 | 217 | 0.5929 | 0.7363 |
| 0.2969 | 68.0 | 221 | 0.6136 | 0.7253 |
| 0.2969 | 68.9231 | 224 | 0.6092 | 0.7289 |
| 0.2969 | 69.8462 | 227 | 0.6029 | 0.7253 |
| 0.3015 | 70.7692 | 230 | 0.5356 | 0.7766 |
| 0.3015 | 72.0 | 234 | 0.5376 | 0.7692 |
| 0.3015 | 72.9231 | 237 | 0.5886 | 0.7436 |
| 0.2919 | 73.8462 | 240 | 0.5869 | 0.7436 |
| 0.2919 | 74.7692 | 243 | 0.5846 | 0.7473 |
| 0.2919 | 76.0 | 247 | 0.5507 | 0.7656 |
| 0.288 | 76.9231 | 250 | 0.5801 | 0.7509 |
| 0.288 | 77.8462 | 253 | 0.6077 | 0.7399 |
| 0.288 | 78.7692 | 256 | 0.5848 | 0.7436 |
| 0.2951 | 80.0 | 260 | 0.5435 | 0.7692 |
| 0.2951 | 80.9231 | 263 | 0.5638 | 0.7656 |
| 0.2951 | 81.8462 | 266 | 0.5795 | 0.7399 |
| 0.2951 | 82.7692 | 269 | 0.5774 | 0.7509 |
| 0.2875 | 84.0 | 273 | 0.5703 | 0.7509 |
| 0.2875 | 84.9231 | 276 | 0.5713 | 0.7509 |
| 0.2875 | 85.8462 | 279 | 0.5784 | 0.7473 |
| 0.2855 | 86.7692 | 282 | 0.5904 | 0.7436 |
| 0.2855 | 88.0 | 286 | 0.5917 | 0.7326 |
| 0.2855 | 88.9231 | 289 | 0.5860 | 0.7473 |
| 0.2964 | 89.8462 | 292 | 0.5858 | 0.7473 |
| 0.2964 | 90.7692 | 295 | 0.5823 | 0.7436 |
| 0.2964 | 92.0 | 299 | 0.5817 | 0.7436 |
| 0.291 | 92.3077 | 300 | 0.5816 | 0.7436 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
[ "invalid", "valid" ]
AlvaroVasquezAI/beans-ViT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# beans-ViT

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0389
- Accuracy: 0.9850

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1471 | 3.8462 | 500 | 0.0389 | 0.9850 |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
DaniServin/vit_model0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit_model0

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0396
- Accuracy: 0.9925

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1451 | 3.8462 | 500 | 0.0396 | 0.9925 |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Say2410/vit-fire-detection
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fire-detection This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fire", "normal", "smoke" ]
debajyotim/swin-tiny-patch4-window7-224-finetuned-debu
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-debu This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.9412 | 8 | 0.3356 | 0.9244 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.0.1+cpu - Datasets 3.2.0 - Tokenizers 0.21.0
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
CGCTG/orientation_resnet-50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # orientation_resnet-50 This model is a fine-tuned version of [CGCTG/orientation_resnet-50](https://huggingface.co/CGCTG/orientation_resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5106 - Accuracy: 0.7598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.7836 | 1.0 | 87 | 1.2301 | 0.4986 | | 0.6808 | 2.0 | 174 | 0.6659 | 0.6382 | | 0.5641 | 2.9711 | 258 | 0.5106 | 0.7598 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "0", "180", "270", "90" ]
CharlesCGCTG/model_resnet-50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_resnet-50 This model is a fine-tuned version of [CGCTG/orientation_resnet-50](https://huggingface.co/CGCTG/orientation_resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8638 - Accuracy: 0.4855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.8151 | 0.9942 | 86 | 0.8638 | 0.4855 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "0", "180", "270", "90" ]
midhunesh/finetuned-indian-food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-indian-food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2692 - Accuracy: 0.9341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3949 | 0.3003 | 100 | 0.6593 | 0.8395 | | 0.2833 | 0.6006 | 200 | 0.3689 | 0.9001 | | 0.4671 | 0.9009 | 300 | 0.5113 | 0.8682 | | 0.1231 | 1.2012 | 400 | 0.3643 | 0.9097 | | 0.1812 | 1.5015 | 500 | 0.3605 | 0.9033 | | 0.2414 | 1.8018 | 600 | 0.3426 | 0.9203 | | 0.0845 | 2.1021 | 700 | 0.3238 | 0.9150 | | 0.1232 | 2.4024 | 800 | 0.3523 | 0.9129 | | 0.1553 | 2.7027 | 900 | 0.3726 | 0.9065 | | 0.1323 | 3.0030 | 1000 | 0.2706 | 0.9352 | | 0.1057 | 3.3033 | 1100 | 0.2697 | 0.9373 | | 0.1585 | 3.6036 | 1200 | 0.2695 | 0.9341 | | 0.0312 | 3.9039 | 1300 | 0.2692 | 0.9341 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "burger", "butter_naan", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV3 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2067 - Accuracy: 0.9589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 3.9845 | 1.0 | 21 | 1.6832 | 0.3425 | | 2.4369 | 2.0 | 42 | 1.1981 | 0.4384 | | 1.7752 | 3.0 | 63 | 0.8412 | 0.6301 | | 1.3772 | 4.0 | 84 | 0.7895 | 0.7123 | | 1.1556 | 5.0 | 105 | 0.7385 | 0.7808 | | 1.0059 | 6.0 | 126 | 0.6626 | 0.8082 | | 0.8598 | 7.0 | 147 | 0.5403 | 0.7808 | | 0.8724 | 8.0 | 168 | 0.5520 | 0.8219 | | 0.7096 | 9.0 | 189 | 0.5182 | 0.8356 | | 0.5038 | 10.0 | 210 | 0.4133 | 0.8493 | | 0.4951 | 11.0 | 231 | 0.3548 | 0.8767 | | 0.4692 | 12.0 | 252 | 0.3845 | 0.8493 | | 0.5339 | 13.0 | 273 | 0.3178 | 0.8904 | | 0.4536 | 14.0 | 294 | 0.3252 | 0.8904 | | 0.4369 | 15.0 | 315 | 0.2785 | 0.8904 | | 0.3941 | 16.0 | 336 | 0.2900 | 0.9041 | | 0.4363 | 17.0 | 357 | 0.3426 | 0.8630 | | 0.2819 | 18.0 | 378 | 0.2839 | 0.9041 | | 0.361 | 19.0 | 399 | 0.2223 | 0.9041 | | 0.1857 | 20.0 | 420 | 0.2522 | 0.9178 | | 0.3161 | 21.0 | 441 | 0.2164 | 0.9178 | | 0.3273 | 22.0 | 462 | 0.2224 | 0.9315 | | 0.3458 | 23.0 | 483 | 0.2199 | 0.9452 | | 0.337 | 24.0 | 504 | 0.2377 | 0.9315 | | 0.1801 | 25.0 | 525 | 0.2067 | 0.9589 | | 0.3283 | 26.0 | 546 | 0.2401 | 0.9315 | | 0.2211 | 27.0 | 567 | 0.2167 | 0.9315 | | 0.1783 | 28.0 | 588 | 0.2180 | 0.9315 | | 0.2783 | 28.5854 | 600 | 0.2223 | 0.9315 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzadada", "avanzadahumedada", "avanzada", "avanzada humeda", "leve", "leveda", "moderada", "moderadada", "no dmae", "nodmaeda" ]
pylu5229/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0013 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3571 | 1.0 | 106 | 0.0469 | 0.992 | | 0.1158 | 2.0 | 212 | 0.0013 | 1.0 | | 0.0934 | 2.9763 | 315 | 0.0003 | 1.0 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "e", "n", "normal", "s", "w" ]
ppicazo/allsky-stars-detected
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # allsky-stars-detected This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0255 - Accuracy: 0.9952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1339 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0436 | 1.0 | 148 | 0.0582 | 0.9809 | | 0.0121 | 2.0 | 296 | 0.0405 | 0.9904 | | 0.0112 | 3.0 | 444 | 0.0383 | 0.9856 | | 0.01 | 4.0 | 592 | 0.0270 | 0.9952 | | 0.0098 | 5.0 | 740 | 0.0255 | 0.9952 | ### Framework versions - Transformers 4.49.0.dev0 - Pytorch 2.5.0+cpu - Datasets 3.0.1 - Tokenizers 0.21.0
[ "no_stars", "stars" ]
Say2410/vit-edp-fire-detection
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-edp-fire-detection This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "default", "fire", "smoke" ]
toprove/mobile-v2-fine-tuned-prove
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "dni", "doc" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV4 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9551 - Accuracy: 0.7115 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.1112 | 1.0 | 23 | 1.4616 | 0.4423 | | 2.4301 | 2.0 | 46 | 1.3378 | 0.3846 | | 1.8107 | 3.0 | 69 | 1.1497 | 0.4423 | | 1.3272 | 4.0 | 92 | 1.2177 | 0.5 | | 1.2039 | 5.0 | 115 | 1.1250 | 0.5577 | | 1.0311 | 6.0 | 138 | 1.0660 | 0.5577 | | 1.0515 | 7.0 | 161 | 1.2242 | 0.5 | | 0.8709 | 8.0 | 184 | 1.0952 | 0.5962 | | 0.677 | 9.0 | 207 | 1.1033 | 0.5385 | | 0.6763 | 10.0 | 230 | 0.9551 | 0.7115 | | 0.5749 | 11.0 | 253 | 1.0428 | 0.6346 | | 0.4896 | 12.0 | 276 | 1.0981 | 0.6538 | | 0.4817 | 13.0 | 299 | 1.3429 | 0.4808 | | 0.4264 | 14.0 | 322 | 1.3040 | 0.6154 | | 0.5637 | 15.0 | 345 | 1.2592 | 0.4808 | | 0.3846 | 16.0 | 368 | 1.1849 | 0.6154 | | 0.5337 | 17.0 | 391 | 1.2025 | 0.6346 | | 0.34 | 18.0 | 414 | 1.0894 | 0.6346 | | 0.3511 | 19.0 | 437 | 1.2145 | 0.6346 | | 0.2539 | 20.0 | 460 | 1.1755 | 0.6346 | | 0.2683 | 21.0 | 483 | 1.2359 | 0.6731 | | 0.3144 | 22.0 | 506 | 1.2633 | 0.6538 | | 0.3249 | 23.0 | 529 | 1.2980 | 0.6346 | | 0.2363 | 24.0 | 552 | 1.1872 | 0.6538 | | 0.2876 | 25.0 | 575 | 1.2377 | 0.6923 | | 0.2694 | 26.0 | 598 | 1.2695 | 0.6538 | | 0.2307 | 27.0 | 621 | 1.2481 | 0.6731 | | 0.2508 | 28.0 | 644 | 1.3112 | 0.6731 | | 0.3558 | 29.0 | 667 | 1.3209 | 0.6731 | | 0.2418 | 30.0 | 690 | 1.3233 | 0.6538 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV5 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1815 - Accuracy: 0.6346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.0169 | 1.0 | 10 | 1.4927 | 0.3654 | | 2.7431 | 2.0 | 20 | 1.2785 | 0.5577 | | 2.1791 | 3.0 | 30 | 1.0628 | 0.5769 | | 1.7293 | 4.0 | 40 | 1.0541 | 0.5577 | | 1.3855 | 5.0 | 50 | 0.9817 | 0.6346 | | 1.3507 | 6.0 | 60 | 0.9506 | 0.6154 | | 1.186 | 7.0 | 70 | 0.9259 | 0.5962 | | 1.1201 | 8.0 | 80 | 0.8867 | 0.6346 | | 1.0185 | 9.0 | 90 | 0.9556 | 0.5769 | | 0.981 | 10.0 | 100 | 0.9069 | 0.6923 | | 0.8472 | 11.0 | 110 | 0.9413 | 0.6346 | | 0.7333 | 12.0 | 120 | 0.9594 | 0.6346 | | 0.7 | 13.0 | 130 | 0.9818 | 0.6538 | | 0.7062 | 14.0 | 140 | 0.9588 | 0.6154 | | 0.6666 | 15.0 | 150 | 1.1824 | 0.5769 | | 0.6756 | 16.0 | 160 | 1.0542 | 0.6538 | | 0.6148 | 17.0 | 170 | 1.0112 | 0.6346 | | 0.6123 | 18.0 | 180 | 1.2390 | 0.5769 | | 0.5931 | 19.0 | 190 | 1.0358 | 0.6923 | | 0.5242 | 20.0 | 200 | 1.1471 | 0.6346 | | 0.5499 | 21.0 | 210 | 1.0452 | 0.6731 | | 0.4806 | 22.0 | 220 | 1.0887 | 0.6346 | | 0.4294 | 23.0 | 230 | 1.1078 | 0.6346 | | 0.5176 | 24.0 | 240 | 1.1218 | 0.5769 | | 0.4051 | 25.0 | 250 | 1.1255 | 0.6731 | | 0.4486 | 26.0 | 260 | 1.0775 | 0.6346 | | 0.4262 | 27.0 | 270 | 1.0711 | 0.6731 | | 0.4717 | 28.0 | 280 | 1.0975 | 0.6346 | | 0.4067 | 29.0 | 290 | 1.0647 | 0.6731 | | 0.3691 | 30.0 | 300 | 1.1139 | 0.6346 | | 0.4446 | 31.0 | 310 | 1.1270 | 0.5962 | | 0.3399 | 32.0 | 320 | 1.1498 | 0.6346 | | 0.3449 | 33.0 | 330 | 1.1864 | 0.6346 | | 0.4118 | 34.0 | 340 | 1.1989 | 0.5962 | | 0.3945 | 35.0 | 350 | 1.1928 | 0.5962 | | 0.3609 | 36.0 | 360 | 1.1815 | 0.6346 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
nguyenkhoa/dinov2_Liveness_detection_v2.1.4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nguyenkhoaht002/liveness_detection/runs/p2slnhef) # dinov2_Liveness_detection_v2.1.4 This model is a fine-tuned version of [nguyenkhoa/dinov2_Liveness_detection_v2.1.3](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.1.3) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0167 - Accuracy: 0.9967 - F1: 0.9967 - Recall: 0.9967 - Precision: 0.9967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 768 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.0402 | 0.5872 | 64 | 0.0219 | 0.9919 | 0.9919 | 0.9919 | 0.9920 | | 0.0231 | 1.1743 | 128 | 0.0355 | 0.9892 | 0.9891 | 0.9892 | 0.9893 | | 0.0152 | 1.7615 | 192 | 0.0178 | 0.9935 | 0.9935 | 0.9935 | 0.9935 | | 0.0085 | 2.3486 | 256 | 0.0279 | 0.9932 | 0.9932 | 0.9932 | 0.9932 | | 0.0086 | 2.9358 | 320 | 0.0196 | 0.9938 | 0.9938 | 0.9938 | 0.9938 | | 0.0041 | 3.5229 | 384 | 0.0185 | 0.9957 | 0.9957 | 0.9957 | 0.9957 | | 0.0018 | 4.1101 | 448 | 0.0191 | 0.9961 | 0.9961 | 0.9961 | 0.9961 | | 0.0008 | 4.6972 | 512 | 0.0167 | 0.9967 | 0.9967 | 0.9967 | 0.9967 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0 ### Evaluation results - APCER: 0.2754 - BPCER: 0.0162 - ACER: 0.1458 - Accuracy: 0.8032 - F1: 0.8369 - Recall: 0.9838 - Precision: 0.6088
[ "live", "spoof" ]
Maharat-lab/vit-seed-classification
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "label_0", "label_1", "label_2" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV6
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV6 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0899 - Accuracy: 0.5962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.65 | 1.0 | 12 | 1.5637 | 0.3462 | | 1.5819 | 2.0 | 24 | 1.4962 | 0.3269 | | 1.4527 | 3.0 | 36 | 1.3413 | 0.4615 | | 1.2853 | 4.0 | 48 | 1.2034 | 0.4423 | | 1.0185 | 5.0 | 60 | 1.1204 | 0.4808 | | 0.8102 | 6.0 | 72 | 1.1004 | 0.4808 | | 0.7477 | 7.0 | 84 | 1.1705 | 0.4808 | | 0.6684 | 8.0 | 96 | 1.0245 | 0.5192 | | 0.6297 | 9.0 | 108 | 1.0010 | 0.5577 | | 0.5453 | 10.0 | 120 | 1.1905 | 0.4808 | | 0.5496 | 11.0 | 132 | 1.1227 | 0.4808 | | 0.4993 | 12.0 | 144 | 0.9619 | 0.5577 | | 0.4297 | 13.0 | 156 | 1.0743 | 0.5192 | | 0.4459 | 14.0 | 168 | 1.0195 | 0.5577 | | 0.4219 | 15.0 | 180 | 1.0888 | 0.5 | | 0.3742 | 16.0 | 192 | 1.0123 | 0.5769 | | 0.4603 | 17.0 | 204 | 1.0503 | 0.5192 | | 0.3607 | 18.0 | 216 | 1.1305 | 0.5577 | | 0.3399 | 19.0 | 228 | 1.1327 | 0.5385 | | 0.3422 | 20.0 | 240 | 1.1125 | 0.5192 | | 0.3254 | 21.0 | 252 | 1.0243 | 0.5769 | | 0.3363 | 22.0 | 264 | 1.0753 | 0.5577 | | 0.3203 | 23.0 | 276 | 1.0778 | 0.5577 | | 0.3248 | 24.0 | 288 | 1.1100 | 0.5385 | | 0.2446 | 25.0 | 300 | 1.0773 | 0.5577 | | 0.3058 | 26.0 | 312 | 1.0875 | 0.5769 | | 0.254 | 27.0 | 324 | 1.0673 | 0.5769 | | 0.2644 | 28.0 | 336 | 1.1026 | 0.5769 | | 0.2962 | 29.0 | 348 | 1.0899 | 0.5962 | | 0.2579 | 30.0 | 360 | 1.0816 | 0.5962 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV7
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV7 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9241 - Accuracy: 0.6923 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6286 | 1.0 | 12 | 1.5624 | 0.2692 | | 1.5581 | 2.0 | 24 | 1.4706 | 0.5192 | | 1.4708 | 3.0 | 36 | 1.2967 | 0.5385 | | 1.3022 | 4.0 | 48 | 1.1556 | 0.4615 | | 0.9735 | 5.0 | 60 | 1.0354 | 0.5385 | | 0.7914 | 6.0 | 72 | 1.1014 | 0.4423 | | 0.7376 | 7.0 | 84 | 1.1087 | 0.4808 | | 0.692 | 8.0 | 96 | 0.9630 | 0.6346 | | 0.6586 | 9.0 | 108 | 0.9429 | 0.6346 | | 0.5799 | 10.0 | 120 | 0.9332 | 0.6346 | | 0.5557 | 11.0 | 132 | 1.1712 | 0.5192 | | 0.5233 | 12.0 | 144 | 1.0447 | 0.5577 | | 0.4427 | 13.0 | 156 | 0.8928 | 0.6538 | | 0.5043 | 14.0 | 168 | 1.0120 | 0.5962 | | 0.4167 | 15.0 | 180 | 0.9241 | 0.6923 | | 0.4601 | 16.0 | 192 | 0.8848 | 0.6538 | | 0.4619 | 17.0 | 204 | 0.9239 | 0.6923 | | 0.3822 | 18.0 | 216 | 0.9208 | 0.6731 | | 0.3707 | 19.0 | 228 | 1.0374 | 0.5769 | | 0.365 | 20.0 | 240 | 0.9900 | 0.6538 | | 0.3412 | 21.0 | 252 | 1.0541 | 0.6538 | | 0.3265 | 22.0 | 264 | 0.9913 | 0.6731 | | 0.3096 | 23.0 | 276 | 1.0355 | 0.6346 | | 0.3603 | 24.0 | 288 | 0.9986 | 0.6538 | | 0.2924 | 25.0 | 300 | 1.0046 | 0.6731 | | 0.3489 | 26.0 | 312 | 1.0560 | 0.6346 | | 0.2974 | 27.0 | 324 | 1.0076 | 0.6538 | | 0.2924 | 28.0 | 336 | 1.0164 | 0.6538 | | 0.3369 | 29.0 | 348 | 1.0260 | 0.6346 | | 0.2884 | 30.0 | 360 | 1.0293 | 0.6346 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
rosellaae/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0093 - Accuracy: 0.9973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0492 | 2.7027 | 500 | 0.0093 | 0.9973 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV9
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV9 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2286 - Accuracy: 0.5192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 3 | 1.5944 | 0.2308 | | No log | 2.0 | 6 | 1.5511 | 0.2692 | | No log | 3.0 | 9 | 1.4915 | 0.3462 | | 6.2974 | 4.0 | 12 | 1.4388 | 0.4615 | | 6.2974 | 5.0 | 15 | 1.3927 | 0.4615 | | 6.2974 | 6.0 | 18 | 1.3394 | 0.4423 | | 5.3611 | 7.0 | 21 | 1.3108 | 0.4423 | | 5.3611 | 8.0 | 24 | 1.3680 | 0.3462 | | 5.3611 | 9.0 | 27 | 1.2718 | 0.4038 | | 3.7205 | 10.0 | 30 | 1.2679 | 0.4231 | | 3.7205 | 11.0 | 33 | 1.3010 | 0.4038 | | 3.7205 | 12.0 | 36 | 1.2598 | 0.4231 | | 3.7205 | 13.0 | 39 | 1.2016 | 0.4231 | | 2.8178 | 14.0 | 42 | 1.1934 | 0.4423 | | 2.8178 | 15.0 | 45 | 1.1842 | 0.4808 | | 2.8178 | 16.0 | 48 | 1.1539 | 0.5 | | 2.4001 | 17.0 | 51 | 1.1308 | 0.4808 | | 2.4001 | 18.0 | 54 | 1.2173 | 0.4615 | | 2.4001 | 19.0 | 57 | 1.1670 | 0.5 | | 2.081 | 20.0 | 60 | 1.1792 | 0.5 | | 2.081 | 21.0 | 63 | 1.2286 | 0.5192 | | 2.081 | 22.0 | 66 | 1.2633 | 0.5 | | 2.081 | 23.0 | 69 | 1.2380 | 0.5 | | 1.8588 | 24.0 | 72 | 1.2498 | 0.4808 | | 1.8588 | 25.0 | 75 | 1.2591 | 0.5 | | 1.8588 | 26.0 | 78 | 1.2653 | 0.5 | | 1.7634 | 27.0 | 81 | 1.2599 | 0.5 | | 1.7634 | 28.0 | 84 | 1.2549 | 0.5 | | 1.7634 | 29.0 | 87 | 1.2545 | 0.5192 | | 1.8177 | 30.0 | 90 | 1.2547 | 0.5192 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
liamxostrander/vit-base-patch16-224-in21k-v2024-11-07
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-v2024-11-07 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1875 - Accuracy: 0.9449 - F1: 0.8664 - Precision: 0.8559 - Recall: 0.8772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00025 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0808 | 1.1905 | 100 | 0.1574 | 0.9408 | 0.8531 | 0.8614 | 0.8450 | | 0.0908 | 2.3810 | 200 | 0.1861 | 0.9318 | 0.8327 | 0.8321 | 0.8333 | | 0.1393 | 3.5714 | 300 | 0.2000 | 0.9298 | 0.8297 | 0.8191 | 0.8406 | | 0.0911 | 4.7619 | 400 | 0.1639 | 0.9360 | 0.8448 | 0.8345 | 0.8553 | | 0.095 | 5.9524 | 500 | 0.1779 | 0.9393 | 0.8507 | 0.8519 | 0.8494 | | 0.0767 | 7.1429 | 600 | 0.1691 | 0.9411 | 0.8563 | 0.8501 | 0.8626 | | 0.0918 | 8.3333 | 700 | 0.1709 | 0.9375 | 0.8476 | 0.8415 | 0.8538 | | 0.0742 | 9.5238 | 800 | 0.1703 | 0.9378 | 0.8471 | 0.8477 | 0.8465 | | 0.0931 | 10.7143 | 900 | 0.1779 | 0.9351 | 0.8388 | 0.8488 | 0.8289 | | 0.085 | 11.9048 | 1000 | 0.1835 | 0.9351 | 0.8427 | 0.8319 | 0.8538 | | 0.0712 | 13.0952 | 1100 | 0.1886 | 0.9339 | 0.8377 | 0.8377 | 0.8377 | | 0.0616 | 14.2857 | 1200 | 0.1863 | 0.9351 | 0.8429 | 0.8310 | 0.8553 | | 0.0628 | 15.4762 | 1300 | 0.1815 | 0.9387 | 0.8499 | 0.8474 | 0.8523 | | 0.0571 | 16.6667 | 1400 | 0.1749 | 0.9449 | 0.8685 | 0.8451 | 0.8933 | | 0.0496 | 17.8571 | 1500 | 0.1781 | 0.9384 | 0.8484 | 0.8502 | 0.8465 | | 0.0484 | 19.0476 | 1600 | 0.1859 | 0.9354 | 0.8406 | 0.8449 | 0.8363 | | 0.0487 | 20.2381 | 1700 | 0.1697 | 0.9446 | 0.8642 | 0.8630 | 0.8655 | | 0.0485 | 21.4286 | 1800 | 0.1876 | 0.9369 | 0.8470 | 0.8362 | 0.8582 | | 0.042 | 22.6190 | 1900 | 0.1835 | 0.9414 | 0.8576 | 0.8484 | 0.8670 | | 0.0367 | 23.8095 | 2000 | 0.1844 | 0.9432 | 0.8613 | 0.8557 | 0.8670 | | 0.0339 | 25.0 | 2100 | 0.1816 | 0.9411 | 0.8578 | 0.8432 | 0.8728 | | 0.0317 | 26.1905 | 2200 | 0.1817 | 0.9423 | 0.8602 | 0.8480 | 0.8728 | | 0.0349 | 27.3810 | 2300 | 0.1799 | 0.9426 | 0.8592 | 0.8574 | 0.8611 | | 0.0355 | 28.5714 | 2400 | 0.1932 | 0.9402 | 0.8540 | 0.8485 | 0.8596 | | 0.0296 | 29.7619 | 2500 | 0.1875 | 0.9449 | 0.8664 | 0.8559 | 0.8772 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "snowing", "raining", "sunny", "cloudy", "night", "snow_on_road", "partial_snow_on_road", "clear_pavement", "wet_pavement", "iced_lens" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV10
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV10 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0099 - Accuracy: 0.6731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 1.5825 | 0.2885 | | No log | 1.8421 | 8 | 1.4864 | 0.5385 | | No log | 2.8421 | 12 | 1.3187 | 0.5577 | | No log | 3.8421 | 16 | 1.0843 | 0.6731 | | No log | 4.8421 | 20 | 1.0044 | 0.5962 | | No log | 5.8421 | 24 | 0.8914 | 0.6346 | | No log | 6.8421 | 28 | 0.9119 | 0.5769 | | No log | 7.8421 | 32 | 0.9379 | 0.6538 | | No log | 8.8421 | 36 | 0.9471 | 0.6154 | | No log | 9.8421 | 40 | 0.9565 | 0.6538 | | No log | 10.8421 | 44 | 0.9581 | 0.6923 | | No log | 11.8421 | 48 | 0.9655 | 0.6923 | | 4.2193 | 12.8421 | 52 | 0.9880 | 0.7115 | | 4.2193 | 13.8421 | 56 | 0.9557 | 0.6923 | | 4.2193 | 14.8421 | 60 | 0.9275 | 0.6538 | | 4.2193 | 15.8421 | 64 | 1.0216 | 0.5769 | | 4.2193 | 16.8421 | 68 | 0.9646 | 0.6346 | | 4.2193 | 17.8421 | 72 | 0.9957 | 0.6731 | | 4.2193 | 18.8421 | 76 | 1.0366 | 0.6346 | | 4.2193 | 19.8421 | 80 | 0.9978 | 0.6538 | | 4.2193 | 20.8421 | 84 | 0.9941 | 0.6731 | | 4.2193 | 21.8421 | 88 | 1.0063 | 0.6923 | | 4.2193 | 22.8421 | 92 | 1.0114 | 0.6731 | | 4.2193 | 23.8421 | 96 | 1.0134 | 0.6731 | | 1.6367 | 24.8421 | 100 | 1.0097 | 0.6731 | | 1.6367 | 25.8421 | 104 | 1.0084 | 0.6731 | | 1.6367 | 26.8421 | 108 | 1.0097 | 0.6731 | | 1.6367 | 27.8421 | 112 | 1.0102 | 0.6731 | | 1.6367 | 28.8421 | 116 | 1.0100 | 0.6731 | | 1.6367 | 29.8421 | 120 | 1.0099 | 0.6731 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
ppicazo/allsky-stars-detected-v2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # allsky-stars-detected-v2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0171 - Accuracy: 0.9948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1339 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0394 | 1.0 | 270 | 0.0407 | 0.9895 | | 0.0973 | 2.0 | 540 | 0.0709 | 0.9843 | | 0.0057 | 3.0 | 810 | 0.0425 | 0.9869 | | 0.0403 | 4.0 | 1080 | 0.0499 | 0.9869 | | 0.0608 | 5.0 | 1350 | 0.0171 | 0.9948 | ### Framework versions - Transformers 4.49.0.dev0 - Pytorch 2.5.0+cpu - Datasets 3.0.1 - Tokenizers 0.21.0
[ "no_stars", "stars" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV11
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV11 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0641 - Accuracy: 0.6538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9013 | 20.0 | 100 | 0.8579 | 0.6346 | | 0.6158 | 40.0 | 200 | 1.0358 | 0.6538 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
hamriver/Hamilton
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hamilton This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0118 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1294 | 3.8462 | 500 | 0.0118 | 1.0 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/Hamilton
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hamilton This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0248 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1387 | 3.8462 | 500 | 0.0248 | 0.9925 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
RobertoSonic/swinv2-base-patch4-window8-256-dmae-humeda-DAV14
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-base-patch4-window8-256-dmae-humeda-DAV14 This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8478 - Accuracy: 0.7692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8421 | 4 | 1.6083 | 0.2115 | | No log | 1.8421 | 8 | 1.5519 | 0.3077 | | 7.0606 | 2.8421 | 12 | 1.4896 | 0.3846 | | 7.0606 | 3.8421 | 16 | 1.4160 | 0.3846 | | 6.3113 | 4.8421 | 20 | 1.3599 | 0.3269 | | 6.3113 | 5.8421 | 24 | 1.2338 | 0.3462 | | 6.3113 | 6.8421 | 28 | 1.1538 | 0.4808 | | 5.1603 | 7.8421 | 32 | 1.0931 | 0.5577 | | 5.1603 | 8.8421 | 36 | 1.0510 | 0.5577 | | 3.5688 | 9.8421 | 40 | 0.9583 | 0.5577 | | 3.5688 | 10.8421 | 44 | 0.9648 | 0.5577 | | 3.5688 | 11.8421 | 48 | 0.9486 | 0.6154 | | 2.9736 | 12.8421 | 52 | 0.9201 | 0.5962 | | 2.9736 | 13.8421 | 56 | 1.0203 | 0.5577 | | 2.5257 | 14.8421 | 60 | 0.8558 | 0.6154 | | 2.5257 | 15.8421 | 64 | 0.9309 | 0.5769 | | 2.5257 | 16.8421 | 68 | 0.9707 | 0.5769 | | 2.3819 | 17.8421 | 72 | 0.8505 | 0.6731 | | 2.3819 | 18.8421 | 76 | 0.9245 | 0.6538 | | 1.9541 | 19.8421 | 80 | 0.9093 | 0.6731 | | 1.9541 | 20.8421 | 84 | 0.8463 | 0.7115 | | 1.9541 | 21.8421 | 88 | 0.9135 | 0.6731 | | 1.7643 | 22.8421 | 92 | 0.8720 | 0.7115 | | 1.7643 | 23.8421 | 96 | 0.8631 | 0.7115 | | 1.5146 | 24.8421 | 100 | 0.8862 | 0.6923 | | 1.5146 | 25.8421 | 104 | 0.8584 | 0.75 | | 1.5146 | 26.8421 | 108 | 0.9111 | 0.6923 | | 1.4609 | 27.8421 | 112 | 0.8703 | 0.75 | | 1.4609 | 28.8421 | 116 | 0.8478 | 0.7692 | | 1.463 | 29.8421 | 120 | 0.8645 | 0.75 | | 1.463 | 30.8421 | 124 | 0.9137 | 0.6731 | | 1.463 | 31.8421 | 128 | 0.9311 | 0.6731 | | 1.3699 | 32.8421 | 132 | 0.9070 | 0.7115 | | 1.3699 | 33.8421 | 136 | 0.8930 | 0.7115 | | 1.2756 | 34.8421 | 140 | 0.8930 | 0.7115 | | 1.2756 | 35.8421 | 144 | 0.8935 | 0.7308 | | 1.2756 | 36.8421 | 148 | 0.8960 | 0.7308 | | 1.273 | 37.8421 | 152 | 0.8951 | 0.7308 | | 1.273 | 38.8421 | 156 | 0.8955 | 0.7308 | | 1.2626 | 39.8421 | 160 | 0.8954 | 0.7308 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-base-patch4-window8-256-dmae-humeda-DAV15
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-base-patch4-window8-256-dmae-humeda-DAV15 This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8423 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8696 | 5 | 1.5972 | 0.3077 | | 6.7562 | 1.8696 | 10 | 1.5357 | 0.3077 | | 6.7562 | 2.8696 | 15 | 1.4954 | 0.4038 | | 6.2842 | 3.8696 | 20 | 1.4612 | 0.3462 | | 6.2842 | 4.8696 | 25 | 1.3875 | 0.3269 | | 4.9858 | 5.8696 | 30 | 1.3370 | 0.3462 | | 4.9858 | 6.8696 | 35 | 1.2739 | 0.4423 | | 3.5596 | 7.8696 | 40 | 1.1774 | 0.4808 | | 3.5596 | 8.8696 | 45 | 1.1214 | 0.4808 | | 2.6814 | 9.8696 | 50 | 1.0999 | 0.5192 | | 2.6814 | 10.8696 | 55 | 1.1773 | 0.4615 | | 2.3236 | 11.8696 | 60 | 0.9874 | 0.5192 | | 2.3236 | 12.8696 | 65 | 1.1124 | 0.5 | | 1.8037 | 13.8696 | 70 | 0.8936 | 0.6538 | | 1.8037 | 14.8696 | 75 | 1.2064 | 0.4423 | | 1.6474 | 15.8696 | 80 | 0.8423 | 0.75 | | 1.6474 | 16.8696 | 85 | 1.0134 | 0.6346 | | 1.5505 | 17.8696 | 90 | 0.8965 | 0.6923 | | 1.5505 | 18.8696 | 95 | 0.9215 | 0.6538 | | 1.2697 | 19.8696 | 100 | 1.0155 | 0.6154 | | 1.2697 | 20.8696 | 105 | 0.8500 | 0.7115 | | 1.1783 | 21.8696 | 110 | 0.9573 | 0.6538 | | 1.1783 | 22.8696 | 115 | 0.8915 | 0.6923 | | 1.0235 | 23.8696 | 120 | 0.9831 | 0.6538 | | 1.0235 | 24.8696 | 125 | 0.9464 | 0.6538 | | 0.9706 | 25.8696 | 130 | 0.9413 | 0.6923 | | 0.9706 | 26.8696 | 135 | 1.0249 | 0.6346 | | 0.9409 | 27.8696 | 140 | 0.9754 | 0.6538 | | 0.9409 | 28.8696 | 145 | 0.9530 | 0.7115 | | 0.9447 | 29.8696 | 150 | 1.0266 | 0.6538 | | 0.9447 | 30.8696 | 155 | 1.0819 | 0.6538 | | 0.8352 | 31.8696 | 160 | 0.9922 | 0.6923 | | 0.8352 | 32.8696 | 165 | 0.9755 | 0.6923 | | 0.8055 | 33.8696 | 170 | 0.9768 | 0.7115 | | 0.8055 | 34.8696 | 175 | 0.9950 | 0.6923 | | 0.7481 | 35.8696 | 180 | 1.0135 | 0.6923 | | 0.7481 | 36.8696 | 185 | 1.0168 | 0.6923 | | 0.7483 | 37.8696 | 190 | 1.0091 | 0.6923 | | 0.7483 | 38.8696 | 195 | 1.0055 | 0.6923 | | 0.8145 | 39.8696 | 200 | 1.0040 | 0.6923 | | 0.8145 | 40.8696 | 205 | 1.0039 | 0.6923 | | 0.7501 | 41.8696 | 210 | 1.0038 | 0.6923 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-base-patch4-window8-256-dmae-humeda-DAV16
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-base-patch4-window8-256-dmae-humeda-DAV16 This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0641 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.8696 | 5 | 1.5391 | 0.4038 | | No log | 1.8696 | 10 | 1.4350 | 0.4231 | | 6.5563 | 2.8696 | 15 | 1.3179 | 0.5385 | | 6.5563 | 3.8696 | 20 | 1.2358 | 0.5385 | | 4.5658 | 4.8696 | 25 | 0.9991 | 0.5769 | | 4.5658 | 5.8696 | 30 | 0.9567 | 0.5385 | | 4.5658 | 6.8696 | 35 | 0.8482 | 0.6154 | | 2.7201 | 7.8696 | 40 | 1.1108 | 0.4615 | | 2.7201 | 8.8696 | 45 | 0.7993 | 0.6923 | | 1.9091 | 9.8696 | 50 | 0.8539 | 0.6154 | | 1.9091 | 10.8696 | 55 | 0.8361 | 0.6731 | | 1.6858 | 11.8696 | 60 | 0.8574 | 0.6731 | | 1.6858 | 12.8696 | 65 | 0.9489 | 0.6346 | | 1.6858 | 13.8696 | 70 | 0.8122 | 0.7115 | | 1.2131 | 14.8696 | 75 | 0.8131 | 0.6538 | | 1.2131 | 15.8696 | 80 | 0.8591 | 0.6731 | | 0.8967 | 16.8696 | 85 | 0.9155 | 0.6538 | | 0.8967 | 17.8696 | 90 | 0.9712 | 0.7115 | | 0.8967 | 18.8696 | 95 | 0.9574 | 0.6731 | | 0.8657 | 19.8696 | 100 | 1.0001 | 0.7115 | | 0.8657 | 20.8696 | 105 | 1.1041 | 0.5962 | | 0.6795 | 21.8696 | 110 | 1.0165 | 0.6923 | | 0.6795 | 22.8696 | 115 | 1.0816 | 0.6538 | | 0.5608 | 23.8696 | 120 | 1.1195 | 0.7308 | | 0.5608 | 24.8696 | 125 | 1.0680 | 0.6923 | | 0.5608 | 25.8696 | 130 | 1.1495 | 0.6923 | | 0.6841 | 26.8696 | 135 | 1.0789 | 0.7115 | | 0.6841 | 27.8696 | 140 | 1.0814 | 0.7115 | | 0.4526 | 28.8696 | 145 | 1.0830 | 0.6923 | | 0.4526 | 29.8696 | 150 | 1.0641 | 0.75 | | 0.4526 | 30.8696 | 155 | 1.1337 | 0.6731 | | 0.4067 | 31.8696 | 160 | 1.0867 | 0.6923 | | 0.4067 | 32.8696 | 165 | 1.1103 | 0.6731 | | 0.4003 | 33.8696 | 170 | 1.0909 | 0.6923 | | 0.4003 | 34.8696 | 175 | 1.0950 | 0.6731 | | 0.4415 | 35.8696 | 180 | 1.0712 | 0.7115 | | 0.4415 | 36.8696 | 185 | 1.0569 | 0.7115 | | 0.4415 | 37.8696 | 190 | 1.0618 | 0.6923 | | 0.3715 | 38.8696 | 195 | 1.0770 | 0.6923 | | 0.3715 | 39.8696 | 200 | 1.0976 | 0.6923 | | 0.4178 | 40.8696 | 205 | 1.1072 | 0.6923 | | 0.4178 | 41.8696 | 210 | 1.1047 | 0.6923 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
TalentoTechIA/william_Rosero
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # william_Rosero This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0761 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0545 | 3.8462 | 500 | 0.0761 | 0.9850 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/Andres_Yate
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Andres_Yate This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0430 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1315 | 3.8462 | 500 | 0.0430 | 0.9850 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/JuanVergara
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # JuanVergara This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0214 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1316 | 3.8462 | 500 | 0.0214 | 0.9925 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/Hamilton2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hamilton2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0291 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1434 | 3.8462 | 500 | 0.0291 | 0.9925 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/JuanDavidArdila
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # JuanDavidArdila This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0326 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0509 | 3.8462 | 500 | 0.0326 | 0.9850 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/GiovanniV
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GiovanniV This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0518 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1277 | 3.8462 | 500 | 0.0518 | 0.9850 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/Martin
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Martin This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0169 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1315 | 3.8462 | 500 | 0.0169 | 0.9925 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/Wilmer
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wilmer This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0663 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1373 | 3.8462 | 500 | 0.0663 | 0.9774 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
TalentoTechIA/Stevensm
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Stevensm This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0792 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1347 | 3.8462 | 500 | 0.0792 | 0.9774 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
wocclyl/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0923 - Accuracy: 0.9688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.8527 | 1.0 | 352 | 0.1296 | 0.9582 | | 1.4876 | 2.0 | 704 | 0.1052 | 0.9666 | | 1.2634 | 2.9922 | 1053 | 0.0923 | 0.9688 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Tokenizers 0.21.0
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
TalentoTechIA/ArmandoAlvarado
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArmandoAlvarado This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0493 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1228 | 3.8462 | 500 | 0.0493 | 0.9850 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "angular_leaf_spot", "bean_rust", "healthy" ]
pylu5229/16_label_check_point
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 16_label_check_point This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1354 | 1.0 | 563 | 0.0001 | 1.0 | | 0.0656 | 2.0 | 1126 | 0.0000 | 1.0 | | 0.0323 | 2.9991 | 1686 | 0.0000 | 1.0 | | 0.0426 | 4.0 | 2249 | 0.0000 | 1.0 | | 0.0525 | 4.9973 | 2810 | 0.0000 | 1.0 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "e", "es", "n", "ne", "nes", "normal", "ns", "nw", "nwe", "nwes", "nws", "s", "w", "we", "wes", "ws" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV17
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV17 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8543 - Accuracy: 0.7885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | No log | 0.9302 | 10 | 1.5864 | 0.2885 | | 6.7245 | 1.9302 | 20 | 1.4466 | 0.4615 | | 5.7484 | 2.9302 | 30 | 1.0303 | 0.5962 | | 3.9879 | 3.9302 | 40 | 0.9820 | 0.5577 | | 2.769 | 4.9302 | 50 | 0.8608 | 0.6538 | | 2.3766 | 5.9302 | 60 | 0.8945 | 0.6923 | | 2.3766 | 6.9302 | 70 | 0.7773 | 0.6346 | | 1.9183 | 7.9302 | 80 | 0.8082 | 0.6346 | | 1.4993 | 8.9302 | 90 | 0.9407 | 0.6923 | | 1.3461 | 9.9302 | 100 | 0.9281 | 0.75 | | 1.2085 | 10.9302 | 110 | 0.7563 | 0.7692 | | 0.8413 | 11.9302 | 120 | 0.9108 | 0.7308 | | 0.8413 | 12.9302 | 130 | 0.8543 | 0.7885 | | 0.9607 | 13.9302 | 140 | 1.2058 | 0.6731 | | 0.837 | 14.9302 | 150 | 0.9733 | 0.7115 | | 0.7641 | 15.9302 | 160 | 1.0169 | 0.6538 | | 0.7997 | 16.9302 | 170 | 0.8486 | 0.7308 | | 0.6171 | 17.9302 | 180 | 0.9551 | 0.7885 | | 0.6171 | 18.9302 | 190 | 1.0267 | 0.7308 | | 0.6755 | 19.9302 | 200 | 1.1810 | 0.6923 | | 0.6393 | 20.9302 | 210 | 1.0516 | 0.7308 | | 0.573 | 21.9302 | 220 | 1.1029 | 0.7115 | | 0.4657 | 22.9302 | 230 | 1.0257 | 0.7885 | | 0.4626 | 23.9302 | 240 | 1.2266 | 0.6923 | | 0.4626 | 24.9302 | 250 | 1.3491 | 0.6538 | | 0.4899 | 25.9302 | 260 | 1.2055 | 0.7692 | | 0.3991 | 26.9302 | 270 | 1.1633 | 0.6923 | | 0.3778 | 27.9302 | 280 | 1.1751 | 0.7308 | | 0.443 | 28.9302 | 290 | 1.1727 | 0.75 | | 0.43 | 29.9302 | 300 | 1.3292 | 0.7115 | | 0.43 | 30.9302 | 310 | 1.1873 | 0.7115 | | 0.4425 | 31.9302 | 320 | 1.2326 | 0.6538 | | 0.3098 | 32.9302 | 330 | 1.2379 | 0.7115 | | 0.4086 | 33.9302 | 340 | 1.3020 | 0.6731 | | 0.3046 | 34.9302 | 350 | 1.2686 | 0.7115 | | 0.3503 | 35.9302 | 360 | 1.3006 | 0.6923 | | 0.3503 | 36.9302 | 370 | 1.3207 | 0.6923 | | 0.2985 | 37.9302 | 380 | 1.3626 | 0.7115 | | 0.3445 | 38.9302 | 390 | 1.3689 | 0.7115 | | 0.3017 | 39.9302 | 400 | 1.3523 | 0.7115 | | 0.3446 | 40.9302 | 410 | 1.3447 | 0.7115 | | 0.2799 | 41.9302 | 420 | 1.3395 | 0.7115 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV18
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV18 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4318 - Accuracy: 0.6731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 3.3548 | 1.0 | 22 | 1.5527 | 0.3077 | | 3.2091 | 2.0 | 44 | 1.5476 | 0.3077 | | 2.9355 | 3.0 | 66 | 1.3747 | 0.4808 | | 2.3577 | 4.0 | 88 | 1.3003 | 0.4231 | | 1.4766 | 5.0 | 110 | 1.1989 | 0.4808 | | 1.3704 | 6.0 | 132 | 1.0006 | 0.6538 | | 1.118 | 7.0 | 154 | 1.0605 | 0.6154 | | 1.0178 | 8.0 | 176 | 0.9814 | 0.5962 | | 0.8974 | 9.0 | 198 | 1.0096 | 0.6538 | | 0.672 | 10.0 | 220 | 1.1785 | 0.6154 | | 0.6533 | 11.0 | 242 | 1.3520 | 0.5577 | | 0.4788 | 12.0 | 264 | 0.9816 | 0.75 | | 0.5707 | 13.0 | 286 | 1.0714 | 0.6538 | | 0.424 | 14.0 | 308 | 1.2733 | 0.6923 | | 0.4466 | 15.0 | 330 | 1.1687 | 0.7115 | | 0.4284 | 16.0 | 352 | 1.1879 | 0.6731 | | 0.4528 | 17.0 | 374 | 1.1845 | 0.6346 | | 0.3062 | 18.0 | 396 | 1.2126 | 0.7115 | | 0.2953 | 19.0 | 418 | 1.4019 | 0.6154 | | 0.3169 | 20.0 | 440 | 1.3913 | 0.6538 | | 0.3235 | 21.0 | 462 | 1.1455 | 0.6346 | | 0.3069 | 22.0 | 484 | 1.4152 | 0.6538 | | 0.2402 | 23.0 | 506 | 1.1008 | 0.7115 | | 0.2055 | 24.0 | 528 | 1.2535 | 0.6538 | | 0.265 | 25.0 | 550 | 1.3737 | 0.6538 | | 0.2097 | 26.0 | 572 | 1.4048 | 0.6923 | | 0.2185 | 27.0 | 594 | 1.3243 | 0.6923 | | 0.1778 | 28.0 | 616 | 1.3771 | 0.6346 | | 0.1815 | 29.0 | 638 | 1.3688 | 0.5769 | | 0.2201 | 30.0 | 660 | 1.3827 | 0.6346 | | 0.1808 | 31.0 | 682 | 1.3749 | 0.6731 | | 0.2013 | 32.0 | 704 | 1.4271 | 0.6538 | | 0.1822 | 33.0 | 726 | 1.4023 | 0.6923 | | 0.1602 | 34.0 | 748 | 1.3908 | 0.6731 | | 0.1303 | 35.0 | 770 | 1.4396 | 0.6731 | | 0.1848 | 36.0 | 792 | 1.4828 | 0.6923 | | 0.0982 | 37.0 | 814 | 1.4231 | 0.6923 | | 0.1267 | 38.0 | 836 | 1.4148 | 0.6731 | | 0.1467 | 39.0 | 858 | 1.3743 | 0.6731 | | 0.0891 | 40.0 | 880 | 1.4194 | 0.6923 | | 0.1252 | 41.0 | 902 | 1.4187 | 0.6923 | | 0.1189 | 42.0 | 924 | 1.4043 | 0.6346 | | 0.1328 | 43.0 | 946 | 1.4175 | 0.6731 | | 0.172 | 44.0 | 968 | 1.4249 | 0.6731 | | 0.1324 | 45.0 | 990 | 1.4320 | 0.6731 | | 0.1363 | 46.0 | 1012 | 1.4320 | 0.6731 | | 0.125 | 47.0 | 1034 | 1.4318 | 0.6731 | | 0.1122 | 47.7442 | 1050 | 1.4318 | 0.6731 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV19
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV19 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4438 - Accuracy: 0.7308 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.3 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6103 | 1.0 | 22 | 1.6051 | 0.2885 | | 1.4727 | 2.0 | 44 | 1.4581 | 0.4808 | | 1.0885 | 3.0 | 66 | 1.1148 | 0.5385 | | 0.8317 | 4.0 | 88 | 1.1753 | 0.4808 | | 0.5767 | 5.0 | 110 | 1.0842 | 0.5192 | | 0.4983 | 6.0 | 132 | 0.9595 | 0.5769 | | 0.4392 | 7.0 | 154 | 0.9108 | 0.6538 | | 0.38 | 8.0 | 176 | 0.7973 | 0.6923 | | 0.367 | 9.0 | 198 | 0.8772 | 0.6346 | | 0.2892 | 10.0 | 220 | 0.9240 | 0.6346 | | 0.2627 | 11.0 | 242 | 1.1102 | 0.6154 | | 0.1956 | 12.0 | 264 | 0.8497 | 0.7115 | | 0.2529 | 13.0 | 286 | 0.9588 | 0.6923 | | 0.1933 | 14.0 | 308 | 1.4496 | 0.5962 | | 0.2023 | 15.0 | 330 | 1.2467 | 0.6346 | | 0.1725 | 16.0 | 352 | 1.1693 | 0.6731 | | 0.1604 | 17.0 | 374 | 1.1374 | 0.6346 | | 0.1909 | 18.0 | 396 | 0.9065 | 0.7115 | | 0.1577 | 19.0 | 418 | 1.1488 | 0.6538 | | 0.1323 | 20.0 | 440 | 1.3994 | 0.6923 | | 0.1342 | 21.0 | 462 | 1.1350 | 0.6731 | | 0.1024 | 22.0 | 484 | 1.2422 | 0.6538 | | 0.1054 | 23.0 | 506 | 1.0670 | 0.75 | | 0.0809 | 24.0 | 528 | 1.2367 | 0.6731 | | 0.0856 | 25.0 | 550 | 1.1758 | 0.7308 | | 0.0781 | 26.0 | 572 | 1.1735 | 0.6731 | | 0.1136 | 27.0 | 594 | 1.5008 | 0.6923 | | 0.0784 | 28.0 | 616 | 1.2966 | 0.7308 | | 0.0648 | 29.0 | 638 | 1.2018 | 0.7115 | | 0.0941 | 30.0 | 660 | 1.0879 | 0.6731 | | 0.0654 | 31.0 | 682 | 1.2646 | 0.7115 | | 0.0967 | 32.0 | 704 | 1.0537 | 0.75 | | 0.0717 | 33.0 | 726 | 1.4332 | 0.7115 | | 0.0715 | 34.0 | 748 | 1.2683 | 0.7308 | | 0.0773 | 35.0 | 770 | 1.3363 | 0.6731 | | 0.0767 | 36.0 | 792 | 1.3192 | 0.6731 | | 0.0343 | 37.0 | 814 | 1.2926 | 0.7115 | | 0.0524 | 38.0 | 836 | 1.4072 | 0.7115 | | 0.052 | 39.0 | 858 | 1.4377 | 0.6923 | | 0.0247 | 40.0 | 880 | 1.4420 | 0.6923 | | 0.0256 | 41.0 | 902 | 1.4403 | 0.7115 | | 0.0384 | 42.0 | 924 | 1.4438 | 0.7308 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV20
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-dmae-humeda-DAV20 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5291 - Accuracy: 0.6923 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 3.1476 | 1.0 | 22 | 1.5220 | 0.3462 | | 3.0161 | 2.0 | 44 | 1.4201 | 0.4231 | | 2.6066 | 3.0 | 66 | 1.2502 | 0.4615 | | 1.9733 | 4.0 | 88 | 1.0608 | 0.5385 | | 1.5227 | 5.0 | 110 | 0.9184 | 0.6538 | | 1.1822 | 6.0 | 132 | 0.9237 | 0.6538 | | 1.0809 | 7.0 | 154 | 0.9022 | 0.6346 | | 0.8904 | 8.0 | 176 | 0.8389 | 0.7308 | | 0.8172 | 9.0 | 198 | 0.8809 | 0.6731 | | 0.6112 | 10.0 | 220 | 0.9043 | 0.7308 | | 0.5935 | 11.0 | 242 | 1.2283 | 0.5769 | | 0.4656 | 12.0 | 264 | 0.8913 | 0.7308 | | 0.5363 | 13.0 | 286 | 0.8755 | 0.7308 | | 0.4195 | 14.0 | 308 | 0.9897 | 0.6538 | | 0.3861 | 15.0 | 330 | 0.9968 | 0.6923 | | 0.3865 | 16.0 | 352 | 1.2396 | 0.6154 | | 0.3542 | 17.0 | 374 | 1.0785 | 0.6538 | | 0.3784 | 18.0 | 396 | 0.9859 | 0.7115 | | 0.2699 | 19.0 | 418 | 1.1501 | 0.7115 | | 0.2161 | 20.0 | 440 | 1.1033 | 0.6538 | | 0.2629 | 21.0 | 462 | 1.1783 | 0.6923 | | 0.2898 | 22.0 | 484 | 1.0924 | 0.7308 | | 0.2589 | 23.0 | 506 | 1.1429 | 0.7115 | | 0.2096 | 24.0 | 528 | 1.1767 | 0.6731 | | 0.2235 | 25.0 | 550 | 1.3683 | 0.6346 | | 0.1683 | 26.0 | 572 | 1.0724 | 0.75 | | 0.2041 | 27.0 | 594 | 1.2481 | 0.75 | | 0.2238 | 28.0 | 616 | 1.3583 | 0.6923 | | 0.218 | 29.0 | 638 | 1.1183 | 0.7115 | | 0.1971 | 30.0 | 660 | 1.1319 | 0.6923 | | 0.1732 | 31.0 | 682 | 1.2364 | 0.6731 | | 0.1551 | 32.0 | 704 | 1.2510 | 0.6731 | | 0.1977 | 33.0 | 726 | 1.3023 | 0.6731 | | 0.2059 | 34.0 | 748 | 1.3325 | 0.7115 | | 0.1637 | 35.0 | 770 | 1.2952 | 0.7308 | | 0.1517 | 36.0 | 792 | 1.3332 | 0.6923 | | 0.1035 | 37.0 | 814 | 1.3744 | 0.7115 | | 0.1696 | 38.0 | 836 | 1.4287 | 0.7308 | | 0.1186 | 39.0 | 858 | 1.4970 | 0.6923 | | 0.1338 | 40.0 | 880 | 1.4538 | 0.7115 | | 0.1309 | 41.0 | 902 | 1.4894 | 0.6923 | | 0.0998 | 42.0 | 924 | 1.4104 | 0.6923 | | 0.1794 | 43.0 | 946 | 1.4863 | 0.7115 | | 0.1529 | 44.0 | 968 | 1.5677 | 0.7115 | | 0.1656 | 45.0 | 990 | 1.5333 | 0.6731 | | 0.124 | 46.0 | 1012 | 1.5137 | 0.6923 | | 0.1227 | 47.0 | 1034 | 1.5245 | 0.6923 | | 0.115 | 47.7442 | 1050 | 1.5291 | 0.6923 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "avanzada", "avanzada humeda", "leve", "moderada", "no dmae" ]
mikedata/real_vs_fake_image_model_vit_base
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # real_vs_fake_image_model_vit_base This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0189 - Accuracy: 0.9953 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0094 | 0.1883 | 100 | 0.0243 | 0.9941 | | 0.0165 | 0.3766 | 200 | 0.0351 | 0.9901 | | 0.0239 | 0.5650 | 300 | 0.0470 | 0.9876 | | 0.0179 | 0.7533 | 400 | 0.0678 | 0.9856 | | 0.0166 | 0.9416 | 500 | 0.0296 | 0.9920 | | 0.0138 | 1.1299 | 600 | 0.0337 | 0.9926 | | 0.0574 | 1.3183 | 700 | 0.1020 | 0.9772 | | 0.0256 | 1.5066 | 800 | 0.0612 | 0.9847 | | 0.0327 | 1.6949 | 900 | 0.0616 | 0.9846 | | 0.0086 | 1.8832 | 1000 | 0.0272 | 0.9923 | | 0.008 | 2.0716 | 1100 | 0.0329 | 0.9920 | | 0.0014 | 2.2599 | 1200 | 0.0250 | 0.9939 | | 0.0132 | 2.4482 | 1300 | 0.0248 | 0.9937 | | 0.0189 | 2.6365 | 1400 | 0.0266 | 0.9936 | | 0.0034 | 2.8249 | 1500 | 0.0225 | 0.9948 | | 0.009 | 3.0132 | 1600 | 0.0240 | 0.9942 | | 0.0009 | 3.2015 | 1700 | 0.0244 | 0.9942 | | 0.0054 | 3.3898 | 1800 | 0.0339 | 0.9928 | | 0.0046 | 3.5782 | 1900 | 0.0248 | 0.9945 | | 0.0135 | 3.7665 | 2000 | 0.0245 | 0.9945 | | 0.0274 | 3.9548 | 2100 | 0.0241 | 0.9947 | | 0.0031 | 4.1431 | 2200 | 0.0225 | 0.9947 | | 0.0121 | 4.3315 | 2300 | 0.0210 | 0.9952 | | 0.0055 | 4.5198 | 2400 | 0.0209 | 0.9953 | | 0.0183 | 4.7081 | 2500 | 0.0197 | 0.9955 | | 0.0077 | 4.8964 | 2600 | 0.0189 | 0.9953 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "real", "fake" ]
alyzbane/2025-01-21-14-35-49-resnet-50
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-01-21-14-35-49-resnet-50 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1215 - Precision: 0.9786 - Recall: 0.9778 - F1: 0.9778 - Accuracy: 0.9788 - Top1 Accuracy: 0.9778 - Error Rate: 0.0212 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 1.5858 | 1.0 | 34 | 1.5129 | 0.6583 | 0.5407 | 0.5138 | 0.5418 | 0.5407 | 0.4582 | | 1.3909 | 2.0 | 68 | 1.1807 | 0.7779 | 0.6519 | 0.6537 | 0.6522 | 0.6519 | 0.3478 | | 1.059 | 3.0 | 102 | 0.7503 | 0.8897 | 0.8889 | 0.8867 | 0.8901 | 0.8889 | 0.1099 | | 0.6942 | 4.0 | 136 | 0.4029 | 0.9427 | 0.9407 | 0.9402 | 0.9427 | 0.9407 | 0.0573 | | 0.4241 | 5.0 | 170 | 0.2325 | 0.9673 | 0.9630 | 0.9624 | 0.9655 | 0.9630 | 0.0345 | | 0.3235 | 6.0 | 204 | 0.1702 | 0.9673 | 0.9630 | 0.9630 | 0.9650 | 0.9630 | 0.0350 | | 0.259 | 7.0 | 238 | 0.1359 | 0.9722 | 0.9704 | 0.9704 | 0.9719 | 0.9704 | 0.0281 | | 0.2231 | 8.0 | 272 | 0.1225 | 0.9722 | 0.9704 | 0.9704 | 0.9719 | 0.9704 | 0.0281 | | 0.2167 | 9.0 | 306 | 0.1253 | 0.9722 | 0.9704 | 0.9704 | 0.9719 | 0.9704 | 0.0281 | | 0.1973 | 10.0 | 340 | 0.1215 | 0.9786 | 0.9778 | 0.9778 | 0.9788 | 0.9778 | 0.0212 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "ilang-ilang", "mango", "narra", "royal palm", "tabebuia" ]
alyzbane/2025-01-21-15-21-31-convnextv2-tiny-1k-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-01-21-15-21-31-convnextv2-tiny-1k-224 This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0591 - Precision: 0.9799 - Recall: 0.9778 - F1: 0.9776 - Accuracy: 0.976 - Top1 Accuracy: 0.9778 - Error Rate: 0.0240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 3407 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:| | 0.7126 | 1.0 | 135 | 1.3514 | 0.7672 | 0.6593 | 0.6371 | 0.6614 | 0.6593 | 0.3386 | | 0.4328 | 2.0 | 270 | 0.2026 | 0.9348 | 0.9333 | 0.9330 | 0.9347 | 0.9333 | 0.0653 | | 0.3438 | 3.0 | 405 | 0.0591 | 0.9799 | 0.9778 | 0.9776 | 0.976 | 0.9778 | 0.0240 | | 0.2082 | 4.0 | 540 | 0.0919 | 0.9725 | 0.9704 | 0.9703 | 0.9719 | 0.9704 | 0.0281 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
[ "ilang-ilang", "mango", "narra", "royal palm", "tabebuia" ]