| model_id (string, lengths 7-105) | model_card (string, lengths 1-130k) | model_labels (list, lengths 2-80k) |
|---|---|---|
gruhntm/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0701
- Accuracy: 0.9778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` reconstruction follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
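Taken together, these settings correspond roughly to the 🤗 `TrainingArguments` below. This is a hedged reconstruction, not the card's actual training script: `output_dir` is a placeholder, and the listed Adam betas and epsilon are the `TrainingArguments` defaults, so they are omitted.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```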
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3121 | 1.0 | 152 | 0.1428 | 0.9532 |
| 0.2403 | 2.0 | 304 | 0.0959 | 0.9653 |
| 0.1688 | 3.0 | 456 | 0.0701 | 0.9778 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
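For quick use, a minimal inference sketch (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="gruhntm/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# Placeholder input; returns scores over the land-cover labels listed below.
print(classifier("satellite_tile.jpg"))
```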
|
[
"annual crop land",
"forest",
"brushland or shrubland",
"highway or road",
"industrial buildings or commercial buildings",
"pasture land",
"permanent crop land",
"residential buildings or homes or apartments",
"river",
"lake or sea"
] |
Yudsky/natural_scene_resnet
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
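Pending an official snippet, here is a minimal sketch, assuming the checkpoint works with the standard image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

# Minimal sketch, assuming the standard image-classification pipeline applies.
classifier = pipeline("image-classification", model="Yudsky/natural_scene_resnet")
print(classifier("scene.jpg"))  # placeholder path; config labels are label_0 ... label_5
```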
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5"
] |
assloj/convnext-tiny-finetuned-sagi
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
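Pending an official snippet, here is a minimal sketch with the Auto classes, assuming the checkpoint follows the standard image-classification layout (the image path is a placeholder):

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "assloj/convnext-tiny-finetuned-sagi"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("frame.jpg"), return_tensors="pt")  # placeholder path
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])  # one of the five labels listed below
```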
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"ryotsu",
"nakagawa",
"reko",
"ohara",
"others"
] |
itsLeen/finetuned-fake-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-fake-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4855
- Accuracy: 0.8548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 15
- mixed_precision_training: Native AMP
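A hedged `TrainingArguments` reconstruction of these settings; "Native AMP" corresponds to `fp16=True` in the 🤗 Trainer, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-fake-food",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed precision
)
```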
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6061 | 1.0 | 176 | 0.5937 | 0.6855 |
| 0.481 | 2.0 | 352 | 0.5138 | 0.8226 |
| 0.5522 | 3.0 | 528 | 0.4973 | 0.8065 |
| 0.4092 | 4.0 | 704 | 0.5557 | 0.7903 |
| 0.4882 | 5.0 | 880 | 0.4998 | 0.7984 |
| 0.4442 | 6.0 | 1056 | 0.4647 | 0.8387 |
| 0.5749 | 7.0 | 1232 | 0.4464 | 0.8306 |
| 0.4529 | 8.0 | 1408 | 0.5366 | 0.8065 |
| 0.5287 | 9.0 | 1584 | 0.4633 | 0.8387 |
| 0.3821 | 10.0 | 1760 | 0.4983 | 0.8387 |
| 0.2409 | 11.0 | 1936 | 0.4855 | 0.8548 |
| 0.2025 | 12.0 | 2112 | 0.5102 | 0.8387 |
| 0.2045 | 13.0 | 2288 | 0.4942 | 0.8387 |
| 0.4097 | 14.0 | 2464 | 0.4954 | 0.8387 |
| 0.5798 | 15.0 | 2640 | 0.4941 | 0.8387 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"0",
"1"
] |
mateoluksenberg/Seed_Classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Seed_Classifier
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the imagefolder dataset.
It achieves the following results on the evaluation set (a scikit-learn illustration of these averaged metrics follows the list):
- Loss: 2.6239
- Accuracy: 0.0
- Weighted f1: 0.0
- Micro f1: 0.0
- Macro f1: 0.0
- Weighted recall: 0.0
- Micro recall: 0.0
- Macro recall: 0.0
- Weighted precision: 0.0
- Micro precision: 0.0
- Macro precision: 0.0
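The all-zero scores above are exactly what scikit-learn's averaged metrics return when every prediction on a (tiny) evaluation split is wrong; a hedged illustration with placeholder labels, not the card's real evaluation data:

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 2, 3]  # placeholder eval labels
y_pred = [1, 2, 3, 0]  # every prediction wrong -> all averages collapse to 0.0
for avg in ("weighted", "micro", "macro"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=avg, zero_division=0
    )
    print(avg, p, r, f1)  # 0.0, 0.0, 0.0 for every averaging scheme
```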
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 18
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.3646 | 1.0 | 1 | 2.4555 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3646 | 2.0 | 2 | 2.4605 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3646 | 3.0 | 3 | 2.6009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3646 | 4.0 | 4 | 2.7374 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3646 | 5.0 | 5 | 2.7640 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3646 | 6.0 | 6 | 2.7441 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3646 | 7.0 | 7 | 2.7820 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 8.0 | 8 | 2.8067 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 9.0 | 9 | 2.8143 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 10.0 | 10 | 2.7966 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 11.0 | 11 | 2.7836 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 12.0 | 12 | 2.7537 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 13.0 | 13 | 2.7233 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 14.0 | 14 | 2.6946 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3442 | 15.0 | 15 | 2.6638 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3003 | 16.0 | 16 | 2.6449 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3003 | 17.0 | 17 | 2.6312 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3003 | 18.0 | 18 | 2.6239 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"black",
"brown",
"buff",
"imperfect black"
] |
pramudyalyza/vit-base-patch16-224-emotion-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7954
- Accuracy: 0.375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9066 | 1.0 | 40 | 1.9540 | 0.275 |
| 1.76 | 2.0 | 80 | 1.8608 | 0.35 |
| 1.651 | 3.0 | 120 | 1.8128 | 0.3688 |
| 1.5967 | 4.0 | 160 | 1.7954 | 0.375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
djbp/swin-base-patch4-window7-224-in22k-MM_NMM_Classification_base_V10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-MM_NMM_Classification_base_V10
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set (an AUC computation sketch follows the list):
- Loss: 0.4069
- Accuracy: 0.8359
- Auc Overall: 0.9463
- Auc Class 0: 0.9637
- Auc Class 1: 0.9465
- Auc Class 2: 0.9286
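A hedged sketch of how the overall and per-class AUC values above are typically computed for a 3-class problem; the arrays are placeholders, not the card's evaluation outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 2, 0, 1, 2])  # placeholder labels
probs = np.array([[0.8, 0.1, 0.1],     # placeholder class probabilities
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7],
                  [0.6, 0.3, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.2, 0.6]])

print(roc_auc_score(y_true, probs, multi_class="ovr"))  # overall AUC
for c in range(probs.shape[1]):                         # per-class, one-vs-rest
    print(c, roc_auc_score((y_true == c).astype(int), probs[:, c]))
```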
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 1.13.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"invalid",
"mid market",
"non mid market"
] |
pkr7098/vit-cifar100-cifar100
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-cifar100-cifar100
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar100 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1612
- Accuracy: 0.2223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.7207 | 1.0 | 5313 | 3.8632 | 0.0985 |
| 3.5093 | 2.0 | 10626 | 3.5664 | 0.1472 |
| 3.3675 | 3.0 | 15939 | 3.4389 | 0.166 |
| 2.9505 | 4.0 | 21252 | 3.2326 | 0.2093 |
| 3.1158 | 5.0 | 26565 | 3.1612 | 0.2223 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.0.1+cu117
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"apple",
"aquarium_fish",
"bowl",
"boy",
"bridge",
"bus",
"butterfly",
"camel",
"can",
"castle",
"caterpillar",
"cattle",
"baby",
"chair",
"chimpanzee",
"clock",
"cloud",
"cockroach",
"couch",
"cra",
"crocodile",
"cup",
"dinosaur",
"bear",
"dolphin",
"elephant",
"flatfish",
"forest",
"fox",
"girl",
"hamster",
"house",
"kangaroo",
"keyboard",
"beaver",
"lamp",
"lawn_mower",
"leopard",
"lion",
"lizard",
"lobster",
"man",
"maple_tree",
"motorcycle",
"mountain",
"bed",
"mouse",
"mushroom",
"oak_tree",
"orange",
"orchid",
"otter",
"palm_tree",
"pear",
"pickup_truck",
"pine_tree",
"bee",
"plain",
"plate",
"poppy",
"porcupine",
"possum",
"rabbit",
"raccoon",
"ray",
"road",
"rocket",
"beetle",
"rose",
"sea",
"seal",
"shark",
"shrew",
"skunk",
"skyscraper",
"snail",
"snake",
"spider",
"bicycle",
"squirrel",
"streetcar",
"sunflower",
"sweet_pepper",
"table",
"tank",
"telephone",
"television",
"tiger",
"tractor",
"bottle",
"train",
"trout",
"tulip",
"turtle",
"wardrobe",
"whale",
"willow_tree",
"wolf",
"woman",
"worm"
] |
duuke/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# duuke/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4973
- Validation Loss: 0.3968
- Train Accuracy: 0.908
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an optimizer reconstruction follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 16000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
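A hedged reconstruction of that serialized optimizer with `transformers.create_optimizer`: AdamWeightDecay plus a linear (`power=1.0`) polynomial decay from 3e-05 to 0.0 over 16,000 steps with weight decay 0.01. Warmup is assumed to be 0 steps, which the config does not state explicitly.

```python
from transformers import create_optimizer

# AdamWeightDecay + linear PolynomialDecay (3e-5 -> 0.0 over 16,000 steps),
# weight decay 0.01; warmup assumed to be 0 steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=16_000,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```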
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7644 | 1.6595 | 0.807 | 0 |
| 1.2028 | 0.7923 | 0.89 | 1 |
| 0.7072 | 0.5094 | 0.912 | 2 |
| 0.4973 | 0.3968 | 0.908 | 3 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
mateoluksenberg/Seed_Classifier_V2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Seed_Classifier_V2
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6985
- Accuracy: 0.0
- Weighted f1: 0.0
- Micro f1: 0.0
- Macro f1: 0.0
- Weighted recall: 0.0
- Micro recall: 0.0
- Macro recall: 0.0
- Weighted precision: 0.0
- Micro precision: 0.0
- Macro precision: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 18
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.3829 | 1.0 | 1 | 1.0584 | 0.5 | 0.3333 | 0.5 | 0.3333 | 0.5 | 0.5 | 0.5 | 0.25 | 0.5 | 0.25 |
| 0.3829 | 2.0 | 2 | 1.2877 | 0.25 | 0.2 | 0.25 | 0.2 | 0.25 | 0.25 | 0.25 | 0.1667 | 0.25 | 0.1667 |
| 0.3829 | 3.0 | 3 | 2.2985 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3829 | 4.0 | 4 | 2.4998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3829 | 5.0 | 5 | 2.2230 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3829 | 6.0 | 6 | 1.9467 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3829 | 7.0 | 7 | 1.7201 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 8.0 | 8 | 1.5736 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 9.0 | 9 | 1.5412 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 10.0 | 10 | 1.5484 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 11.0 | 11 | 1.5762 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 12.0 | 12 | 1.5907 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 13.0 | 13 | 1.6231 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 14.0 | 14 | 1.6462 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3628 | 15.0 | 15 | 1.6710 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3175 | 16.0 | 16 | 1.6883 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3175 | 17.0 | 17 | 1.6994 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3175 | 18.0 | 18 | 1.6985 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"black",
"brown",
"buff",
"imperfect black"
] |
Bang18/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9525
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 3 | 1.5195 | 0.2 |
| No log | 2.0 | 6 | 1.4667 | 0.2 |
| No log | 3.0 | 9 | 1.4288 | 0.0 |
| No log | 4.0 | 12 | 1.4128 | 0.0 |
| No log | 5.0 | 15 | 1.4065 | 0.2 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cpu
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"multi-sarcasm",
"text-sarcasm",
"non-sarcasm",
"image-sarcasm"
] |
viniFiedler/vit-base-patch16-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8960
- Model Preparation Time: 0.0037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:-------:|:----:|:---------------:|:----------------------:|
| 7.7174 | 0.9874 | 59 | 7.7848 | 0.0037 |
| 7.6016 | 1.9916 | 119 | 7.7339 | 0.0037 |
| 7.4761 | 2.9958 | 179 | 7.6441 | 0.0037 |
| 7.2852 | 4.0 | 239 | 7.5057 | 0.0037 |
| 7.083 | 4.9874 | 298 | 7.3286 | 0.0037 |
| 6.8119 | 5.9916 | 358 | 7.1090 | 0.0037 |
| 6.5497 | 6.9958 | 418 | 6.8711 | 0.0037 |
| 6.1656 | 8.0 | 478 | 6.6169 | 0.0037 |
| 5.8334 | 8.9874 | 537 | 6.3286 | 0.0037 |
| 5.3878 | 9.9916 | 597 | 6.0292 | 0.0037 |
| 5.0134 | 10.9958 | 657 | 5.7486 | 0.0037 |
| 4.6087 | 12.0 | 717 | 5.4834 | 0.0037 |
| 4.2544 | 12.9874 | 776 | 5.2186 | 0.0037 |
| 3.8669 | 13.9916 | 836 | 4.9842 | 0.0037 |
| 3.5993 | 14.9958 | 896 | 4.7566 | 0.0037 |
| 3.2331 | 16.0 | 956 | 4.5623 | 0.0037 |
| 2.9124 | 16.9874 | 1015 | 4.3663 | 0.0037 |
| 2.6122 | 17.9916 | 1075 | 4.1944 | 0.0037 |
| 2.466 | 18.9958 | 1135 | 4.0160 | 0.0037 |
| 2.2074 | 20.0 | 1195 | 3.8582 | 0.0037 |
| 2.0851 | 20.9874 | 1254 | 3.7160 | 0.0037 |
| 1.8354 | 21.9916 | 1314 | 3.5740 | 0.0037 |
| 1.7343 | 22.9958 | 1374 | 3.4548 | 0.0037 |
| 1.5804 | 24.0 | 1434 | 3.3600 | 0.0037 |
| 1.3193 | 24.9874 | 1493 | 3.2336 | 0.0037 |
| 1.328 | 25.9916 | 1553 | 3.1294 | 0.0037 |
| 1.163 | 26.9958 | 1613 | 3.0355 | 0.0037 |
| 1.0761 | 28.0 | 1673 | 2.9737 | 0.0037 |
| 0.9834 | 28.9874 | 1732 | 2.8952 | 0.0037 |
| 0.9141 | 29.9916 | 1792 | 2.7900 | 0.0037 |
| 0.8862 | 30.9958 | 1852 | 2.7381 | 0.0037 |
| 0.7757 | 32.0 | 1912 | 2.6868 | 0.0037 |
| 0.7475 | 32.9874 | 1971 | 2.6134 | 0.0037 |
| 0.6518 | 33.9916 | 2031 | 2.5770 | 0.0037 |
| 0.6766 | 34.9958 | 2091 | 2.5278 | 0.0037 |
| 0.5741 | 36.0 | 2151 | 2.5009 | 0.0037 |
| 0.5877 | 36.9874 | 2210 | 2.4436 | 0.0037 |
| 0.4996 | 37.9916 | 2270 | 2.4148 | 0.0037 |
| 0.5316 | 38.9958 | 2330 | 2.3809 | 0.0037 |
| 0.4896 | 40.0 | 2390 | 2.3330 | 0.0037 |
| 0.501 | 40.9874 | 2449 | 2.3055 | 0.0037 |
| 0.4052 | 41.9916 | 2509 | 2.3000 | 0.0037 |
| 0.398 | 42.9958 | 2569 | 2.2854 | 0.0037 |
| 0.3702 | 44.0 | 2629 | 2.2536 | 0.0037 |
| 0.3629 | 44.9874 | 2688 | 2.2342 | 0.0037 |
| 0.3729 | 45.9916 | 2748 | 2.2190 | 0.0037 |
| 0.3206 | 46.9958 | 2808 | 2.2078 | 0.0037 |
| 0.38 | 48.0 | 2868 | 2.1726 | 0.0037 |
| 0.3379 | 48.9874 | 2927 | 2.1600 | 0.0037 |
| 0.3248 | 49.9916 | 2987 | 2.1453 | 0.0037 |
| 0.3577 | 50.9958 | 3047 | 2.1153 | 0.0037 |
| 0.2946 | 52.0 | 3107 | 2.1232 | 0.0037 |
| 0.2938 | 52.9874 | 3166 | 2.1076 | 0.0037 |
| 0.289 | 53.9916 | 3226 | 2.0892 | 0.0037 |
| 0.3044 | 54.9958 | 3286 | 2.0692 | 0.0037 |
| 0.277 | 56.0 | 3346 | 2.0667 | 0.0037 |
| 0.2774 | 56.9874 | 3405 | 2.0554 | 0.0037 |
| 0.2717 | 57.9916 | 3465 | 2.0369 | 0.0037 |
| 0.2722 | 58.9958 | 3525 | 2.0261 | 0.0037 |
| 0.2325 | 60.0 | 3585 | 2.0419 | 0.0037 |
| 0.2387 | 60.9874 | 3644 | 2.0073 | 0.0037 |
| 0.2343 | 61.9916 | 3704 | 2.0230 | 0.0037 |
| 0.2281 | 62.9958 | 3764 | 2.0228 | 0.0037 |
| 0.2597 | 64.0 | 3824 | 1.9956 | 0.0037 |
| 0.223 | 64.9874 | 3883 | 1.9902 | 0.0037 |
| 0.2213 | 65.9916 | 3943 | 1.9778 | 0.0037 |
| 0.1835 | 66.9958 | 4003 | 1.9945 | 0.0037 |
| 0.2247 | 68.0 | 4063 | 1.9703 | 0.0037 |
| 0.1819 | 68.9874 | 4122 | 1.9623 | 0.0037 |
| 0.2096 | 69.9916 | 4182 | 1.9686 | 0.0037 |
| 0.186 | 70.9958 | 4242 | 1.9764 | 0.0037 |
| 0.1956 | 72.0 | 4302 | 1.9606 | 0.0037 |
| 0.197 | 72.9874 | 4361 | 1.9432 | 0.0037 |
| 0.1867 | 73.9916 | 4421 | 1.9461 | 0.0037 |
| 0.1994 | 74.9958 | 4481 | 1.9547 | 0.0037 |
| 0.1631 | 76.0 | 4541 | 1.9373 | 0.0037 |
| 0.184 | 76.9874 | 4600 | 1.9329 | 0.0037 |
| 0.1518 | 77.9916 | 4660 | 1.9355 | 0.0037 |
| 0.1774 | 78.9958 | 4720 | 1.9367 | 0.0037 |
| 0.1558 | 80.0 | 4780 | 1.9211 | 0.0037 |
| 0.1859 | 80.9874 | 4839 | 1.9256 | 0.0037 |
| 0.1673 | 81.9916 | 4899 | 1.9271 | 0.0037 |
| 0.1531 | 82.9958 | 4959 | 1.9332 | 0.0037 |
| 0.1763 | 84.0 | 5019 | 1.9154 | 0.0037 |
| 0.1594 | 84.9874 | 5078 | 1.9143 | 0.0037 |
| 0.17 | 85.9916 | 5138 | 1.9098 | 0.0037 |
| 0.1246 | 86.9958 | 5198 | 1.9123 | 0.0037 |
| 0.1699 | 88.0 | 5258 | 1.9066 | 0.0037 |
| 0.1627 | 88.9874 | 5317 | 1.9054 | 0.0037 |
| 0.1663 | 89.9916 | 5377 | 1.9040 | 0.0037 |
| 0.1349 | 90.9958 | 5437 | 1.9031 | 0.0037 |
| 0.1578 | 92.0 | 5497 | 1.9065 | 0.0037 |
| 0.1553 | 92.9874 | 5556 | 1.8997 | 0.0037 |
| 0.1393 | 93.9916 | 5616 | 1.8972 | 0.0037 |
| 0.1652 | 94.9958 | 5676 | 1.8960 | 0.0037 |
| 0.1677 | 96.0 | 5736 | 1.9002 | 0.0037 |
| 0.1544 | 96.9874 | 5795 | 1.8966 | 0.0037 |
| 0.1359 | 97.9916 | 5855 | 1.8966 | 0.0037 |
| 0.1495 | 98.7448 | 5900 | 1.8965 | 0.0037 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
[
"1",
"10",
"100",
"1000",
"1001",
"1002",
"1003",
"1004",
"1005",
"1006",
"1007",
"1008",
"1009",
"101",
"1010",
"1011",
"1012",
"1013",
"1014",
"1015",
"1016",
"1017",
"1018",
"1019",
"102",
"1020",
"1021",
"1022",
"1023",
"1024",
"1025",
"1026",
"1027",
"1028",
"1029",
"103",
"1030",
"1032",
"1033",
"1034",
"1035",
"1036",
"1037",
"1038",
"1039",
"104",
"1040",
"1041",
"1042",
"1043",
"1044",
"1045",
"1046",
"1047",
"1048",
"1049",
"105",
"1050",
"1051",
"1052",
"1053",
"1054",
"1055",
"1056",
"1057",
"1058",
"1059",
"106",
"1060",
"1061",
"1062",
"1063",
"1064",
"1065",
"1066",
"1067",
"1068",
"1069",
"107",
"1070",
"1071",
"1072",
"1073",
"1074",
"1075",
"1076",
"1077",
"1078",
"1079",
"108",
"1080",
"1081",
"1082",
"1083",
"1084",
"1085",
"1086",
"1087",
"1088",
"1089",
"109",
"1090",
"1091",
"1092",
"1093",
"1094",
"1095",
"1096",
"1097",
"1098",
"1099",
"11",
"110",
"1100",
"1101",
"1102",
"1103",
"1104",
"1105",
"1106",
"1107",
"1108",
"111",
"1110",
"1111",
"1112",
"1113",
"1114",
"1115",
"1116",
"1117",
"1118",
"1119",
"112",
"1120",
"1121",
"1122",
"1123",
"1124",
"1125",
"1126",
"1127",
"1128",
"1129",
"113",
"1130",
"1131",
"1132",
"1133",
"1134",
"1135",
"1136",
"1137",
"1138",
"1139",
"114",
"1140",
"1141",
"1142",
"1143",
"1144",
"1145",
"1146",
"1147",
"1148",
"1149",
"115",
"1150",
"1151",
"1152",
"1153",
"1154",
"1155",
"1156",
"1157",
"1158",
"1159",
"116",
"1160",
"1161",
"1162",
"1163",
"1164",
"1165",
"1166",
"1167",
"1168",
"1169",
"117",
"1170",
"1171",
"1172",
"1173",
"1174",
"1175",
"1176",
"1177",
"1178",
"1179",
"118",
"1180",
"1181",
"1182",
"1183",
"1184",
"1185",
"1186",
"1187",
"1188",
"1189",
"119",
"1190",
"1191",
"1192",
"1193",
"1194",
"1195",
"1196",
"1198",
"1199",
"12",
"120",
"1200",
"1201",
"1202",
"1203",
"1204",
"1205",
"1206",
"1207",
"1208",
"1209",
"121",
"1210",
"1211",
"1212",
"1213",
"1214",
"1215",
"1216",
"1217",
"1218",
"1219",
"122",
"1220",
"1221",
"1222",
"1223",
"1224",
"1225",
"1226",
"1227",
"1228",
"1229",
"123",
"1230",
"1231",
"1232",
"1233",
"1235",
"1236",
"1237",
"1238",
"1239",
"124",
"1240",
"1241",
"1242",
"1243",
"1244",
"1245",
"1246",
"1247",
"1248",
"1249",
"125",
"1250",
"1251",
"1252",
"1253",
"1254",
"1255",
"1256",
"1257",
"1258",
"1259",
"126",
"1260",
"1261",
"1262",
"1263",
"1264",
"1265",
"1266",
"1267",
"1268",
"1269",
"127",
"1270",
"1271",
"1272",
"1273",
"1274",
"1275",
"1276",
"1277",
"1278",
"1279",
"128",
"1280",
"1282",
"1283",
"1284",
"1285",
"1286",
"1287",
"1288",
"1289",
"129",
"1290",
"1291",
"1292",
"1293",
"1294",
"1295",
"1296",
"1297",
"1298",
"1299",
"13",
"130",
"1300",
"1301",
"1302",
"1303",
"1304",
"1305",
"1306",
"1307",
"1308",
"1309",
"131",
"1310",
"1311",
"1312",
"1313",
"1314",
"1315",
"1316",
"1317",
"1318",
"1319",
"132",
"1320",
"1321",
"1322",
"1323",
"1324",
"1325",
"1326",
"1327",
"1328",
"1329",
"133",
"1330",
"1331",
"1332",
"1333",
"1334",
"1335",
"1336",
"1337",
"1338",
"1339",
"134",
"1341",
"1342",
"1343",
"1344",
"1345",
"1346",
"1347",
"1348",
"1349",
"135",
"1350",
"1351",
"1352",
"1353",
"1354",
"1355",
"1356",
"1357",
"1358",
"1359",
"136",
"1360",
"1361",
"1362",
"1363",
"1364",
"1365",
"1366",
"1367",
"1368",
"1369",
"137",
"1370",
"1371",
"1372",
"1373",
"1374",
"1375",
"1376",
"1377",
"1378",
"1379",
"138",
"1380",
"1381",
"1382",
"1383",
"1384",
"1385",
"1386",
"1387",
"1388",
"1389",
"139",
"1390",
"1391",
"1392",
"1393",
"1394",
"1395",
"1396",
"1397",
"1398",
"1399",
"14",
"140",
"1400",
"1401",
"1402",
"1403",
"1404",
"1405",
"1406",
"1407",
"1408",
"1409",
"141",
"1410",
"1411",
"1412",
"1413",
"1415",
"1416",
"1417",
"1418",
"1419",
"142",
"1420",
"1421",
"1422",
"1423",
"1424",
"1425",
"1426",
"1427",
"1428",
"1429",
"143",
"1430",
"1431",
"1432",
"1433",
"1434",
"1435",
"1436",
"1437",
"1438",
"1439",
"144",
"1440",
"1441",
"1442",
"1443",
"1444",
"1445",
"1446",
"1447",
"1448",
"1449",
"145",
"1450",
"1451",
"1452",
"1453",
"1454",
"1455",
"1456",
"1457",
"1458",
"1459",
"146",
"1460",
"1461",
"1462",
"1463",
"1464",
"1465",
"1466",
"1467",
"1468",
"1469",
"147",
"1470",
"1471",
"1472",
"1473",
"1474",
"1475",
"1476",
"1477",
"1478",
"1479",
"148",
"1480",
"1481",
"1482",
"1483",
"1484",
"1485",
"1486",
"1487",
"1488",
"1489",
"149",
"1490",
"1491",
"1492",
"1493",
"1494",
"1495",
"1496",
"1497",
"1498",
"1499",
"15",
"150",
"1500",
"1501",
"1502",
"1503",
"1504",
"1505",
"1506",
"1507",
"1508",
"1509",
"151",
"1510",
"1511",
"1512",
"1513",
"1514",
"1515",
"1516",
"1517",
"1518",
"1519",
"152",
"1520",
"1521",
"1522",
"1523",
"1524",
"1525",
"1526",
"1527",
"1528",
"1529",
"153",
"1530",
"1531",
"1532",
"1533",
"1534",
"1535",
"1536",
"1537",
"1538",
"1539",
"154",
"1540",
"1541",
"1542",
"1543",
"1544",
"1545",
"1546",
"1547",
"1548",
"1549",
"155",
"1550",
"1551",
"1552",
"1553",
"1554",
"1555",
"1556",
"1557",
"1558",
"1559",
"156",
"1561",
"1562",
"1563",
"1564",
"1565",
"1566",
"1567",
"1568",
"1569",
"157",
"1570",
"1571",
"1572",
"1573",
"1574",
"1575",
"1576",
"1577",
"1578",
"1579",
"158",
"1580",
"1581",
"1582",
"1583",
"1584",
"1585",
"1586",
"1587",
"1588",
"1589",
"159",
"1590",
"1592",
"1593",
"1594",
"1595",
"1596",
"1597",
"1598",
"1599",
"16",
"160",
"1601",
"1602",
"1603",
"1604",
"1605",
"1606",
"1607",
"1608",
"1609",
"161",
"1610",
"1611",
"1612",
"1613",
"1614",
"1615",
"1616",
"1617",
"1618",
"1619",
"162",
"1620",
"1621",
"1622",
"1623",
"1625",
"1626",
"1627",
"1628",
"1629",
"163",
"1630",
"1631",
"1632",
"1633",
"1634",
"1635",
"1636",
"1637",
"1638",
"1639",
"164",
"1640",
"1641",
"1642",
"1643",
"1644",
"1645",
"1646",
"1647",
"1648",
"1649",
"165",
"1650",
"1651",
"1652",
"1653",
"1654",
"1655",
"1656",
"1657",
"1658",
"1659",
"166",
"1660",
"1661",
"1662",
"1663",
"1664",
"1665",
"1666",
"1667",
"1668",
"1669",
"167",
"1670",
"1671",
"1672",
"1673",
"1674",
"1675",
"1676",
"1677",
"1678",
"1679",
"168",
"1680",
"1681",
"1682",
"1683",
"1684",
"1685",
"1686",
"1687",
"1688",
"1689",
"169",
"1690",
"1691",
"1692",
"1693",
"1694",
"1695",
"1696",
"1697",
"1698",
"1699",
"17",
"170",
"1700",
"1701",
"1702",
"1703",
"1704",
"1705",
"1706",
"1707",
"1708",
"1709",
"171",
"1710",
"1711",
"1712",
"1713",
"1714",
"1715",
"1716",
"1717",
"1718",
"1719",
"172",
"1720",
"1721",
"1722",
"1723",
"1724",
"1725",
"1726",
"1727",
"1728",
"1729",
"173",
"1730",
"1731",
"1732",
"1733",
"1734",
"1735",
"1736",
"1737",
"1738",
"1739",
"174",
"1740",
"1741",
"1742",
"1743",
"1744",
"1745",
"1746",
"1747",
"1749",
"175",
"1750",
"1751",
"1752",
"1753",
"1754",
"1755",
"1756",
"1757",
"1758",
"1759",
"176",
"1760",
"1761",
"1762",
"1763",
"1764",
"1765",
"1766",
"1767",
"1768",
"1769",
"177",
"1770",
"1771",
"1772",
"1773",
"1774",
"1775",
"1776",
"1777",
"1778",
"1779",
"178",
"1780",
"1781",
"1782",
"1783",
"1784",
"1785",
"1786",
"1787",
"1788",
"1789",
"179",
"1790",
"1791",
"1792",
"1793",
"1794",
"1795",
"1796",
"1797",
"1798",
"1799",
"18",
"180",
"1800",
"1801",
"1802",
"1803",
"1804",
"1805",
"1806",
"1807",
"1808",
"1809",
"181",
"1810",
"1811",
"1812",
"1813",
"1814",
"1815",
"1816",
"1817",
"1818",
"1819",
"182",
"1820",
"1821",
"1822",
"1823",
"1824",
"1825",
"1826",
"1827",
"1828",
"1829",
"183",
"1830",
"1831",
"1832",
"1833",
"1834",
"1835",
"1836",
"1837",
"1838",
"1839",
"184",
"1840",
"1841",
"1842",
"1843",
"1844",
"1845",
"1846",
"1847",
"1848",
"1849",
"185",
"1850",
"1851",
"1852",
"1853",
"1854",
"1855",
"1856",
"1857",
"1858",
"1859",
"186",
"1860",
"1861",
"1862",
"1863",
"1864",
"1865",
"1866",
"1867",
"1868",
"1869",
"187",
"1870",
"1871",
"1872",
"1873",
"1874",
"1875",
"1876",
"1877",
"1878",
"1879",
"188",
"1880",
"1881",
"1882",
"1883",
"1884",
"1885",
"1886",
"1887",
"1888",
"1889",
"189",
"1890",
"1891",
"1892",
"1893",
"1894",
"1895",
"1896",
"1897",
"1898",
"1899",
"19",
"190",
"1900",
"1901",
"1902",
"1903",
"1904",
"1905",
"1906",
"1907",
"1908",
"1909",
"191",
"1910",
"1911",
"1912",
"1913",
"1914",
"1915",
"1916",
"1917",
"1918",
"1919",
"192",
"1920",
"1921",
"1922",
"1923",
"1924",
"1925",
"1926",
"1927",
"1928",
"1929",
"193",
"1930",
"1931",
"1932",
"1933",
"1934",
"1935",
"1936",
"1937",
"1938",
"1939",
"194",
"1940",
"1941",
"1942",
"1943",
"1944",
"1945",
"1946",
"1947",
"1948",
"1949",
"195",
"1950",
"1951",
"1952",
"1953",
"1954",
"1955",
"1956",
"1957",
"1958",
"1959",
"196",
"1960",
"1961",
"1962",
"1963",
"1964",
"1965",
"1966",
"1967",
"1968",
"1969",
"197",
"1970",
"1971",
"1972",
"1973",
"1974",
"1975",
"1976",
"1977",
"1978",
"1979",
"198",
"1980",
"1981",
"1982",
"1983",
"1984",
"1985",
"1986",
"1987",
"1988",
"1989",
"199",
"1990",
"1991",
"1992",
"1994",
"1995",
"1996",
"1997",
"1998",
"1999",
"2",
"20",
"200",
"2000",
"2001",
"2002",
"2003",
"2004",
"2005",
"2006",
"2007",
"2008",
"2009",
"201",
"2010",
"2011",
"2012",
"2013",
"2014",
"2015",
"2016",
"2018",
"2019",
"202",
"2020",
"2021",
"2022",
"2023",
"2024",
"2025",
"2026",
"2027",
"2028",
"2029",
"203",
"2030",
"2031",
"2032",
"2033",
"2034",
"2035",
"2036",
"2037",
"2038",
"2039",
"204",
"2040",
"2041",
"2042",
"2043",
"2044",
"2045",
"2046",
"2047",
"2048",
"2049",
"205",
"2050",
"2051",
"2052",
"2053",
"2054",
"2055",
"2056",
"2057",
"2058",
"2059",
"206",
"2060",
"2061",
"2062",
"2063",
"2064",
"2065",
"2066",
"2067",
"2068",
"2069",
"207",
"2070",
"2071",
"2072",
"2073",
"2074",
"2075",
"2076",
"2077",
"2078",
"2079",
"208",
"2080",
"2081",
"2082",
"2083",
"2084",
"2085",
"2086",
"2087",
"2088",
"2089",
"209",
"2090",
"2091",
"2092",
"2093",
"2094",
"2095",
"2096",
"2097",
"2098",
"2099",
"21",
"210",
"2100",
"2101",
"2102",
"2103",
"2104",
"2105",
"2106",
"2107",
"2108",
"2109",
"211",
"2110",
"2111",
"2112",
"2113",
"2114",
"2115",
"2116",
"2117",
"2118",
"2119",
"2120",
"2121",
"2122",
"2123",
"2124",
"2125",
"2126",
"2127",
"2128",
"2129",
"213",
"2130",
"2131",
"2132",
"2133",
"2134",
"2135",
"2136",
"2137",
"2138",
"2139",
"214",
"2140",
"2141",
"2142",
"2143",
"2144",
"2145",
"2146",
"2147",
"2148",
"2149",
"215",
"2150",
"2151",
"2152",
"2153",
"2154",
"2155",
"2156",
"2157",
"2158",
"2159",
"216",
"2160",
"2161",
"2162",
"2163",
"2164",
"2165",
"2166",
"2167",
"2168",
"2169",
"217",
"2170",
"2171",
"2172",
"2173",
"2174",
"2175",
"2176",
"2177",
"2178",
"2179",
"218",
"2180",
"2181",
"2182",
"2183",
"2184",
"2185",
"2186",
"2187",
"2188",
"2189",
"219",
"2190",
"2191",
"2192",
"2193",
"2194",
"2195",
"2196",
"2197",
"2198",
"2199",
"22",
"220",
"2200",
"2201",
"2202",
"2203",
"2204",
"2205",
"2206",
"2207",
"2208",
"2209",
"221",
"2210",
"2211",
"2212",
"2213",
"2214",
"2215",
"2216",
"2217",
"2218",
"2219",
"222",
"2220",
"2221",
"2222",
"2223",
"2224",
"2225",
"2226",
"2227",
"2228",
"2229",
"223",
"2230",
"2231",
"2232",
"2233",
"2234",
"2235",
"2236",
"2237",
"2238",
"2239",
"224",
"2240",
"2241",
"2242",
"2243",
"2244",
"2245",
"2246",
"2247",
"2248",
"2249",
"225",
"2250",
"2251",
"2252",
"2253",
"2254",
"2255",
"2256",
"2257",
"2258",
"2259",
"226",
"2260",
"2261",
"2262",
"2263",
"2264",
"2265",
"2266",
"2267",
"2268",
"2269",
"227",
"2270",
"2271",
"2272",
"2273",
"2274",
"2275",
"2276",
"2277",
"2278",
"2279",
"228",
"2280",
"2281",
"2282",
"2283",
"2284",
"2285",
"2286",
"2287",
"2288",
"2289",
"229",
"2290",
"2291",
"2292",
"2293",
"2294",
"2295",
"2296",
"2298",
"2299",
"23",
"230",
"2300",
"2301",
"2302",
"2303",
"2305",
"2306",
"2307",
"2308",
"2309",
"231",
"2310",
"2311",
"2312",
"2313",
"2314",
"2315",
"2316",
"2317",
"2318",
"2319",
"232",
"2320",
"2321",
"2322",
"2323",
"2324",
"2325",
"2326",
"2327",
"2328",
"2329",
"233",
"2330",
"2331",
"2332",
"2333",
"2334",
"2335",
"2336",
"2337",
"2338",
"2339",
"234",
"2340",
"2341",
"2342",
"2343",
"2344",
"2345",
"2346",
"2347",
"2348",
"2349",
"235",
"2350",
"2351",
"2352",
"2353",
"2354",
"2355",
"2356",
"2357",
"2358",
"2359",
"236",
"2360",
"2361",
"2362",
"2363",
"2364",
"2365",
"2366",
"2367",
"2368",
"2369",
"237",
"2370",
"2371",
"2372",
"2373",
"2374",
"2375",
"2376",
"2377",
"2378",
"2379",
"238",
"2380",
"2381",
"2382",
"2383",
"2384",
"2385",
"2386",
"2387",
"2388",
"2389",
"239",
"2390",
"2391",
"2392",
"2393",
"2394",
"2395",
"2396",
"2397",
"2398",
"2399",
"24",
"240",
"2400",
"2401",
"2402",
"2403",
"2404",
"2405",
"2406",
"2407",
"2408",
"2409",
"241",
"2410",
"2411",
"2412",
"2413",
"2414",
"2415",
"2417",
"2418",
"2419",
"242",
"2420",
"2421",
"2422",
"2423",
"2424",
"2425",
"2426",
"2427",
"2428",
"2429",
"243",
"2430",
"2431",
"2432",
"2433",
"2434",
"2435",
"2436",
"2437",
"2438",
"2439",
"244",
"2440",
"2441",
"2442",
"2443",
"2444",
"2445",
"2446",
"2447",
"2448",
"2449",
"245",
"2450",
"2451",
"2452",
"2453",
"2454",
"2455",
"2456",
"2457",
"2458",
"2459",
"246",
"2460",
"2461",
"2462",
"2463",
"2464",
"2465",
"2466",
"2467",
"2468",
"2469",
"247",
"2470",
"2471",
"2472",
"2473",
"2474",
"2476",
"2477",
"2478",
"2479",
"248",
"2480",
"2481",
"2482",
"2483",
"2484",
"2485",
"2486",
"2487",
"2488",
"2489",
"249",
"2490",
"2491",
"2492",
"2493",
"2494",
"2495",
"2496",
"2497",
"2498",
"2499",
"25",
"250",
"2500",
"2501",
"2502",
"2503",
"2504",
"2505",
"2506",
"2507",
"2508",
"2509",
"251",
"2510",
"2511",
"2512",
"2513",
"2514",
"2515",
"2516",
"2517",
"2518",
"2519",
"252",
"2520",
"2521",
"2522",
"253",
"254",
"255",
"256",
"257",
"258",
"259",
"26",
"260",
"261",
"262",
"263",
"264",
"265",
"266",
"267",
"268",
"269",
"27",
"270",
"271",
"272",
"273",
"274",
"276",
"277",
"278",
"279",
"28",
"280",
"281",
"282",
"283",
"284",
"285",
"286",
"287",
"288",
"289",
"29",
"290",
"291",
"292",
"293",
"294",
"295",
"296",
"297",
"298",
"299",
"3",
"30",
"300",
"301",
"302",
"303",
"304",
"305",
"306",
"307",
"308",
"309",
"31",
"310",
"311",
"312",
"313",
"314",
"315",
"316",
"317",
"318",
"319",
"32",
"320",
"321",
"322",
"323",
"324",
"325",
"326",
"327",
"328",
"329",
"33",
"330",
"331",
"332",
"333",
"334",
"335",
"336",
"337",
"338",
"339",
"34",
"340",
"341",
"342",
"343",
"344",
"345",
"346",
"347",
"348",
"349",
"35",
"350",
"351",
"352",
"353",
"354",
"355",
"356",
"357",
"358",
"359",
"36",
"360",
"361",
"362",
"363",
"364",
"365",
"366",
"367",
"368",
"369",
"37",
"370",
"371",
"372",
"373",
"374",
"375",
"376",
"377",
"378",
"379",
"38",
"380",
"381",
"382",
"383",
"384",
"385",
"386",
"387",
"388",
"389",
"39",
"390",
"391",
"392",
"393",
"394",
"395",
"396",
"397",
"398",
"4",
"40",
"400",
"401",
"402",
"403",
"404",
"405",
"406",
"407",
"408",
"409",
"41",
"410",
"411",
"412",
"413",
"414",
"415",
"416",
"417",
"418",
"419",
"42",
"420",
"422",
"423",
"424",
"425",
"426",
"427",
"428",
"429",
"43",
"430",
"431",
"432",
"433",
"434",
"435",
"436",
"437",
"438",
"439",
"44",
"440",
"441",
"442",
"443",
"444",
"445",
"446",
"447",
"448",
"449",
"45",
"450",
"451",
"452",
"453",
"454",
"455",
"456",
"457",
"458",
"459",
"46",
"460",
"461",
"462",
"463",
"464",
"465",
"466",
"467",
"468",
"469",
"47",
"470",
"471",
"472",
"473",
"474",
"475",
"476",
"477",
"478",
"479",
"48",
"480",
"481",
"482",
"483",
"484",
"485",
"486",
"487",
"488",
"489",
"49",
"490",
"491",
"493",
"494",
"495",
"496",
"497",
"498",
"499",
"5",
"50",
"500",
"501",
"502",
"503",
"504",
"505",
"506",
"507",
"508",
"509",
"51",
"510",
"511",
"512",
"513",
"514",
"515",
"516",
"517",
"518",
"519",
"52",
"520",
"521",
"522",
"523",
"524",
"525",
"526",
"527",
"528",
"529",
"53",
"530",
"531",
"532",
"533",
"534",
"535",
"536",
"537",
"538",
"539",
"54",
"540",
"541",
"542",
"543",
"544",
"545",
"546",
"547",
"548",
"549",
"55",
"550",
"551",
"552",
"553",
"554",
"555",
"556",
"557",
"558",
"559",
"56",
"560",
"561",
"562",
"563",
"564",
"565",
"566",
"567",
"568",
"569",
"57",
"570",
"571",
"572",
"573",
"574",
"575",
"576",
"577",
"578",
"579",
"58",
"580",
"581",
"582",
"583",
"584",
"585",
"586",
"587",
"588",
"589",
"59",
"590",
"591",
"592",
"593",
"594",
"595",
"596",
"597",
"598",
"599",
"6",
"60",
"600",
"601",
"602",
"603",
"604",
"605",
"606",
"607",
"608",
"61",
"610",
"611",
"612",
"613",
"614",
"615",
"616",
"617",
"618",
"619",
"62",
"620",
"622",
"623",
"624",
"625",
"626",
"627",
"628",
"629",
"63",
"630",
"631",
"632",
"633",
"634",
"635",
"636",
"637",
"638",
"639",
"64",
"640",
"641",
"642",
"643",
"644",
"645",
"646",
"647",
"648",
"649",
"65",
"650",
"651",
"652",
"653",
"654",
"655",
"656",
"657",
"658",
"659",
"660",
"661",
"662",
"663",
"664",
"665",
"666",
"667",
"668",
"669",
"67",
"670",
"671",
"672",
"673",
"674",
"675",
"676",
"677",
"678",
"679",
"68",
"680",
"681",
"682",
"683",
"684",
"685",
"686",
"687",
"688",
"689",
"69",
"690",
"691",
"692",
"693",
"694",
"695",
"696",
"697",
"698",
"699",
"7",
"70",
"700",
"701",
"702",
"703",
"704",
"705",
"706",
"707",
"708",
"709",
"71",
"710",
"711",
"712",
"713",
"714",
"715",
"716",
"717",
"718",
"719",
"72",
"720",
"721",
"722",
"723",
"724",
"725",
"726",
"727",
"728",
"729",
"73",
"730",
"731",
"732",
"733",
"734",
"735",
"736",
"737",
"738",
"739",
"74",
"740",
"741",
"742",
"743",
"744",
"745",
"746",
"747",
"748",
"749",
"75",
"750",
"751",
"752",
"753",
"754",
"755",
"756",
"757",
"758",
"759",
"76",
"760",
"761",
"762",
"763",
"764",
"765",
"766",
"767",
"768",
"769",
"77",
"770",
"771",
"772",
"773",
"774",
"775",
"776",
"777",
"778",
"779",
"78",
"780",
"781",
"782",
"783",
"784",
"785",
"786",
"787",
"788",
"789",
"79",
"790",
"791",
"792",
"793",
"794",
"795",
"796",
"797",
"798",
"799",
"8",
"80",
"800",
"801",
"802",
"803",
"804",
"805",
"806",
"807",
"808",
"809",
"81",
"810",
"811",
"812",
"813",
"814",
"815",
"816",
"817",
"819",
"82",
"820",
"821",
"822",
"823",
"824",
"825",
"826",
"827",
"828",
"829",
"83",
"830",
"831",
"832",
"833",
"834",
"835",
"836",
"837",
"838",
"839",
"84",
"840",
"841",
"842",
"843",
"844",
"845",
"846",
"847",
"848",
"849",
"85",
"850",
"851",
"852",
"853",
"854",
"855",
"856",
"857",
"858",
"859",
"86",
"860",
"861",
"862",
"863",
"864",
"865",
"866",
"867",
"868",
"869",
"87",
"870",
"871",
"872",
"873",
"874",
"875",
"876",
"877",
"878",
"879",
"88",
"880",
"881",
"882",
"883",
"884",
"885",
"886",
"887",
"888",
"889",
"89",
"891",
"892",
"893",
"894",
"895",
"896",
"897",
"898",
"899",
"9",
"90",
"900",
"901",
"902",
"903",
"904",
"905",
"906",
"907",
"908",
"909",
"91",
"910",
"911",
"912",
"913",
"914",
"915",
"916",
"917",
"918",
"919",
"92",
"920",
"921",
"922",
"923",
"924",
"925",
"926",
"927",
"928",
"929",
"93",
"930",
"931",
"932",
"933",
"934",
"935",
"936",
"937",
"938",
"939",
"94",
"940",
"941",
"942",
"943",
"944",
"945",
"946",
"947",
"948",
"949",
"95",
"950",
"951",
"952",
"953",
"954",
"955",
"956",
"957",
"958",
"959",
"96",
"960",
"961",
"962",
"963",
"964",
"965",
"966",
"967",
"968",
"969",
"97",
"970",
"971",
"972",
"973",
"974",
"975",
"976",
"977",
"978",
"979",
"98",
"980",
"981",
"982",
"983",
"984",
"985",
"986",
"987",
"988",
"989",
"99",
"990",
"991",
"992",
"993",
"994",
"995",
"996",
"997",
"998",
"999"
] |
neuralhaven/KDRSSC_ViT2MobileViT-xx-small
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KDRSSC_ViT2MobileViT-xx-small
This model is a fine-tuned version of [apple/mobilevit-xx-small](https://huggingface.co/apple/mobilevit-xx-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Accuracy: 0.8495
- Precision: 0.8504
- Recall: 0.8501
- F1: 0.8440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.6963 | 1.0 | 148 | 1.3476 | 0.596 | 0.6092 | 0.5736 | 0.5557 |
| 1.2335 | 2.0 | 296 | 1.0216 | 0.725 | 0.7180 | 0.7135 | 0.6918 |
| 0.9693 | 3.0 | 444 | 0.8330 | 0.776 | 0.7560 | 0.7699 | 0.7481 |
| 0.8246 | 4.0 | 592 | 0.7345 | 0.812 | 0.8091 | 0.8042 | 0.7889 |
| 0.7393 | 5.0 | 740 | 0.6836 | 0.828 | 0.8084 | 0.8223 | 0.8070 |
| 0.6895 | 6.0 | 888 | 0.6504 | 0.831 | 0.8245 | 0.8253 | 0.8134 |
| 0.6528 | 7.0 | 1036 | 0.6252 | 0.859 | 0.8546 | 0.8571 | 0.8461 |
| 0.6303 | 8.0 | 1184 | 0.6089 | 0.856 | 0.8506 | 0.8554 | 0.8444 |
| 0.6138 | 9.0 | 1332 | 0.6002 | 0.863 | 0.8567 | 0.8632 | 0.8519 |
| 0.6067 | 10.0 | 1480 | 0.6003 | 0.863 | 0.8596 | 0.8624 | 0.8521 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1
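### Usage sketch
The card does not include an inference example, so here is a minimal, unofficial sketch: it loads the distilled MobileViT-xx-small student through the `image-classification` pipeline. Note that the checkpoint ships the generic ids `label_0` … `label_44` (listed below), so mapping them back to the original scene classes is left to the reader; `scene.jpg` is a placeholder file name.
```python
from transformers import pipeline

# Minimal inference sketch; the checkpoint exposes generic label ids
# (label_0 ... label_44), not human-readable scene names.
classifier = pipeline(
    "image-classification",
    model="neuralhaven/KDRSSC_ViT2MobileViT-xx-small",
)
print(classifier("scene.jpg"))  # placeholder image path
```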
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44"
] |
honchanphat/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# honchanphat/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1913
- Validation Loss: 0.2332
- Train Accuracy: 0.93
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3239 | 0.2876 | 0.916 | 0 |
| 0.2662 | 0.2741 | 0.924 | 1 |
| 0.2329 | 0.2906 | 0.915 | 2 |
| 0.2142 | 0.2657 | 0.919 | 3 |
| 0.1913 | 0.2332 | 0.93 | 4 |
### Framework versions
- Transformers 4.45.1
- TensorFlow 2.16.2
- Datasets 3.0.1
- Tokenizers 0.20.0
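### Usage sketch
Since this is a Keras/TensorFlow fine-tune, a hedged loading sketch with the TF classes is shown below, assuming TF weights were pushed alongside the card; `dish.jpg` is a placeholder file name, not something from the card.
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

# Hedged sketch: load the TF checkpoint and score a single image.
processor = AutoImageProcessor.from_pretrained("honchanphat/food_classifier")
model = TFAutoModelForImageClassification.from_pretrained("honchanphat/food_classifier")

inputs = processor(images=Image.open("dish.jpg"), return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```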
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
rukundob451/dinov2-small-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-small-finetuned-papsmear
This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3843
- Accuracy: 0.8603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
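For reproduction, the bullets above map one-to-one onto 🤗 `TrainingArguments`; a sketch follows (the output path is a placeholder, and the listed Adam betas/epsilon are the library defaults):
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; the effective train batch size
# of 32 is 8 (per device) x 4 (gradient accumulation steps).
args = TrainingArguments(
    output_dir="dinov2-small-finetuned-papsmear",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=15,
)
```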
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.846 | 0.9935 | 38 | 1.0217 | 0.5956 |
| 1.0241 | 1.9869 | 76 | 0.8413 | 0.6544 |
| 0.9178 | 2.9804 | 114 | 0.7204 | 0.7426 |
| 0.693 | 4.0 | 153 | 0.5731 | 0.75 |
| 0.7157 | 4.9935 | 191 | 0.5501 | 0.8162 |
| 0.5006 | 5.9869 | 229 | 0.6096 | 0.7794 |
| 0.4576 | 6.9804 | 267 | 0.5535 | 0.7941 |
| 0.467 | 8.0 | 306 | 0.5041 | 0.8162 |
| 0.4378 | 8.9935 | 344 | 0.5771 | 0.8015 |
| 0.2876 | 9.9869 | 382 | 0.4234 | 0.8456 |
| 0.2308 | 10.9804 | 420 | 0.4946 | 0.8382 |
| 0.2312 | 12.0 | 459 | 0.5098 | 0.8309 |
| 0.1625 | 12.9935 | 497 | 0.3813 | 0.8603 |
| 0.1775 | 13.9869 | 535 | 0.3695 | 0.8529 |
| 0.1358 | 14.9020 | 570 | 0.3843 | 0.8603 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
manitri/manitri
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
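Until the authors fill this in, here is a hedged sketch that assumes the checkpoint is an image classifier (the label list accompanying this card matches the ImageNet-1k classes); `photo.jpg` is a placeholder file name.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumption: manitri/manitri is an ImageNet-1k-style image classifier,
# as the attached label list suggests.
processor = AutoImageProcessor.from_pretrained("manitri/manitri")
model = AutoModelForImageClassification.from_pretrained("manitri/manitri")

inputs = processor(images=Image.open("photo.jpg"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
top = probs[0].argmax().item()
print(model.config.id2label[top], float(probs[0, top]))
```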
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
2nzi/Image_Surf_NotSurf
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
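A hedged sketch only: the repository name suggests a binary surf / not-surf classifier, yet the label list attached to this card enumerates UCF-101-style action categories, so inspect `model.config.id2label` before trusting the mapping; `frame.jpg` is a placeholder file name.
```python
from transformers import pipeline

# Sketch under the assumption that this is a single-image classifier;
# verify the label mapping first, since the attached label list (101
# action categories) does not match the surf/not-surf repository name.
clf = pipeline("image-classification", model="2nzi/Image_Surf_NotSurf")
for result in clf("frame.jpg", top_k=5):
    print(result["label"], round(result["score"], 3))
```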
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"applyeyemakeup",
"applylipstick",
"archery",
"babycrawling",
"balancebeam",
"bandmarching",
"baseballpitch",
"basketball",
"basketballdunk",
"benchpress",
"biking",
"billiards",
"blowdryhair",
"blowingcandles",
"bodyweightsquats",
"bowling",
"boxingpunchingbag",
"boxingspeedbag",
"breaststroke",
"brushingteeth",
"cleanandjerk",
"cliffdiving",
"cricketbowling",
"cricketshot",
"cuttinginkitchen",
"diving",
"drumming",
"fencing",
"fieldhockeypenalty",
"floorgymnastics",
"frisbeecatch",
"frontcrawl",
"golfswing",
"haircut",
"hammering",
"hammerthrow",
"handstandpushups",
"handstandwalking",
"headmassage",
"highjump",
"horserace",
"horseriding",
"hulahoop",
"icedancing",
"javelinthrow",
"jugglingballs",
"jumpingjack",
"jumprope",
"kayaking",
"knitting",
"longjump",
"lunges",
"militaryparade",
"mixing",
"moppingfloor",
"nunchucks",
"parallelbars",
"pizzatossing",
"playingcello",
"playingdaf",
"playingdhol",
"playingflute",
"playingguitar",
"playingpiano",
"playingsitar",
"playingtabla",
"playingviolin",
"polevault",
"pommelhorse",
"pullups",
"punch",
"pushups",
"rafting",
"rockclimbingindoor",
"ropeclimbing",
"rowing",
"salsaspin",
"shavingbeard",
"shotput",
"skateboarding",
"skiing",
"skijet",
"skydiving",
"soccerjuggling",
"soccerpenalty",
"stillrings",
"sumowrestling",
"surfing",
"swing",
"tabletennisshot",
"taichi",
"tennisswing",
"throwdiscus",
"trampolinejumping",
"typing",
"unevenbars",
"volleyballspiking",
"walkingwithdog",
"wallpushups",
"writingonboard",
"yoyo"
] |
carlosleao/vit-Facial-Expression-Recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [carlosleao/vit-Facial-Expression-Recognition](https://huggingface.co/carlosleao/vit-Facial-Expression-Recognition) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2687
- Accuracy: 0.4177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9372 | 0.8959 | 100 | 1.5720 | 0.4417 |
| 0.9147 | 1.7917 | 200 | 1.6084 | 0.4364 |
| 0.8393 | 2.6876 | 300 | 1.7268 | 0.4169 |
| 0.7882 | 3.5834 | 400 | 1.7604 | 0.4227 |
| 0.6916 | 4.4793 | 500 | 1.8619 | 0.4124 |
| 0.6367 | 5.3751 | 600 | 1.9493 | 0.4261 |
| 0.5848 | 6.2710 | 700 | 2.0511 | 0.4046 |
| 0.5183 | 7.1669 | 800 | 2.1316 | 0.4230 |
| 0.4788 | 8.0627 | 900 | 2.2210 | 0.4026 |
| 0.4586 | 8.9586 | 1000 | 2.2687 | 0.4177 |
| 0.4079 | 9.8544 | 1100 | 2.4038 | 0.3747 |
| 0.3797 | 10.7503 | 1200 | 2.3664 | 0.4046 |
| 0.2957 | 11.6461 | 1300 | 2.4534 | 0.4068 |
| 0.2622 | 12.5420 | 1400 | 2.5413 | 0.3956 |
| 0.2202 | 13.4378 | 1500 | 2.5601 | 0.4127 |
| 0.2112 | 14.3337 | 1600 | 2.6560 | 0.3920 |
| 0.1769 | 15.2296 | 1700 | 2.8006 | 0.3909 |
| 0.161 | 16.1254 | 1800 | 2.8011 | 0.3928 |
| 0.155 | 17.0213 | 1900 | 2.9518 | 0.3856 |
| 0.1309 | 17.9171 | 2000 | 2.9363 | 0.3727 |
| 0.1001 | 18.8130 | 2100 | 2.9187 | 0.3998 |
| 0.0816 | 19.7088 | 2200 | 3.0563 | 0.3842 |
| 0.0672 | 20.6047 | 2300 | 2.9358 | 0.4205 |
| 0.0567 | 21.5006 | 2400 | 3.1118 | 0.3970 |
| 0.0524 | 22.3964 | 2500 | 3.2147 | 0.4054 |
| 0.0413 | 23.2923 | 2600 | 3.1928 | 0.3951 |
| 0.0368 | 24.1881 | 2700 | 3.1599 | 0.4141 |
| 0.0275 | 25.0840 | 2800 | 3.1720 | 0.4166 |
| 0.029 | 25.9798 | 2900 | 3.1924 | 0.4012 |
| 0.0231 | 26.8757 | 3000 | 3.2031 | 0.4088 |
| 0.0226 | 27.7716 | 3100 | 3.2125 | 0.4113 |
| 0.0205 | 28.6674 | 3200 | 3.2122 | 0.4118 |
| 0.0197 | 29.5633 | 3300 | 3.2126 | 0.4116 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
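### Usage sketch
A minimal, unofficial inference sketch for the eight-way emotion head; `face.jpg` is a placeholder, and the model presumably expects a cropped face similar to its (unspecified) training data.
```python
from transformers import pipeline

# Minimal sketch: top-3 emotions for one face crop.
fer = pipeline(
    "image-classification",
    model="carlosleao/vit-Facial-Expression-Recognition",
)
print(fer("face.jpg", top_k=3))
```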
|
[
"neutral",
"happiness",
"surprise",
"sadness",
"anger",
"disgust",
"fear",
"contempt"
] |
TirthDesai/Image-Classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
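Until that code is provided, one thing a reader can do safely is inspect the label mapping: this checkpoint ships generic class names (`label_0` … `label_9`), so predictions are not human-readable out of the box. A sketch:
```python
from transformers import AutoConfig

# Sketch: check the id2label mapping before relying on predicted labels.
config = AutoConfig.from_pretrained("TirthDesai/Image-Classification")
print(config.id2label)  # expected: {0: "label_0", ..., 9: "label_9"}
```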
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9"
] |
miguel-organization/vit-model-miguel-gutierrez
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model-miguel-gutierrez
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0097
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1263 | 3.8462 | 500 | 0.0097 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
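### Usage sketch
The three labels match the public `beans` leaf-disease dataset, so a hedged smoke test can borrow one of its validation images; the card itself does not name its training data.
```python
from datasets import load_dataset
from transformers import pipeline

# Hedged sketch: score one validation image from the "beans" dataset,
# whose label set matches this checkpoint's three classes.
sample = load_dataset("beans", split="validation")[0]
clf = pipeline(
    "image-classification",
    model="miguel-organization/vit-model-miguel-gutierrez",
)
print(clf(sample["image"]), "ground truth:", sample["labels"])
```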
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
crocutacrocuto/dinov2-base-MEG-10
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
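In the meantime, a hedged sketch: the label list attached to this card reads like camera-trap species classes, so the natural usage is batch-scoring a folder of trap images. The directory name below is a placeholder.
```python
from pathlib import Path
from transformers import pipeline

# Assumed usage: classify every JPEG in a (placeholder) folder of
# camera-trap frames and print the top prediction per image.
clf = pipeline("image-classification", model="crocutacrocuto/dinov2-base-MEG-10")
for img in sorted(Path("camera_trap_frames").glob("*.jpg")):
    best = clf(str(img), top_k=1)[0]
    print(img.name, best["label"], round(best["score"], 3))
```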
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"bird",
"black-and-white colobus",
"blue duiker",
"buffalo",
"bushbuck",
"bushpig",
"cattle",
"chimpanzee",
"civet_genet",
"dog_jackal",
"elephant",
"galago_potto",
"goat",
"golden cat",
"gorilla",
"guineafowl",
"honey badger",
"leopard",
"mandrill",
"mongoose",
"monkey",
"olive baboon",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirel",
"water chevrotain",
"yellow-backed duiker"
] |
rukundob451/pvt-small-224-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pvt-small-224-finetuned-papsmear
This model is a fine-tuned version of [Xrenya/pvt-small-224](https://huggingface.co/Xrenya/pvt-small-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.2426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
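As referenced above, here is a hedged sketch of how these values map onto `transformers.TrainingArguments`. The argument names are from the public API, but the authors' exact Trainer setup and output directory are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pvt-small-224-finetuned-papsmear",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 x 4 = total_train_batch_size of 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio
    num_train_epochs=15,
)
```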
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0 | 0.9935 | 38 | nan | 0.2426 |
| 0.0 | 1.9869 | 76 | nan | 0.2426 |
| 0.0 | 2.9804 | 114 | nan | 0.2426 |
| 0.0 | 4.0 | 153 | nan | 0.2426 |
| 0.0 | 4.9935 | 191 | nan | 0.2426 |
| 0.0 | 5.9869 | 229 | nan | 0.2426 |
| 0.0 | 6.9804 | 267 | nan | 0.2426 |
| 0.0 | 8.0 | 306 | nan | 0.2426 |
| 0.0 | 8.9935 | 344 | nan | 0.2426 |
| 0.0 | 9.9869 | 382 | nan | 0.2426 |
| 0.0 | 10.9804 | 420 | nan | 0.2426 |
| 0.0 | 12.0 | 459 | nan | 0.2426 |
| 0.0 | 12.9935 | 497 | nan | 0.2426 |
| 0.0 | 13.9869 | 535 | nan | 0.2426 |
| 0.0 | 14.9020 | 570 | nan | 0.2426 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
raffaelsiregar/utkface-race-classifications
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4743
- Accuracy: 0.8486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6744 | 1.0 | 667 | 0.5635 | 0.7980 |
| 0.485 | 2.0 | 1334 | 0.4799 | 0.8342 |
| 0.2414 | 3.0 | 2001 | 0.4743 | 0.8486 |
| 0.1413 | 4.0 | 2668 | 0.5983 | 0.8444 |
| 0.0489 | 5.0 | 3335 | 0.6865 | 0.8541 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"white",
"black",
"asian",
"indian",
"others (hispanic, latino, middle eastern)"
] |
Deepri24/my_awesome_emotion_identifier_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_emotion_identifier_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8225
- Accuracy: 0.3875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.99 | 1.0 | 10 | 1.9723 | 0.225 |
| 1.8942 | 2.0 | 20 | 1.8762 | 0.3812 |
| 1.8036 | 3.0 | 30 | 1.8225 | 0.3875 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cpu
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
so-hey/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Accuracy: 0.9793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2486 | 1.0 | 190 | 0.1190 | 0.9596 |
| 0.1564 | 2.0 | 380 | 0.0694 | 0.9756 |
| 0.1244 | 3.0 | 570 | 0.0602 | 0.9793 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.13.0
- Tokenizers 0.19.1
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
raffaelsiregar/dog-breeds-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dog Breeds Classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on 71 Dog Breeds-Image Data Set (Kaggle).
It achieves the following results on the evaluation set:
- Loss: 0.0763
- Accuracy: 0.9743
## Model description
This model was fine-tuned via transfer learning on 224x224-pixel images and predicts dog breeds across 71 classes; a minimal usage sketch follows.
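A minimal usage sketch with the 🤗 pipeline API; the repo id is taken from this card's location, and the input file name is hypothetical.

```python
from transformers import pipeline
from PIL import Image

pipe = pipeline("image-classification", model="raffaelsiregar/dog-breeds-classification")

image = Image.open("my_dog.jpg")  # hypothetical input image
# top_k=5 returns the five highest-scoring of the 71 breed classes
for pred in pipe(image, top_k=5):
    print(f"{pred['label']}: {pred['score']:.4f}")
```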
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4379 | 1.0 | 249 | 0.2430 | 0.93 |
| 0.1998 | 2.0 | 498 | 0.1380 | 0.9514 |
| 0.0739 | 3.0 | 747 | 0.1008 | 0.9614 |
| 0.0135 | 4.0 | 996 | 0.0834 | 0.9671 |
| 0.0036 | 5.0 | 1245 | 0.0763 | 0.9743 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"afghan",
"african wild dog",
"airedale",
"american hairless",
"american spaniel",
"basenji",
"basset",
"beagle",
"bearded collie",
"bermaise",
"bichon frise",
"blenheim",
"bloodhound",
"bluetick",
"border collie",
"borzoi",
"boston terrier",
"boxer",
"bull mastiff",
"bull terrier",
"bulldog",
"cairn",
"chihuahua",
"chinese crested",
"chow",
"clumber",
"cockapoo",
"cocker",
"collie",
"corgi",
"coyote",
"dalmation",
"dhole",
"dingo",
"doberman",
"elk hound",
"french bulldog",
"german sheperd",
"golden retriever",
"great dane",
"great perenees",
"greyhound",
"groenendael",
"irish spaniel",
"irish wolfhound",
"japanese spaniel",
"komondor",
"labradoodle",
"labrador",
"lhasa",
"malinois",
"maltese",
"mex hairless",
"newfoundland",
"pekinese",
"pit bull",
"pomeranian",
"poodle",
"pug",
"rhodesian",
"rottweiler",
"saint bernard",
"schnauzer",
"scotch terrier",
"shar_pei",
"shiba inu",
"shih-tzu",
"siberian husky",
"vizsla",
"yorkie",
"american spaniel"
] |
e1010101/vit-384-tongue-image
|
# Model Card for Model ID
A fine-tuned version of Google's ViT model (vit-base-patch16-384, 384x384 input resolution) for multi-label image classification on tongue images.
## Model Details
### Model Description
The model predicts the presence or absence of three features: Cracks, Red Dots, and Toothmarks. A hedged inference sketch is given after the details below.
- **Model type:** Vision Transformer
- **Finetuned from model [optional]:** https://huggingface.co/google/vit-base-patch16-384
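Because this is a multi-label classifier, each feature is scored independently with a sigmoid rather than a softmax over classes. In the sketch below, the repo id is taken from this card's location, while the 0.5 decision threshold and the input file name are assumptions.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "e1010101/vit-384-tongue-image"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("tongue.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Independent sigmoid score per feature (crack / red-dots / toothmark)
probs = torch.sigmoid(logits)[0]
for label_id, p in enumerate(probs.tolist()):
    name = model.config.id2label[label_id]
    print(f"{name}: {p:.3f} -> {'present' if p > 0.5 else 'absent'}")
```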
|
[
"crack",
"red-dots",
"toothmark"
] |
dbfordeeplearn/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
itsTomLie/flowers_microsoft_resnet50
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itstomlie-itstomlie/flowers_microsoft_resnet50/runs/t5ykecz5)
# flowers_microsoft_resnet50
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90",
"label_91",
"label_92",
"label_93",
"label_94",
"label_95",
"label_96",
"label_97",
"label_98",
"label_99",
"label_100",
"label_101"
] |
itsTomLie/genders_microsoft_resnet50
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itstomlie-itstomlie/genders_microsoft_resnet50/runs/81fnq6xz)
# genders_microsoft_resnet50
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 0.5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90",
"label_91",
"label_92",
"label_93",
"label_94",
"label_95",
"label_96",
"label_97",
"label_98",
"label_99",
"label_100",
"label_101"
] |
afraid15chicken/finetuned-arsenic
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-arsenic
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0048
- Accuracy: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
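The "Native AMP" line above generally corresponds to fp16 mixed precision in `TrainingArguments`; applying it to this run as below is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-arsenic",  # assumed output directory
    fp16=True,                       # "mixed_precision_training: Native AMP"
)
```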
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1855 | 0.1848 | 100 | 0.1918 | 0.9312 |
| 0.1792 | 0.3697 | 200 | 0.1740 | 0.9365 |
| 0.1688 | 0.5545 | 300 | 0.0782 | 0.9692 |
| 0.1238 | 0.7394 | 400 | 0.2158 | 0.9227 |
| 0.0969 | 0.9242 | 500 | 0.0449 | 0.9843 |
| 0.0326 | 1.1091 | 600 | 0.1554 | 0.9574 |
| 0.1057 | 1.2939 | 700 | 0.0845 | 0.9738 |
| 0.0805 | 1.4787 | 800 | 0.0712 | 0.9823 |
| 0.0889 | 1.6636 | 900 | 0.0718 | 0.9797 |
| 0.0503 | 1.8484 | 1000 | 0.0251 | 0.9935 |
| 0.0225 | 2.0333 | 1100 | 0.0177 | 0.9967 |
| 0.0049 | 2.2181 | 1200 | 0.0246 | 0.9921 |
| 0.0152 | 2.4030 | 1300 | 0.0083 | 0.9987 |
| 0.08 | 2.5878 | 1400 | 0.0214 | 0.9941 |
| 0.0043 | 2.7726 | 1500 | 0.0069 | 0.9980 |
| 0.0501 | 2.9575 | 1600 | 0.0151 | 0.9967 |
| 0.0186 | 3.1423 | 1700 | 0.0078 | 0.9974 |
| 0.0033 | 3.3272 | 1800 | 0.0139 | 0.9961 |
| 0.0023 | 3.5120 | 1900 | 0.0076 | 0.9987 |
| 0.0054 | 3.6969 | 2000 | 0.0048 | 0.9993 |
| 0.0168 | 3.8817 | 2100 | 0.0066 | 0.9987 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"infacted",
"not_infacted"
] |
MSchneiderEoda/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1063
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 2 | 4.4964 | 0.025 |
| No log | 2.0 | 5 | 4.1472 | 0.95 |
| No log | 2.4 | 6 | 4.1063 | 1.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
AugustoReies/vit-base-patch16-224-mascotas-DA
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-mascotas-DA
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1356
- Accuracy: 0.9625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00035
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3161 | 0.9849 | 49 | 0.1356 | 0.9625 |
| 0.157 | 1.9899 | 99 | 0.1231 | 0.95 |
| 0.1355 | 2.9950 | 149 | 0.1380 | 0.9625 |
| 0.0979 | 4.0 | 199 | 0.2714 | 0.925 |
| 0.0788 | 4.9849 | 248 | 0.2664 | 0.9375 |
| 0.0584 | 5.9095 | 294 | 0.2223 | 0.9375 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"cats",
"dogs",
"parrots"
] |
bob123dylan/finetuned-arsenic
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-arsenic
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the arsenic_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0026
- Accuracy: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.2214 | 0.1848 | 100 | 0.1243 | 0.9607 |
| 0.1213 | 0.3697 | 200 | 0.1763 | 0.9339 |
| 0.1201 | 0.5545 | 300 | 0.1018 | 0.9607 |
| 0.0991 | 0.7394 | 400 | 0.2071 | 0.9417 |
| 0.1127 | 0.9242 | 500 | 0.0886 | 0.9666 |
| 0.0314 | 1.1091 | 600 | 0.0333 | 0.9908 |
| 0.0252 | 1.2939 | 700 | 0.0110 | 0.9974 |
| 0.0582 | 1.4787 | 800 | 0.0104 | 0.9987 |
| 0.0455 | 1.6636 | 900 | 0.0198 | 0.9954 |
| 0.0569 | 1.8484 | 1000 | 0.0180 | 0.9961 |
| 0.0627 | 2.0333 | 1100 | 0.0244 | 0.9948 |
| 0.0328 | 2.2181 | 1200 | 0.0054 | 0.9987 |
| 0.0156 | 2.4030 | 1300 | 0.0193 | 0.9948 |
| 0.0016 | 2.5878 | 1400 | 0.0074 | 0.9974 |
| 0.0032 | 2.7726 | 1500 | 0.0045 | 0.9980 |
| 0.0233 | 2.9575 | 1600 | 0.0029 | 0.9993 |
| 0.0434 | 3.1423 | 1700 | 0.0026 | 0.9993 |
| 0.0079 | 3.3272 | 1800 | 0.0095 | 0.9980 |
| 0.0175 | 3.5120 | 1900 | 0.0111 | 0.9974 |
| 0.0013 | 3.6969 | 2000 | 0.0109 | 0.9974 |
| 0.0008 | 3.8817 | 2100 | 0.0053 | 0.9987 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"infected",
"not_infected"
] |
flavioferlin/cookbook-finetuning-biomedical
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
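A minimal sketch, assuming the checkpoint at this card's repo id (flavioferlin/cookbook-finetuning-biomedical) is a standard single-label image classifier over the three classes listed with this card; the input file name is hypothetical.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "flavioferlin/cookbook-finetuning-biomedical"  # taken from this card's location
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("scan.png")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
top = probs.argmax().item()
print(model.config.id2label[top], f"{probs[top].item():.3f}")
```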
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"benign",
"malignant",
"normal"
] |
aisak-ai/GD
|
© 2024 Mandela Logan. All rights reserved.
No part of this model may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the copyright holder. Unauthorized use or reproduction of this model is strictly prohibited by copyright law.
|
[
"female",
"male"
] |
kriskrishna/vit-Facial-Expression-Recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-Facial-Expression-Recognition
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1672
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.7209 | 21.6216 | 100 | 0.5301 | 0.8155 |
| 0.0966 | 43.2432 | 200 | 0.1672 | 0.9320 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.20.0
|
[
"anger",
"confused",
"disgust",
"happy",
"neutral",
"sad",
"surprise"
] |
aisak-ai/AD
|
© 2024 Mandela Logan. All rights reserved.
No part of this model may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the copyright holder. Unauthorized use or reproduction of this model is strictly prohibited by copyright law.
|
[
"0-2",
"3-9",
"10-19",
"20-29",
"30-39",
"40-49",
"50-59",
"60-69",
"more than 70"
] |
MakAIHealthLab/vit-base-patch16-224-in21k-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-papsmear
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2825
- Accuracy: 0.9338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 9 | 1.7346 | 0.2647 |
| 1.7645 | 1.9487 | 19 | 1.6152 | 0.3088 |
| 1.661 | 2.9744 | 29 | 1.4663 | 0.4118 |
| 1.496 | 4.0 | 39 | 1.2989 | 0.4853 |
| 1.3097 | 4.9231 | 48 | 1.1491 | 0.5588 |
| 1.091 | 5.9487 | 58 | 0.9933 | 0.7206 |
| 0.9088 | 6.9744 | 68 | 0.9171 | 0.6985 |
| 0.7858 | 8.0 | 78 | 0.8301 | 0.7721 |
| 0.7016 | 8.9231 | 87 | 0.7925 | 0.7353 |
| 0.6136 | 9.9487 | 97 | 0.6992 | 0.7647 |
| 0.532 | 10.9744 | 107 | 0.6401 | 0.8309 |
| 0.5018 | 12.0 | 117 | 0.5787 | 0.8382 |
| 0.4279 | 12.9231 | 126 | 0.6130 | 0.8088 |
| 0.4116 | 13.9487 | 136 | 0.5090 | 0.8382 |
| 0.3848 | 14.9744 | 146 | 0.5165 | 0.8676 |
| 0.3449 | 16.0 | 156 | 0.4843 | 0.8382 |
| 0.3008 | 16.9231 | 165 | 0.5460 | 0.8456 |
| 0.2797 | 17.9487 | 175 | 0.4985 | 0.8309 |
| 0.2696 | 18.9744 | 185 | 0.5586 | 0.8456 |
| 0.2633 | 20.0 | 195 | 0.4349 | 0.9044 |
| 0.2569 | 20.9231 | 204 | 0.4017 | 0.8897 |
| 0.27 | 21.9487 | 214 | 0.4758 | 0.8603 |
| 0.2706 | 22.9744 | 224 | 0.4133 | 0.8897 |
| 0.2211 | 24.0 | 234 | 0.3844 | 0.9118 |
| 0.1977 | 24.9231 | 243 | 0.3497 | 0.9265 |
| 0.1969 | 25.9487 | 253 | 0.3736 | 0.9044 |
| 0.1776 | 26.9744 | 263 | 0.3797 | 0.9044 |
| 0.1787 | 28.0 | 273 | 0.3949 | 0.8897 |
| 0.18 | 28.9231 | 282 | 0.3278 | 0.9265 |
| 0.1797 | 29.9487 | 292 | 0.3615 | 0.9044 |
| 0.1665 | 30.9744 | 302 | 0.4174 | 0.8603 |
| 0.163 | 32.0 | 312 | 0.3574 | 0.8971 |
| 0.1498 | 32.9231 | 321 | 0.3591 | 0.9044 |
| 0.1405 | 33.9487 | 331 | 0.3017 | 0.9191 |
| 0.155 | 34.9744 | 341 | 0.3303 | 0.9265 |
| 0.1519 | 36.0 | 351 | 0.3559 | 0.8971 |
| 0.1415 | 36.9231 | 360 | 0.2890 | 0.9191 |
| 0.1256 | 37.9487 | 370 | 0.3445 | 0.8897 |
| 0.1217 | 38.9744 | 380 | 0.3435 | 0.9118 |
| 0.1285 | 40.0 | 390 | 0.3025 | 0.9191 |
| 0.1285 | 40.9231 | 399 | 0.3602 | 0.8824 |
| 0.1301 | 41.9487 | 409 | 0.3336 | 0.8897 |
| 0.1243 | 42.9744 | 419 | 0.2825 | 0.9338 |
| 0.1191 | 44.0 | 429 | 0.2835 | 0.9265 |
| 0.1221 | 44.9231 | 438 | 0.2724 | 0.9191 |
| 0.1151 | 45.9487 | 448 | 0.2708 | 0.9191 |
| 0.1195 | 46.1538 | 450 | 0.2707 | 0.9191 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
riyadifirman/classbird_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classbird_1
This model is a fine-tuned version of [RobertZ2011/resnet-18-birb](https://huggingface.co/RobertZ2011/resnet-18-birb) on an unknown dataset.
It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):
- Loss: 1.0949
- Accuracy: 0.7726
- Precision: 0.7789
- Recall: 0.7726
- F1: 0.7680
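As referenced above, here is a hedged sketch of a Trainer `compute_metrics` hook producing accuracy/precision/recall/F1. The card does not state the averaging method, so `weighted` averaging is an assumption.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    # "weighted" averaging is assumed; the card does not say which was used
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```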
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.5479 | 1.0 | 29 | 2.2789 | 0.2946 | 0.3112 | 0.2946 | 0.2571 |
| 1.9716 | 2.0 | 58 | 1.9112 | 0.4651 | 0.5401 | 0.4651 | 0.4270 |
| 1.506 | 3.0 | 87 | 1.6503 | 0.6202 | 0.6831 | 0.6202 | 0.5933 |
| 1.1838 | 4.0 | 116 | 1.4594 | 0.6667 | 0.6855 | 0.6667 | 0.6439 |
| 0.9704 | 5.0 | 145 | 1.3127 | 0.7183 | 0.7395 | 0.7183 | 0.7064 |
| 0.7997 | 6.0 | 174 | 1.2345 | 0.7468 | 0.7586 | 0.7468 | 0.7410 |
| 0.763 | 7.0 | 203 | 1.1520 | 0.7442 | 0.7493 | 0.7442 | 0.7332 |
| 0.6448 | 8.0 | 232 | 1.1172 | 0.7597 | 0.7745 | 0.7597 | 0.7531 |
| 0.5839 | 9.0 | 261 | 1.0984 | 0.7649 | 0.7753 | 0.7649 | 0.7621 |
| 0.5993 | 10.0 | 290 | 1.0949 | 0.7726 | 0.7789 | 0.7726 | 0.7680 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14"
] |
Liberow/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2046
- Accuracy: 0.9337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3652 | 1.0 | 370 | 0.3105 | 0.9269 |
| 0.2061 | 2.0 | 740 | 0.2322 | 0.9364 |
| 0.167 | 3.0 | 1110 | 0.2135 | 0.9337 |
| 0.1584 | 4.0 | 1480 | 0.2093 | 0.9337 |
| 0.131 | 5.0 | 1850 | 0.2069 | 0.9337 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
ozair23/swin-tiny-patch4-window7-224-finetuned-plantdisease
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plant_disease_detection (vriksharakshak)
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0880
- Accuracy: 0.9811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
## How to use this model
```python
from transformers import pipeline
from PIL import Image
import requests

# Load the image-classification pipeline with this fine-tuned checkpoint
pipe = pipeline("image-classification", "ozair23/swin-tiny-patch4-window7-224-finetuned-plantdisease")

# Load an example image from a URL
url = 'https://huggingface.co/nielsr/convnext-tiny-finetuned-eurostat/resolve/main/forest.png'
image = Image.open(requests.get(url, stream=True).raw)

# Classify the image
results = pipe(image)

# Display the results
print("Predictions:")
for result in results:
    print(f"Label: {result['label']}, Score: {result['score']:.4f}")
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1968 | 0.9983 | 145 | 0.0880 | 0.9811 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"pepper__bell___bacterial_spot",
"pepper__bell___healthy",
"potato___early_blight",
"potato___late_blight",
"potato___healthy",
"tomato_bacterial_spot",
"tomato_early_blight",
"tomato_late_blight",
"tomato_leaf_mold",
"tomato_septoria_leaf_spot",
"tomato_spider_mites_two_spotted_spider_mite",
"tomato__target_spot",
"tomato__tomato_yellowleaf__curl_virus",
"tomato__tomato_mosaic_virus",
"tomato_healthy"
] |
Jagmeet29/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3126
- Accuracy: 0.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4219 | 0.9987 | 587 | 0.4344 | 0.8753 |
| 0.3599 | 1.9991 | 1175 | 0.3464 | 0.9003 |
| 0.239 | 2.9962 | 1761 | 0.3126 | 0.9081 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"angioectasia",
"bleeding",
"erosion",
"erythema",
"foreign body",
"lymphangiectasia",
"normal",
"polyp",
"ulcer",
"worms"
] |
Tr13/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5132
- Accuracy: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 4.614 | 0.9829 | 43 | 4.6167 | 0.01 |
| 4.5907 | 1.9886 | 87 | 4.5880 | 0.0214 |
| 4.5382 | 2.9943 | 131 | 4.5343 | 0.105 |
| 4.4551 | 4.0 | 175 | 4.4235 | 0.3043 |
| 4.287 | 4.9829 | 218 | 4.2081 | 0.5043 |
| 3.885 | 5.9886 | 262 | 3.8133 | 0.5307 |
| 3.4412 | 6.9943 | 306 | 3.3131 | 0.4907 |
| 2.8825 | 8.0 | 350 | 2.8045 | 0.4871 |
| 2.5521 | 8.9829 | 393 | 2.3834 | 0.4671 |
| 2.2911 | 9.9886 | 437 | 2.0563 | 0.5186 |
| 2.0126 | 10.9943 | 481 | 1.8029 | 0.5679 |
| 1.8294 | 12.0 | 525 | 1.6611 | 0.605 |
| 1.745 | 12.9829 | 568 | 1.4988 | 0.6257 |
| 1.6162 | 13.9886 | 612 | 1.3868 | 0.62 |
| 1.5084 | 14.9943 | 656 | 1.2984 | 0.6429 |
| 1.4441 | 16.0 | 700 | 1.2081 | 0.6457 |
| 1.3625 | 16.9829 | 743 | 1.1554 | 0.6814 |
| 1.2752 | 17.9886 | 787 | 1.0955 | 0.6929 |
| 1.224 | 18.9943 | 831 | 1.0373 | 0.7164 |
| 1.2096 | 20.0 | 875 | 1.0375 | 0.7164 |
| 1.1551 | 20.9829 | 918 | 0.9842 | 0.7414 |
| 1.1079 | 21.9886 | 962 | 0.9645 | 0.7571 |
| 1.0669 | 22.9943 | 1006 | 0.9150 | 0.77 |
| 1.0206 | 24.0 | 1050 | 0.8508 | 0.7836 |
| 0.9963 | 24.9829 | 1093 | 0.8458 | 0.7743 |
| 0.9132 | 25.9886 | 1137 | 0.7838 | 0.7971 |
| 0.863 | 26.9943 | 1181 | 0.7590 | 0.8057 |
| 0.8669 | 28.0 | 1225 | 0.7646 | 0.785 |
| 0.8776 | 28.9829 | 1268 | 0.7084 | 0.8157 |
| 0.793 | 29.9886 | 1312 | 0.6862 | 0.82 |
| 0.7941 | 30.9943 | 1356 | 0.6971 | 0.8143 |
| 0.7863 | 32.0 | 1400 | 0.6135 | 0.8314 |
| 0.7344 | 32.9829 | 1443 | 0.5961 | 0.8407 |
| 0.6888 | 33.9886 | 1487 | 0.6304 | 0.845 |
| 0.6693 | 34.9943 | 1531 | 0.6011 | 0.8364 |
| 0.6736 | 36.0 | 1575 | 0.5917 | 0.8364 |
| 0.6739 | 36.9829 | 1618 | 0.5933 | 0.8336 |
| 0.6595 | 37.9886 | 1662 | 0.5824 | 0.8357 |
| 0.641 | 38.9943 | 1706 | 0.5232 | 0.8579 |
| 0.576 | 40.0 | 1750 | 0.5700 | 0.8393 |
| 0.6097 | 40.9829 | 1793 | 0.5384 | 0.8471 |
| 0.6016 | 41.9886 | 1837 | 0.5824 | 0.8379 |
| 0.6017 | 42.9943 | 1881 | 0.5511 | 0.8443 |
| 0.5937 | 44.0 | 1925 | 0.5095 | 0.8621 |
| 0.5674 | 44.9829 | 1968 | 0.5299 | 0.8536 |
| 0.5575 | 45.9886 | 2012 | 0.5106 | 0.8507 |
| 0.5709 | 46.9943 | 2056 | 0.5445 | 0.8507 |
| 0.5046 | 48.0 | 2100 | 0.4848 | 0.855 |
| 0.5485 | 48.9829 | 2143 | 0.5097 | 0.8564 |
| 0.4865 | 49.9886 | 2187 | 0.5227 | 0.8471 |
| 0.5505 | 50.9943 | 2231 | 0.5127 | 0.8507 |
| 0.4827 | 52.0 | 2275 | 0.5253 | 0.8493 |
| 0.5121 | 52.9829 | 2318 | 0.5095 | 0.8636 |
| 0.4879 | 53.9886 | 2362 | 0.5053 | 0.8621 |
| 0.5008 | 54.9943 | 2406 | 0.5196 | 0.8521 |
| 0.489 | 56.0 | 2450 | 0.4834 | 0.8657 |
| 0.5019 | 56.9829 | 2493 | 0.4714 | 0.8614 |
| 0.4828 | 57.9886 | 2537 | 0.5019 | 0.8571 |
| 0.4373 | 58.9943 | 2581 | 0.4894 | 0.8679 |
| 0.4444 | 60.0 | 2625 | 0.5093 | 0.8657 |
| 0.4178 | 60.9829 | 2668 | 0.5058 | 0.8614 |
| 0.4081 | 61.9886 | 2712 | 0.4996 | 0.8586 |
| 0.4311 | 62.9943 | 2756 | 0.4973 | 0.8557 |
| 0.425 | 64.0 | 2800 | 0.4627 | 0.8743 |
| 0.4147 | 64.9829 | 2843 | 0.4875 | 0.865 |
| 0.4505 | 65.9886 | 2887 | 0.4918 | 0.8636 |
| 0.3621 | 66.9943 | 2931 | 0.4903 | 0.86 |
| 0.4072 | 68.0 | 2975 | 0.4983 | 0.8564 |
| 0.3883 | 68.9829 | 3018 | 0.4635 | 0.8743 |
| 0.4284 | 69.9886 | 3062 | 0.4582 | 0.8686 |
| 0.3891 | 70.9943 | 3106 | 0.4456 | 0.8793 |
| 0.4255 | 72.0 | 3150 | 0.4760 | 0.87 |
| 0.425 | 72.9829 | 3193 | 0.4905 | 0.8721 |
| 0.4301 | 73.9886 | 3237 | 0.4942 | 0.8643 |
| 0.3666 | 74.9943 | 3281 | 0.4824 | 0.8629 |
| 0.4275 | 76.0 | 3325 | 0.4638 | 0.8671 |
| 0.4161 | 76.9829 | 3368 | 0.4859 | 0.8621 |
| 0.3773 | 77.9886 | 3412 | 0.4918 | 0.8521 |
| 0.3591 | 78.9943 | 3456 | 0.4881 | 0.8729 |
| 0.4018 | 80.0 | 3500 | 0.4681 | 0.8707 |
| 0.404 | 80.9829 | 3543 | 0.4882 | 0.86 |
| 0.3987 | 81.9886 | 3587 | 0.4796 | 0.8657 |
| 0.3546 | 82.9943 | 3631 | 0.4945 | 0.8643 |
| 0.3795 | 84.0 | 3675 | 0.4638 | 0.8679 |
| 0.4007 | 84.9829 | 3718 | 0.4624 | 0.8729 |
| 0.3783 | 85.9886 | 3762 | 0.4693 | 0.8729 |
| 0.3498 | 86.9943 | 3806 | 0.4980 | 0.8621 |
| 0.3477 | 88.0 | 3850 | 0.4705 | 0.8671 |
| 0.4022 | 88.9829 | 3893 | 0.4817 | 0.86 |
| 0.3697 | 89.9886 | 3937 | 0.4763 | 0.8629 |
| 0.3828 | 90.9943 | 3981 | 0.4867 | 0.8671 |
| 0.3842 | 92.0 | 4025 | 0.4911 | 0.865 |
| 0.3562 | 92.9829 | 4068 | 0.4562 | 0.875 |
| 0.3343 | 93.9886 | 4112 | 0.4573 | 0.8786 |
| 0.3521 | 94.9943 | 4156 | 0.4481 | 0.8843 |
| 0.3788 | 96.0 | 4200 | 0.4793 | 0.8721 |
| 0.3518 | 96.9829 | 4243 | 0.4802 | 0.8693 |
| 0.3491 | 97.9886 | 4287 | 0.4740 | 0.8686 |
| 0.4063 | 98.2857 | 4300 | 0.5132 | 0.8636 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
riyadifirman/klasifikasiburung
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klasifikasiburung
This model is a fine-tuned version of [RobertZ2011/resnet-18-birb](https://huggingface.co/RobertZ2011/resnet-18-birb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0186
- Accuracy: 0.7565
- Precision: 0.7631
- Recall: 0.7565
- F1: 0.7554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5955 | 1.0 | 188 | 1.4442 | 0.7235 | 0.7426 | 0.7235 | 0.7169 |
| 1.1224 | 2.0 | 376 | 1.2881 | 0.7458 | 0.7546 | 0.7458 | 0.7406 |
| 0.7778 | 3.0 | 564 | 1.1965 | 0.7501 | 0.7635 | 0.7501 | 0.7483 |
| 0.5573 | 4.0 | 752 | 1.1417 | 0.7565 | 0.7635 | 0.7565 | 0.7538 |
| 0.4231 | 5.0 | 940 | 1.1077 | 0.7584 | 0.7671 | 0.7584 | 0.7567 |
| 0.2878 | 6.0 | 1128 | 1.0893 | 0.7601 | 0.7716 | 0.7601 | 0.7597 |
| 0.2043 | 7.0 | 1316 | 1.0688 | 0.7591 | 0.7661 | 0.7591 | 0.7579 |
| 0.1326 | 8.0 | 1504 | 1.0687 | 0.7582 | 0.7653 | 0.7582 | 0.7565 |
| 0.0851 | 9.0 | 1692 | 1.0502 | 0.7598 | 0.7652 | 0.7598 | 0.7581 |
| 0.0807 | 10.0 | 1880 | 1.0318 | 0.7582 | 0.7644 | 0.7582 | 0.7569 |
| 0.0581 | 11.0 | 2068 | 1.0403 | 0.7572 | 0.7629 | 0.7572 | 0.7558 |
| 0.043 | 12.0 | 2256 | 1.0295 | 0.7565 | 0.7633 | 0.7565 | 0.7557 |
| 0.0379 | 13.0 | 2444 | 1.0271 | 0.7568 | 0.7636 | 0.7568 | 0.7557 |
| 0.0399 | 14.0 | 2632 | 1.0319 | 0.7558 | 0.7627 | 0.7558 | 0.7549 |
| 0.0447 | 15.0 | 2820 | 1.0186 | 0.7565 | 0.7631 | 0.7565 | 0.7554 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"black footed albatross",
"laysan albatross",
"sooty albatross",
"groove billed ani",
"crested auklet",
"least auklet",
"parakeet auklet",
"rhinoceros auklet",
"brewer blackbird",
"red winged blackbird",
"rusty blackbird",
"yellow headed blackbird",
"bobolink",
"indigo bunting",
"lazuli bunting",
"painted bunting",
"cardinal",
"spotted catbird",
"gray catbird",
"yellow breasted chat",
"eastern towhee",
"chuck will widow",
"brandt cormorant",
"red faced cormorant",
"pelagic cormorant",
"bronzed cowbird",
"shiny cowbird",
"brown creeper",
"american crow",
"fish crow",
"black billed cuckoo",
"mangrove cuckoo",
"yellow billed cuckoo",
"gray crowned rosy finch",
"purple finch",
"northern flicker",
"acadian flycatcher",
"great crested flycatcher",
"least flycatcher",
"olive sided flycatcher",
"scissor tailed flycatcher",
"vermilion flycatcher",
"yellow bellied flycatcher",
"frigatebird",
"northern fulmar",
"gadwall",
"american goldfinch",
"european goldfinch",
"boat tailed grackle",
"eared grebe",
"horned grebe",
"pied billed grebe",
"western grebe",
"blue grosbeak",
"evening grosbeak",
"pine grosbeak",
"rose breasted grosbeak",
"pigeon guillemot",
"california gull",
"glaucous winged gull",
"heermann gull",
"herring gull",
"ivory gull",
"ring billed gull",
"slaty backed gull",
"western gull",
"anna hummingbird",
"ruby throated hummingbird",
"rufous hummingbird",
"green violetear",
"long tailed jaeger",
"pomarine jaeger",
"blue jay",
"florida jay",
"green jay",
"dark eyed junco",
"tropical kingbird",
"gray kingbird",
"belted kingfisher",
"green kingfisher",
"pied kingfisher",
"ringed kingfisher",
"white breasted kingfisher",
"red legged kittiwake",
"horned lark",
"pacific loon",
"mallard",
"western meadowlark",
"hooded merganser",
"red breasted merganser",
"mockingbird",
"nighthawk",
"clark nutcracker",
"white breasted nuthatch",
"baltimore oriole",
"hooded oriole",
"orchard oriole",
"scott oriole",
"ovenbird",
"brown pelican",
"white pelican",
"western wood pewee",
"sayornis",
"american pipit",
"whip poor will",
"horned puffin",
"common raven",
"white necked raven",
"american redstart",
"geococcyx",
"loggerhead shrike",
"great grey shrike",
"baird sparrow",
"black throated sparrow",
"brewer sparrow",
"chipping sparrow",
"clay colored sparrow",
"house sparrow",
"field sparrow",
"fox sparrow",
"grasshopper sparrow",
"harris sparrow",
"henslow sparrow",
"le conte sparrow",
"lincoln sparrow",
"nelson sharp tailed sparrow",
"savannah sparrow",
"seaside sparrow",
"song sparrow",
"tree sparrow",
"vesper sparrow",
"white crowned sparrow",
"white throated sparrow",
"cape glossy starling",
"bank swallow",
"barn swallow",
"cliff swallow",
"tree swallow",
"scarlet tanager",
"summer tanager",
"artic tern",
"black tern",
"caspian tern",
"common tern",
"elegant tern",
"forsters tern",
"least tern",
"green tailed towhee",
"brown thrasher",
"sage thrasher",
"black capped vireo",
"blue headed vireo",
"philadelphia vireo",
"red eyed vireo",
"warbling vireo",
"white eyed vireo",
"yellow throated vireo",
"bay breasted warbler",
"black and white warbler",
"black throated blue warbler",
"blue winged warbler",
"canada warbler",
"cape may warbler",
"cerulean warbler",
"chestnut sided warbler",
"golden winged warbler",
"hooded warbler",
"kentucky warbler",
"magnolia warbler",
"mourning warbler",
"myrtle warbler",
"nashville warbler",
"orange crowned warbler",
"palm warbler",
"pine warbler",
"prairie warbler",
"prothonotary warbler",
"swainson warbler",
"tennessee warbler",
"wilson warbler",
"worm eating warbler",
"yellow warbler",
"northern waterthrush",
"louisiana waterthrush",
"bohemian waxwing",
"cedar waxwing",
"american three toed woodpecker",
"pileated woodpecker",
"red bellied woodpecker",
"red cockaded woodpecker",
"red headed woodpecker",
"downy woodpecker",
"bewick wren",
"cactus wren",
"carolina wren",
"house wren",
"marsh wren",
"rock wren",
"winter wren",
"common yellowthroat"
] |
audgns/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8654 |
| No log | 2.0 | 34 | 0.7691 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healty"
] |
czarmagnate/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.7584 |
| No log | 2.0 | 34 | 0.6724 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
han745/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8169 |
| No log | 2.0 | 34 | 0.7275 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
eedeedeed/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.7904 |
| No log | 2.0 | 34 | 0.6994 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
siuuuuuuuuuuuuuu/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8172 |
| No log | 2.0 | 34 | 0.7190 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
Shinee21/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8099 |
| No log | 2.0 | 34 | 0.7117 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healty"
] |
Jipumpkin/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8311 |
| No log | 2.0 | 34 | 0.7454 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healty"
] |
2todeux/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8569 |
| No log | 2.0 | 34 | 0.7710 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angylar_leaf_spot",
"bean_rust",
"healty"
] |
jy1003/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.7812 |
| No log | 2.0 | 34 | 0.6929 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
Kakaronalq/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.7984 |
| No log | 2.0 | 34 | 0.7163 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healty"
] |
Changmin0816/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8418 |
| No log | 2.0 | 34 | 0.7563 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healty"
] |
HanDaeYu/ViT_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 17 | 0.8154 |
| No log | 2.0 | 34 | 0.7130 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
ahmed792002/vit-plant-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-plant-classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0182
- Accuracy: 0.9933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0529 | 1.0 | 476 | 0.0660 | 0.9816 |
| 0.0609 | 2.0 | 952 | 0.0229 | 0.9939 |
| 0.0012 | 3.0 | 1428 | 0.0205 | 0.9951 |
| 0.0007 | 4.0 | 1904 | 0.0126 | 0.9969 |
| 0.0006 | 5.0 | 2380 | 0.0122 | 0.9969 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37"
] |
dacxshaki/save_here
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# save_here
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"dry",
"wet"
] |
100rab25/bridalMakeupClassifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bridalMakeupClassifier
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0326
- Accuracy: 0.9969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1604 | 1.0 | 23 | 0.0509 | 0.9846 |
| 0.0837 | 2.0 | 46 | 0.0353 | 0.9877 |
| 0.0588 | 3.0 | 69 | 0.0326 | 0.9969 |
| 0.05 | 4.0 | 92 | 0.0302 | 0.9969 |
| 0.0284 | 5.0 | 115 | 0.0313 | 0.9938 |
| 0.0372 | 6.0 | 138 | 0.0273 | 0.9938 |
| 0.0461 | 7.0 | 161 | 0.0268 | 0.9969 |
| 0.0338 | 8.0 | 184 | 0.0259 | 0.9969 |
| 0.0253 | 9.0 | 207 | 0.0256 | 0.9938 |
| 0.0326 | 10.0 | 230 | 0.0266 | 0.9969 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
[
"bridalmakeup",
"others"
] |
aningddd/swinv2-base
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-base-patch4-window12-192-22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1570
- Accuracy: 0.9577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1001 | 1.0 | 240 | 1.0510 | 0.5701 |
| 0.7709 | 2.0 | 480 | 0.7516 | 0.7091 |
| 0.5077 | 3.0 | 720 | 0.5670 | 0.7953 |
| 0.2908 | 4.0 | 960 | 0.3946 | 0.8650 |
| 0.1676 | 5.0 | 1200 | 0.2796 | 0.9038 |
| 0.117 | 6.0 | 1440 | 0.2322 | 0.9275 |
| 0.0634 | 7.0 | 1680 | 0.2433 | 0.9306 |
| 0.0425 | 8.0 | 1920 | 0.1843 | 0.9490 |
| 0.0252 | 9.0 | 2160 | 0.1653 | 0.9543 |
| 0.0147 | 10.0 | 2400 | 0.1570 | 0.9577 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5"
] |
wambugu71/crop_leaf_diseases_vit
|
# Model Card for Smart Farming Disease Detection Transformer
This model is a Vision Transformer (ViT) designed to identify plant diseases in crops as part of a smart agricultural farming system. It has been trained on a diverse dataset of plant images, including different disease categories affecting crops such as corn, potato, rice, and wheat. The model aims to provide farmers and agronomists with real-time disease detection for better crop management.
## Model Details
### Model Description
This Vision Transformer model has been fine-tuned to classify plant diseases commonly found in agricultural settings. It covers crops such as corn, potato, rice, and wheat, identifying diseases like rust, blight, and leaf spots. The goal is to enable precision farming by helping farmers detect diseases early and take appropriate action.
- **Developed by:** Wambugu Kinyua
- **Model type:** Vision Transformer (ViT)
- **Languages (NLP):** N/A (Computer Vision Model)
- **License:** Apache 2.0
- **Finetuned from model:** [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224)
- **Input:** Images of crops (RGB format)
- **Output:** Disease classification labels (healthy or diseased categories)
## Diseases covered by the model
| Crop | Diseases Identified |
|--------|------------------------------|
| Corn | Common Rust |
| Corn | Gray Leaf Spot |
| Corn | Healthy |
| Corn | Leaf Blight |
| - | Invalid |
| Potato | Early Blight |
| Potato | Healthy |
| Potato | Late Blight |
| Rice | Brown Spot |
| Rice | Healthy |
| Rice | Leaf Blast |
| Wheat | Brown Rust |
| Wheat | Healthy |
| Wheat | Yellow Rust |
## Uses
### Direct Use
This model can be used directly to classify images of crops to detect plant diseases. It is especially useful for precision farming, enabling users to monitor crop health and take early interventions based on the detected disease.
### Downstream Use
This model can be fine-tuned on other agricultural datasets for specific crops or regions to improve its performance or be integrated into larger precision farming systems that include other features like weather predictions and irrigation control.
Thanks to its small parameter count, the model can be quantized, or deployed in full precision, on edge devices without compromising precision and accuracy; a minimal quantization sketch follows below.
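As a rough illustration of the edge-deployment path, the model can be dynamically quantized with stock PyTorch. The snippet below is a minimal sketch assuming int8 weights for the linear layers; it is not a recipe released by the authors.
```python
import torch
from transformers import ViTForImageClassification

# Load the fine-tuned model and quantize its linear layers to int8.
# Generic PyTorch dynamic quantization, not the authors' deployment recipe.
model = ViTForImageClassification.from_pretrained("wambugu71/crop_leaf_diseases_vit")
model.eval()

quantized_model = torch.quantization.quantize_dynamic(
    model,              # module to quantize
    {torch.nn.Linear},  # layer types to replace with int8 equivalents
    dtype=torch.qint8,
)
```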
### Out-of-Scope Use
This model is not designed for non-agricultural image classification tasks or for environments with insufficient or very noisy data. Misuse includes using the model in areas with vastly different agricultural conditions from those it was trained on.
## Bias, Risks, and Limitations
- The model may exhibit bias toward the crops and diseases present in the training dataset, leading to lower performance on unrepresented diseases or crop varieties.
- False negatives (failing to detect a disease) may result in untreated crop damage, while false positives could lead to unnecessary interventions.
### Recommendations
Users should evaluate the model on their specific crops and farming conditions. Regular updates and retraining with local data are recommended for optimal performance.
## How to Get Started with the Model
```python
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

# Load the feature extractor and the fine-tuned classification model.
feature_extractor = ViTFeatureExtractor.from_pretrained('wambugu71/crop_leaf_diseases_vit')
model = ViTForImageClassification.from_pretrained(
    'wambugu71/crop_leaf_diseases_vit',
    ignore_mismatched_sizes=True
)

# Classify a single leaf image ('<image_path>' is a placeholder).
image = Image.open('<image_path>')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The class with the highest logit is the predicted disease label.
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
## Training Details
### Training Data
The model was trained on a dataset containing images of various crops with labeled diseases, including the following categories:
- **Corn**: Common Rust, Gray Leaf Spot, Leaf Blight, Healthy
- **Potato**: Early Blight, Late Blight, Healthy
- **Rice**: Brown Spot, Hispa, Leaf Blast, Healthy
- **Wheat**: Brown Rust, Yellow Rust, Healthy
The dataset also includes images captured under various lighting conditions, from both controlled and uncontrolled environments and angles, to simulate real-world farming scenarios.
We made use of publicly available datasets as well as our own private data.
### Training Procedure
The model was fine-tuned using a vision transformer architecture pre-trained on the ImageNet dataset. The dataset was preprocessed by resizing the images and normalizing the pixel values.
#### Training Hyperparameters
- **Batch size:** 32
- **Learning rate:** 2e-5
- **Epochs:** 4
- **Optimizer:** AdamW
- **Precision:** fp16
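For orientation, the hyperparameters above map roughly onto a Hugging Face `TrainingArguments` configuration like the sketch below; the output directory name is a hypothetical placeholder, not an artifact of the original run.
```python
from transformers import TrainingArguments

# TrainingArguments mirroring the hyperparameters listed above;
# "vit-crop-disease" is a hypothetical output directory.
training_args = TrainingArguments(
    output_dir="vit-crop-disease",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=4,
    optim="adamw_torch",  # AdamW optimizer
    fp16=True,            # fp16 mixed precision (requires a CUDA device)
)
```
A `Trainer` would then be constructed with this configuration, the model, and the train/validation datasets.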
### Evaluation

#### Testing Data, Factors & Metrics
The model was evaluated using a validation set consisting of 20% of the original dataset, with the following metrics:
- **Accuracy:** 98%
- **Precision:** 97%
- **Recall:** 97%
- **F1 Score:** 96%
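For reference, metrics of this kind can be computed with scikit-learn; a minimal sketch, assuming `y_true` and `y_pred` hold integer class indices collected over the 20% validation split:
```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical predictions; in practice these come from running the
# model over the held-out validation images.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted"
)
print(f"accuracy={accuracy:.2%} precision={precision:.2%} recall={recall:.2%} f1={f1:.2%}")
```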
## Environmental Impact
Carbon emissions during model training can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
- **Hardware Type:** NVIDIA L40S
- **Hours used:** 1 hour
- **Cloud Provider:** Lightning AI
## Technical Specifications
### Model Architecture and Objective
The model uses a Vision Transformer architecture to learn image representations and classify them into disease categories. Its self-attention mechanism enables it to capture global contextual information in the images, making it suitable for agricultural disease detection.
### Compute Infrastructure
#### Hardware
- NVIDIA L40S GPUs
- 48 GB RAM
- SSD storage for fast I/O
#### Software
- Python 3.9
- PyTorch 2.4.1+cu121
- pytorch_lightning
- Transformers library by Hugging Face
## Citation
If you use this model in your research or applications, please cite it as:
**BibTeX:**
```
@misc{kinyua2024smartfarming,
title={Smart Farming Disease Detection Transformer},
author={Wambugu Kinyua},
year={2024},
publisher={Hugging Face},
}
```
**APA:**
Kinyua, W. (2024). Smart Farming Disease Detection Transformer. Hugging Face.
## Model Card Contact
For further inquiries, contact: [email protected]
|
[
"corn___common_rust",
"corn___gray_leaf_spot",
"wheat___brown_rust",
"wheat___healthy",
"wheat___yellow_rust",
"corn___healthy",
"invalid",
"potato___early_blight",
"potato___healthy",
"potato___late_blight",
"rice___brown_spot",
"rice___healthy",
"rice___leaf_blast"
] |
howdyaendra/microsoft-swinv2-small-patch4-window16-256-finetuned-xblockm
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-swinv2-small-patch4-window16-256-finetuned-xblockm
This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window16-256](https://huggingface.co/microsoft/swinv2-small-patch4-window16-256) on the [howdyaendra/xblock-social-screenshots](https://huggingface.co/datasets/howdyaendra/xblock-social-screenshots) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1252
- Roc Auc: 0.9535
## Model description
This model is trained on several thousand screenshots reported to the [XBlock 3rd-party Bluesky labeller service](https://bsky.app/profile/xblock.aendra.dev). It is
intended to be used to label Bluesky posts that have screenshots from social media sites embedded in them. Please also see [aendra-rininsland/xblock](https://github.com/aendra-rininsland/xblock).
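As a rough sketch of how the model might be applied to a screenshot: the ROC-AUC metric suggests multi-label training, so the example below applies a sigmoid and keeps every label above a threshold. The 0.5 threshold and the input file name are assumptions, not part of the released pipeline.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "howdyaendra/microsoft-swinv2-small-patch4-window16-256-finetuned-xblockm"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("screenshot.png")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Sigmoid scores per label; keep everything above an assumed 0.5 threshold.
scores = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, s in enumerate(scores) if s > 0.5]
print(predicted)
```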
## Intended uses & limitations
Screenshot moderation
## Training and evaluation data
20% split of 1618 images
## Training procedure
See [notebook](https://github.com/aendra-rininsland/xblock-notebooks/blob/main/xblock-m.ipynb).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4357 | 0.9877 | 20 | 0.2544 | 0.7784 |
| 0.2027 | 1.9753 | 40 | 0.2016 | 0.8431 |
| 0.1743 | 2.9630 | 60 | 0.1701 | 0.8912 |
| 0.1625 | 4.0 | 81 | 0.1677 | 0.9083 |
| 0.1321 | 4.9877 | 101 | 0.1447 | 0.9246 |
| 0.1155 | 5.9753 | 121 | 0.1418 | 0.9311 |
| 0.0959 | 6.9630 | 141 | 0.1381 | 0.9460 |
| 0.0788 | 7.9012 | 160 | 0.1252 | 0.9535 |
### Framework versions
- Transformers 4.44.1
- Pytorch 2.2.2
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"altright",
"bluesky",
"discord",
"facebook",
"fediverse",
"instagram",
"negative",
"news",
"ngl",
"reddit",
"threads",
"tumblr",
"twitter"
] |
wcosmas/convnext-tiny-224-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-papsmear
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2836
- Accuracy: 0.8897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 9 | 1.7808 | 0.1691 |
| 1.8057 | 1.9487 | 19 | 1.6808 | 0.3309 |
| 1.7394 | 2.9744 | 29 | 1.5825 | 0.3382 |
| 1.6408 | 4.0 | 39 | 1.4576 | 0.375 |
| 1.5428 | 4.9231 | 48 | 1.3281 | 0.5221 |
| 1.3931 | 5.9487 | 58 | 1.2044 | 0.5588 |
| 1.2669 | 6.9744 | 68 | 1.0756 | 0.6103 |
| 1.1355 | 8.0 | 78 | 0.9845 | 0.6324 |
| 1.0379 | 8.9231 | 87 | 0.9260 | 0.6618 |
| 0.9571 | 9.9487 | 97 | 0.8539 | 0.6618 |
| 0.8376 | 10.9744 | 107 | 0.7998 | 0.7279 |
| 0.7942 | 12.0 | 117 | 0.7573 | 0.75 |
| 0.7095 | 12.9231 | 126 | 0.7005 | 0.7426 |
| 0.7022 | 13.9487 | 136 | 0.6834 | 0.7868 |
| 0.6504 | 14.9744 | 146 | 0.6552 | 0.7721 |
| 0.589 | 16.0 | 156 | 0.6192 | 0.8015 |
| 0.5679 | 16.9231 | 165 | 0.5738 | 0.8088 |
| 0.5236 | 17.9487 | 175 | 0.5617 | 0.8015 |
| 0.5244 | 18.9744 | 185 | 0.5073 | 0.8235 |
| 0.4781 | 20.0 | 195 | 0.5112 | 0.8162 |
| 0.453 | 20.9231 | 204 | 0.4650 | 0.8235 |
| 0.4544 | 21.9487 | 214 | 0.4591 | 0.8456 |
| 0.419 | 22.9744 | 224 | 0.4403 | 0.8309 |
| 0.4146 | 24.0 | 234 | 0.4292 | 0.8382 |
| 0.398 | 24.9231 | 243 | 0.4315 | 0.8382 |
| 0.3918 | 25.9487 | 253 | 0.3980 | 0.8676 |
| 0.361 | 26.9744 | 263 | 0.3758 | 0.8603 |
| 0.3355 | 28.0 | 273 | 0.3657 | 0.8603 |
| 0.3483 | 28.9231 | 282 | 0.3669 | 0.875 |
| 0.3171 | 29.9487 | 292 | 0.3492 | 0.8603 |
| 0.3249 | 30.9744 | 302 | 0.3400 | 0.875 |
| 0.3087 | 32.0 | 312 | 0.3251 | 0.875 |
| 0.3029 | 32.9231 | 321 | 0.3167 | 0.8824 |
| 0.3018 | 33.9487 | 331 | 0.3192 | 0.875 |
| 0.2823 | 34.9744 | 341 | 0.3066 | 0.875 |
| 0.2744 | 36.0 | 351 | 0.3003 | 0.875 |
| 0.258 | 36.9231 | 360 | 0.2964 | 0.875 |
| 0.2714 | 37.9487 | 370 | 0.3039 | 0.875 |
| 0.2486 | 38.9744 | 380 | 0.2937 | 0.875 |
| 0.2511 | 40.0 | 390 | 0.2739 | 0.8824 |
| 0.2511 | 40.9231 | 399 | 0.2836 | 0.8897 |
| 0.2659 | 41.9487 | 409 | 0.2804 | 0.875 |
| 0.2379 | 42.9744 | 419 | 0.2747 | 0.8824 |
| 0.2279 | 44.0 | 429 | 0.2726 | 0.8897 |
| 0.2153 | 44.9231 | 438 | 0.2732 | 0.8897 |
| 0.2461 | 45.9487 | 448 | 0.2738 | 0.8897 |
| 0.2482 | 46.1538 | 450 | 0.2738 | 0.8897 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
krishna-exe/brain-tumor-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# brain-tumor-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1636
- Accuracy: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.8509 | 0.9877 | 20 | 0.5305 | 0.8467 |
| 0.4478 | 1.9753 | 40 | 0.3092 | 0.9094 |
| 0.3313 | 2.9630 | 60 | 0.2422 | 0.9233 |
| 0.2777 | 4.0 | 81 | 0.1716 | 0.9373 |
| 0.2465 | 4.9383 | 100 | 0.1636 | 0.9373 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"glioma_tumor",
"meningioma_tumor",
"no_tumor",
"pituitary_tumor"
] |
benholloway/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3128
- Accuracy: 0.231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.0523 | 0.992 | 62 | 4.0196 | 0.174 |
| 3.4782 | 2.0 | 125 | 3.4764 | 0.24 |
| 3.2317 | 2.976 | 186 | 3.3128 | 0.231 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.2.2+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
benholloway/my_awesome_food_model_resnet
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model_resnet
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4383
- Accuracy: 0.661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2955 | 0.992 | 62 | 4.2486 | 0.18 |
| 3.4887 | 2.0 | 125 | 3.3899 | 0.312 |
| 2.7039 | 2.992 | 187 | 2.6616 | 0.475 |
| 2.1832 | 4.0 | 250 | 2.1833 | 0.565 |
| 1.946 | 4.992 | 312 | 1.9504 | 0.631 |
| 1.7753 | 6.0 | 375 | 1.7184 | 0.638 |
| 1.666 | 6.992 | 437 | 1.5985 | 0.667 |
| 1.5402 | 8.0 | 500 | 1.4900 | 0.667 |
| 1.5239 | 8.992 | 562 | 1.4500 | 0.665 |
| 1.5147 | 9.92 | 620 | 1.4383 | 0.661 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.2.2+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
gerald29/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [facebook/dinov2-base-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-base-imagenet1k-1-layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
- Accuracy: 0.943
This model was created by following the Transformers tutorial on image classification at https://huggingface.co/docs/transformers/main/en/tasks/image_classification, so it is intended as a learning exercise rather than for practical use.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3989 | 0.992 | 62 | 0.3865 | 0.867 |
| 0.2722 | 2.0 | 125 | 0.2720 | 0.916 |
| 0.126 | 2.976 | 186 | 0.1930 | 0.943 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
davanstrien/test-timm
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-timm
This model is a fine-tuned version of [timm/mobilenetv3_large_100.miil_in21k](https://huggingface.co/timm/mobilenetv3_large_100.miil_in21k) on the davanstrien/zenodo-presentations-open-labels dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4897
- Accuracy: 0.7913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 1337
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 200.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6794 | 1.0 | 23 | 0.6560 | 0.6063 |
| 0.6215 | 2.0 | 46 | 0.5833 | 0.7362 |
| 0.5784 | 3.0 | 69 | 0.5490 | 0.7598 |
| 0.5347 | 4.0 | 92 | 0.5306 | 0.7638 |
| 0.5307 | 5.0 | 115 | 0.5235 | 0.7638 |
| 0.5391 | 6.0 | 138 | 0.5090 | 0.7677 |
| 0.48 | 7.0 | 161 | 0.5108 | 0.7717 |
| 0.473 | 8.0 | 184 | 0.5028 | 0.7756 |
| 0.5014 | 9.0 | 207 | 0.5054 | 0.7717 |
| 0.496 | 10.0 | 230 | 0.5040 | 0.7717 |
| 0.4688 | 11.0 | 253 | 0.4972 | 0.7677 |
| 0.4943 | 12.0 | 276 | 0.4977 | 0.7638 |
| 0.5012 | 13.0 | 299 | 0.5057 | 0.7717 |
| 0.4639 | 14.0 | 322 | 0.5010 | 0.7717 |
| 0.4709 | 15.0 | 345 | 0.4949 | 0.7795 |
| 0.4888 | 16.0 | 368 | 0.4955 | 0.7835 |
| 0.4594 | 17.0 | 391 | 0.4986 | 0.7717 |
| 0.4745 | 18.0 | 414 | 0.5011 | 0.7677 |
| 0.4667 | 19.0 | 437 | 0.4928 | 0.7756 |
| 0.4551 | 20.0 | 460 | 0.5055 | 0.7795 |
| 0.4657 | 21.0 | 483 | 0.4928 | 0.7756 |
| 0.4818 | 22.0 | 506 | 0.5002 | 0.7756 |
| 0.4633 | 23.0 | 529 | 0.4946 | 0.7835 |
| 0.4779 | 24.0 | 552 | 0.4942 | 0.7795 |
| 0.4718 | 25.0 | 575 | 0.4963 | 0.7835 |
| 0.4511 | 26.0 | 598 | 0.5011 | 0.7717 |
| 0.4798 | 27.0 | 621 | 0.4904 | 0.7874 |
| 0.4868 | 28.0 | 644 | 0.4982 | 0.7835 |
| 0.4653 | 29.0 | 667 | 0.4988 | 0.7874 |
| 0.4613 | 30.0 | 690 | 0.4985 | 0.7795 |
| 0.4675 | 31.0 | 713 | 0.5060 | 0.7717 |
| 0.4587 | 32.0 | 736 | 0.5059 | 0.7717 |
| 0.464 | 33.0 | 759 | 0.5042 | 0.7795 |
| 0.4374 | 34.0 | 782 | 0.5063 | 0.7677 |
| 0.4864 | 35.0 | 805 | 0.5040 | 0.7677 |
| 0.4354 | 36.0 | 828 | 0.5109 | 0.7717 |
| 0.4655 | 37.0 | 851 | 0.5107 | 0.7717 |
| 0.4691 | 38.0 | 874 | 0.5093 | 0.7677 |
| 0.4826 | 39.0 | 897 | 0.5044 | 0.7717 |
| 0.4577 | 40.0 | 920 | 0.5000 | 0.7795 |
| 0.4636 | 41.0 | 943 | 0.4963 | 0.7717 |
| 0.4361 | 42.0 | 966 | 0.4958 | 0.7717 |
| 0.4534 | 43.0 | 989 | 0.5008 | 0.7795 |
| 0.4559 | 44.0 | 1012 | 0.5025 | 0.7795 |
| 0.4189 | 45.0 | 1035 | 0.5014 | 0.7756 |
| 0.4861 | 46.0 | 1058 | 0.5004 | 0.7677 |
| 0.4709 | 47.0 | 1081 | 0.5005 | 0.7795 |
| 0.4726 | 48.0 | 1104 | 0.5008 | 0.7717 |
| 0.4441 | 49.0 | 1127 | 0.4988 | 0.7756 |
| 0.4579 | 50.0 | 1150 | 0.5000 | 0.7756 |
| 0.4366 | 51.0 | 1173 | 0.4980 | 0.7756 |
| 0.4467 | 52.0 | 1196 | 0.4947 | 0.7795 |
| 0.4797 | 53.0 | 1219 | 0.4950 | 0.7756 |
| 0.4544 | 54.0 | 1242 | 0.4998 | 0.7717 |
| 0.4466 | 55.0 | 1265 | 0.4980 | 0.7795 |
| 0.4599 | 56.0 | 1288 | 0.4963 | 0.7835 |
| 0.4458 | 57.0 | 1311 | 0.4956 | 0.7874 |
| 0.4296 | 58.0 | 1334 | 0.4994 | 0.7874 |
| 0.4415 | 59.0 | 1357 | 0.4998 | 0.7835 |
| 0.4036 | 60.0 | 1380 | 0.4996 | 0.7795 |
| 0.4406 | 61.0 | 1403 | 0.5022 | 0.7913 |
| 0.4235 | 62.0 | 1426 | 0.5018 | 0.7913 |
| 0.4492 | 63.0 | 1449 | 0.4964 | 0.8031 |
| 0.4065 | 64.0 | 1472 | 0.4953 | 0.7874 |
| 0.4474 | 65.0 | 1495 | 0.4897 | 0.7913 |
| 0.4605 | 66.0 | 1518 | 0.5039 | 0.7795 |
| 0.436 | 67.0 | 1541 | 0.5024 | 0.7756 |
| 0.4746 | 68.0 | 1564 | 0.5007 | 0.7874 |
| 0.4555 | 69.0 | 1587 | 0.5054 | 0.7874 |
| 0.433 | 70.0 | 1610 | 0.4974 | 0.7874 |
| 0.4503 | 71.0 | 1633 | 0.5096 | 0.7795 |
| 0.4424 | 72.0 | 1656 | 0.5040 | 0.7756 |
| 0.4331 | 73.0 | 1679 | 0.5056 | 0.7913 |
| 0.4263 | 74.0 | 1702 | 0.5026 | 0.7874 |
| 0.4305 | 75.0 | 1725 | 0.5033 | 0.7835 |
| 0.4271 | 76.0 | 1748 | 0.5015 | 0.7874 |
| 0.4635 | 77.0 | 1771 | 0.4988 | 0.7913 |
| 0.4212 | 78.0 | 1794 | 0.4994 | 0.7913 |
| 0.4154 | 79.0 | 1817 | 0.5044 | 0.7874 |
| 0.4288 | 80.0 | 1840 | 0.5033 | 0.7913 |
| 0.4211 | 81.0 | 1863 | 0.5050 | 0.7835 |
| 0.4022 | 82.0 | 1886 | 0.5021 | 0.7835 |
| 0.4477 | 83.0 | 1909 | 0.5096 | 0.7756 |
| 0.4091 | 84.0 | 1932 | 0.5017 | 0.7913 |
| 0.4284 | 85.0 | 1955 | 0.5094 | 0.7795 |
| 0.4317 | 86.0 | 1978 | 0.5056 | 0.7874 |
| 0.4011 | 87.0 | 2001 | 0.4992 | 0.7953 |
| 0.4043 | 88.0 | 2024 | 0.5106 | 0.7874 |
| 0.4233 | 89.0 | 2047 | 0.5083 | 0.7835 |
| 0.4383 | 90.0 | 2070 | 0.5016 | 0.7913 |
| 0.4328 | 91.0 | 2093 | 0.5062 | 0.7874 |
| 0.3978 | 92.0 | 2116 | 0.5026 | 0.7874 |
| 0.4052 | 93.0 | 2139 | 0.4964 | 0.7913 |
| 0.3938 | 94.0 | 2162 | 0.5036 | 0.7874 |
| 0.393 | 95.0 | 2185 | 0.5102 | 0.7835 |
| 0.4294 | 96.0 | 2208 | 0.5003 | 0.7874 |
| 0.4122 | 97.0 | 2231 | 0.5013 | 0.7913 |
| 0.4207 | 98.0 | 2254 | 0.5076 | 0.7874 |
| 0.4127 | 99.0 | 2277 | 0.5040 | 0.7835 |
| 0.441 | 100.0 | 2300 | 0.5022 | 0.7835 |
| 0.3938 | 101.0 | 2323 | 0.4975 | 0.7992 |
| 0.4109 | 102.0 | 2346 | 0.5019 | 0.7913 |
| 0.4299 | 103.0 | 2369 | 0.5060 | 0.7874 |
| 0.4148 | 104.0 | 2392 | 0.5038 | 0.7874 |
| 0.4179 | 105.0 | 2415 | 0.5064 | 0.7835 |
| 0.4352 | 106.0 | 2438 | 0.5059 | 0.7874 |
| 0.4027 | 107.0 | 2461 | 0.5025 | 0.7953 |
| 0.4002 | 108.0 | 2484 | 0.5020 | 0.7874 |
| 0.3988 | 109.0 | 2507 | 0.5063 | 0.7874 |
| 0.4095 | 110.0 | 2530 | 0.5034 | 0.7913 |
| 0.4001 | 111.0 | 2553 | 0.5054 | 0.7874 |
| 0.4201 | 112.0 | 2576 | 0.5076 | 0.7992 |
| 0.4134 | 113.0 | 2599 | 0.5070 | 0.7953 |
| 0.3614 | 114.0 | 2622 | 0.5033 | 0.7835 |
| 0.3928 | 115.0 | 2645 | 0.5043 | 0.7874 |
| 0.435 | 116.0 | 2668 | 0.4999 | 0.7874 |
| 0.4162 | 117.0 | 2691 | 0.5132 | 0.7874 |
| 0.4078 | 118.0 | 2714 | 0.5088 | 0.7795 |
| 0.4025 | 119.0 | 2737 | 0.5075 | 0.7835 |
| 0.4096 | 120.0 | 2760 | 0.5023 | 0.7835 |
| 0.3879 | 121.0 | 2783 | 0.5063 | 0.7835 |
| 0.4033 | 122.0 | 2806 | 0.5001 | 0.7874 |
| 0.3927 | 123.0 | 2829 | 0.5087 | 0.7795 |
| 0.3803 | 124.0 | 2852 | 0.5150 | 0.7913 |
| 0.4248 | 125.0 | 2875 | 0.5150 | 0.7835 |
| 0.3874 | 126.0 | 2898 | 0.5158 | 0.7874 |
| 0.3646 | 127.0 | 2921 | 0.4980 | 0.8031 |
| 0.4115 | 128.0 | 2944 | 0.5077 | 0.7913 |
| 0.385 | 129.0 | 2967 | 0.5153 | 0.7913 |
| 0.4064 | 130.0 | 2990 | 0.5114 | 0.7953 |
| 0.4168 | 131.0 | 3013 | 0.5057 | 0.7992 |
| 0.4319 | 132.0 | 3036 | 0.5041 | 0.7953 |
| 0.4234 | 133.0 | 3059 | 0.5119 | 0.7992 |
| 0.3721 | 134.0 | 3082 | 0.5118 | 0.7874 |
| 0.3709 | 135.0 | 3105 | 0.5078 | 0.7913 |
| 0.4149 | 136.0 | 3128 | 0.5164 | 0.7795 |
| 0.416 | 137.0 | 3151 | 0.5123 | 0.7835 |
| 0.406 | 138.0 | 3174 | 0.5116 | 0.7913 |
| 0.3613 | 139.0 | 3197 | 0.5170 | 0.7913 |
| 0.3786 | 140.0 | 3220 | 0.5099 | 0.8031 |
| 0.3976 | 141.0 | 3243 | 0.5111 | 0.7913 |
| 0.371 | 142.0 | 3266 | 0.5081 | 0.7953 |
| 0.4056 | 143.0 | 3289 | 0.5098 | 0.7913 |
| 0.4214 | 144.0 | 3312 | 0.5085 | 0.7953 |
| 0.3832 | 145.0 | 3335 | 0.5084 | 0.7953 |
| 0.3762 | 146.0 | 3358 | 0.5061 | 0.7913 |
| 0.4118 | 147.0 | 3381 | 0.5111 | 0.7992 |
| 0.3866 | 148.0 | 3404 | 0.5092 | 0.8071 |
| 0.3869 | 149.0 | 3427 | 0.5122 | 0.7953 |
| 0.3734 | 150.0 | 3450 | 0.5117 | 0.7953 |
| 0.4061 | 151.0 | 3473 | 0.5095 | 0.7913 |
| 0.3705 | 152.0 | 3496 | 0.5171 | 0.7953 |
| 0.3873 | 153.0 | 3519 | 0.5179 | 0.7953 |
| 0.3927 | 154.0 | 3542 | 0.5117 | 0.7992 |
| 0.3807 | 155.0 | 3565 | 0.5133 | 0.7953 |
| 0.3761 | 156.0 | 3588 | 0.5140 | 0.7913 |
| 0.3964 | 157.0 | 3611 | 0.5118 | 0.7953 |
| 0.39 | 158.0 | 3634 | 0.5122 | 0.8031 |
| 0.3943 | 159.0 | 3657 | 0.5126 | 0.8031 |
| 0.3417 | 160.0 | 3680 | 0.5097 | 0.7992 |
| 0.3996 | 161.0 | 3703 | 0.5048 | 0.7913 |
| 0.4 | 162.0 | 3726 | 0.5148 | 0.7953 |
| 0.4051 | 163.0 | 3749 | 0.5150 | 0.7874 |
| 0.3973 | 164.0 | 3772 | 0.5037 | 0.8031 |
| 0.3963 | 165.0 | 3795 | 0.5048 | 0.7953 |
| 0.3568 | 166.0 | 3818 | 0.5168 | 0.7913 |
| 0.3995 | 167.0 | 3841 | 0.5096 | 0.7913 |
| 0.3628 | 168.0 | 3864 | 0.5102 | 0.7953 |
| 0.3836 | 169.0 | 3887 | 0.5133 | 0.7953 |
| 0.3646 | 170.0 | 3910 | 0.5099 | 0.8031 |
| 0.3789 | 171.0 | 3933 | 0.5151 | 0.7874 |
| 0.3832 | 172.0 | 3956 | 0.5149 | 0.8031 |
| 0.3476 | 173.0 | 3979 | 0.5178 | 0.7835 |
| 0.3806 | 174.0 | 4002 | 0.5081 | 0.7992 |
| 0.4053 | 175.0 | 4025 | 0.5100 | 0.7874 |
| 0.3986 | 176.0 | 4048 | 0.5189 | 0.7992 |
| 0.3827 | 177.0 | 4071 | 0.5129 | 0.7992 |
| 0.3892 | 178.0 | 4094 | 0.5099 | 0.7874 |
| 0.3955 | 179.0 | 4117 | 0.5212 | 0.7992 |
| 0.4077 | 180.0 | 4140 | 0.5102 | 0.7953 |
| 0.3579 | 181.0 | 4163 | 0.5100 | 0.7953 |
| 0.3666 | 182.0 | 4186 | 0.5248 | 0.7835 |
| 0.3746 | 183.0 | 4209 | 0.5220 | 0.7874 |
| 0.3867 | 184.0 | 4232 | 0.5173 | 0.7913 |
| 0.4024 | 185.0 | 4255 | 0.5248 | 0.7874 |
| 0.4014 | 186.0 | 4278 | 0.5085 | 0.7913 |
| 0.3445 | 187.0 | 4301 | 0.5137 | 0.8031 |
| 0.382 | 188.0 | 4324 | 0.5213 | 0.7913 |
| 0.3673 | 189.0 | 4347 | 0.5242 | 0.7913 |
| 0.3631 | 190.0 | 4370 | 0.5146 | 0.7913 |
| 0.393 | 191.0 | 4393 | 0.5098 | 0.7835 |
| 0.3806 | 192.0 | 4416 | 0.5134 | 0.7992 |
| 0.3789 | 193.0 | 4439 | 0.5127 | 0.7992 |
| 0.3717 | 194.0 | 4462 | 0.5184 | 0.7913 |
| 0.361 | 195.0 | 4485 | 0.5186 | 0.7835 |
| 0.3722 | 196.0 | 4508 | 0.5107 | 0.7953 |
| 0.3551 | 197.0 | 4531 | 0.5175 | 0.7953 |
| 0.3649 | 198.0 | 4554 | 0.5136 | 0.7992 |
| 0.3749 | 199.0 | 4577 | 0.5193 | 0.7913 |
| 0.3782 | 200.0 | 4600 | 0.5182 | 0.7992 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"false",
"true"
] |
davanstrien/convnextv2-tiny-1k-224-text
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-tiny-1k-224-text
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on the davanstrien/zenodo-presentations-open-labels dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4554
- Accuracy: 0.7874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 200.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5242 | 1.0 | 23 | 0.4961 | 0.7559 |
| 0.459 | 2.0 | 46 | 0.5001 | 0.7638 |
| 0.4429 | 3.0 | 69 | 0.4554 | 0.7874 |
| 0.4308 | 4.0 | 92 | 0.4924 | 0.7638 |
| 0.4319 | 5.0 | 115 | 0.4673 | 0.7874 |
| 0.4047 | 6.0 | 138 | 0.4930 | 0.7756 |
| 0.425 | 7.0 | 161 | 0.4739 | 0.7795 |
| 0.4102 | 8.0 | 184 | 0.5118 | 0.7598 |
| 0.3959 | 9.0 | 207 | 0.5490 | 0.7480 |
| 0.365 | 10.0 | 230 | 0.5261 | 0.7638 |
| 0.4214 | 11.0 | 253 | 0.5089 | 0.7795 |
| 0.3798 | 12.0 | 276 | 0.4711 | 0.7992 |
| 0.3906 | 13.0 | 299 | 0.5035 | 0.7913 |
| 0.3706 | 14.0 | 322 | 0.4933 | 0.7953 |
| 0.3766 | 15.0 | 345 | 0.4973 | 0.7992 |
| 0.3213 | 16.0 | 368 | 0.5221 | 0.7874 |
| 0.329 | 17.0 | 391 | 0.5400 | 0.7835 |
| 0.3427 | 18.0 | 414 | 0.5252 | 0.7913 |
| 0.3472 | 19.0 | 437 | 0.6208 | 0.7441 |
| 0.3424 | 20.0 | 460 | 0.5320 | 0.7795 |
| 0.3016 | 21.0 | 483 | 0.5488 | 0.7795 |
| 0.3033 | 22.0 | 506 | 0.5889 | 0.7480 |
| 0.3083 | 23.0 | 529 | 0.6108 | 0.7638 |
| 0.2772 | 24.0 | 552 | 0.5845 | 0.7480 |
| 0.287 | 25.0 | 575 | 0.5242 | 0.8071 |
| 0.2651 | 26.0 | 598 | 0.6276 | 0.7598 |
| 0.2696 | 27.0 | 621 | 0.5649 | 0.7835 |
| 0.2701 | 28.0 | 644 | 0.6103 | 0.7756 |
| 0.2451 | 29.0 | 667 | 0.6207 | 0.7638 |
| 0.2705 | 30.0 | 690 | 0.5990 | 0.7756 |
| 0.2553 | 31.0 | 713 | 0.5962 | 0.7835 |
| 0.2559 | 32.0 | 736 | 0.6681 | 0.7717 |
| 0.2405 | 33.0 | 759 | 0.5917 | 0.7638 |
| 0.2707 | 34.0 | 782 | 0.5906 | 0.7638 |
| 0.3004 | 35.0 | 805 | 0.5905 | 0.7874 |
| 0.2404 | 36.0 | 828 | 0.5914 | 0.7677 |
| 0.242 | 37.0 | 851 | 0.7637 | 0.7638 |
| 0.2221 | 38.0 | 874 | 0.7117 | 0.7598 |
| 0.2196 | 39.0 | 897 | 0.6442 | 0.7835 |
| 0.23 | 40.0 | 920 | 0.7011 | 0.7717 |
| 0.2045 | 41.0 | 943 | 0.7822 | 0.7598 |
| 0.2043 | 42.0 | 966 | 0.7339 | 0.7520 |
| 0.2413 | 43.0 | 989 | 0.6917 | 0.7677 |
| 0.2135 | 44.0 | 1012 | 0.6954 | 0.7717 |
| 0.2194 | 45.0 | 1035 | 0.6729 | 0.7795 |
| 0.211 | 46.0 | 1058 | 0.6841 | 0.7835 |
| 0.2155 | 47.0 | 1081 | 0.7108 | 0.7677 |
| 0.2231 | 48.0 | 1104 | 0.6758 | 0.7677 |
| 0.2364 | 49.0 | 1127 | 0.7747 | 0.7520 |
| 0.222 | 50.0 | 1150 | 0.7104 | 0.7638 |
| 0.2018 | 51.0 | 1173 | 0.6885 | 0.7953 |
| 0.219 | 52.0 | 1196 | 0.7609 | 0.7520 |
| 0.1916 | 53.0 | 1219 | 0.8394 | 0.7677 |
| 0.1767 | 54.0 | 1242 | 0.7910 | 0.7717 |
| 0.236 | 55.0 | 1265 | 0.7601 | 0.7756 |
| 0.1898 | 56.0 | 1288 | 0.7501 | 0.7717 |
| 0.1876 | 57.0 | 1311 | 0.7492 | 0.7756 |
| 0.1592 | 58.0 | 1334 | 0.7905 | 0.7638 |
| 0.1772 | 59.0 | 1357 | 0.7411 | 0.7717 |
| 0.1787 | 60.0 | 1380 | 0.8145 | 0.7795 |
| 0.1782 | 61.0 | 1403 | 0.7721 | 0.7795 |
| 0.1781 | 62.0 | 1426 | 0.8022 | 0.7835 |
| 0.1884 | 63.0 | 1449 | 0.8630 | 0.7756 |
| 0.1905 | 64.0 | 1472 | 0.7472 | 0.7953 |
| 0.16 | 65.0 | 1495 | 0.7761 | 0.7874 |
| 0.1619 | 66.0 | 1518 | 0.8586 | 0.7795 |
| 0.1768 | 67.0 | 1541 | 0.7700 | 0.7835 |
| 0.1395 | 68.0 | 1564 | 0.8326 | 0.7717 |
| 0.1536 | 69.0 | 1587 | 0.8442 | 0.7756 |
| 0.208 | 70.0 | 1610 | 0.9289 | 0.7677 |
| 0.1783 | 71.0 | 1633 | 0.9022 | 0.7638 |
| 0.1572 | 72.0 | 1656 | 0.8510 | 0.7677 |
| 0.1349 | 73.0 | 1679 | 0.7962 | 0.7677 |
| 0.148 | 74.0 | 1702 | 0.8641 | 0.7756 |
| 0.1768 | 75.0 | 1725 | 0.9277 | 0.7677 |
| 0.1833 | 76.0 | 1748 | 0.8663 | 0.7638 |
| 0.1696 | 77.0 | 1771 | 0.8302 | 0.7756 |
| 0.1577 | 78.0 | 1794 | 0.8576 | 0.7638 |
| 0.1724 | 79.0 | 1817 | 0.8652 | 0.7598 |
| 0.1525 | 80.0 | 1840 | 0.8567 | 0.7717 |
| 0.158 | 81.0 | 1863 | 0.9139 | 0.7598 |
| 0.1639 | 82.0 | 1886 | 0.9689 | 0.7520 |
| 0.1424 | 83.0 | 1909 | 0.9698 | 0.7638 |
| 0.1224 | 84.0 | 1932 | 1.0239 | 0.7717 |
| 0.1765 | 85.0 | 1955 | 0.9072 | 0.7795 |
| 0.1726 | 86.0 | 1978 | 0.9436 | 0.7520 |
| 0.1584 | 87.0 | 2001 | 0.8775 | 0.7638 |
| 0.164 | 88.0 | 2024 | 0.8592 | 0.7717 |
| 0.1682 | 89.0 | 2047 | 0.9051 | 0.7638 |
| 0.1455 | 90.0 | 2070 | 1.0020 | 0.7717 |
| 0.1596 | 91.0 | 2093 | 0.9423 | 0.7677 |
| 0.1667 | 92.0 | 2116 | 0.9586 | 0.7638 |
| 0.132 | 93.0 | 2139 | 0.9890 | 0.7638 |
| 0.1335 | 94.0 | 2162 | 0.9922 | 0.7717 |
| 0.1538 | 95.0 | 2185 | 0.9534 | 0.7520 |
| 0.1288 | 96.0 | 2208 | 1.0714 | 0.7480 |
| 0.1661 | 97.0 | 2231 | 0.9950 | 0.7598 |
| 0.1392 | 98.0 | 2254 | 0.9866 | 0.7520 |
| 0.1413 | 99.0 | 2277 | 1.0638 | 0.7598 |
| 0.1619 | 100.0 | 2300 | 1.0178 | 0.7598 |
| 0.1537 | 101.0 | 2323 | 0.9892 | 0.7638 |
| 0.137 | 102.0 | 2346 | 0.9524 | 0.7559 |
| 0.1416 | 103.0 | 2369 | 1.0539 | 0.7402 |
| 0.1477 | 104.0 | 2392 | 1.0825 | 0.7283 |
| 0.1283 | 105.0 | 2415 | 1.0008 | 0.7520 |
| 0.1498 | 106.0 | 2438 | 0.9702 | 0.7638 |
| 0.1576 | 107.0 | 2461 | 1.0144 | 0.7677 |
| 0.1433 | 108.0 | 2484 | 0.9457 | 0.7638 |
| 0.1377 | 109.0 | 2507 | 0.9770 | 0.7677 |
| 0.1163 | 110.0 | 2530 | 1.1386 | 0.7559 |
| 0.1449 | 111.0 | 2553 | 1.0589 | 0.7559 |
| 0.1475 | 112.0 | 2576 | 1.0110 | 0.7480 |
| 0.1582 | 113.0 | 2599 | 0.9657 | 0.7677 |
| 0.1291 | 114.0 | 2622 | 0.9563 | 0.7756 |
| 0.1106 | 115.0 | 2645 | 1.1004 | 0.7480 |
| 0.1339 | 116.0 | 2668 | 1.0327 | 0.7520 |
| 0.1344 | 117.0 | 2691 | 1.0161 | 0.7520 |
| 0.1433 | 118.0 | 2714 | 1.0312 | 0.7559 |
| 0.1271 | 119.0 | 2737 | 1.0266 | 0.7598 |
| 0.1222 | 120.0 | 2760 | 1.0119 | 0.7638 |
| 0.1235 | 121.0 | 2783 | 1.0808 | 0.7520 |
| 0.1311 | 122.0 | 2806 | 1.0612 | 0.7520 |
| 0.1219 | 123.0 | 2829 | 1.1412 | 0.7520 |
| 0.148 | 124.0 | 2852 | 1.0836 | 0.7402 |
| 0.1076 | 125.0 | 2875 | 1.0629 | 0.7559 |
| 0.1306 | 126.0 | 2898 | 1.0791 | 0.7362 |
| 0.1153 | 127.0 | 2921 | 1.1495 | 0.7402 |
| 0.1239 | 128.0 | 2944 | 1.1446 | 0.7520 |
| 0.1533 | 129.0 | 2967 | 1.0818 | 0.7441 |
| 0.136 | 130.0 | 2990 | 1.0558 | 0.7520 |
| 0.1189 | 131.0 | 3013 | 1.0423 | 0.7520 |
| 0.1247 | 132.0 | 3036 | 1.0581 | 0.7638 |
| 0.1136 | 133.0 | 3059 | 1.0132 | 0.7717 |
| 0.1492 | 134.0 | 3082 | 1.1127 | 0.7441 |
| 0.1184 | 135.0 | 3105 | 1.1450 | 0.7402 |
| 0.1122 | 136.0 | 3128 | 1.1063 | 0.7520 |
| 0.1047 | 137.0 | 3151 | 1.1029 | 0.7441 |
| 0.1285 | 138.0 | 3174 | 1.1563 | 0.7402 |
| 0.1004 | 139.0 | 3197 | 1.1552 | 0.7362 |
| 0.1285 | 140.0 | 3220 | 1.1097 | 0.7480 |
| 0.1257 | 141.0 | 3243 | 1.1602 | 0.7402 |
| 0.1075 | 142.0 | 3266 | 1.1912 | 0.7559 |
| 0.1098 | 143.0 | 3289 | 1.1894 | 0.7520 |
| 0.1148 | 144.0 | 3312 | 1.1551 | 0.7441 |
| 0.1489 | 145.0 | 3335 | 1.1379 | 0.7441 |
| 0.1461 | 146.0 | 3358 | 1.1726 | 0.7480 |
| 0.1171 | 147.0 | 3381 | 1.1191 | 0.7441 |
| 0.1262 | 148.0 | 3404 | 1.1662 | 0.7441 |
| 0.1137 | 149.0 | 3427 | 1.1283 | 0.7480 |
| 0.1118 | 150.0 | 3450 | 1.1388 | 0.7480 |
| 0.1169 | 151.0 | 3473 | 1.1627 | 0.7520 |
| 0.1021 | 152.0 | 3496 | 1.1821 | 0.7323 |
| 0.1392 | 153.0 | 3519 | 1.1672 | 0.7323 |
| 0.1111 | 154.0 | 3542 | 1.2136 | 0.7402 |
| 0.1298 | 155.0 | 3565 | 1.1966 | 0.7402 |
| 0.1114 | 156.0 | 3588 | 1.1382 | 0.7362 |
| 0.09 | 157.0 | 3611 | 1.1460 | 0.7323 |
| 0.1294 | 158.0 | 3634 | 1.1612 | 0.7441 |
| 0.1186 | 159.0 | 3657 | 1.2204 | 0.7402 |
| 0.1096 | 160.0 | 3680 | 1.2096 | 0.7441 |
| 0.1107 | 161.0 | 3703 | 1.1822 | 0.7480 |
| 0.1094 | 162.0 | 3726 | 1.1908 | 0.7480 |
| 0.1112 | 163.0 | 3749 | 1.1647 | 0.7402 |
| 0.1042 | 164.0 | 3772 | 1.2523 | 0.7441 |
| 0.0993 | 165.0 | 3795 | 1.2040 | 0.7402 |
| 0.105 | 166.0 | 3818 | 1.2296 | 0.7402 |
| 0.1071 | 167.0 | 3841 | 1.2863 | 0.7480 |
| 0.108 | 168.0 | 3864 | 1.2372 | 0.7441 |
| 0.1076 | 169.0 | 3887 | 1.1872 | 0.7480 |
| 0.1107 | 170.0 | 3910 | 1.2354 | 0.7323 |
| 0.1012 | 171.0 | 3933 | 1.2105 | 0.7441 |
| 0.0918 | 172.0 | 3956 | 1.2026 | 0.7441 |
| 0.1043 | 173.0 | 3979 | 1.2925 | 0.7559 |
| 0.1035 | 174.0 | 4002 | 1.2314 | 0.7402 |
| 0.1101 | 175.0 | 4025 | 1.1943 | 0.7441 |
| 0.1084 | 176.0 | 4048 | 1.2069 | 0.7362 |
| 0.1247 | 177.0 | 4071 | 1.2303 | 0.7520 |
| 0.1278 | 178.0 | 4094 | 1.2118 | 0.7480 |
| 0.1117 | 179.0 | 4117 | 1.2213 | 0.7480 |
| 0.1123 | 180.0 | 4140 | 1.2403 | 0.7480 |
| 0.0918 | 181.0 | 4163 | 1.1987 | 0.7441 |
| 0.0827 | 182.0 | 4186 | 1.2358 | 0.7441 |
| 0.0814 | 183.0 | 4209 | 1.2608 | 0.7441 |
| 0.0897 | 184.0 | 4232 | 1.2370 | 0.7441 |
| 0.1321 | 185.0 | 4255 | 1.2317 | 0.7480 |
| 0.1194 | 186.0 | 4278 | 1.2289 | 0.7441 |
| 0.1154 | 187.0 | 4301 | 1.1964 | 0.7441 |
| 0.0964 | 188.0 | 4324 | 1.2009 | 0.7441 |
| 0.0903 | 189.0 | 4347 | 1.2123 | 0.7441 |
| 0.1174 | 190.0 | 4370 | 1.2335 | 0.7441 |
| 0.0846 | 191.0 | 4393 | 1.2399 | 0.7441 |
| 0.1073 | 192.0 | 4416 | 1.2432 | 0.7441 |
| 0.0892 | 193.0 | 4439 | 1.2604 | 0.7480 |
| 0.1158 | 194.0 | 4462 | 1.2473 | 0.7480 |
| 0.1153 | 195.0 | 4485 | 1.2267 | 0.7441 |
| 0.1208 | 196.0 | 4508 | 1.2178 | 0.7441 |
| 0.083 | 197.0 | 4531 | 1.2145 | 0.7480 |
| 0.1331 | 198.0 | 4554 | 1.2215 | 0.7441 |
| 0.0943 | 199.0 | 4577 | 1.2238 | 0.7441 |
| 0.0926 | 200.0 | 4600 | 1.2236 | 0.7441 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"false",
"true"
] |
wcosmas/resnet-18-finetuned-papsmear
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-18-finetuned-papsmear
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2838
- Accuracy: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the batch-size arithmetic is sketched after the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
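A minimal sketch of the batch-size and warmup arithmetic implied by these settings; the 450-step count is read from the results table below, not stated as a hyperparameter:
```python
# Sketch of the batch-size arithmetic implied by the settings above.
train_batch_size = 32
gradient_accumulation_steps = 4

# The effective (total) train batch size is the per-device batch size times
# the number of accumulation steps: 32 * 4 = 128, matching the card.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 128

# With lr_scheduler_warmup_ratio = 0.1, the linear scheduler warms up over
# the first 10% of optimizer steps; for the 450 steps shown in the results
# table below that would be about 45 warmup steps (an inference, not a
# value stated in the card).
approx_warmup_steps = round(0.1 * 450)
```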
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.9231 | 9 | 1.9256 | 0.1691 |
| 1.9692 | 1.9487 | 19 | 1.6557 | 0.2868 |
| 1.7979 | 2.9744 | 29 | 1.3300 | 0.5368 |
| 1.5079 | 4.0 | 39 | 1.0482 | 0.6324 |
| 1.217 | 4.9231 | 48 | 0.9019 | 0.6618 |
| 0.9536 | 5.9487 | 58 | 0.7687 | 0.6691 |
| 0.7881 | 6.9744 | 68 | 0.6150 | 0.7721 |
| 0.68 | 8.0 | 78 | 0.5481 | 0.7868 |
| 0.5678 | 8.9231 | 87 | 0.5341 | 0.7868 |
| 0.5169 | 9.9487 | 97 | 0.4800 | 0.7941 |
| 0.4838 | 10.9744 | 107 | 0.4356 | 0.8235 |
| 0.4738 | 12.0 | 117 | 0.4573 | 0.8162 |
| 0.3798 | 12.9231 | 126 | 0.4263 | 0.8088 |
| 0.3431 | 13.9487 | 136 | 0.4159 | 0.8382 |
| 0.3282 | 14.9744 | 146 | 0.3787 | 0.8603 |
| 0.3167 | 16.0 | 156 | 0.4234 | 0.8382 |
| 0.3186 | 16.9231 | 165 | 0.3853 | 0.8235 |
| 0.2568 | 17.9487 | 175 | 0.3904 | 0.8456 |
| 0.2528 | 18.9744 | 185 | 0.4013 | 0.8309 |
| 0.2661 | 20.0 | 195 | 0.3275 | 0.8824 |
| 0.2287 | 20.9231 | 204 | 0.3219 | 0.8824 |
| 0.2465 | 21.9487 | 214 | 0.3410 | 0.8529 |
| 0.2422 | 22.9744 | 224 | 0.3256 | 0.8603 |
| 0.222 | 24.0 | 234 | 0.3232 | 0.875 |
| 0.1917 | 24.9231 | 243 | 0.3307 | 0.8676 |
| 0.194 | 25.9487 | 253 | 0.3146 | 0.8971 |
| 0.212 | 26.9744 | 263 | 0.3125 | 0.8897 |
| 0.1718 | 28.0 | 273 | 0.3015 | 0.9044 |
| 0.1975 | 28.9231 | 282 | 0.3195 | 0.8824 |
| 0.1948 | 29.9487 | 292 | 0.3536 | 0.8971 |
| 0.1809 | 30.9744 | 302 | 0.3105 | 0.875 |
| 0.1744 | 32.0 | 312 | 0.3032 | 0.8824 |
| 0.1731 | 32.9231 | 321 | 0.2936 | 0.8971 |
| 0.1513 | 33.9487 | 331 | 0.2889 | 0.8824 |
| 0.1527 | 34.9744 | 341 | 0.2875 | 0.8897 |
| 0.1693 | 36.0 | 351 | 0.2754 | 0.8897 |
| 0.1743 | 36.9231 | 360 | 0.2875 | 0.8971 |
| 0.1463 | 37.9487 | 370 | 0.2961 | 0.8971 |
| 0.1429 | 38.9744 | 380 | 0.2848 | 0.8971 |
| 0.1483 | 40.0 | 390 | 0.2873 | 0.8897 |
| 0.1483 | 40.9231 | 399 | 0.2856 | 0.875 |
| 0.1613 | 41.9487 | 409 | 0.2801 | 0.8971 |
| 0.1358 | 42.9744 | 419 | 0.2838 | 0.9118 |
| 0.1453 | 44.0 | 429 | 0.2783 | 0.8971 |
| 0.1383 | 44.9231 | 438 | 0.2897 | 0.8897 |
| 0.1655 | 45.9487 | 448 | 0.2847 | 0.9044 |
| 0.1489 | 46.1538 | 450 | 0.2861 | 0.8897 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"ascus",
"cancer",
"hsil",
"lsil",
"nilm",
"non-diagnostic"
] |
vony227/vit-base-patch16-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set (the throughput arithmetic is unpacked after the list):
- eval_loss: 2.4052
- eval_model_preparation_time: 0.0118
- eval_accuracy: 0.1337
- eval_runtime: 253.0403
- eval_samples_per_second: 10.67
- eval_steps_per_second: 0.336
- step: 0
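The runtime and throughput figures above imply the size of the evaluation split; a minimal sketch of that arithmetic (the split size itself is an inference, not stated in the card):
```python
# Sketch of what the evaluation keys above mean. eval_runtime and
# eval_samples_per_second are reported by transformers.Trainer.evaluate();
# multiplying them back recovers the approximate evaluation-set size.
eval_runtime = 253.0403          # seconds, from the card
samples_per_second = 10.67       # from the card
approx_eval_samples = round(eval_runtime * samples_per_second)
print(approx_eval_samples)       # roughly 2700 images in the evaluation split
```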
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
aningddd/vit-base
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3177
- Accuracy: 0.4987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6698 | 1.0 | 48 | 1.5900 | 0.2539 |
| 1.4981 | 2.0 | 96 | 1.4551 | 0.3835 |
| 1.2747 | 3.0 | 144 | 1.3591 | 0.4408 |
| 1.0701 | 4.0 | 192 | 1.3058 | 0.4902 |
| 0.7885 | 5.0 | 240 | 1.3177 | 0.4987 |
| 0.6023 | 6.0 | 288 | 1.3985 | 0.4870 |
| 0.4814 | 7.0 | 336 | 1.4607 | 0.4824 |
| 0.3708 | 8.0 | 384 | 1.5195 | 0.4720 |
| 0.2755 | 9.0 | 432 | 1.5524 | 0.4798 |
| 0.2476 | 10.0 | 480 | 1.5632 | 0.4792 |
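Note that the headline Loss/Accuracy above correspond to the epoch-5 row, after which validation loss keeps rising (a typical overfitting pattern). One common way to report the best rather than the last checkpoint is `load_best_model_at_end`; the sketch below is an assumption, not something the card confirms:
```python
from transformers import TrainingArguments

# Hypothetical sketch: selecting the epoch-5 checkpoint automatically.
# None of these flags are confirmed by the card; they illustrate one common
# way the best row of the table above ends up as the reported result.
args = TrainingArguments(
    output_dir="vit-base",             # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,       # reload the best checkpoint when done
    metric_for_best_model="accuracy",  # pick the row with the highest accuracy
)
```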
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5"
] |
ManhManhManh123/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3342
- Accuracy: 0.919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2451 | 0.992 | 62 | 0.4081 | 0.906 |
| 0.1395 | 2.0 | 125 | 0.3829 | 0.905 |
| 0.1087 | 2.992 | 187 | 0.3393 | 0.919 |
| 0.0848 | 4.0 | 250 | 0.3120 | 0.927 |
| 0.1408 | 4.96 | 310 | 0.3342 | 0.919 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
DanukaLakshan/swin-tiny-patch4-window7-224-finetuned-skin-cancer
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-skin-cancer
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3211
- Accuracy: 0.8772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.8717 | 0.9929 | 35 | 0.8160 | 0.6916 |
| 0.6289 | 1.9858 | 70 | 0.5764 | 0.8034 |
| 0.4878 | 2.9787 | 105 | 0.4994 | 0.8174 |
| 0.4392 | 4.0 | 141 | 0.4301 | 0.8493 |
| 0.3867 | 4.9929 | 176 | 0.4034 | 0.8573 |
| 0.3653 | 5.9858 | 211 | 0.3476 | 0.8693 |
| 0.3359 | 6.9787 | 246 | 0.3681 | 0.8643 |
| 0.2865 | 8.0 | 282 | 0.3578 | 0.8653 |
| 0.3041 | 8.9929 | 317 | 0.3245 | 0.8792 |
| 0.2869 | 9.9291 | 350 | 0.3211 | 0.8772 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"actinic-keratoses",
"basal-cell-carcinoma",
"benign-keratosis-like-lesions",
"dermatofibroma",
"melanocytic-nevi",
"melanoma",
"vascular-lesions"
] |
Najathpathiyil/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0793
- Accuracy: 0.9741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2912 | 1.0 | 190 | 0.1284 | 0.96 |
| 0.1963 | 2.0 | 380 | 0.0975 | 0.9715 |
| 0.1511 | 3.0 | 570 | 0.0793 | 0.9741 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"annual crop",
"forest",
"herbaceous vegetation",
"highway",
"industrial",
"pasture",
"permanent crop",
"residential",
"river",
"sea or lake"
] |
Augusto777/swinv2-tiny-patch4-window8-256-Ocular-Toxoplasmosis
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-Ocular-Toxoplasmosis
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5167
- Accuracy: 0.8387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.7273 | 2 | 1.4057 | 0.2419 |
| No log | 1.8182 | 5 | 1.2100 | 0.4677 |
| No log | 2.9091 | 8 | 1.1808 | 0.4516 |
| 1.3062 | 4.0 | 11 | 1.0975 | 0.5968 |
| 1.3062 | 4.7273 | 13 | 1.0542 | 0.6613 |
| 1.3062 | 5.8182 | 16 | 0.9857 | 0.6613 |
| 1.3062 | 6.9091 | 19 | 0.9176 | 0.6774 |
| 1.0003 | 8.0 | 22 | 0.8761 | 0.6774 |
| 1.0003 | 8.7273 | 24 | 0.8540 | 0.6774 |
| 1.0003 | 9.8182 | 27 | 0.7777 | 0.6613 |
| 0.8096 | 10.9091 | 30 | 0.7498 | 0.6613 |
| 0.8096 | 12.0 | 33 | 0.7569 | 0.6613 |
| 0.8096 | 12.7273 | 35 | 0.7422 | 0.6774 |
| 0.8096 | 13.8182 | 38 | 0.7278 | 0.7097 |
| 0.6556 | 14.9091 | 41 | 0.6877 | 0.7258 |
| 0.6556 | 16.0 | 44 | 0.6433 | 0.7258 |
| 0.6556 | 16.7273 | 46 | 0.6324 | 0.7419 |
| 0.6556 | 17.8182 | 49 | 0.6390 | 0.7419 |
| 0.5725 | 18.9091 | 52 | 0.6504 | 0.7742 |
| 0.5725 | 20.0 | 55 | 0.6145 | 0.7581 |
| 0.5725 | 20.7273 | 57 | 0.5824 | 0.7903 |
| 0.5057 | 21.8182 | 60 | 0.5476 | 0.8226 |
| 0.5057 | 22.9091 | 63 | 0.5413 | 0.8226 |
| 0.5057 | 24.0 | 66 | 0.5335 | 0.8226 |
| 0.5057 | 24.7273 | 68 | 0.5302 | 0.8226 |
| 0.4945 | 25.8182 | 71 | 0.5231 | 0.8226 |
| 0.4945 | 26.9091 | 74 | 0.5167 | 0.8387 |
| 0.4945 | 28.0 | 77 | 0.5132 | 0.8387 |
| 0.4945 | 28.7273 | 79 | 0.5131 | 0.8387 |
| 0.4883 | 29.0909 | 80 | 0.5131 | 0.8387 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"active",
"active-inactive",
"healthy",
"inactive"
] |
Augusto777/swinv2-tiny-patch4-window8-256-Ocular-Toxoplasmosis-DA
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-Ocular-Toxoplasmosis-DA
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5075
- Accuracy: 0.8548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.3402 | 0.9630 | 13 | 1.1682 | 0.5484 |
| 1.1725 | 2.0 | 27 | 1.0025 | 0.6290 |
| 0.8824 | 2.9630 | 40 | 0.7644 | 0.6613 |
| 0.7342 | 4.0 | 54 | 0.5840 | 0.7258 |
| 0.6734 | 4.9630 | 67 | 0.6754 | 0.6452 |
| 0.5167 | 6.0 | 81 | 0.5904 | 0.6935 |
| 0.5009 | 6.9630 | 94 | 0.5549 | 0.6935 |
| 0.4988 | 8.0 | 108 | 0.6204 | 0.6774 |
| 0.3856 | 8.9630 | 121 | 0.4463 | 0.8226 |
| 0.4057 | 10.0 | 135 | 0.5232 | 0.7903 |
| 0.3929 | 10.9630 | 148 | 0.4580 | 0.8387 |
| 0.3638 | 12.0 | 162 | 0.5115 | 0.7742 |
| 0.3248 | 12.9630 | 175 | 0.5313 | 0.7742 |
| 0.2673 | 14.0 | 189 | 0.5203 | 0.7903 |
| 0.2922 | 14.9630 | 202 | 0.4315 | 0.8387 |
| 0.2803 | 16.0 | 216 | 0.4577 | 0.8387 |
| 0.2735 | 16.9630 | 229 | 0.5467 | 0.8065 |
| 0.2586 | 18.0 | 243 | 0.5236 | 0.8387 |
| 0.2366 | 18.9630 | 256 | 0.5075 | 0.8548 |
| 0.2347 | 20.0 | 270 | 0.5179 | 0.8387 |
| 0.2046 | 20.9630 | 283 | 0.5428 | 0.8387 |
| 0.2289 | 22.0 | 297 | 0.5748 | 0.8387 |
| 0.2195 | 22.9630 | 310 | 0.5969 | 0.8226 |
| 0.2224 | 24.0 | 324 | 0.6092 | 0.8226 |
| 0.2167 | 24.9630 | 337 | 0.6333 | 0.8226 |
| 0.1956 | 26.0 | 351 | 0.5993 | 0.8226 |
| 0.2174 | 26.9630 | 364 | 0.6063 | 0.8548 |
| 0.1999 | 28.0 | 378 | 0.6414 | 0.8387 |
| 0.1667 | 28.9630 | 391 | 0.6297 | 0.8387 |
| 0.1835 | 30.0 | 405 | 0.6149 | 0.8226 |
| 0.186 | 30.9630 | 418 | 0.6430 | 0.8387 |
| 0.1749 | 32.0 | 432 | 0.6678 | 0.8387 |
| 0.1663 | 32.9630 | 445 | 0.6829 | 0.8387 |
| 0.1557 | 34.0 | 459 | 0.6557 | 0.8387 |
| 0.1913 | 34.9630 | 472 | 0.6275 | 0.8387 |
| 0.1775 | 36.0 | 486 | 0.6555 | 0.8548 |
| 0.152 | 36.9630 | 499 | 0.6653 | 0.8548 |
| 0.1897 | 38.0 | 513 | 0.6682 | 0.8548 |
| 0.1589 | 38.5185 | 520 | 0.6679 | 0.8548 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"active",
"active-inactive",
"healthy",
"inactive"
] |
100rab25/bridalMakeupClassifier_binary
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bridalMakeupClassifier_binary
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set (a sketch of a matching `compute_metrics` function follows the list):
- Loss: 0.0072
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
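A minimal sketch of a `compute_metrics` function that would produce these four columns for a binary task; the use of scikit-learn here is an assumption, not taken from the card:
```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Sketch (assumed, not from the card) of a metrics function yielding
    the accuracy/precision/recall/f1 values reported for this binary task."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```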
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2966 | 1.0 | 23 | 0.1290 | 0.9662 | 0.9432 | 0.9326 | 0.9379 |
| 0.1233 | 2.0 | 46 | 0.0407 | 0.9877 | 0.9670 | 0.9888 | 0.9778 |
| 0.0469 | 3.0 | 69 | 0.0594 | 0.9815 | 0.9368 | 1.0 | 0.9674 |
| 0.0394 | 4.0 | 92 | 0.0557 | 0.9877 | 0.9670 | 0.9888 | 0.9778 |
| 0.0909 | 5.0 | 115 | 0.0401 | 0.9908 | 0.9674 | 1.0 | 0.9834 |
| 0.05 | 6.0 | 138 | 0.0252 | 0.9877 | 0.9670 | 0.9888 | 0.9778 |
| 0.0451 | 7.0 | 161 | 0.0279 | 0.9877 | 0.9885 | 0.9663 | 0.9773 |
| 0.0231 | 8.0 | 184 | 0.0278 | 0.9938 | 0.9780 | 1.0 | 0.9889 |
| 0.0404 | 9.0 | 207 | 0.0256 | 0.9877 | 0.9775 | 0.9775 | 0.9775 |
| 0.0297 | 10.0 | 230 | 0.0260 | 0.9908 | 0.9778 | 0.9888 | 0.9832 |
| 0.0327 | 11.0 | 253 | 0.0230 | 0.9938 | 0.9780 | 1.0 | 0.9889 |
| 0.0221 | 12.0 | 276 | 0.0140 | 0.9969 | 0.9889 | 1.0 | 0.9944 |
| 0.0294 | 13.0 | 299 | 0.0106 | 0.9969 | 0.9889 | 1.0 | 0.9944 |
| 0.0292 | 14.0 | 322 | 0.0132 | 0.9969 | 0.9889 | 1.0 | 0.9944 |
| 0.0064 | 15.0 | 345 | 0.0231 | 0.9908 | 0.9674 | 1.0 | 0.9834 |
| 0.02 | 16.0 | 368 | 0.0087 | 0.9969 | 0.9889 | 1.0 | 0.9944 |
| 0.0356 | 17.0 | 391 | 0.0114 | 0.9969 | 0.9889 | 1.0 | 0.9944 |
| 0.0232 | 18.0 | 414 | 0.0072 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0351 | 19.0 | 437 | 0.0087 | 0.9969 | 0.9889 | 1.0 | 0.9944 |
| 0.0155 | 20.0 | 460 | 0.0075 | 0.9969 | 0.9889 | 1.0 | 0.9944 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
[
"bridalmakeup",
"others"
] |
kite007/study_resnet-18
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
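As a placeholder until the card is completed, a minimal sketch assuming the repository hosts a standard 🤗 image-classification model (the task and usage are inferred from the model name and its ImageNet-style label list, not confirmed by the card):
```python
from transformers import pipeline

# Hypothetical sketch -- the card itself provides no usage code. The repo id
# and the image-classification task are assumptions based on the model name
# and its ImageNet-1k label list.
classifier = pipeline("image-classification", model="kite007/study_resnet-18")
print(classifier("path/to/image.jpg"))  # top ImageNet-1k predictions
```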
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
DouglasBraga/swin-tiny-patch4-window7-224-finetuned-leukemia.v2.0
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-leukemia.v2.0
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5694
- Accuracy: 0.8855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
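For reproducibility, the list above maps onto the `TrainingArguments` sketch below. This is a hedged reconstruction rather than the original training script, and the `output_dir` is an assumption; the effective batch size of 128 comes from 32 × 4 gradient-accumulation steps.
```python
from transformers import TrainingArguments

# Hedged sketch of the hyperparameters listed above; output_dir is assumed
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-leukemia.v2.0",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # effective train batch size: 32 * 4 = 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```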
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.4734 | 0.9984 | 312 | 0.7528 | 0.5968 |
| 0.3596 | 2.0 | 625 | 0.8091 | 0.688 |
| 0.2991 | 2.9984 | 937 | 0.9220 | 0.6335 |
| 0.2658 | 4.0 | 1250 | 0.7774 | 0.7137 |
| 0.2511 | 4.9984 | 1562 | 0.4364 | 0.8267 |
| 0.2218 | 6.0 | 1875 | 0.6225 | 0.7837 |
| 0.1691 | 6.9984 | 2187 | 0.3587 | 0.8718 |
| 0.1721 | 8.0 | 2500 | 0.6494 | 0.7987 |
| 0.1393 | 8.9984 | 2812 | 0.6802 | 0.818 |
| 0.1109 | 10.0 | 3125 | 0.5511 | 0.834 |
| 0.1213 | 10.9984 | 3437 | 0.5982 | 0.8417 |
| 0.0971 | 12.0 | 3750 | 0.8005 | 0.814 |
| 0.1121 | 12.9984 | 4062 | 0.6397 | 0.8407 |
| 0.0947 | 14.0 | 4375 | 1.0869 | 0.768 |
| 0.1022 | 14.9984 | 4687 | 0.5969 | 0.8515 |
| 0.0801 | 16.0 | 5000 | 0.5839 | 0.8732 |
| 0.0951 | 16.9984 | 5312 | 0.8599 | 0.827 |
| 0.0716 | 18.0 | 5625 | 0.8355 | 0.822 |
| 0.0859 | 18.9984 | 5937 | 0.7547 | 0.8427 |
| 0.0661 | 20.0 | 6250 | 0.7206 | 0.851 |
| 0.0543 | 20.9984 | 6562 | 0.8396 | 0.8363 |
| 0.0646 | 22.0 | 6875 | 0.5467 | 0.881 |
| 0.0563 | 22.9984 | 7187 | 0.5694 | 0.8855 |
| 0.042 | 24.0 | 7500 | 0.8404 | 0.8492 |
| 0.0638 | 24.9984 | 7812 | 0.9300 | 0.84 |
| 0.0455 | 26.0 | 8125 | 0.9865 | 0.8393 |
| 0.037 | 26.9984 | 8437 | 0.8503 | 0.8525 |
| 0.0469 | 28.0 | 8750 | 0.8272 | 0.8602 |
| 0.0409 | 28.9984 | 9062 | 0.8988 | 0.8502 |
| 0.0438 | 29.9520 | 9360 | 0.8338 | 0.858 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"hem",
"all"
] |
Ahs2000/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0960
- Accuracy: 0.9718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1163 | 1.0 | 2500 | 0.1026 | 0.9676 |
| 0.102 | 2.0 | 5000 | 0.0978 | 0.9708 |
| 0.0798 | 3.0 | 7500 | 0.0954 | 0.9728 |
| 0.0625 | 4.0 | 10000 | 0.0954 | 0.972 |
| 0.0669 | 5.0 | 12500 | 0.0952 | 0.9728 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"0",
"6",
"2",
"7",
"1",
"4",
"5",
"3",
"8",
"9"
] |
felixwf/fine_tuned_face_emotion_model
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
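Until an official snippet is added, here is a minimal hedged sketch using the high-level `pipeline` API; the repository id is taken from this card's title and the image path is a placeholder:
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned face-emotion classifier from the Hub
classifier = pipeline(
    "image-classification",
    model="felixwf/fine_tuned_face_emotion_model",
)

# Classify a local face image; the path is a placeholder
for pred in classifier("path_to_face_image.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```
Each returned entry is a `{label, score}` dict, with labels drawn from the six emotion classes listed at the end of this card.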
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"frowning",
"kiss",
"neutral",
"openmouth",
"raisedbrows",
"smile"
] |
HimanshuWiai/outputs
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2940
- Accuracy: 0.6066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.0125 | 0.9953 | 53 | 2.9198 | 0.1172 |
| 2.4616 | 1.9906 | 106 | 2.2769 | 0.3719 |
| 1.8476 | 2.9859 | 159 | 1.6799 | 0.5055 |
| 1.3277 | 3.9812 | 212 | 1.2940 | 0.6066 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.20.1
|
[
"bacterial_blight",
"bacterial_leaf_blight",
"bacterial_leaf_streak",
"bacterial_panicle_blight",
"black_stem_borer",
"blast",
"brown_plant_hopper",
"brown_spot",
"downy_mildew",
"false_smut",
"gundhi_bug",
"healthy",
"hispa",
"ladybird_beetle",
"leaf_folder",
"leaf_roller",
"leaf_smut",
"others",
"tungro",
"white_stem_borer",
"yellow_stem_borer"
] |
Tr13/my_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_food_model
This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0492
- Accuracy: 0.7236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.4987 | 1.0 | 592 | 4.4835 | 0.1002 |
| 3.6382 | 2.0 | 1184 | 3.4318 | 0.2707 |
| 2.9497 | 3.0 | 1776 | 2.5897 | 0.3945 |
| 2.586 | 4.0 | 2368 | 2.1261 | 0.4824 |
| 2.2806 | 5.0 | 2960 | 1.8201 | 0.5501 |
| 2.0928 | 6.0 | 3552 | 1.6291 | 0.5896 |
| 1.9839 | 7.0 | 4144 | 1.4954 | 0.6145 |
| 1.8465 | 8.0 | 4736 | 1.4209 | 0.6333 |
| 1.6939 | 9.0 | 5328 | 1.3486 | 0.6493 |
| 1.6212 | 10.0 | 5920 | 1.2959 | 0.6616 |
| 1.6672 | 11.0 | 6512 | 1.2299 | 0.6744 |
| 1.5973 | 12.0 | 7104 | 1.2018 | 0.6871 |
| 1.5419 | 13.0 | 7696 | 1.1750 | 0.6928 |
| 1.5003 | 14.0 | 8288 | 1.1297 | 0.7017 |
| 1.4908 | 15.0 | 8880 | 1.1184 | 0.7030 |
| 1.4033 | 16.0 | 9472 | 1.0983 | 0.7125 |
| 1.4015 | 17.0 | 10064 | 1.0832 | 0.7159 |
| 1.3651 | 18.0 | 10656 | 1.0728 | 0.7134 |
| 1.3698 | 19.0 | 11248 | 1.0678 | 0.7166 |
| 1.4136 | 20.0 | 11840 | 1.0541 | 0.7217 |
| 1.4679 | 21.0 | 12432 | 1.0542 | 0.7208 |
| 1.3328 | 22.0 | 13024 | 1.0466 | 0.7253 |
| 1.2773 | 23.0 | 13616 | 1.0655 | 0.7188 |
| 1.342 | 24.0 | 14208 | 1.0471 | 0.7236 |
| 1.3437 | 25.0 | 14800 | 1.0492 | 0.7236 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
MakAIHealthLab/vit-base-patch16-224-in21k-finetuned-biopsy
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-biopsy
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0838
- Accuracy: 0.9799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1862 | 1.0 | 42 | 1.1167 | 0.5611 |
| 0.7235 | 2.0 | 84 | 0.6029 | 0.8543 |
| 0.4286 | 3.0 | 126 | 0.3452 | 0.9280 |
| 0.3612 | 4.0 | 168 | 0.3485 | 0.8945 |
| 0.3015 | 5.0 | 210 | 0.2590 | 0.9296 |
| 0.2917 | 6.0 | 252 | 0.2219 | 0.9414 |
| 0.2312 | 7.0 | 294 | 0.2400 | 0.9280 |
| 0.1708 | 8.0 | 336 | 0.2120 | 0.9414 |
| 0.1806 | 9.0 | 378 | 0.1784 | 0.9514 |
| 0.1703 | 10.0 | 420 | 0.1571 | 0.9481 |
| 0.139 | 11.0 | 462 | 0.1544 | 0.9648 |
| 0.1301 | 12.0 | 504 | 0.1431 | 0.9598 |
| 0.122 | 13.0 | 546 | 0.1297 | 0.9631 |
| 0.1104 | 14.0 | 588 | 0.1401 | 0.9598 |
| 0.1075 | 15.0 | 630 | 0.1200 | 0.9665 |
| 0.0986 | 16.0 | 672 | 0.1665 | 0.9581 |
| 0.092 | 17.0 | 714 | 0.1399 | 0.9531 |
| 0.1123 | 18.0 | 756 | 0.1122 | 0.9698 |
| 0.0766 | 19.0 | 798 | 0.1337 | 0.9564 |
| 0.0762 | 20.0 | 840 | 0.0974 | 0.9732 |
| 0.0994 | 21.0 | 882 | 0.1023 | 0.9698 |
| 0.0687 | 22.0 | 924 | 0.0976 | 0.9749 |
| 0.0767 | 23.0 | 966 | 0.0952 | 0.9765 |
| 0.0581 | 24.0 | 1008 | 0.1096 | 0.9665 |
| 0.0544 | 25.0 | 1050 | 0.1123 | 0.9715 |
| 0.079 | 26.0 | 1092 | 0.1040 | 0.9682 |
| 0.0661 | 27.0 | 1134 | 0.0838 | 0.9799 |
| 0.068 | 28.0 | 1176 | 0.1169 | 0.9715 |
| 0.0722 | 29.0 | 1218 | 0.0897 | 0.9732 |
| 0.048 | 30.0 | 1260 | 0.0864 | 0.9732 |
| 0.0509 | 31.0 | 1302 | 0.0858 | 0.9749 |
| 0.047 | 32.0 | 1344 | 0.0801 | 0.9782 |
| 0.0411 | 33.0 | 1386 | 0.1221 | 0.9648 |
| 0.0378 | 34.0 | 1428 | 0.1011 | 0.9648 |
| 0.0358 | 35.0 | 1470 | 0.0834 | 0.9799 |
| 0.0347 | 36.0 | 1512 | 0.0993 | 0.9715 |
| 0.0434 | 37.0 | 1554 | 0.0938 | 0.9732 |
| 0.0507 | 38.0 | 1596 | 0.0874 | 0.9782 |
| 0.0466 | 39.0 | 1638 | 0.0932 | 0.9765 |
| 0.0502 | 40.0 | 1680 | 0.1012 | 0.9698 |
| 0.0289 | 41.0 | 1722 | 0.0841 | 0.9715 |
| 0.0274 | 42.0 | 1764 | 0.0883 | 0.9682 |
| 0.0251 | 43.0 | 1806 | 0.0843 | 0.9782 |
| 0.0343 | 44.0 | 1848 | 0.0812 | 0.9782 |
| 0.0289 | 45.0 | 1890 | 0.0805 | 0.9782 |
| 0.0277 | 46.0 | 1932 | 0.0943 | 0.9698 |
| 0.0332 | 47.0 | 1974 | 0.0807 | 0.9765 |
| 0.0328 | 48.0 | 2016 | 0.0826 | 0.9749 |
| 0.0257 | 49.0 | 2058 | 0.0852 | 0.9749 |
| 0.0287 | 50.0 | 2100 | 0.0848 | 0.9782 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"adenocarcinoma",
"normal",
"precancerous",
"scc"
] |
ckb100/pokemon-image-classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
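Until an official snippet is added, the hedged sketch below shows one way to run the classifier. Note that the checkpoint ships generic class names (`label_0` through `label_149`, listed at the end of this card), so mapping indices to Pokémon names is left to the user; the image path is a placeholder.
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

model_name = "ckb100/pokemon-image-classifier"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

# The config only carries generic names such as "label_0"; inspect them here
print(list(model.config.id2label.items())[:5])

# Classify a local image; the path is a placeholder
image = Image.open("path_to_pokemon_image.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted:", model.config.id2label[logits.argmax(-1).item()])
```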
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90",
"label_91",
"label_92",
"label_93",
"label_94",
"label_95",
"label_96",
"label_97",
"label_98",
"label_99",
"label_100",
"label_101",
"label_102",
"label_103",
"label_104",
"label_105",
"label_106",
"label_107",
"label_108",
"label_109",
"label_110",
"label_111",
"label_112",
"label_113",
"label_114",
"label_115",
"label_116",
"label_117",
"label_118",
"label_119",
"label_120",
"label_121",
"label_122",
"label_123",
"label_124",
"label_125",
"label_126",
"label_127",
"label_128",
"label_129",
"label_130",
"label_131",
"label_132",
"label_133",
"label_134",
"label_135",
"label_136",
"label_137",
"label_138",
"label_139",
"label_140",
"label_141",
"label_142",
"label_143",
"label_144",
"label_145",
"label_146",
"label_147",
"label_148",
"label_149"
] |
Jagobaemeka/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6531
- Accuracy: 0.873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.711 | 0.992 | 62 | 2.5698 | 0.801 |
| 1.8586 | 2.0 | 125 | 1.8322 | 0.852 |
| 1.6124 | 2.976 | 186 | 1.6531 | 0.873 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
LynnKukunda/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3951
- eval_accuracy: 0.7997
- eval_f1: 0.5226
- eval_precision: 0.8929
- eval_recall: 0.3695
- eval_runtime: 49.6847
- eval_samples_per_second: 13.767
- eval_steps_per_second: 0.221
- epoch: 0.7674
- step: 33
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
[
"bad",
"good"
] |
alijaanai/vit-finetuned-brain-tumor-classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
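Until an official snippet is added, a minimal hedged sketch follows; `top_k=4` returns scores for all four classes listed at the end of this card (glioma, meningioma, no-tumor, pituitary), and the image path is a placeholder.
```python
from transformers import pipeline

# Hedged sketch: score an MRI image against all four tumor classes
classifier = pipeline(
    "image-classification",
    model="alijaanai/vit-finetuned-brain-tumor-classification",
)
for result in classifier("path_to_mri_image.jpg", top_k=4):  # placeholder path
    print(f"{result['label']}: {result['score']:.3f}")
```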
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"glioma",
"meningioma",
"no-tumor",
"pituitary"
] |
Ariana03/finetuned-indian-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2146
- Accuracy: 0.9426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.0574 | 0.3003 | 100 | 0.9445 | 0.8576 |
| 0.8399 | 0.6006 | 200 | 0.5542 | 0.8863 |
| 0.6418 | 0.9009 | 300 | 0.5741 | 0.8672 |
| 0.3785 | 1.2012 | 400 | 0.4702 | 0.8842 |
| 0.4451 | 1.5015 | 500 | 0.3685 | 0.9118 |
| 0.4535 | 1.8018 | 600 | 0.3781 | 0.9097 |
| 0.4618 | 2.1021 | 700 | 0.3000 | 0.9288 |
| 0.2321 | 2.4024 | 800 | 0.3146 | 0.9182 |
| 0.1816 | 2.7027 | 900 | 0.3045 | 0.9214 |
| 0.2332 | 3.0030 | 1000 | 0.3446 | 0.9044 |
| 0.1173 | 3.3033 | 1100 | 0.2381 | 0.9416 |
| 0.2694 | 3.6036 | 1200 | 0.2146 | 0.9426 |
| 0.1227 | 3.9039 | 1300 | 0.2259 | 0.9490 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"burger",
"butter_naan",
"kaathi_rolls",
"kadai_paneer",
"kulfi",
"masala_dosa",
"momos",
"paani_puri",
"pakode",
"pav_bhaji",
"pizza",
"samosa",
"chai",
"chapati",
"chole_bhature",
"dal_makhani",
"dhokla",
"fried_rice",
"idli",
"jalebi"
] |
alyzbane/vit-base-patch16-224-finetuned-barkley
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-barkley
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0036
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
- Top1 Accuracy: 1.0
- Error Rate: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 1.6093 | 1.0 | 38 | 1.4340 | 0.4769 | 0.4342 | 0.4066 | 0.4149 | 0.4342 | 0.5851 |
| 1.2908 | 2.0 | 76 | 1.1747 | 0.6587 | 0.6118 | 0.6160 | 0.6161 | 0.6118 | 0.3839 |
| 1.0409 | 3.0 | 114 | 0.9174 | 0.7382 | 0.7303 | 0.7293 | 0.7425 | 0.7303 | 0.2575 |
| 0.781 | 4.0 | 152 | 0.6528 | 0.8632 | 0.8618 | 0.8622 | 0.8650 | 0.8618 | 0.1350 |
| 0.5429 | 5.0 | 190 | 0.4112 | 0.9417 | 0.9408 | 0.9405 | 0.9443 | 0.9408 | 0.0557 |
| 0.328 | 6.0 | 228 | 0.2229 | 0.9809 | 0.9803 | 0.9802 | 0.9811 | 0.9803 | 0.0189 |
| 0.1837 | 7.0 | 266 | 0.1181 | 0.9871 | 0.9868 | 0.9868 | 0.9878 | 0.9868 | 0.0122 |
| 0.1131 | 8.0 | 304 | 0.0680 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0526 | 9.0 | 342 | 0.0387 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0283 | 10.0 | 380 | 0.0328 | 0.9873 | 0.9868 | 0.9869 | 0.9878 | 0.9868 | 0.0122 |
| 0.019 | 11.0 | 418 | 0.0224 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0148 | 12.0 | 456 | 0.0201 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0095 | 13.0 | 494 | 0.0396 | 0.9871 | 0.9868 | 0.9868 | 0.9878 | 0.9868 | 0.0122 |
| 0.007 | 14.0 | 532 | 0.0048 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 |
| 0.011 | 15.0 | 570 | 0.0036 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 |
| 0.0071 | 16.0 | 608 | 0.0092 | 0.9936 | 0.9934 | 0.9934 | 0.9941 | 0.9934 | 0.0059 |
| 0.0103 | 17.0 | 646 | 0.0148 | 0.9936 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"iinstia bijuga",
"mangifera indica",
"pterocarpus indicus",
"roystonea regia",
"tabebuia"
] |
alyzbane/resnet-50-finetuned-barkley
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-barkley
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9221
- Precision: 0.8780
- Recall: 0.8618
- F1: 0.8574
- Accuracy: 0.8744
- Top1 Accuracy: 0.8618
- Error Rate: 0.1256
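As the figures above confirm, the reported error rate is simply the complement of accuracy: 0.1256 = 1 − 0.8744.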
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 1.6171 | 1.0 | 38 | 1.6195 | 0.0663 | 0.1513 | 0.0664 | 0.1738 | 0.1513 | 0.8262 |
| 1.6149 | 2.0 | 76 | 1.6160 | 0.2953 | 0.1579 | 0.0802 | 0.1785 | 0.1579 | 0.8215 |
| 1.6119 | 3.0 | 114 | 1.6112 | 0.0804 | 0.1579 | 0.0834 | 0.1772 | 0.1579 | 0.8228 |
| 1.6041 | 4.0 | 152 | 1.6015 | 0.4161 | 0.1974 | 0.1461 | 0.2155 | 0.1974 | 0.7845 |
| 1.5945 | 5.0 | 190 | 1.5895 | 0.4089 | 0.2895 | 0.2428 | 0.3092 | 0.2895 | 0.6908 |
| 1.5777 | 6.0 | 228 | 1.5710 | 0.5764 | 0.4408 | 0.3944 | 0.4663 | 0.4408 | 0.5337 |
| 1.561 | 7.0 | 266 | 1.5490 | 0.6013 | 0.4934 | 0.4516 | 0.5173 | 0.5 | 0.4827 |
| 1.536 | 8.0 | 304 | 1.5222 | 0.6377 | 0.5132 | 0.4711 | 0.5450 | 0.5132 | 0.4550 |
| 1.5081 | 9.0 | 342 | 1.4912 | 0.7595 | 0.5987 | 0.5869 | 0.6250 | 0.5987 | 0.3750 |
| 1.4756 | 10.0 | 380 | 1.4566 | 0.7579 | 0.6447 | 0.6293 | 0.6683 | 0.6447 | 0.3317 |
| 1.4387 | 11.0 | 418 | 1.4156 | 0.7914 | 0.6776 | 0.6692 | 0.6985 | 0.6776 | 0.3015 |
| 1.3993 | 12.0 | 456 | 1.3737 | 0.7997 | 0.6842 | 0.6732 | 0.7080 | 0.6842 | 0.2920 |
| 1.358 | 13.0 | 494 | 1.3288 | 0.8290 | 0.7039 | 0.7048 | 0.7232 | 0.7039 | 0.2768 |
| 1.3139 | 14.0 | 532 | 1.2806 | 0.8277 | 0.7434 | 0.7373 | 0.7592 | 0.75 | 0.2408 |
| 1.262 | 15.0 | 570 | 1.2345 | 0.8478 | 0.7697 | 0.7664 | 0.7829 | 0.7697 | 0.2171 |
| 1.2184 | 16.0 | 608 | 1.1887 | 0.8323 | 0.7697 | 0.7654 | 0.7818 | 0.7697 | 0.2182 |
| 1.1803 | 17.0 | 646 | 1.1408 | 0.8423 | 0.7763 | 0.7735 | 0.7931 | 0.7763 | 0.2069 |
| 1.1422 | 18.0 | 684 | 1.0966 | 0.8594 | 0.8158 | 0.8100 | 0.8317 | 0.8158 | 0.1683 |
| 1.1032 | 19.0 | 722 | 1.0587 | 0.8431 | 0.8026 | 0.7969 | 0.8145 | 0.8026 | 0.1855 |
| 1.058 | 20.0 | 760 | 1.0289 | 0.8610 | 0.8355 | 0.8301 | 0.8487 | 0.8355 | 0.1513 |
| 1.0252 | 21.0 | 798 | 0.9918 | 0.8576 | 0.8421 | 0.8370 | 0.8534 | 0.8421 | 0.1466 |
| 1.002 | 22.0 | 836 | 0.9727 | 0.8677 | 0.8487 | 0.8435 | 0.8611 | 0.8487 | 0.1389 |
| 0.9812 | 23.0 | 874 | 0.9465 | 0.8795 | 0.8553 | 0.8497 | 0.8678 | 0.8553 | 0.1322 |
| 0.9636 | 24.0 | 912 | 0.9331 | 0.8820 | 0.8553 | 0.8485 | 0.8699 | 0.8553 | 0.1301 |
| 0.9591 | 25.0 | 950 | 0.9221 | 0.8780 | 0.8618 | 0.8574 | 0.8744 | 0.8618 | 0.1256 |
| 0.948 | 26.0 | 988 | 0.9158 | 0.8780 | 0.8618 | 0.8574 | 0.8744 | 0.8684 | 0.1256 |
| 0.9384 | 27.0 | 1026 | 0.9017 | 0.8685 | 0.8487 | 0.8431 | 0.8601 | 0.8487 | 0.1399 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"iinstia bijuga",
"mangifera indica",
"pterocarpus indicus",
"roystonea regia",
"tabebuia"
] |
alyzbane/convnext-tiny-224-finetuned-barkley
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-barkley
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
- Top1 Accuracy: 1.0
- Error Rate: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 1.6288 | 1.0 | 38 | 1.6005 | 0.2133 | 0.2697 | 0.2043 | 0.2371 | 0.2697 | 0.7629 |
| 1.6059 | 2.0 | 76 | 1.5802 | 0.2384 | 0.2763 | 0.2243 | 0.2473 | 0.2763 | 0.7527 |
| 1.5808 | 3.0 | 114 | 1.5570 | 0.2778 | 0.3026 | 0.2595 | 0.2744 | 0.3026 | 0.7256 |
| 1.5555 | 4.0 | 152 | 1.5291 | 0.3831 | 0.375 | 0.3491 | 0.3511 | 0.375 | 0.6489 |
| 1.5232 | 5.0 | 190 | 1.4933 | 0.4252 | 0.4408 | 0.4154 | 0.4147 | 0.4408 | 0.5853 |
| 1.4784 | 6.0 | 228 | 1.4484 | 0.5076 | 0.5197 | 0.4926 | 0.4972 | 0.5197 | 0.5028 |
| 1.4242 | 7.0 | 266 | 1.3902 | 0.6857 | 0.6382 | 0.6307 | 0.6249 | 0.6382 | 0.3751 |
| 1.3586 | 8.0 | 304 | 1.3186 | 0.7728 | 0.7171 | 0.7166 | 0.7134 | 0.7171 | 0.2866 |
| 1.276 | 9.0 | 342 | 1.2236 | 0.8547 | 0.8026 | 0.8109 | 0.8060 | 0.8026 | 0.1940 |
| 1.1778 | 10.0 | 380 | 1.1122 | 0.8899 | 0.8553 | 0.8609 | 0.8601 | 0.8553 | 0.1399 |
| 1.0543 | 11.0 | 418 | 0.9839 | 0.9064 | 0.8947 | 0.8958 | 0.9005 | 0.8947 | 0.0995 |
| 0.921 | 12.0 | 456 | 0.8418 | 0.9541 | 0.9539 | 0.9537 | 0.9575 | 0.9539 | 0.0425 |
| 0.773 | 13.0 | 494 | 0.6935 | 0.9624 | 0.9605 | 0.9605 | 0.9652 | 0.9605 | 0.0348 |
| 0.6204 | 14.0 | 532 | 0.5515 | 0.9688 | 0.9671 | 0.9672 | 0.9708 | 0.9671 | 0.0292 |
| 0.4835 | 15.0 | 570 | 0.4146 | 0.9704 | 0.9671 | 0.9676 | 0.9697 | 0.9671 | 0.0303 |
| 0.3641 | 16.0 | 608 | 0.3043 | 0.9805 | 0.9803 | 0.9802 | 0.9830 | 0.9803 | 0.0170 |
| 0.2706 | 17.0 | 646 | 0.2247 | 0.9805 | 0.9803 | 0.9802 | 0.9830 | 0.9803 | 0.0170 |
| 0.1998 | 18.0 | 684 | 0.1705 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.1446 | 19.0 | 722 | 0.1271 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.1106 | 20.0 | 760 | 0.1047 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0872 | 21.0 | 798 | 0.0780 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0614 | 22.0 | 836 | 0.0739 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0491 | 23.0 | 874 | 0.0517 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0365 | 24.0 | 912 | 0.0401 | 0.9871 | 0.9868 | 0.9868 | 0.9878 | 0.9868 | 0.0122 |
| 0.0255 | 25.0 | 950 | 0.0336 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0212 | 26.0 | 988 | 0.0377 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0175 | 27.0 | 1026 | 0.0195 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0125 | 28.0 | 1064 | 0.0214 | 0.9936 | 0.9934 | 0.9934 | 0.9933 | 0.9934 | 0.0067 |
| 0.0155 | 29.0 | 1102 | 0.0128 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 |
| 0.0104 | 30.0 | 1140 | 0.0159 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"iinstia bijuga",
"mangifera indica",
"pterocarpus indicus",
"roystonea regia",
"tabebuia"
] |
eligapris/v-mdd-2000
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
# Image Classification Model Results (AutoTrain)
## Validation Metrics
| Metric | Value |
|--------|-------|
| Loss | 0.5462 |
| Accuracy | 0.7371 |
### F1 Scores
| Type | Value |
|------|-------|
| Macro | 0.3900 |
| Micro | 0.7371 |
| Weighted | 0.6628 |
### Precision
| Type | Value |
|------|-------|
| Macro | 0.3468 |
| Micro | 0.7371 |
| Weighted | 0.6320 |
### Recall
| Type | Value |
|------|-------|
| Macro | 0.4972 |
| Micro | 0.7371 |
| Weighted | 0.7371 |
## How to use
This model is designed for image classification. Here's how you can use it:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from PIL import Image

model_name = "eligapris/v-mdd-2000"

# Load the image processor and the fine-tuned classifier from the Hub
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

# Load and preprocess an input image (replace the path with your own file)
image = Image.open("path_to_your_image.jpg")
inputs = processor(images=image, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# Map the highest-scoring logit to its human-readable class label
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
|
[
"common_rust",
"gray_leaf_spot",
"healthy_leaf",
"northern_leaf_blight"
] |
eligapris/v-mdd-2000-150
|
# Corn Leaf Disease Classification Model Analysis
## Dataset Breakdown
The dataset consists of four classes with the following distribution:
| Class | Number of Images |
|------------------------|-------------------|
| Healthy_Leaf | 3021 |
| Gray_Leaf_Spot | 2478 |
| Common_Rust | 2949 |
| Northern_Leaf_Blight | 3303 |
**Note:** The classes are mildly imbalanced, with Gray_Leaf_Spot having notably fewer images than the other classes.
## Model Performance Metrics
The model was trained using AutoTrain for image classification. Here's a breakdown of the validation metrics:
| Metric | Value |
|-----------------------|-----------|
| Loss | 0.3697 |
| Accuracy | 0.8585 |
| F1 (Macro) | 0.6843 |
| F1 (Micro) | 0.8585 |
| F1 (Weighted) | 0.8303 |
| Precision (Macro) | 0.8204 |
| Precision (Micro) | 0.8585 |
| Precision (Weighted) | 0.8821 |
| Recall (Macro) | 0.7170 |
| Recall (Micro) | 0.8585 |
| Recall (Weighted) | 0.8585 |
### Metric Explanations
1. **Loss (0.3697)**: This relatively low validation loss indicates the model generalizes reasonably well to held-out data.
2. **Accuracy (0.8585)**: The model correctly classifies 85.85% of all instances across all classes.
3. **F1 Score**:
- Macro (0.6843): The unweighted mean of F1 scores for each class.
- Micro (0.8585): Calculated globally by counting the total true positives, false negatives, and false positives.
- Weighted (0.8303): The weighted average of F1 scores for each class, accounting for class imbalance.
4. **Precision**:
- Macro (0.8204): The unweighted mean of precision scores for each class.
- Micro (0.8585): The global precision across all classes.
- Weighted (0.8821): The weighted average of precision scores for each class.
5. **Recall**:
- Macro (0.7170): The unweighted mean of recall scores for each class.
- Micro (0.8585): The global recall across all classes.
- Weighted (0.8585): The weighted average of recall scores for each class.
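To make the averaging modes above concrete, the sketch below computes macro, micro, and weighted scores with scikit-learn on small illustrative arrays (these are not the model's actual predictions):
```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Illustrative labels only; 0-3 stand in for the four corn-leaf classes
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 3, 1]

for average in ("macro", "micro", "weighted"):
    f1 = f1_score(y_true, y_pred, average=average)
    prec = precision_score(y_true, y_pred, average=average)
    rec = recall_score(y_true, y_pred, average=average)
    print(f"{average:>8}: F1={f1:.4f}  precision={prec:.4f}  recall={rec:.4f}")
```
Because micro-averaging pools all predictions before scoring while macro-averaging weights every class equally, a gap between the two (as seen in this card's 0.8585 micro vs 0.6843 macro F1) is a direct signal of uneven per-class performance.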
### Analysis
1. **Class Imbalance**: The difference between macro and micro scores suggests class imbalance, which aligns with our dataset breakdown. The Gray_Leaf_Spot class, having fewer images, likely contributes to this imbalance.
2. **Precision vs Recall**: Precision scores are generally higher than recall scores, especially for macro metrics. This suggests the model is more cautious in its predictions, preferring to be correct when it does predict a class.
3. **Performance on Majority vs Minority Classes**: The higher micro and weighted scores compared to macro scores indicate that the model performs better on more frequent classes. This is likely due to the class imbalance, with the model potentially struggling more with the Gray_Leaf_Spot class.
4. **Overall Performance**: With an accuracy of 85.85%, the model shows good overall performance. However, there's room for improvement, especially in handling the class imbalance.
|
[
"common_rust",
"gray_leaf_spot",
"healthy_leaf",
"northern_leaf_blight"
] |
Tianmu28/vit-google-model-30-classes
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0466
- Accuracy: 0.9967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
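These settings map onto Hugging Face `TrainingArguments` roughly as sketched below; the `output_dir` and evaluation cadence are assumptions, and the listed Adam betas/epsilon match the library defaults:

```python
# Rough TrainingArguments sketch matching the hyperparameter list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",            # assumed from the card's title
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",           # assumed: the table reports per-epoch eval
)
```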
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2698 | 1.0 | 57 | 0.7910 | 0.9567 |
| 0.1238 | 2.0 | 114 | 0.1673 | 0.99 |
| 0.0269 | 3.0 | 171 | 0.0869 | 0.9967 |
| 0.0096 | 4.0 | 228 | 0.0634 | 0.9967 |
| 0.0059 | 5.0 | 285 | 0.0569 | 0.9967 |
| 0.0049 | 6.0 | 342 | 0.0524 | 0.9967 |
| 0.0043 | 7.0 | 399 | 0.0495 | 0.9967 |
| 0.0036 | 8.0 | 456 | 0.0479 | 0.9967 |
| 0.0036 | 9.0 | 513 | 0.0469 | 0.9967 |
| 0.0032 | 10.0 | 570 | 0.0466 | 0.9967 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29"
] |
soplac/art_classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# art_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6729
- Accuracy: 0.8868
## Model description
More information needed
## Intended uses & limitations
More information needed
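As a minimal usage sketch (not documented by the card), the checkpoint can be exercised with the `pipeline` API:

```python
# Minimal inference sketch; the image filename is illustrative.
from transformers import pipeline

classifier = pipeline("image-classification", model="soplac/art_classifier")
print(classifier("runway_photo.jpg"))  # returns a list of label/score dicts
```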
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8571 | 3 | 1.0830 | 0.3962 |
| No log | 2.0 | 7 | 1.0106 | 0.6415 |
| 1.0286 | 2.8571 | 10 | 0.9347 | 0.8302 |
| 1.0286 | 4.0 | 14 | 0.8509 | 0.8679 |
| 1.0286 | 4.8571 | 17 | 0.7853 | 0.8868 |
| 0.7956 | 6.0 | 21 | 0.7458 | 0.8868 |
| 0.7956 | 6.8571 | 24 | 0.7045 | 0.8679 |
| 0.7956 | 8.0 | 28 | 0.6863 | 0.8868 |
| 0.6554 | 8.5714 | 30 | 0.6729 | 0.8868 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"big stripes runway",
"runway leather vogue",
"runway photo cheetah print vogue"
] |
alyzbane/swin-base-patch4-window7-224-finetuned-barkley
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-finetuned-barkley
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0126
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
- Top1 Accuracy: 1.0
- Error Rate: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
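For reference, the cosine-with-warmup schedule implied by `lr_scheduler_type` and `lr_scheduler_warmup_ratio` can be reproduced with `get_cosine_schedule_with_warmup`; the step horizon below is illustrative (about 9.5 optimizer steps per epoch over the configured 30 epochs):

```python
# Sketch of the cosine-with-warmup learning-rate schedule implied above.
import torch
from transformers import get_cosine_schedule_with_warmup

dummy_param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([dummy_param], lr=5e-5)

total_steps = 285  # illustrative: ~9.5 steps/epoch x 30 epochs
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # lr_scheduler_warmup_ratio = 0.1
    num_training_steps=total_steps,
)

for step in range(total_steps):
    optimizer.step()
    scheduler.step()
    if step % 57 == 0:  # sample the learning rate a few times along the way
        print(step, optimizer.param_groups[0]["lr"])
```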
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 1.6386 | 0.9474 | 9 | 1.2206 | 0.7199 | 0.6842 | 0.6848 | 0.6800 | 0.6842 | 0.3200 |
| 0.7786 | 2.0 | 19 | 0.3487 | 0.9484 | 0.9474 | 0.9467 | 0.9497 | 0.9474 | 0.0503 |
| 0.1763 | 2.9474 | 28 | 0.0609 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0472 | 4.0 | 38 | 0.0318 | 0.9871 | 0.9868 | 0.9868 | 0.9878 | 0.9868 | 0.0122 |
| 0.0316 | 4.9474 | 47 | 0.0126 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 |
| 0.0171 | 6.0 | 57 | 0.0392 | 0.9875 | 0.9868 | 0.9868 | 0.9867 | 0.9868 | 0.0133 |
| 0.0152 | 6.9474 | 66 | 0.0042 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"iinstia bijuga",
"mangifera indica",
"pterocarpus indicus",
"roystonea regia",
"tabebuia"
] |
Paco24/convnext-tiny-224-afinaopalcaxarro
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-afinaopalcaxarro
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7981
- Accuracy: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
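A typical way to prepare this base checkpoint for the 11 classes listed at the end of this card is sketched below; this is an assumed setup, not taken from the training code:

```python
# Assumed setup: load the base checkpoint with an 11-class head.
# ignore_mismatched_sizes replaces the original 1000-class ImageNet head.
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = AutoModelForImageClassification.from_pretrained(
    "facebook/convnext-tiny-224",
    num_labels=11,
    ignore_mismatched_sizes=True,
)
```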
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 2.2682 | 0.3383 |
| 2.3684 | 2.0 | 16 | 1.9344 | 0.6869 |
| 2.0258 | 3.0 | 24 | 1.6364 | 0.7901 |
| 1.6828 | 4.0 | 32 | 1.3840 | 0.8165 |
| 1.3866 | 5.0 | 40 | 1.1854 | 0.8222 |
| 1.3866 | 6.0 | 48 | 1.0353 | 0.8337 |
| 1.1679 | 7.0 | 56 | 0.9255 | 0.8521 |
| 1.021 | 8.0 | 64 | 0.8538 | 0.8567 |
| 0.9338 | 9.0 | 72 | 0.8123 | 0.8612 |
| 0.8718 | 10.0 | 80 | 0.7981 | 0.8624 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
[
"bottom_bolt",
"bracket",
"cable_labels",
"cables_secured_to_the_pole_showing_lables",
"dp_spare_storage_and_cable_entry",
"front_view_of_the_dp_showing_dp_identifier",
"lhs_view_of_the_dp",
"rhs_view_of_the_dp",
"splice_tray_cover_showing_lable_and_velco_strap_installed",
"splitter_distribution_tray",
"top_bolt"
] |