| model_id (string, 7-105 chars) | model_card (string, 1-130k chars) | model_labels (list, 2-80k items) |
|---|---|---|
JOSEDURANisc/vit-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0223
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1408 | 3.85 | 500 | 0.0223 | 0.9925 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
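The `model_labels` column stores each model's class names in label-index order. A minimal plain-Python sketch of how such a list becomes the `id2label` / `label2id` mappings that a Transformers image-classification config carries, using the beans labels from the record above:

```python
# Class names from the model_labels column, in label-index order
# (here the beans labels from the card above).
labels = ["angular_leaf_spot", "bean_rust", "healthy"]

# The two mappings an image-classification config stores:
# integer class index -> name, and name -> integer class index.
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}

print(id2label[0], label2id["healthy"])  # angular_leaf_spot 2
```

The label list's order matters: index `i` in the model's output logits corresponds to `labels[i]`.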
ihsansatriawan/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2908
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00018
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.2380 | 0.5062 |
| No log | 2.0 | 40 | 1.1930 | 0.6 |
| No log | 3.0 | 60 | 1.2037 | 0.5687 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
platzi/platzi-vit_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit_model
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0528
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1375 | 3.85 | 500 | 0.0528 | 0.9850 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
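Several beans cards above log their best checkpoint at step 500, epoch 3.85, with `train_batch_size: 8`. That fractional epoch is consistent with the usual beans train split of 1,034 images — an assumption, since the cards don't state the split size — as a quick sanity check shows:

```python
import math

# Assumed beans train-split size; not stated in the cards themselves.
num_train_images = 1034
train_batch_size = 8

# One epoch = one pass over the data, with a final partial batch.
steps_per_epoch = math.ceil(num_train_images / train_batch_size)  # 130
epoch_at_step_500 = 500 / steps_per_epoch

print(round(epoch_at_step_500, 2))  # 3.85
```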
Kukuru0917/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2745
- Accuracy: 0.6375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.7629 | 0.4375 |
| No log | 2.0 | 40 | 1.5012 | 0.5 |
| No log | 3.0 | 60 | 1.3757 | 0.5 |
| No log | 4.0 | 80 | 1.2452 | 0.5625 |
| No log | 5.0 | 100 | 1.2394 | 0.5625 |
| No log | 6.0 | 120 | 1.2083 | 0.6125 |
| No log | 7.0 | 140 | 1.2209 | 0.575 |
| No log | 8.0 | 160 | 1.2755 | 0.5875 |
| No log | 9.0 | 180 | 1.2794 | 0.5687 |
| No log | 10.0 | 200 | 1.2639 | 0.6125 |
| No log | 11.0 | 220 | 1.3129 | 0.6125 |
| No log | 12.0 | 240 | 1.2277 | 0.6312 |
| No log | 13.0 | 260 | 1.3620 | 0.5938 |
| No log | 14.0 | 280 | 1.3023 | 0.6062 |
| No log | 15.0 | 300 | 1.3334 | 0.6 |
| No log | 16.0 | 320 | 1.4142 | 0.5813 |
| No log | 17.0 | 340 | 1.2863 | 0.6125 |
| No log | 18.0 | 360 | 1.4084 | 0.5875 |
| No log | 19.0 | 380 | 1.4195 | 0.575 |
| No log | 20.0 | 400 | 1.4164 | 0.5938 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
reallygoodtechdeals/autotrain-lane-center3-89488143942
|
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 89488143942
- CO2 Emissions (in grams): 0.5738
## Validation Metrics
- Loss: 1.067
- Accuracy: 0.457
- Macro F1: 0.348
- Micro F1: 0.457
- Weighted F1: 0.388
- Macro Precision: 0.303
- Micro Precision: 0.457
- Weighted Precision: 0.337
- Macro Recall: 0.410
- Micro Recall: 0.457
- Weighted Recall: 0.457
|
[
"slight_left",
"slight_right",
"straight"
] |
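The AutoTrain card above reports macro, micro, and weighted variants of F1. A pure-Python sketch of what those aggregations mean by definition (not AutoTrain's actual implementation): macro averages per-class F1 equally, weighted averages per-class F1 by class support, and micro pools counts globally, which for single-label multi-class prediction collapses to plain accuracy.

```python
from collections import Counter

def f1_aggregates(y_true, y_pred):
    """Macro, micro, and support-weighted F1 for multi-class labels."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but wrong
            fn[t] += 1  # missed the true class t

    def f1(c):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

    per_class = {c: f1(c) for c in classes}
    macro = sum(per_class.values()) / len(classes)
    # Micro pools TP/FP/FN globally; with one label per example,
    # micro precision = micro recall = micro F1 = accuracy.
    micro = sum(tp.values()) / len(y_true)
    support = Counter(y_true)
    weighted = sum(per_class[c] * support[c] for c in classes) / len(y_true)
    return macro, micro, weighted

macro, micro, weighted = f1_aggregates(
    ["straight", "straight", "slight_left"],
    ["straight", "slight_left", "slight_left"],
)
```

This is why the card's micro F1, micro precision, micro recall, and accuracy all equal 0.457.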
ammardaffa/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3273
- Accuracy: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.7704 | 0.3625 |
| No log | 2.0 | 80 | 1.4682 | 0.4938 |
| No log | 3.0 | 120 | 1.3937 | 0.4625 |
| No log | 4.0 | 160 | 1.3677 | 0.5125 |
| No log | 5.0 | 200 | 1.3114 | 0.525 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
WillyArdiyanto/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4866
- Accuracy: 0.5625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5045 | 0.4875 |
| No log | 2.0 | 80 | 1.3562 | 0.5312 |
| No log | 3.0 | 120 | 1.5354 | 0.4562 |
| No log | 4.0 | 160 | 1.5095 | 0.5062 |
| No log | 5.0 | 200 | 1.5644 | 0.475 |
| No log | 6.0 | 240 | 1.4651 | 0.5563 |
| No log | 7.0 | 280 | 1.4516 | 0.5375 |
| No log | 8.0 | 320 | 1.5859 | 0.5188 |
| No log | 9.0 | 360 | 1.5498 | 0.5437 |
| No log | 10.0 | 400 | 1.5040 | 0.5625 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Josevega69/jose69
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jose69
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0328
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1307 | 3.85 | 500 | 0.0328 | 0.9850 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
flatmoon102/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4303
- Accuracy: 0.4562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4403 | 0.45 |
| No log | 2.0 | 80 | 1.4300 | 0.4313 |
| No log | 3.0 | 120 | 1.3902 | 0.5 |
| No log | 4.0 | 160 | 1.3475 | 0.4688 |
| No log | 5.0 | 200 | 1.3698 | 0.4938 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
kittendev/visual_emotional_analysis
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# visual_emotional_analysis
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2815
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.8308 | 0.375 |
| No log | 2.0 | 40 | 1.5510 | 0.4875 |
| No log | 3.0 | 60 | 1.4138 | 0.5062 |
| No log | 4.0 | 80 | 1.3845 | 0.4875 |
| No log | 5.0 | 100 | 1.3245 | 0.525 |
| No log | 6.0 | 120 | 1.2645 | 0.6 |
| No log | 7.0 | 140 | 1.2887 | 0.5188 |
| No log | 8.0 | 160 | 1.2395 | 0.5875 |
| No log | 9.0 | 180 | 1.2267 | 0.55 |
| No log | 10.0 | 200 | 1.1883 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
zeenfts/output_dir
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_dir
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2976
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- num_epochs: 77
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 2 | 2.0706 | 0.15 |
| No log | 2.0 | 5 | 2.0309 | 0.2313 |
| No log | 2.8 | 7 | 1.9846 | 0.2562 |
| 1.9868 | 4.0 | 10 | 1.8915 | 0.4062 |
| 1.9868 | 4.8 | 12 | 1.8529 | 0.3125 |
| 1.9868 | 6.0 | 15 | 1.7422 | 0.4125 |
| 1.9868 | 6.8 | 17 | 1.6761 | 0.4313 |
| 1.6815 | 8.0 | 20 | 1.6310 | 0.4562 |
| 1.6815 | 8.8 | 22 | 1.5900 | 0.45 |
| 1.6815 | 10.0 | 25 | 1.5402 | 0.4313 |
| 1.6815 | 10.8 | 27 | 1.5018 | 0.5 |
| 1.4233 | 12.0 | 30 | 1.4620 | 0.4875 |
| 1.4233 | 12.8 | 32 | 1.4286 | 0.5062 |
| 1.4233 | 14.0 | 35 | 1.4045 | 0.5125 |
| 1.4233 | 14.8 | 37 | 1.3860 | 0.5312 |
| 1.2127 | 16.0 | 40 | 1.3571 | 0.5 |
| 1.2127 | 16.8 | 42 | 1.3293 | 0.5375 |
| 1.2127 | 18.0 | 45 | 1.3742 | 0.4813 |
| 1.2127 | 18.8 | 47 | 1.3151 | 0.5437 |
| 1.0075 | 20.0 | 50 | 1.3053 | 0.5312 |
| 1.0075 | 20.8 | 52 | 1.3266 | 0.5375 |
| 1.0075 | 22.0 | 55 | 1.2964 | 0.5312 |
| 1.0075 | 22.8 | 57 | 1.2278 | 0.5875 |
| 0.8232 | 24.0 | 60 | 1.2501 | 0.5563 |
| 0.8232 | 24.8 | 62 | 1.2330 | 0.575 |
| 0.8232 | 26.0 | 65 | 1.2198 | 0.5625 |
| 0.8232 | 26.8 | 67 | 1.2071 | 0.5875 |
| 0.6738 | 28.0 | 70 | 1.2643 | 0.5875 |
| 0.6738 | 28.8 | 72 | 1.2594 | 0.5563 |
| 0.6738 | 30.0 | 75 | 1.2263 | 0.5312 |
| 0.6738 | 30.8 | 77 | 1.3218 | 0.5188 |
| 0.5715 | 32.0 | 80 | 1.2593 | 0.5312 |
| 0.5715 | 32.8 | 82 | 1.2214 | 0.5625 |
| 0.5715 | 34.0 | 85 | 1.3060 | 0.55 |
| 0.5715 | 34.8 | 87 | 1.2727 | 0.5563 |
| 0.4523 | 36.0 | 90 | 1.2749 | 0.5375 |
| 0.4523 | 36.8 | 92 | 1.3570 | 0.5437 |
| 0.4523 | 38.0 | 95 | 1.2815 | 0.5687 |
| 0.4523 | 38.8 | 97 | 1.2233 | 0.6062 |
| 0.3971 | 40.0 | 100 | 1.2097 | 0.6 |
| 0.3971 | 40.8 | 102 | 1.2881 | 0.5813 |
| 0.3971 | 42.0 | 105 | 1.2400 | 0.575 |
| 0.3971 | 42.8 | 107 | 1.3140 | 0.5375 |
| 0.3616 | 44.0 | 110 | 1.1525 | 0.6125 |
| 0.3616 | 44.8 | 112 | 1.2725 | 0.5938 |
| 0.3616 | 46.0 | 115 | 1.2634 | 0.5813 |
| 0.3616 | 46.8 | 117 | 1.2299 | 0.6 |
| 0.338 | 48.0 | 120 | 1.3408 | 0.5375 |
| 0.338 | 48.8 | 122 | 1.1931 | 0.5938 |
| 0.338 | 50.0 | 125 | 1.2806 | 0.5938 |
| 0.338 | 50.8 | 127 | 1.2410 | 0.575 |
| 0.3445 | 52.0 | 130 | 1.2901 | 0.5813 |
| 0.3445 | 52.8 | 132 | 1.2504 | 0.6062 |
| 0.3445 | 54.0 | 135 | 1.1614 | 0.5875 |
| 0.3445 | 54.8 | 137 | 1.2247 | 0.6062 |
| 0.3299 | 56.0 | 140 | 1.2591 | 0.5625 |
| 0.3299 | 56.8 | 142 | 1.2629 | 0.5687 |
| 0.3299 | 58.0 | 145 | 1.2369 | 0.5938 |
| 0.3299 | 58.8 | 147 | 1.2771 | 0.575 |
| 0.3292 | 60.0 | 150 | 1.3284 | 0.5875 |
| 0.3292 | 60.8 | 152 | 1.2550 | 0.5625 |
| 0.3292 | 61.6 | 154 | 1.3047 | 0.55 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
mhasnanr/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2966
- Accuracy: 0.525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4307 | 0.475 |
| No log | 2.0 | 80 | 1.3231 | 0.5125 |
| No log | 3.0 | 120 | 1.3044 | 0.5437 |
| No log | 4.0 | 160 | 1.3204 | 0.525 |
| No log | 5.0 | 200 | 1.2457 | 0.5875 |
| No log | 6.0 | 240 | 1.3604 | 0.5125 |
| No log | 7.0 | 280 | 1.2296 | 0.5813 |
| No log | 8.0 | 320 | 1.3598 | 0.525 |
| No log | 9.0 | 360 | 1.3343 | 0.5188 |
| No log | 10.0 | 400 | 1.4003 | 0.5625 |
| No log | 11.0 | 440 | 1.3580 | 0.5563 |
| No log | 12.0 | 480 | 1.3214 | 0.5687 |
| 0.4908 | 13.0 | 520 | 1.3713 | 0.5312 |
| 0.4908 | 14.0 | 560 | 1.3820 | 0.55 |
| 0.4908 | 15.0 | 600 | 1.3384 | 0.5813 |
| 0.4908 | 16.0 | 640 | 1.4905 | 0.5375 |
| 0.4908 | 17.0 | 680 | 1.3985 | 0.5687 |
| 0.4908 | 18.0 | 720 | 1.4733 | 0.5312 |
| 0.4908 | 19.0 | 760 | 1.3403 | 0.5813 |
| 0.4908 | 20.0 | 800 | 1.3991 | 0.5563 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
ZiaPratama/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3659
- Accuracy: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 32 | 1.9290 | 0.3063 |
| No log | 2.0 | 64 | 1.6622 | 0.3563 |
| No log | 3.0 | 96 | 1.5753 | 0.3937 |
| No log | 4.0 | 128 | 1.5099 | 0.475 |
| No log | 5.0 | 160 | 1.4614 | 0.4313 |
| No log | 6.0 | 192 | 1.4104 | 0.5 |
| No log | 7.0 | 224 | 1.3962 | 0.4562 |
| No log | 8.0 | 256 | 1.3535 | 0.5437 |
| No log | 9.0 | 288 | 1.3483 | 0.5062 |
| No log | 10.0 | 320 | 1.3994 | 0.45 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
elenaThevalley/resnet-50-finetuned-32bs-0.01lr
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-32bs-0.01lr
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Accuracy: 0.9477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 26 | 0.2610 | 0.9157 |
| No log | 1.97 | 53 | 0.1749 | 0.9419 |
| No log | 2.9 | 78 | 0.1702 | 0.9477 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"drink",
"food",
"inside",
"menu",
"outside"
] |
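Cards that list `gradient_accumulation_steps` (like the resnet-50 card above) also report a `total_train_batch_size`; for these apparently single-device runs, the relationship is simply the product of the per-step batch size and the number of accumulated steps per optimizer update:

```python
# Effective optimizer batch size under gradient accumulation
# (single device assumed, matching the resnet-50 card above).
train_batch_size = 32
gradient_accumulation_steps = 16
total_train_batch_size = train_batch_size * gradient_accumulation_steps

print(total_train_batch_size)  # 512
```

Note that the `Step` column in the training-results tables counts optimizer updates, not forward passes, which is why accumulating runs show far fewer steps per epoch.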
aprlkhrnss/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2368
- Accuracy: 0.5312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.2726 | 0.575 |
| No log | 2.0 | 10 | 1.3480 | 0.5062 |
| No log | 3.0 | 15 | 1.2696 | 0.5375 |
| No log | 4.0 | 20 | 1.2715 | 0.5312 |
| No log | 5.0 | 25 | 1.2360 | 0.5687 |
| No log | 6.0 | 30 | 1.2728 | 0.5125 |
| No log | 7.0 | 35 | 1.2374 | 0.525 |
| No log | 8.0 | 40 | 1.2484 | 0.5437 |
| No log | 9.0 | 45 | 1.2336 | 0.5563 |
| No log | 10.0 | 50 | 1.2128 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
dima806/food_type_image_detection_new
|
See https://www.kaggle.com/code/dima806/food-type-detection-vit for more details.
|
[
"omelette",
"apple_pie",
"sandwich",
"kulfi",
"chicken_curry",
"fries",
"jalebi",
"taquito",
"crispy chicken",
"baked potato",
"kaathi_rolls",
"masala_dosa",
"paani_puri",
"fried_rice",
"chole_bhature",
"chai",
"taco",
"samosa",
"dhokla",
"chapati",
"sushi",
"pakode",
"butter_naan",
"momos",
"idli",
"pav_bhaji",
"cheesecake",
"donut",
"burger",
"pizza",
"dal_makhani",
"hot dog",
"ice_cream",
"kadai_paneer"
] |
raffel-22/emotion_classification_2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3274
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9337 | 0.3563 |
| No log | 2.0 | 40 | 1.7116 | 0.3375 |
| No log | 3.0 | 60 | 1.5755 | 0.4562 |
| No log | 4.0 | 80 | 1.4939 | 0.45 |
| No log | 5.0 | 100 | 1.4377 | 0.5062 |
| No log | 6.0 | 120 | 1.4363 | 0.4562 |
| No log | 7.0 | 140 | 1.3615 | 0.5125 |
| No log | 8.0 | 160 | 1.3021 | 0.5375 |
| No log | 9.0 | 180 | 1.3307 | 0.525 |
| No log | 10.0 | 200 | 1.3085 | 0.4938 |
| No log | 11.0 | 220 | 1.2798 | 0.5813 |
| No log | 12.0 | 240 | 1.2707 | 0.525 |
| No log | 13.0 | 260 | 1.2339 | 0.55 |
| No log | 14.0 | 280 | 1.3053 | 0.5437 |
| No log | 15.0 | 300 | 1.3038 | 0.4938 |
| No log | 16.0 | 320 | 1.3088 | 0.5375 |
| No log | 17.0 | 340 | 1.3336 | 0.5312 |
| No log | 18.0 | 360 | 1.3053 | 0.5 |
| No log | 19.0 | 380 | 1.2206 | 0.5687 |
| No log | 20.0 | 400 | 1.2598 | 0.5312 |
| No log | 21.0 | 420 | 1.3332 | 0.5125 |
| No log | 22.0 | 440 | 1.3388 | 0.5312 |
| No log | 23.0 | 460 | 1.3129 | 0.5563 |
| No log | 24.0 | 480 | 1.3632 | 0.5062 |
| 0.9153 | 25.0 | 500 | 1.4166 | 0.4688 |
| 0.9153 | 26.0 | 520 | 1.4094 | 0.5 |
| 0.9153 | 27.0 | 540 | 1.4294 | 0.475 |
| 0.9153 | 28.0 | 560 | 1.4937 | 0.475 |
| 0.9153 | 29.0 | 580 | 1.3897 | 0.4938 |
| 0.9153 | 30.0 | 600 | 1.4565 | 0.475 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
jeffsabarman/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1918
- Accuracy: 0.6062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.6651 | 0.3187 |
| No log | 2.0 | 40 | 1.3900 | 0.475 |
| No log | 3.0 | 60 | 1.2950 | 0.4875 |
| No log | 4.0 | 80 | 1.2170 | 0.5813 |
| No log | 5.0 | 100 | 1.1709 | 0.5687 |
| No log | 6.0 | 120 | 1.2711 | 0.525 |
| No log | 7.0 | 140 | 1.1324 | 0.575 |
| No log | 8.0 | 160 | 1.2349 | 0.5437 |
| No log | 9.0 | 180 | 1.3844 | 0.5312 |
| No log | 10.0 | 200 | 1.2460 | 0.55 |
| No log | 11.0 | 220 | 1.2182 | 0.6125 |
| No log | 12.0 | 240 | 1.3365 | 0.5563 |
| No log | 13.0 | 260 | 1.2137 | 0.6125 |
| No log | 14.0 | 280 | 1.3335 | 0.575 |
| No log | 15.0 | 300 | 1.1078 | 0.625 |
| No log | 16.0 | 320 | 1.2962 | 0.6 |
| No log | 17.0 | 340 | 1.2558 | 0.6125 |
| No log | 18.0 | 360 | 1.3949 | 0.55 |
| No log | 19.0 | 380 | 1.3807 | 0.5687 |
| No log | 20.0 | 400 | 1.2734 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
ridwansukri/emotion_classification_v1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification_v1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1905
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 2.0278 | 0.2437 |
| No log | 2.0 | 20 | 1.8875 | 0.3875 |
| No log | 3.0 | 30 | 1.6890 | 0.4313 |
| No log | 4.0 | 40 | 1.5484 | 0.5 |
| No log | 5.0 | 50 | 1.4799 | 0.5125 |
| No log | 6.0 | 60 | 1.4148 | 0.5375 |
| No log | 7.0 | 70 | 1.3529 | 0.5375 |
| No log | 8.0 | 80 | 1.3120 | 0.5312 |
| No log | 9.0 | 90 | 1.2790 | 0.5813 |
| No log | 10.0 | 100 | 1.2498 | 0.575 |
| No log | 11.0 | 110 | 1.2610 | 0.525 |
| No log | 12.0 | 120 | 1.1896 | 0.5938 |
| No log | 13.0 | 130 | 1.2251 | 0.5312 |
| No log | 14.0 | 140 | 1.2019 | 0.575 |
| No log | 15.0 | 150 | 1.1797 | 0.5563 |
| No log | 16.0 | 160 | 1.2484 | 0.5437 |
| No log | 17.0 | 170 | 1.1766 | 0.5875 |
| No log | 18.0 | 180 | 1.2401 | 0.4938 |
| No log | 19.0 | 190 | 1.1977 | 0.5312 |
| No log | 20.0 | 200 | 1.1839 | 0.5875 |
| No log | 21.0 | 210 | 1.2028 | 0.5687 |
| No log | 22.0 | 220 | 1.2048 | 0.5625 |
| No log | 23.0 | 230 | 1.2637 | 0.5375 |
| No log | 24.0 | 240 | 1.2371 | 0.5375 |
| No log | 25.0 | 250 | 1.2777 | 0.5687 |
| No log | 26.0 | 260 | 1.2544 | 0.525 |
| No log | 27.0 | 270 | 1.2104 | 0.5625 |
| No log | 28.0 | 280 | 1.1372 | 0.5938 |
| No log | 29.0 | 290 | 1.2405 | 0.575 |
| No log | 30.0 | 300 | 1.1624 | 0.6062 |
| No log | 31.0 | 310 | 1.2376 | 0.5875 |
| No log | 32.0 | 320 | 1.1794 | 0.5875 |
| No log | 33.0 | 330 | 1.2156 | 0.5563 |
| No log | 34.0 | 340 | 1.1725 | 0.55 |
| No log | 35.0 | 350 | 1.2394 | 0.55 |
| No log | 36.0 | 360 | 1.1886 | 0.5938 |
| No log | 37.0 | 370 | 1.1760 | 0.6188 |
| No log | 38.0 | 380 | 1.2757 | 0.525 |
| No log | 39.0 | 390 | 1.1703 | 0.6062 |
| No log | 40.0 | 400 | 1.2734 | 0.575 |
| No log | 41.0 | 410 | 1.2265 | 0.5563 |
| No log | 42.0 | 420 | 1.2651 | 0.5687 |
| No log | 43.0 | 430 | 1.2419 | 0.5813 |
| No log | 44.0 | 440 | 1.1871 | 0.6 |
| No log | 45.0 | 450 | 1.2542 | 0.575 |
| No log | 46.0 | 460 | 1.1910 | 0.5813 |
| No log | 47.0 | 470 | 1.1990 | 0.6 |
| No log | 48.0 | 480 | 1.2097 | 0.5813 |
| No log | 49.0 | 490 | 1.2226 | 0.5875 |
| 0.699 | 50.0 | 500 | 1.2793 | 0.5375 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
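The eight-class label list above maps to integer class ids in the order shown. A minimal sketch of the `id2label`/`label2id` mappings such a fine-tuned ViT classification head would carry in its config (assuming the conventional enumeration order, which is not stated explicitly in the card):

```python
# Build the id<->label mappings from the label list, in listed order.
labels = ["anger", "contempt", "disgust", "fear",
          "happy", "neutral", "sad", "surprise"]

id2label = {i: name for i, name in enumerate(labels)}   # 0 -> "anger", ...
label2id = {name: i for i, name in id2label.items()}    # "anger" -> 0, ...

print(id2label[4])        # happy
print(label2id["surprise"])  # 7
```

These are the same dictionaries that would be passed as `id2label`/`label2id` when instantiating the model, so that predicted class indices decode to readable emotion names.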
Kx15/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5662
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.4518 | 0.5687 |
| No log | 2.0 | 40 | 1.5669 | 0.5437 |
| No log | 3.0 | 60 | 1.6466 | 0.5125 |
| No log | 4.0 | 80 | 1.6751 | 0.5125 |
| No log | 5.0 | 100 | 1.6191 | 0.55 |
| No log | 6.0 | 120 | 1.6814 | 0.5437 |
| No log | 7.0 | 140 | 1.7283 | 0.5687 |
| No log | 8.0 | 160 | 1.5768 | 0.575 |
| No log | 9.0 | 180 | 1.7247 | 0.525 |
| No log | 10.0 | 200 | 1.6371 | 0.5563 |
| No log | 11.0 | 220 | 1.7257 | 0.5312 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Atar01/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0934
- Accuracy: 0.1375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 2.0989 | 0.1 |
| No log | 2.0 | 80 | 2.0933 | 0.1375 |
| No log | 3.0 | 120 | 2.0951 | 0.0938 |
| No log | 4.0 | 160 | 2.0851 | 0.0938 |
| No log | 5.0 | 200 | 2.0861 | 0.0938 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
juniorjukeko/emotion-classificationV3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-classificationV3
This model is a fine-tuned version of [/content/model/emotion-classificationV3/checkpoint-60](https://huggingface.co//content/model/emotion-classificationV3/checkpoint-60) on the FastJobs/Visual_Emotional_Analysis dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5765
- Accuracy: 0.8438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
FastJobs/Visual_Emotional_Analysis
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 143
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
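The `total_train_batch_size` above is not set directly; the Trainer reports it as the per-device batch size multiplied by the gradient accumulation steps (and the device count, one here). A minimal sketch of that arithmetic:

```python
# Effective (total) train batch size as reported by the HF Trainer:
# per-device batch size x gradient accumulation steps x number of devices.
train_batch_size = 32
gradient_accumulation_steps = 4
num_devices = 1  # single-GPU run assumed

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128, matching the value reported above
```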
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 0.6923 | 0.7937 |
| 0.5541 | 2.0 | 10 | 0.7871 | 0.8063 |
| 0.5541 | 3.0 | 15 | 0.7193 | 0.8313 |
| 0.5168 | 4.0 | 20 | 0.6446 | 0.825 |
| 0.5168 | 5.0 | 25 | 0.5653 | 0.8438 |
| 0.4627 | 6.0 | 30 | 0.7244 | 0.8063 |
| 0.4627 | 7.0 | 35 | 0.7213 | 0.7937 |
| 0.4516 | 8.0 | 40 | 0.6082 | 0.8313 |
| 0.4516 | 9.0 | 45 | 0.7545 | 0.8063 |
| 0.4339 | 10.0 | 50 | 0.5320 | 0.8562 |
| 0.4339 | 11.0 | 55 | 0.6222 | 0.8187 |
| 0.4233 | 12.0 | 60 | 0.6104 | 0.8438 |
| 0.4233 | 13.0 | 65 | 0.5913 | 0.825 |
| 0.3976 | 14.0 | 70 | 0.6852 | 0.8125 |
| 0.3976 | 15.0 | 75 | 0.6227 | 0.8125 |
| 0.3933 | 16.0 | 80 | 0.5550 | 0.825 |
| 0.3933 | 17.0 | 85 | 0.5438 | 0.8438 |
| 0.4359 | 18.0 | 90 | 0.5916 | 0.825 |
| 0.4359 | 19.0 | 95 | 0.6037 | 0.8063 |
| 0.3589 | 20.0 | 100 | 0.7102 | 0.8125 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
rafelsiregar/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3341
- Accuracy: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 1.3975 | 0.4062 |
| No log | 2.0 | 160 | 1.3917 | 0.4875 |
| No log | 3.0 | 240 | 1.2964 | 0.5 |
| No log | 4.0 | 320 | 1.2587 | 0.5312 |
| No log | 5.0 | 400 | 1.2705 | 0.5125 |
| No log | 6.0 | 480 | 1.2557 | 0.55 |
| 0.7469 | 7.0 | 560 | 1.3400 | 0.525 |
| 0.7469 | 8.0 | 640 | 1.3586 | 0.5687 |
| 0.7469 | 9.0 | 720 | 1.3317 | 0.5563 |
| 0.7469 | 10.0 | 800 | 1.2965 | 0.5687 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
jolieee/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2805
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.7942 | 0.4062 |
| No log | 2.0 | 80 | 1.5663 | 0.3563 |
| No log | 3.0 | 120 | 1.4601 | 0.4813 |
| No log | 4.0 | 160 | 1.3494 | 0.4813 |
| No log | 5.0 | 200 | 1.3107 | 0.5062 |
| No log | 6.0 | 240 | 1.3054 | 0.475 |
| No log | 7.0 | 280 | 1.2423 | 0.575 |
| No log | 8.0 | 320 | 1.3189 | 0.5188 |
| No log | 9.0 | 360 | 1.2515 | 0.5062 |
| No log | 10.0 | 400 | 1.2279 | 0.5437 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
platzi/platzi-vit-model-aaron-jimenez
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-aaron-jimenez
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0288
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1328 | 3.85 | 500 | 0.0288 | 0.9925 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
platzi/platzi-vit-model-sergio-vega
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-sergio-vega
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0121
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1367 | 3.85 | 500 | 0.0121 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
DifeiT/my_awesome_image_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_image_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4729
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.3138 | 0.5 |
| No log | 2.0 | 2 | 1.4139 | 0.0 |
| No log | 3.0 | 3 | 1.4729 | 0.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.13.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"1",
"2",
"cat",
"dog"
] |
platzi/model-Beans-alejandro-arroyo
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-Beans-alejandro-arroyo
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0078
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.148 | 3.85 | 500 | 0.0078 | 0.9925 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
DifeiT/rsna_intracranial_hemorrhage_detection
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rsna_intracranial_hemorrhage_detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4344
- Accuracy: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
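With `lr_scheduler_warmup_ratio: 0.1` and a linear scheduler, the learning rate ramps up over the first 10% of optimizer steps and then decays linearly to zero. An approximate sketch of that schedule, using the 1320 total steps reported in the results table below (the exact per-step values inside `transformers` may differ slightly):

```python
# Linear schedule with warmup: ramp up for the first warmup fraction of
# optimizer steps, then decay linearly to zero by the final step.
total_steps = 1320       # final step reported in the training results table
warmup_ratio = 0.1
warmup_steps = int(total_steps * warmup_ratio)  # 132

def linear_schedule_lr(step, base_lr=5e-5, total=total_steps, warmup=warmup_steps):
    """Approximate learning rate at a given optimizer step."""
    if step < warmup:
        return base_lr * step / warmup            # linear warmup
    return base_lr * (total - step) / (total - warmup)  # linear decay

print(warmup_steps)                 # 132
print(linear_schedule_lr(132))      # peak: 5e-05
print(linear_schedule_lr(1320))     # end of training: 0.0
```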
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6034 | 1.0 | 132 | 0.5659 | 0.8315 |
| 0.4903 | 2.0 | 265 | 0.4868 | 0.8472 |
| 0.5305 | 3.0 | 397 | 0.4742 | 0.8538 |
| 0.5424 | 4.0 | 530 | 0.4650 | 0.8552 |
| 0.4289 | 5.0 | 662 | 0.4508 | 0.8552 |
| 0.4275 | 6.0 | 795 | 0.4394 | 0.8590 |
| 0.4075 | 7.0 | 927 | 0.4767 | 0.8434 |
| 0.3649 | 8.0 | 1060 | 0.4462 | 0.8595 |
| 0.3934 | 9.0 | 1192 | 0.4323 | 0.8605 |
| 0.3436 | 9.96 | 1320 | 0.4344 | 0.8586 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"any",
"epidural",
"intraparenchymal",
"intraventricular",
"subarachnoid",
"subdural"
] |
hrtnisri2016/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5771
- Accuracy: 0.4688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9643 | 0.3438 |
| No log | 2.0 | 40 | 1.7819 | 0.4125 |
| No log | 3.0 | 60 | 1.6521 | 0.4562 |
| No log | 4.0 | 80 | 1.6034 | 0.4938 |
| No log | 5.0 | 100 | 1.5769 | 0.5062 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Alex14005/model-Dementia-classification-Alejandro-Arroyo
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-Dementia-classification-Alejandro-Arroyo
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the RiniPL/Dementia_Dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1858
- Accuracy: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"mild_demented",
"moderate_demented",
"non_demented",
"very_mild_demented"
] |
fikribasa/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9812
- Accuracy: 0.2875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0664 | 1.0 | 10 | 2.0297 | 0.2875 |
| 1.9971 | 2.0 | 20 | 1.9725 | 0.35 |
| 1.9375 | 3.0 | 30 | 1.9551 | 0.3 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
DifeiT/rsna-intracranial-hemorrhage-detection
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rsna-intracranial-hemorrhage-detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2164
- Accuracy: 0.6152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5655 | 1.0 | 238 | 1.5235 | 0.4039 |
| 1.3848 | 2.0 | 477 | 1.3622 | 0.4692 |
| 1.2812 | 3.0 | 716 | 1.2811 | 0.5150 |
| 1.2039 | 4.0 | 955 | 1.1795 | 0.5556 |
| 1.1641 | 5.0 | 1193 | 1.1627 | 0.5534 |
| 1.1961 | 6.0 | 1432 | 1.1393 | 0.5705 |
| 1.1382 | 7.0 | 1671 | 1.0921 | 0.5804 |
| 0.9653 | 8.0 | 1910 | 1.0790 | 0.5876 |
| 0.9346 | 9.0 | 2148 | 1.0727 | 0.5931 |
| 0.9083 | 10.0 | 2387 | 1.0605 | 0.5994 |
| 0.8936 | 11.0 | 2626 | 1.0147 | 0.6146 |
| 0.8504 | 12.0 | 2865 | 1.0849 | 0.5818 |
| 0.8544 | 13.0 | 3103 | 1.0349 | 0.6052 |
| 0.7884 | 14.0 | 3342 | 1.0435 | 0.6074 |
| 0.7974 | 15.0 | 3581 | 1.0082 | 0.6127 |
| 0.7921 | 16.0 | 3820 | 1.0438 | 0.6017 |
| 0.709 | 17.0 | 4058 | 1.0484 | 0.6094 |
| 0.6646 | 18.0 | 4297 | 1.0554 | 0.6221 |
| 0.6832 | 19.0 | 4536 | 1.0455 | 0.6124 |
| 0.7076 | 20.0 | 4775 | 1.0905 | 0.6 |
| 0.7442 | 21.0 | 5013 | 1.1094 | 0.6008 |
| 0.6332 | 22.0 | 5252 | 1.0777 | 0.6063 |
| 0.6417 | 23.0 | 5491 | 1.0765 | 0.6141 |
| 0.6267 | 24.0 | 5730 | 1.1057 | 0.6091 |
| 0.6082 | 25.0 | 5968 | 1.0962 | 0.6171 |
| 0.6191 | 26.0 | 6207 | 1.1178 | 0.6039 |
| 0.5654 | 27.0 | 6446 | 1.1386 | 0.5948 |
| 0.5776 | 28.0 | 6685 | 1.1121 | 0.6105 |
| 0.5531 | 29.0 | 6923 | 1.1497 | 0.6030 |
| 0.6275 | 30.0 | 7162 | 1.1796 | 0.6028 |
| 0.5373 | 31.0 | 7401 | 1.1306 | 0.6132 |
| 0.4775 | 32.0 | 7640 | 1.1523 | 0.6058 |
| 0.5469 | 33.0 | 7878 | 1.1634 | 0.6127 |
| 0.4934 | 34.0 | 8117 | 1.1853 | 0.616 |
| 0.5233 | 35.0 | 8356 | 1.2018 | 0.6055 |
| 0.4896 | 36.0 | 8595 | 1.1585 | 0.6108 |
| 0.5122 | 37.0 | 8833 | 1.1874 | 0.6146 |
| 0.4726 | 38.0 | 9072 | 1.1608 | 0.6193 |
| 0.4372 | 39.0 | 9311 | 1.2403 | 0.6132 |
| 0.498 | 40.0 | 9550 | 1.1752 | 0.6201 |
| 0.4813 | 41.0 | 9788 | 1.2005 | 0.6166 |
| 0.4762 | 42.0 | 10027 | 1.2285 | 0.6022 |
| 0.4852 | 43.0 | 10266 | 1.2192 | 0.6119 |
| 0.4332 | 44.0 | 10505 | 1.2391 | 0.6218 |
| 0.3998 | 45.0 | 10743 | 1.1779 | 0.6196 |
| 0.4467 | 46.0 | 10982 | 1.2048 | 0.6284 |
| 0.4332 | 47.0 | 11221 | 1.2302 | 0.6188 |
| 0.4529 | 48.0 | 11460 | 1.2220 | 0.6188 |
| 0.4281 | 49.0 | 11698 | 1.2013 | 0.624 |
| 0.4199 | 49.84 | 11900 | 1.2164 | 0.6152 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"epidural",
"intraparenchymal",
"intraventricular",
"normal",
"subarachnoid",
"subdural"
] |
ahyar002/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2653
- Accuracy: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 52 | 0.2598 | 0.9565 |
| No log | 2.0 | 104 | 0.1608 | 0.9517 |
| No log | 3.0 | 156 | 0.1650 | 0.9565 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
nailashfrni/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1728
- Accuracy: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 52 | 0.2885 | 0.9179 |
| No log | 2.0 | 104 | 0.1829 | 0.9469 |
| No log | 3.0 | 156 | 0.1789 | 0.9565 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
nailashfrni/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4178
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.3316 | 0.4562 |
| No log | 2.0 | 80 | 1.3601 | 0.5 |
| No log | 3.0 | 120 | 1.2794 | 0.5563 |
| No log | 4.0 | 160 | 1.3851 | 0.5 |
| No log | 5.0 | 200 | 1.4786 | 0.4625 |
| No log | 6.0 | 240 | 1.4805 | 0.4875 |
| No log | 7.0 | 280 | 1.4581 | 0.4813 |
| No log | 8.0 | 320 | 1.4258 | 0.525 |
| No log | 9.0 | 360 | 1.5452 | 0.5 |
| No log | 10.0 | 400 | 1.3624 | 0.575 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
faldeus0092/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7736
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6551 | 0.99 | 62 | 2.5197 | 0.838 |
| 1.8088 | 2.0 | 125 | 1.7662 | 0.893 |
| 1.5857 | 2.98 | 186 | 1.6207 | 0.885 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
yahyapp/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5080
- Accuracy: 0.45
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.5040 | 0.4313 |
| No log | 2.0 | 40 | 1.4292 | 0.475 |
| No log | 3.0 | 60 | 1.4068 | 0.4562 |
| No log | 4.0 | 80 | 1.3400 | 0.4688 |
| No log | 5.0 | 100 | 1.4205 | 0.4375 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
fullstuck/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5284
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4223 | 0.525 |
| No log | 2.0 | 80 | 1.5923 | 0.4938 |
| No log | 3.0 | 120 | 1.4860 | 0.5563 |
| No log | 4.0 | 160 | 1.4983 | 0.5625 |
| No log | 5.0 | 200 | 1.5151 | 0.5938 |
| No log | 6.0 | 240 | 1.6818 | 0.5062 |
| No log | 7.0 | 280 | 1.6757 | 0.5125 |
| No log | 8.0 | 320 | 1.4647 | 0.5875 |
| No log | 9.0 | 360 | 1.4922 | 0.5875 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
sparasdya/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1552
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.6906 | 0.3375 |
| No log | 2.0 | 80 | 1.4310 | 0.4062 |
| No log | 3.0 | 120 | 1.3517 | 0.4875 |
| No log | 4.0 | 160 | 1.2080 | 0.5437 |
| No log | 5.0 | 200 | 1.1920 | 0.5437 |
| No log | 6.0 | 240 | 1.1123 | 0.575 |
| No log | 7.0 | 280 | 1.1533 | 0.575 |
| No log | 8.0 | 320 | 1.0971 | 0.5813 |
| No log | 9.0 | 360 | 1.1635 | 0.5687 |
| No log | 10.0 | 400 | 1.1344 | 0.5875 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
B0yc4kra/emotion_finetuned_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_finetuned_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3507
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.6393 | 0.4875 |
| No log | 2.0 | 40 | 1.5461 | 0.4875 |
| No log | 3.0 | 60 | 1.4809 | 0.4938 |
| No log | 4.0 | 80 | 1.4289 | 0.4813 |
| No log | 5.0 | 100 | 1.3878 | 0.4875 |
| No log | 6.0 | 120 | 1.3792 | 0.4813 |
| No log | 7.0 | 140 | 1.3507 | 0.5 |
| No log | 8.0 | 160 | 1.3376 | 0.4938 |
| No log | 9.0 | 180 | 1.3379 | 0.4875 |
| No log | 10.0 | 200 | 1.3305 | 0.5 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
probeadd/rea_transfer_learning_project
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rea_transfer_learning_project
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6430
- Accuracy: 0.375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8914 | 0.325 |
| No log | 2.0 | 80 | 1.7089 | 0.375 |
| No log | 3.0 | 120 | 1.6569 | 0.3937 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
stevanojs/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4477
- Accuracy: 0.5062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.9208 | 0.2687 |
| No log | 2.0 | 80 | 1.6469 | 0.3688 |
| 1.7432 | 3.0 | 120 | 1.5591 | 0.45 |
| 1.7432 | 4.0 | 160 | 1.4880 | 0.4313 |
| 0.9778 | 5.0 | 200 | 1.4477 | 0.5062 |
| 0.9778 | 6.0 | 240 | 1.4999 | 0.45 |
| 0.9778 | 7.0 | 280 | 1.4733 | 0.475 |
| 0.442 | 8.0 | 320 | 1.4793 | 0.4625 |
| 0.442 | 9.0 | 360 | 1.5115 | 0.4625 |
| 0.2429 | 10.0 | 400 | 1.5220 | 0.4625 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
ahyar002/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2445
- Accuracy: 0.5312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 1.9385 | 0.325 |
| No log | 2.0 | 20 | 1.7153 | 0.4188 |
| No log | 3.0 | 30 | 1.5905 | 0.3937 |
| No log | 4.0 | 40 | 1.4706 | 0.4625 |
| No log | 5.0 | 50 | 1.4078 | 0.5062 |
| No log | 6.0 | 60 | 1.3739 | 0.4813 |
| No log | 7.0 | 70 | 1.3108 | 0.5125 |
| No log | 8.0 | 80 | 1.2874 | 0.5312 |
| No log | 9.0 | 90 | 1.2810 | 0.5312 |
| No log | 10.0 | 100 | 1.2754 | 0.5437 |
| No log | 11.0 | 110 | 1.2380 | 0.5563 |
| No log | 12.0 | 120 | 1.1721 | 0.6125 |
| No log | 13.0 | 130 | 1.2242 | 0.5875 |
| No log | 14.0 | 140 | 1.2530 | 0.525 |
| No log | 15.0 | 150 | 1.2610 | 0.575 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"calling",
"clapping",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music"
] |
amtsal/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3283
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4437 | 0.4813 |
| No log | 2.0 | 80 | 1.3919 | 0.4813 |
| No log | 3.0 | 120 | 1.3595 | 0.5125 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
rayhanozzy/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3383
- Accuracy: 0.5625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 1.6519 | 0.3312 |
| No log | 2.0 | 160 | 1.4509 | 0.4125 |
| No log | 3.0 | 240 | 1.3641 | 0.5062 |
| No log | 4.0 | 320 | 1.2676 | 0.5875 |
| No log | 5.0 | 400 | 1.2718 | 0.5188 |
| No log | 6.0 | 480 | 1.2250 | 0.5125 |
| 1.2828 | 7.0 | 560 | 1.1933 | 0.55 |
| 1.2828 | 8.0 | 640 | 1.1538 | 0.575 |
| 1.2828 | 9.0 | 720 | 1.2479 | 0.55 |
| 1.2828 | 10.0 | 800 | 1.2487 | 0.575 |
| 1.2828 | 11.0 | 880 | 1.2418 | 0.5938 |
| 1.2828 | 12.0 | 960 | 1.1514 | 0.6062 |
| 0.5147 | 13.0 | 1040 | 1.2563 | 0.5563 |
| 0.5147 | 14.0 | 1120 | 1.2933 | 0.5813 |
| 0.5147 | 15.0 | 1200 | 1.2857 | 0.5813 |
| 0.5147 | 16.0 | 1280 | 1.3044 | 0.575 |
| 0.5147 | 17.0 | 1360 | 1.4134 | 0.5687 |
| 0.5147 | 18.0 | 1440 | 1.3277 | 0.5875 |
| 0.2675 | 19.0 | 1520 | 1.2963 | 0.575 |
| 0.2675 | 20.0 | 1600 | 1.2049 | 0.6125 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
faldeus0092/project_4_transfer_learning
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# project_4_transfer_learning
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1429
- Accuracy: 0.6438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
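The card combines `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1`. A minimal sketch of that schedule (linear warmup to the base rate, then linear decay to zero — the shape `transformers`' linear schedule implements; the 300-step total here is an assumption for illustration):

```python
def linear_schedule_lr(step, total_steps, base_lr=5e-05, warmup_ratio=0.1):
    """Linear warmup to base_lr over warmup_ratio of training, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 300  # assumed total optimizer steps, for illustration only
print(linear_schedule_lr(0, total))    # 0.0  (start of warmup)
print(linear_schedule_lr(30, total))   # 5e-05 (peak, end of warmup)
print(linear_schedule_lr(300, total))  # 0.0  (fully decayed)
```

With `warmup_ratio: 0.1`, the first 10% of optimizer steps ramp the rate up, which stabilizes early fine-tuning of the pretrained backbone.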
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0754 | 1.0 | 10 | 2.0725 | 0.125 |
| 2.0459 | 2.0 | 20 | 2.0286 | 0.2625 |
| 1.968 | 3.0 | 30 | 1.9506 | 0.3 |
| 1.8311 | 4.0 | 40 | 1.8060 | 0.4188 |
| 1.6911 | 5.0 | 50 | 1.6814 | 0.4313 |
| 1.5677 | 6.0 | 60 | 1.5851 | 0.4313 |
| 1.4801 | 7.0 | 70 | 1.5169 | 0.4813 |
| 1.4033 | 8.0 | 80 | 1.4614 | 0.4813 |
| 1.3435 | 9.0 | 90 | 1.4358 | 0.475 |
| 1.3054 | 10.0 | 100 | 1.4292 | 0.525 |
| 1.2532 | 11.0 | 110 | 1.3942 | 0.5188 |
| 1.2178 | 12.0 | 120 | 1.3684 | 0.5312 |
| 1.1857 | 13.0 | 130 | 1.3599 | 0.5062 |
| 1.1558 | 14.0 | 140 | 1.2992 | 0.5312 |
| 1.1118 | 15.0 | 150 | 1.3217 | 0.5375 |
| 1.0967 | 16.0 | 160 | 1.3177 | 0.525 |
| 1.0671 | 17.0 | 170 | 1.3420 | 0.5312 |
| 1.0635 | 18.0 | 180 | 1.3319 | 0.5062 |
| 1.044 | 19.0 | 190 | 1.2977 | 0.5813 |
| 1.037 | 20.0 | 200 | 1.3127 | 0.5125 |
| 1.0743 | 21.0 | 210 | 1.2062 | 0.6062 |
| 1.0454 | 22.0 | 220 | 1.1564 | 0.65 |
| 1.0457 | 23.0 | 230 | 1.1484 | 0.6312 |
| 1.0246 | 24.0 | 240 | 1.1470 | 0.6312 |
| 0.9859 | 25.0 | 250 | 1.1200 | 0.6438 |
| 0.9885 | 26.0 | 260 | 1.1331 | 0.6375 |
| 0.9823 | 27.0 | 270 | 1.1069 | 0.6562 |
| 0.9412 | 28.0 | 280 | 1.1163 | 0.6375 |
| 0.9172 | 29.0 | 290 | 1.1192 | 0.6375 |
| 0.9334 | 30.0 | 300 | 1.1573 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Alimuddin/amazon_fish_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [facebook/convnext-large-224-22k-1k](https://huggingface.co/facebook/convnext-large-224-22k-1k) on the amazonian_fish_classifier_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2562
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 17
- eval_batch_size: 17
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 145 | 0.6864 | 0.8420 |
| No log | 2.0 | 290 | 0.5780 | 0.8306 |
| No log | 3.0 | 435 | 0.4466 | 0.8860 |
| 0.7812 | 4.0 | 580 | 0.3810 | 0.8958 |
| 0.7812 | 5.0 | 725 | 0.4124 | 0.8860 |
| 0.7812 | 6.0 | 870 | 0.3617 | 0.9007 |
| 0.3315 | 7.0 | 1015 | 0.3397 | 0.8990 |
| 0.3315 | 8.0 | 1160 | 0.3746 | 0.9055 |
| 0.3315 | 9.0 | 1305 | 0.3379 | 0.9023 |
| 0.3315 | 10.0 | 1450 | 0.3825 | 0.8958 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"ancistrus",
"apistogramma",
"corydoras",
"creagrutus",
"curimata",
"doras",
"erythrinus",
"gasteropelecus",
"gymnotus",
"hemigrammus",
"hyphessobrycon",
"knodus",
"astyanax",
"moenkhausia",
"otocinclus",
"oxyropsis",
"phenacogaster",
"pimelodella",
"prochilodus",
"pygocentrus",
"pyrrhulina",
"rineloricaria",
"sorubim",
"bario",
"tatia",
"tetragonopterus",
"tyttocharax",
"bryconops",
"bujurquina",
"bunocephalus",
"characidium",
"charax",
"copella"
] |
RickyIG/emotion_face_image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_face_image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2110
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
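The card ships no usage snippet, so here is a minimal sketch of the post-processing step only: turning raw classifier logits into one of this card's eight emotion labels via softmax. The label order follows the card's label list; the logit values are made-up illustrative numbers, not real model output.

```python
import math

# Label order follows this card's id2label mapping.
LABELS = ["anger", "contempt", "disgust", "fear",
          "happy", "neutral", "sad", "surprise"]

def postprocess(logits):
    """Numerically stable softmax over raw logits; returns (best_label, probability)."""
    m = max(logits)                                # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Illustrative logits only -- not output from this model.
label, prob = postprocess([0.2, -1.1, 0.0, -0.5, 2.3, 1.1, -0.3, 0.4])
print(label, round(prob, 2))  # happy 0.54
```

The same post-processing applies to any of the emotion cards above, since they share the eight-class label set.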
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0717 | 1.0 | 10 | 2.0593 | 0.2062 |
| 2.005 | 2.0 | 20 | 1.9999 | 0.2625 |
| 1.9169 | 3.0 | 30 | 1.8931 | 0.35 |
| 1.7635 | 4.0 | 40 | 1.7616 | 0.4062 |
| 1.6614 | 5.0 | 50 | 1.6452 | 0.4562 |
| 1.6182 | 6.0 | 60 | 1.5661 | 0.4125 |
| 1.5434 | 7.0 | 70 | 1.5183 | 0.4125 |
| 1.46 | 8.0 | 80 | 1.4781 | 0.4875 |
| 1.4564 | 9.0 | 90 | 1.3939 | 0.5125 |
| 1.2966 | 10.0 | 100 | 1.3800 | 0.4562 |
| 1.3732 | 11.0 | 110 | 1.3557 | 0.475 |
| 1.2907 | 12.0 | 120 | 1.3473 | 0.5 |
| 1.2875 | 13.0 | 130 | 1.3416 | 0.5312 |
| 1.2743 | 14.0 | 140 | 1.2964 | 0.4875 |
| 1.1249 | 15.0 | 150 | 1.2385 | 0.525 |
| 1.0963 | 16.0 | 160 | 1.2775 | 0.5062 |
| 1.0261 | 17.0 | 170 | 1.2751 | 0.5125 |
| 0.9298 | 18.0 | 180 | 1.2318 | 0.525 |
| 1.0668 | 19.0 | 190 | 1.2520 | 0.5437 |
| 0.9933 | 20.0 | 200 | 1.2512 | 0.525 |
| 1.1069 | 21.0 | 210 | 1.3016 | 0.5 |
| 1.0279 | 22.0 | 220 | 1.3279 | 0.475 |
| 0.967 | 23.0 | 230 | 1.2481 | 0.5 |
| 0.8115 | 24.0 | 240 | 1.1791 | 0.5563 |
| 0.7912 | 25.0 | 250 | 1.2336 | 0.55 |
| 0.9294 | 26.0 | 260 | 1.1759 | 0.5813 |
| 0.8936 | 27.0 | 270 | 1.1685 | 0.6 |
| 0.7706 | 28.0 | 280 | 1.2403 | 0.5312 |
| 0.7694 | 29.0 | 290 | 1.2479 | 0.5687 |
| 0.7265 | 30.0 | 300 | 1.2000 | 0.5625 |
| 0.6781 | 31.0 | 310 | 1.1856 | 0.55 |
| 0.6676 | 32.0 | 320 | 1.2661 | 0.5437 |
| 0.7254 | 33.0 | 330 | 1.1986 | 0.5437 |
| 0.7396 | 34.0 | 340 | 1.1497 | 0.575 |
| 0.5532 | 35.0 | 350 | 1.2796 | 0.5062 |
| 0.622 | 36.0 | 360 | 1.2749 | 0.5125 |
| 0.6958 | 37.0 | 370 | 1.2034 | 0.5687 |
| 0.6102 | 38.0 | 380 | 1.2576 | 0.5188 |
| 0.6161 | 39.0 | 390 | 1.2635 | 0.5062 |
| 0.6927 | 40.0 | 400 | 1.1535 | 0.5437 |
| 0.549 | 41.0 | 410 | 1.1405 | 0.6 |
| 0.6668 | 42.0 | 420 | 1.2683 | 0.5312 |
| 0.5144 | 43.0 | 430 | 1.2249 | 0.6 |
| 0.6703 | 44.0 | 440 | 1.2297 | 0.5687 |
| 0.6383 | 45.0 | 450 | 1.1507 | 0.6062 |
| 0.5211 | 46.0 | 460 | 1.2914 | 0.4813 |
| 0.4743 | 47.0 | 470 | 1.2782 | 0.5125 |
| 0.553 | 48.0 | 480 | 1.2256 | 0.5375 |
| 0.6407 | 49.0 | 490 | 1.2149 | 0.5687 |
| 0.4195 | 50.0 | 500 | 1.2024 | 0.5625 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
abelkrw/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1554
- Accuracy: 0.5938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2477 | 1.0 | 10 | 1.3618 | 0.5625 |
| 1.2002 | 2.0 | 20 | 1.3367 | 0.5625 |
| 1.111 | 3.0 | 30 | 1.3178 | 0.5312 |
| 1.0286 | 4.0 | 40 | 1.2215 | 0.5625 |
| 0.9376 | 5.0 | 50 | 1.2117 | 0.5437 |
| 0.8948 | 6.0 | 60 | 1.2304 | 0.5625 |
| 0.8234 | 7.0 | 70 | 1.1634 | 0.5563 |
| 0.8069 | 8.0 | 80 | 1.2422 | 0.5563 |
| 0.7146 | 9.0 | 90 | 1.2053 | 0.5563 |
| 0.709 | 10.0 | 100 | 1.1887 | 0.575 |
| 0.6404 | 11.0 | 110 | 1.2208 | 0.5563 |
| 0.6301 | 12.0 | 120 | 1.2319 | 0.5687 |
| 0.6107 | 13.0 | 130 | 1.1684 | 0.6 |
| 0.5825 | 14.0 | 140 | 1.1837 | 0.5813 |
| 0.5454 | 15.0 | 150 | 1.1818 | 0.5687 |
| 0.5517 | 16.0 | 160 | 1.1974 | 0.55 |
| 0.4989 | 17.0 | 170 | 1.1304 | 0.6 |
| 0.4875 | 18.0 | 180 | 1.2277 | 0.5375 |
| 0.4881 | 19.0 | 190 | 1.1363 | 0.5875 |
| 0.4951 | 20.0 | 200 | 1.1540 | 0.6062 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
RickyIG/emotion_face_image_classification_v2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_face_image_classification_v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5157
- Accuracy: 0.4813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 150
- num_epochs: 50
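This card's `gradient_accumulation_steps: 4` with a per-device batch of 64 yields the stated `total_train_batch_size: 256`. The reason accumulation is equivalent to a larger batch is that averaging per-microbatch gradients reproduces the full-batch gradient; a minimal sketch on a toy scalar least-squares model (all numbers invented for illustration):

```python
# Toy model y = w * x with mean-squared-error loss.

def grad(w, xs, ys):
    """d/dw of the MSE over one (micro)batch."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]
w = 0.5

full = grad(w, xs, ys)  # gradient over the whole batch of 4

# Two micro-batches of 2, gradients averaged before the optimizer update --
# this is what 4 accumulation steps x batch 64 does at batch-256 scale.
micro = (grad(w, xs[:2], ys[:2]) + grad(w, xs[2:], ys[2:])) / 2
print(abs(full - micro) < 1e-12)  # True
```

The equivalence is exact when micro-batches are equal-sized, which is why accumulation is a memory-for-time trade rather than an approximation here.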
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 2 | 2.0924 | 0.15 |
| No log | 2.0 | 5 | 2.1024 | 0.0938 |
| No log | 2.8 | 7 | 2.0935 | 0.1375 |
| No log | 4.0 | 10 | 2.0893 | 0.15 |
| No log | 4.8 | 12 | 2.0900 | 0.15 |
| No log | 6.0 | 15 | 2.0987 | 0.0813 |
| No log | 6.8 | 17 | 2.0901 | 0.1 |
| No log | 8.0 | 20 | 2.0872 | 0.15 |
| No log | 8.8 | 22 | 2.0831 | 0.1375 |
| No log | 10.0 | 25 | 2.0750 | 0.1437 |
| No log | 10.8 | 27 | 2.0744 | 0.175 |
| No log | 12.0 | 30 | 2.0778 | 0.1437 |
| No log | 12.8 | 32 | 2.0729 | 0.1812 |
| No log | 14.0 | 35 | 2.0676 | 0.1625 |
| No log | 14.8 | 37 | 2.0694 | 0.1688 |
| No log | 16.0 | 40 | 2.0562 | 0.1625 |
| No log | 16.8 | 42 | 2.0498 | 0.1938 |
| No log | 18.0 | 45 | 2.0393 | 0.2188 |
| No log | 18.8 | 47 | 2.0458 | 0.2062 |
| No log | 20.0 | 50 | 2.0289 | 0.2125 |
| No log | 20.8 | 52 | 2.0226 | 0.2437 |
| No log | 22.0 | 55 | 1.9997 | 0.2625 |
| No log | 22.8 | 57 | 1.9855 | 0.3187 |
| No log | 24.0 | 60 | 1.9571 | 0.3187 |
| No log | 24.8 | 62 | 1.9473 | 0.3375 |
| No log | 26.0 | 65 | 1.9080 | 0.3187 |
| No log | 26.8 | 67 | 1.8894 | 0.35 |
| No log | 28.0 | 70 | 1.8407 | 0.375 |
| No log | 28.8 | 72 | 1.8083 | 0.3438 |
| No log | 30.0 | 75 | 1.7652 | 0.3563 |
| No log | 30.8 | 77 | 1.7281 | 0.3563 |
| No log | 32.0 | 80 | 1.6729 | 0.4062 |
| No log | 32.8 | 82 | 1.6527 | 0.3937 |
| No log | 34.0 | 85 | 1.6044 | 0.4562 |
| No log | 34.8 | 87 | 1.5899 | 0.4313 |
| No log | 36.0 | 90 | 1.5488 | 0.4313 |
| No log | 36.8 | 92 | 1.5340 | 0.45 |
| No log | 38.0 | 95 | 1.5227 | 0.4875 |
| No log | 38.8 | 97 | 1.4846 | 0.4875 |
| No log | 40.0 | 100 | 1.4579 | 0.4688 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Zekrom997/emotion_recognition_I
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_recognition_I
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2755
- Accuracy: 0.6062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8344 | 1.0 | 5 | 1.1193 | 0.5813 |
| 0.7539 | 2.0 | 10 | 1.2210 | 0.5563 |
| 0.6334 | 3.0 | 15 | 1.2974 | 0.5188 |
| 0.6163 | 4.0 | 20 | 1.1309 | 0.6 |
| 0.4633 | 5.0 | 25 | 1.2804 | 0.5312 |
| 0.4066 | 6.0 | 30 | 1.1664 | 0.6 |
| 0.335 | 7.0 | 35 | 1.1741 | 0.6062 |
| 0.3484 | 8.0 | 40 | 1.1644 | 0.6125 |
| 0.3134 | 9.0 | 45 | 1.2799 | 0.55 |
| 0.2689 | 10.0 | 50 | 1.2276 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
saskiadwiulfah1810/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2586
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8677 | 0.3688 |
| No log | 2.0 | 80 | 1.5622 | 0.3625 |
| No log | 3.0 | 120 | 1.4344 | 0.5375 |
| No log | 4.0 | 160 | 1.2909 | 0.5 |
| No log | 5.0 | 200 | 1.2146 | 0.6 |
| No log | 6.0 | 240 | 1.2457 | 0.55 |
| No log | 7.0 | 280 | 1.2429 | 0.5563 |
| No log | 8.0 | 320 | 1.2015 | 0.5375 |
| No log | 9.0 | 360 | 1.2393 | 0.5188 |
| No log | 10.0 | 400 | 1.1908 | 0.5687 |
| No log | 11.0 | 440 | 1.1580 | 0.6188 |
| No log | 12.0 | 480 | 1.1608 | 0.575 |
| 1.0532 | 13.0 | 520 | 1.2468 | 0.5687 |
| 1.0532 | 14.0 | 560 | 1.2747 | 0.5188 |
| 1.0532 | 15.0 | 600 | 1.3293 | 0.525 |
| 1.0532 | 16.0 | 640 | 1.3720 | 0.525 |
| 1.0532 | 17.0 | 680 | 1.4374 | 0.5125 |
| 1.0532 | 18.0 | 720 | 1.3092 | 0.5687 |
| 1.0532 | 19.0 | 760 | 1.4143 | 0.5437 |
| 1.0532 | 20.0 | 800 | 1.5023 | 0.4938 |
| 1.0532 | 21.0 | 840 | 1.4033 | 0.575 |
| 1.0532 | 22.0 | 880 | 1.4476 | 0.5437 |
| 1.0532 | 23.0 | 920 | 1.3089 | 0.5813 |
| 1.0532 | 24.0 | 960 | 1.3866 | 0.5813 |
| 0.3016 | 25.0 | 1000 | 1.3748 | 0.5875 |
| 0.3016 | 26.0 | 1040 | 1.5846 | 0.5312 |
| 0.3016 | 27.0 | 1080 | 1.3451 | 0.5875 |
| 0.3016 | 28.0 | 1120 | 1.5289 | 0.5062 |
| 0.3016 | 29.0 | 1160 | 1.6067 | 0.5125 |
| 0.3016 | 30.0 | 1200 | 1.5002 | 0.5375 |
| 0.3016 | 31.0 | 1240 | 1.5404 | 0.55 |
| 0.3016 | 32.0 | 1280 | 1.5542 | 0.5563 |
| 0.3016 | 33.0 | 1320 | 1.4320 | 0.6062 |
| 0.3016 | 34.0 | 1360 | 1.6465 | 0.5312 |
| 0.3016 | 35.0 | 1400 | 1.7259 | 0.5062 |
| 0.3016 | 36.0 | 1440 | 1.5655 | 0.5687 |
| 0.3016 | 37.0 | 1480 | 1.4517 | 0.6188 |
| 0.1764 | 38.0 | 1520 | 1.5884 | 0.575 |
| 0.1764 | 39.0 | 1560 | 1.4692 | 0.5813 |
| 0.1764 | 40.0 | 1600 | 1.5062 | 0.6125 |
| 0.1764 | 41.0 | 1640 | 1.5122 | 0.6 |
| 0.1764 | 42.0 | 1680 | 1.5859 | 0.6 |
| 0.1764 | 43.0 | 1720 | 1.6816 | 0.525 |
| 0.1764 | 44.0 | 1760 | 1.5594 | 0.6062 |
| 0.1764 | 45.0 | 1800 | 1.7011 | 0.5375 |
| 0.1764 | 46.0 | 1840 | 1.5676 | 0.575 |
| 0.1764 | 47.0 | 1880 | 1.5260 | 0.6 |
| 0.1764 | 48.0 | 1920 | 1.5711 | 0.575 |
| 0.1764 | 49.0 | 1960 | 1.7095 | 0.5563 |
| 0.1256 | 50.0 | 2000 | 1.7625 | 0.5188 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
reallygoodtechdeals/autotrain-lane-center-8-89748143997
|
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 89748143997
- CO2 Emissions (in grams): 0.4943
## Validation Metrics
- Loss: 0.693
- Accuracy: 0.523
- Precision: 0.417
- Recall: 0.263
- AUC: 0.371
- F1: 0.323
|
[
"slight_left",
"slight_right"
] |
dima806/fast_food_image_detection
|
Returns the fast-food type for an input image with about 98% accuracy.
See https://www.kaggle.com/code/dima806/fast-food-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
Burger 0.9466 0.9750 0.9606 400
Taco 0.9578 0.9650 0.9614 400
Baked Potato 0.9827 0.9925 0.9876 400
Hot Dog 0.9872 0.9698 0.9784 397
Pizza 0.9875 0.9875 0.9875 400
Sandwich 0.9724 0.9724 0.9724 399
Fries 0.9748 0.9675 0.9711 400
Donut 0.9827 1.0000 0.9913 397
Crispy Chicken 0.9822 0.9650 0.9735 400
Taquito 0.9923 0.9700 0.9810 400
accuracy 0.9765 3993
macro avg 0.9766 0.9765 0.9765 3993
weighted avg 0.9766 0.9765 0.9765 3993
```
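As a quick sanity check, the macro and weighted averages in the report above can be reproduced from the per-class rows. This is a minimal sketch; the f1-score/support pairs are copied verbatim from the report:

```python
# Per-class (f1-score, support) pairs copied from the classification report above.
per_class = {
    "Burger": (0.9606, 400), "Taco": (0.9614, 400), "Baked Potato": (0.9876, 400),
    "Hot Dog": (0.9784, 397), "Pizza": (0.9875, 400), "Sandwich": (0.9724, 399),
    "Fries": (0.9711, 400), "Donut": (0.9913, 397), "Crispy Chicken": (0.9735, 400),
    "Taquito": (0.9810, 400),
}

total = sum(s for _, s in per_class.values())                      # 3993 images
macro_f1 = sum(f for f, _ in per_class.values()) / len(per_class)  # unweighted mean
weighted_f1 = sum(f * s for f, s in per_class.values()) / total    # support-weighted mean

print(round(macro_f1, 4), round(weighted_f1, 4))  # 0.9765 0.9765
```

Both averages land on 0.9765, matching the `macro avg` and `weighted avg` rows of the report.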
|
[
"burger",
"taco",
"baked potato",
"hot dog",
"pizza",
"sandwich",
"fries",
"donut",
"crispy chicken",
"taquito"
] |
gilbertoesp/vit-model-beans-health
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model-beans-health
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0441
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.134 | 3.85 | 500 | 0.0441 | 0.9774 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
hansin91/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2677
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9379 | 0.97 | 13 | 1.2947 | 0.4875 |
| 0.9235 | 1.95 | 26 | 1.3397 | 0.475 |
| 0.8298 | 3.0 | 40 | 1.2971 | 0.5563 |
| 0.8883 | 3.98 | 53 | 1.3434 | 0.4875 |
| 0.8547 | 4.95 | 66 | 1.3226 | 0.475 |
| 0.8129 | 6.0 | 80 | 1.3077 | 0.5062 |
| 0.8095 | 6.97 | 93 | 1.2503 | 0.525 |
| 0.7764 | 7.95 | 106 | 1.2989 | 0.5312 |
| 0.7004 | 9.0 | 120 | 1.3383 | 0.4813 |
| 0.7013 | 9.97 | 133 | 1.3370 | 0.5125 |
| 0.6416 | 10.95 | 146 | 1.3073 | 0.5125 |
| 0.5831 | 12.0 | 160 | 1.3192 | 0.5 |
| 0.5968 | 12.97 | 173 | 1.2394 | 0.5375 |
| 0.5434 | 13.95 | 186 | 1.3389 | 0.5188 |
| 0.4605 | 15.0 | 200 | 1.2951 | 0.525 |
| 0.4674 | 15.97 | 213 | 1.2038 | 0.5687 |
| 0.3953 | 16.95 | 226 | 1.4019 | 0.5062 |
| 0.3595 | 18.0 | 240 | 1.4442 | 0.4813 |
| 0.3619 | 18.98 | 253 | 1.4213 | 0.525 |
| 0.3304 | 19.95 | 266 | 1.2937 | 0.5437 |
| 0.34 | 21.0 | 280 | 1.3024 | 0.5687 |
| 0.4215 | 21.98 | 293 | 1.4018 | 0.5375 |
| 0.3606 | 22.95 | 306 | 1.4221 | 0.5375 |
| 0.3402 | 24.0 | 320 | 1.4987 | 0.4313 |
| 0.3058 | 24.98 | 333 | 1.5120 | 0.5125 |
| 0.3047 | 25.95 | 346 | 1.5749 | 0.5 |
| 0.3616 | 27.0 | 360 | 1.4293 | 0.5188 |
| 0.3315 | 27.98 | 373 | 1.5326 | 0.5312 |
| 0.3535 | 28.95 | 386 | 1.5095 | 0.5188 |
| 0.3056 | 29.25 | 390 | 1.5366 | 0.5 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
axelit64/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3340
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5156 | 0.45 |
| No log | 2.0 | 80 | 1.4200 | 0.4562 |
| No log | 3.0 | 120 | 1.3790 | 0.5 |
| No log | 4.0 | 160 | 1.2859 | 0.525 |
| No log | 5.0 | 200 | 1.2592 | 0.5125 |
| No log | 6.0 | 240 | 1.3145 | 0.55 |
| No log | 7.0 | 280 | 1.3267 | 0.4813 |
| No log | 8.0 | 320 | 1.3288 | 0.5 |
| No log | 9.0 | 360 | 1.3073 | 0.5 |
| No log | 10.0 | 400 | 1.3066 | 0.5188 |
| No log | 11.0 | 440 | 1.2691 | 0.5563 |
| No log | 12.0 | 480 | 1.2809 | 0.5437 |
| 0.876 | 13.0 | 520 | 1.2963 | 0.5625 |
| 0.876 | 14.0 | 560 | 1.2965 | 0.5312 |
| 0.876 | 15.0 | 600 | 1.3542 | 0.5188 |
| 0.876 | 16.0 | 640 | 1.3489 | 0.5125 |
| 0.876 | 17.0 | 680 | 1.3146 | 0.5687 |
| 0.876 | 18.0 | 720 | 1.2442 | 0.575 |
| 0.876 | 19.0 | 760 | 1.3497 | 0.575 |
| 0.876 | 20.0 | 800 | 1.3316 | 0.5437 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
ahmadtrg/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6734
- Accuracy: 0.35
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.9397 | 0.3125 |
| No log | 2.0 | 80 | 1.7367 | 0.325 |
| No log | 3.0 | 120 | 1.6626 | 0.3812 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
FarizFirdaus/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4916
- Accuracy: 0.4688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 2.0695 | 0.1812 |
| No log | 2.0 | 40 | 2.0566 | 0.2062 |
| No log | 3.0 | 60 | 2.0300 | 0.2625 |
| No log | 4.0 | 80 | 1.9731 | 0.3125 |
| No log | 5.0 | 100 | 1.8858 | 0.3375 |
| No log | 6.0 | 120 | 1.7904 | 0.3438 |
| No log | 7.0 | 140 | 1.7051 | 0.3875 |
| No log | 8.0 | 160 | 1.6312 | 0.4 |
| No log | 9.0 | 180 | 1.5429 | 0.45 |
| No log | 10.0 | 200 | 1.4916 | 0.4688 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
asyafalni/vit-emotion-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotion-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3090
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4729 | 1.0 | 10 | 1.5748 | 0.4875 |
| 1.4484 | 2.0 | 20 | 1.5526 | 0.4875 |
| 1.4053 | 3.0 | 30 | 1.5228 | 0.4562 |
| 1.3492 | 4.0 | 40 | 1.4721 | 0.5 |
| 1.2664 | 5.0 | 50 | 1.4448 | 0.5125 |
| 1.2005 | 6.0 | 60 | 1.3783 | 0.5062 |
| 1.1231 | 7.0 | 70 | 1.3427 | 0.5375 |
| 1.0472 | 8.0 | 80 | 1.2859 | 0.5625 |
| 0.9852 | 9.0 | 90 | 1.2732 | 0.5813 |
| 0.8974 | 10.0 | 100 | 1.2220 | 0.575 |
| 0.8314 | 11.0 | 110 | 1.2782 | 0.5312 |
| 0.7964 | 12.0 | 120 | 1.2889 | 0.5437 |
| 0.6993 | 13.0 | 130 | 1.2989 | 0.5188 |
| 0.6915 | 14.0 | 140 | 1.3053 | 0.5375 |
| 0.608 | 15.0 | 150 | 1.2563 | 0.5875 |
| 0.5416 | 16.0 | 160 | 1.2473 | 0.5563 |
| 0.5202 | 17.0 | 170 | 1.2753 | 0.5625 |
| 0.5047 | 18.0 | 180 | 1.2791 | 0.5563 |
| 0.4779 | 19.0 | 190 | 1.3142 | 0.5437 |
| 0.4569 | 20.0 | 200 | 1.2743 | 0.5813 |
| 0.4313 | 21.0 | 210 | 1.2727 | 0.5312 |
| 0.4536 | 22.0 | 220 | 1.2514 | 0.5938 |
| 0.4166 | 23.0 | 230 | 1.3260 | 0.5312 |
| 0.3673 | 24.0 | 240 | 1.2950 | 0.55 |
| 0.3544 | 25.0 | 250 | 1.2268 | 0.5875 |
| 0.3568 | 26.0 | 260 | 1.3874 | 0.4875 |
| 0.3509 | 27.0 | 270 | 1.3735 | 0.525 |
| 0.3711 | 28.0 | 280 | 1.2886 | 0.5375 |
| 0.3555 | 29.0 | 290 | 1.3152 | 0.5375 |
| 0.3068 | 30.0 | 300 | 1.3927 | 0.5375 |
| 0.3007 | 31.0 | 310 | 1.4131 | 0.5188 |
| 0.3062 | 32.0 | 320 | 1.3256 | 0.575 |
| 0.3114 | 33.0 | 330 | 1.3714 | 0.5 |
| 0.279 | 34.0 | 340 | 1.4198 | 0.5188 |
| 0.2888 | 35.0 | 350 | 1.5321 | 0.475 |
| 0.2647 | 36.0 | 360 | 1.4342 | 0.5062 |
| 0.2574 | 37.0 | 370 | 1.4149 | 0.5563 |
| 0.2539 | 38.0 | 380 | 1.4286 | 0.5125 |
| 0.2566 | 39.0 | 390 | 1.4805 | 0.5125 |
| 0.2298 | 40.0 | 400 | 1.3820 | 0.4875 |
| 0.2236 | 41.0 | 410 | 1.3683 | 0.5437 |
| 0.2201 | 42.0 | 420 | 1.3332 | 0.5687 |
| 0.2696 | 43.0 | 430 | 1.4725 | 0.5188 |
| 0.2319 | 44.0 | 440 | 1.3926 | 0.5375 |
| 0.2269 | 45.0 | 450 | 1.3477 | 0.5563 |
| 0.2201 | 46.0 | 460 | 1.4054 | 0.5563 |
| 0.2114 | 47.0 | 470 | 1.3308 | 0.55 |
| 0.2319 | 48.0 | 480 | 1.3353 | 0.5625 |
| 0.2177 | 49.0 | 490 | 1.3019 | 0.5437 |
| 0.2042 | 50.0 | 500 | 1.3089 | 0.5875 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Alfiyani/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4124
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8082 | 0.3 |
| No log | 2.0 | 80 | 1.5637 | 0.3688 |
| No log | 3.0 | 120 | 1.4570 | 0.4562 |
| No log | 4.0 | 160 | 1.4012 | 0.525 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
irispansee/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8157
- Accuracy: 0.3375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 2.0226 | 0.2625 |
| No log | 2.0 | 40 | 1.8855 | 0.2938 |
| No log | 3.0 | 60 | 1.8171 | 0.35 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
dima806/portuguese_meals_image_detection
|
Returns the Portuguese meal type for an input image.
See https://www.kaggle.com/code/dima806/portuguese-meals-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
donuts 1.0000 0.9861 0.9930 216
hamburguer 1.0000 0.9954 0.9977 216
feijoada 0.9954 0.9908 0.9931 217
batatas_fritas 1.0000 1.0000 1.0000 216
esparguete_bolonhesa 1.0000 1.0000 1.0000 216
caldo_verde 0.9954 1.0000 0.9977 217
pasteis_bacalhau 0.9954 1.0000 0.9977 217
cozido_portuguesa 1.0000 1.0000 1.0000 216
jardineira 1.0000 1.0000 1.0000 217
arroz_cabidela 1.0000 1.0000 1.0000 216
nata 1.0000 1.0000 1.0000 216
croissant 1.0000 1.0000 1.0000 216
cachorro 0.9954 0.9954 0.9954 217
tripas_moda_porto 0.9909 1.0000 0.9954 217
aletria 0.9954 1.0000 0.9977 216
pizza 0.9954 0.9954 0.9954 217
bacalhau_natas 1.0000 1.0000 1.0000 216
ovo 0.9954 1.0000 0.9977 217
waffles 1.0000 1.0000 1.0000 216
francesinha 1.0000 1.0000 1.0000 217
bolo_chocolate 1.0000 0.9954 0.9977 216
gelado 0.9954 0.9954 0.9954 217
bacalhau_bras 1.0000 1.0000 1.0000 216
accuracy 0.9980 4978
macro avg 0.9980 0.9980 0.9980 4978
weighted avg 0.9980 0.9980 0.9980 4978
```
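The report's overall accuracy can be recovered from its recall and support columns, since recall × support gives the number of correctly classified images per class. This is a minimal sketch with the (recall, support) values copied from the report above, in the same row order:

```python
# (recall, support) per class, copied row by row from the classification report above.
per_class = [
    (0.9861, 216), (0.9954, 216), (0.9908, 217), (1.0, 216), (1.0, 216),
    (1.0, 217), (1.0, 217), (1.0, 216), (1.0, 217), (1.0, 216),
    (1.0, 216), (1.0, 216), (0.9954, 217), (1.0, 217), (1.0, 216),
    (0.9954, 217), (1.0, 216), (1.0, 217), (1.0, 216), (1.0, 217),
    (0.9954, 216), (0.9954, 217), (1.0, 216),
]

# recall * support is the per-class hit count (rounded back to whole images).
correct = sum(round(r * s) for r, s in per_class)
total = sum(s for _, s in per_class)
accuracy = correct / total

print(correct, total, round(accuracy, 4))  # 4968 4978 0.998
```

4968 of 4978 validation images are classified correctly, giving the 0.9980 accuracy shown in the report.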
|
[
"donuts",
"hamburguer",
"feijoada",
"batatas_fritas",
"esparguete_bolonhesa",
"caldo_verde",
"pasteis_bacalhau",
"cozido_portuguesa",
"jardineira",
"arroz_cabidela",
"nata",
"croissant",
"cachorro",
"tripas_moda_porto",
"aletria",
"pizza",
"bacalhau_natas",
"ovo",
"waffles",
"francesinha",
"bolo_chocolate",
"gelado",
"bacalhau_bras"
] |
gabrieloken/exercise
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exercise
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4071
- eval_accuracy: 0.55
- eval_runtime: 123.033
- eval_samples_per_second: 1.3
- eval_steps_per_second: 0.081
- epoch: 0.03
- step: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
clauculus/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6838
- Accuracy: 0.525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 1.3274 | 0.5125 |
| No log | 2.0 | 20 | 1.3119 | 0.5188 |
| No log | 3.0 | 30 | 1.3825 | 0.4625 |
| No log | 4.0 | 40 | 1.2916 | 0.5312 |
| No log | 5.0 | 50 | 1.2821 | 0.525 |
| No log | 6.0 | 60 | 1.2407 | 0.525 |
| No log | 7.0 | 70 | 1.3288 | 0.5125 |
| No log | 8.0 | 80 | 1.2818 | 0.525 |
| No log | 9.0 | 90 | 1.3710 | 0.4875 |
| No log | 10.0 | 100 | 1.3298 | 0.5312 |
| No log | 11.0 | 110 | 1.3539 | 0.475 |
| No log | 12.0 | 120 | 1.4498 | 0.4688 |
| No log | 13.0 | 130 | 1.5422 | 0.4437 |
| No log | 14.0 | 140 | 1.4870 | 0.4625 |
| No log | 15.0 | 150 | 1.4354 | 0.525 |
| No log | 16.0 | 160 | 1.4286 | 0.4938 |
| No log | 17.0 | 170 | 1.5332 | 0.4437 |
| No log | 18.0 | 180 | 1.4164 | 0.5188 |
| No log | 19.0 | 190 | 1.5024 | 0.4625 |
| No log | 20.0 | 200 | 1.4730 | 0.5125 |
| No log | 21.0 | 210 | 1.3083 | 0.55 |
| No log | 22.0 | 220 | 1.4468 | 0.525 |
| No log | 23.0 | 230 | 1.3198 | 0.525 |
| No log | 24.0 | 240 | 1.3530 | 0.5563 |
| No log | 25.0 | 250 | 1.4821 | 0.4938 |
| No log | 26.0 | 260 | 1.3475 | 0.5437 |
| No log | 27.0 | 270 | 1.5152 | 0.4875 |
| No log | 28.0 | 280 | 1.4290 | 0.55 |
| No log | 29.0 | 290 | 1.5505 | 0.5 |
| No log | 30.0 | 300 | 1.5796 | 0.5062 |
| No log | 31.0 | 310 | 1.5988 | 0.5125 |
| No log | 32.0 | 320 | 1.6272 | 0.4875 |
| No log | 33.0 | 330 | 1.4324 | 0.5437 |
| No log | 34.0 | 340 | 1.5245 | 0.5062 |
| No log | 35.0 | 350 | 1.7228 | 0.45 |
| No log | 36.0 | 360 | 1.4861 | 0.525 |
| No log | 37.0 | 370 | 1.5317 | 0.5312 |
| No log | 38.0 | 380 | 1.7776 | 0.475 |
| No log | 39.0 | 390 | 1.5386 | 0.5563 |
| No log | 40.0 | 400 | 1.7608 | 0.475 |
| No log | 41.0 | 410 | 1.5469 | 0.55 |
| No log | 42.0 | 420 | 1.6919 | 0.4625 |
| No log | 43.0 | 430 | 1.5814 | 0.525 |
| No log | 44.0 | 440 | 1.5877 | 0.5125 |
| No log | 45.0 | 450 | 1.6370 | 0.5188 |
| No log | 46.0 | 460 | 1.7375 | 0.5188 |
| No log | 47.0 | 470 | 1.7004 | 0.5 |
| No log | 48.0 | 480 | 1.6309 | 0.4938 |
| No log | 49.0 | 490 | 1.5931 | 0.5437 |
| 0.2996 | 50.0 | 500 | 1.7687 | 0.5062 |
| 0.2996 | 51.0 | 510 | 1.5321 | 0.5188 |
| 0.2996 | 52.0 | 520 | 1.8099 | 0.4688 |
| 0.2996 | 53.0 | 530 | 1.5138 | 0.575 |
| 0.2996 | 54.0 | 540 | 1.7569 | 0.4688 |
| 0.2996 | 55.0 | 550 | 1.7451 | 0.4813 |
| 0.2996 | 56.0 | 560 | 1.6871 | 0.5125 |
| 0.2996 | 57.0 | 570 | 1.6471 | 0.525 |
| 0.2996 | 58.0 | 580 | 1.6966 | 0.525 |
| 0.2996 | 59.0 | 590 | 1.7714 | 0.5 |
| 0.2996 | 60.0 | 600 | 1.4985 | 0.5938 |
| 0.2996 | 61.0 | 610 | 1.9804 | 0.4313 |
| 0.2996 | 62.0 | 620 | 1.6116 | 0.5375 |
| 0.2996 | 63.0 | 630 | 1.6056 | 0.525 |
| 0.2996 | 64.0 | 640 | 1.6115 | 0.5062 |
| 0.2996 | 65.0 | 650 | 1.9694 | 0.4625 |
| 0.2996 | 66.0 | 660 | 1.6338 | 0.5563 |
| 0.2996 | 67.0 | 670 | 1.4823 | 0.5938 |
| 0.2996 | 68.0 | 680 | 1.9253 | 0.5 |
| 0.2996 | 69.0 | 690 | 1.9015 | 0.4813 |
| 0.2996 | 70.0 | 700 | 1.5446 | 0.5687 |
| 0.2996 | 71.0 | 710 | 1.9302 | 0.4938 |
| 0.2996 | 72.0 | 720 | 1.6973 | 0.5375 |
| 0.2996 | 73.0 | 730 | 1.8271 | 0.5 |
| 0.2996 | 74.0 | 740 | 1.7559 | 0.5188 |
| 0.2996 | 75.0 | 750 | 1.8127 | 0.5312 |
| 0.2996 | 76.0 | 760 | 1.8096 | 0.4938 |
| 0.2996 | 77.0 | 770 | 1.8460 | 0.5062 |
| 0.2996 | 78.0 | 780 | 1.8853 | 0.4813 |
| 0.2996 | 79.0 | 790 | 1.7706 | 0.5125 |
| 0.2996 | 80.0 | 800 | 1.8129 | 0.5312 |
| 0.2996 | 81.0 | 810 | 1.9488 | 0.4688 |
| 0.2996 | 82.0 | 820 | 1.8817 | 0.4813 |
| 0.2996 | 83.0 | 830 | 1.6759 | 0.5563 |
| 0.2996 | 84.0 | 840 | 1.6884 | 0.5 |
| 0.2996 | 85.0 | 850 | 1.8146 | 0.4875 |
| 0.2996 | 86.0 | 860 | 1.6610 | 0.55 |
| 0.2996 | 87.0 | 870 | 1.8811 | 0.475 |
| 0.2996 | 88.0 | 880 | 1.8964 | 0.5062 |
| 0.2996 | 89.0 | 890 | 1.6848 | 0.5437 |
| 0.2996 | 90.0 | 900 | 1.8642 | 0.4938 |
| 0.2996 | 91.0 | 910 | 1.8819 | 0.5125 |
| 0.2996 | 92.0 | 920 | 1.9193 | 0.4875 |
| 0.2996 | 93.0 | 930 | 1.8110 | 0.5 |
| 0.2996 | 94.0 | 940 | 1.9086 | 0.4813 |
| 0.2996 | 95.0 | 950 | 1.8895 | 0.4625 |
| 0.2996 | 96.0 | 960 | 1.7554 | 0.5312 |
| 0.2996 | 97.0 | 970 | 1.8978 | 0.5188 |
| 0.2996 | 98.0 | 980 | 1.9791 | 0.4875 |
| 0.2996 | 99.0 | 990 | 1.7030 | 0.5687 |
| 0.0883 | 100.0 | 1000 | 1.8398 | 0.4813 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
aswincandra/rgai_emotion_recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rgai_emotion_recognition
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [FastJobs/Visual_Emotional_Analysis](https://huggingface.co/datasets/FastJobs/Visual_Emotional_Analysis) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3077
- Accuracy: 0.5813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0698 | 1.0 | 25 | 2.0921 | 0.1125 |
| 1.973 | 2.0 | 50 | 1.9930 | 0.1938 |
| 1.8091 | 3.0 | 75 | 1.8374 | 0.3937 |
| 1.5732 | 4.0 | 100 | 1.6804 | 0.475 |
| 1.4087 | 5.0 | 125 | 1.5660 | 0.5125 |
| 1.2653 | 6.0 | 150 | 1.4769 | 0.5375 |
| 1.1443 | 7.0 | 175 | 1.4084 | 0.55 |
| 0.9888 | 8.0 | 200 | 1.3633 | 0.5625 |
| 0.9029 | 9.0 | 225 | 1.3305 | 0.55 |
| 0.8372 | 10.0 | 250 | 1.3077 | 0.5813 |
| 0.7569 | 11.0 | 275 | 1.2983 | 0.5625 |
| 0.6886 | 12.0 | 300 | 1.2806 | 0.5687 |
| 0.6216 | 13.0 | 325 | 1.2718 | 0.5687 |
| 0.6385 | 14.0 | 350 | 1.2700 | 0.5563 |
| 0.6029 | 15.0 | 375 | 1.2693 | 0.5625 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
Karsinogenic69/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4512
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4449 | 0.4688 |
| No log | 2.0 | 80 | 1.4457 | 0.4938 |
| No log | 3.0 | 120 | 1.3813 | 0.5563 |
| No log | 4.0 | 160 | 1.5903 | 0.4313 |
| No log | 5.0 | 200 | 1.4512 | 0.5 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
amona-io/house_fault_classification
|
# Housing Defect Classification
A model that classifies the category of defects found after housing construction.
- Supported categories
```
concentrator_broken: broken electrical outlet
faultyopening: faulty door opening
insectscreen_check: defective insect screen
pedal_malfunction: pedal malfunction
wall_contamination: stained wallpaper
wall_crack: wall crack
wall_peeloff: damaged wallpaper
waterleak: ceiling water leak
```
|
[
"concentrator_broken",
"faultyopening",
"insectscreen_check",
"pedal_malfunction",
"wall_contamination",
"wall_crack",
"wall_peeloff",
"waterleak"
] |
michaelsinanta/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7674
- Accuracy: 0.325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.9714 | 0.2938 |
| No log | 2.0 | 80 | 1.7702 | 0.3375 |
| No log | 3.0 | 120 | 1.7064 | 0.3125 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
kamilersz/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6249
- Accuracy: 0.3688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8602 | 0.275 |
| No log | 2.0 | 80 | 1.6744 | 0.3563 |
| No log | 3.0 | 120 | 1.6277 | 0.375 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
amrul-hzz/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6320
- Accuracy: 0.4437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8561 | 0.4062 |
| No log | 2.0 | 80 | 1.6491 | 0.4313 |
| No log | 3.0 | 120 | 1.5929 | 0.4188 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
grahmatagung/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1877
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8317 | 0.2938 |
| No log | 2.0 | 80 | 1.5647 | 0.4437 |
| No log | 3.0 | 120 | 1.4497 | 0.4938 |
| No log | 4.0 | 160 | 1.3529 | 0.5188 |
| No log | 5.0 | 200 | 1.2883 | 0.5125 |
| No log | 6.0 | 240 | 1.2861 | 0.5125 |
| No log | 7.0 | 280 | 1.2655 | 0.55 |
| No log | 8.0 | 320 | 1.2890 | 0.5125 |
| No log | 9.0 | 360 | 1.1955 | 0.575 |
| No log | 10.0 | 400 | 1.2180 | 0.5687 |
| No log | 11.0 | 440 | 1.2835 | 0.55 |
| No log | 12.0 | 480 | 1.2838 | 0.5188 |
| 1.0368 | 13.0 | 520 | 1.2168 | 0.5875 |
| 1.0368 | 14.0 | 560 | 1.1713 | 0.6312 |
| 1.0368 | 15.0 | 600 | 1.2222 | 0.5875 |
| 1.0368 | 16.0 | 640 | 1.3160 | 0.5563 |
| 1.0368 | 17.0 | 680 | 1.2512 | 0.6125 |
| 1.0368 | 18.0 | 720 | 1.3575 | 0.5563 |
| 1.0368 | 19.0 | 760 | 1.3514 | 0.5375 |
| 1.0368 | 20.0 | 800 | 1.3472 | 0.5625 |
| 1.0368 | 21.0 | 840 | 1.3449 | 0.5375 |
| 1.0368 | 22.0 | 880 | 1.3783 | 0.5375 |
| 1.0368 | 23.0 | 920 | 1.3240 | 0.575 |
| 1.0368 | 24.0 | 960 | 1.3391 | 0.5687 |
| 0.2885 | 25.0 | 1000 | 1.3723 | 0.55 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
dima806/coffee_bean_roast_image_detection
|
Returns the coffee roast type given a bean image.
See https://www.kaggle.com/code/dima806/roasted-coffee-bean-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
Dark 1.0000 1.0000 1.0000 160
Light 1.0000 1.0000 1.0000 160
Green 1.0000 1.0000 1.0000 160
Medium 1.0000 1.0000 1.0000 160
accuracy 1.0000 640
macro avg 1.0000 1.0000 1.0000 640
weighted avg 1.0000 1.0000 1.0000 640
```
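Downstream, the model's four output logits map to the roast labels above via softmax and argmax. A minimal, self-contained sketch of that post-processing step (the logit values are invented for illustration; only the label order comes from this card):

```python
import math

# Label order as listed in this card's id2label mapping.
labels = ["dark", "light", "green", "medium"]

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a ViT classification head might return for one image.
logits = [2.1, -0.3, 0.4, 3.0]
probs = softmax(logits)
pred = labels[probs.index(max(probs))]
print(pred)  # → medium
```

In practice the `transformers` image-classification pipeline performs this step for you and returns the label/score pairs directly.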
|
[
"dark",
"light",
"green",
"medium"
] |
ayoubkirouane/VIT_Beans_Leaf_Disease_Classifier
|
# Fine-Tuned ViT for Beans Leaf Disease Classification
## Model Information
* **Model Name**: VIT_Beans_Leaf_Disease_Classifier
* **Base Model**: Google/ViT-base-patch16-224-in21k
* **Task**: Image Classification (Beans Leaf Disease Classification)
* **Dataset**: Beans leaf dataset with images of diseased and healthy leaves.
## Problem Statement
The goal of this model is to classify leaf images into three categories:
```
{
"angular_leaf_spot": 0,
"bean_rust": 1,
"healthy": 2,
}
```

### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1495 | 1.54 | 100 | 0.0910 | 0.9774 |
| 0.0121 | 3.08 | 200 | 0.0155 | 1.0 |
## Framework versions
+ Transformers 4.33.2
+ Pytorch 2.0.1+cu118
+ Datasets 2.14.5
+ Tokenizers 0.13.3
## Get Started With The Model:
```
! pip -q install datasets transformers[torch]
```
```python
from transformers import pipeline
from PIL import Image
# Use a pipeline as a high-level helper
pipe = pipeline("image-classification", model="ayoubkirouane/VIT_Beans_Leaf_Disease_Classifier")
# Load the image
image_path = "your_image_path"  # replace with the path to a leaf image
image = Image.open(image_path)
# Run inference using the pipeline
result = pipe(image)
# The result contains the predicted label and the corresponding score
predicted_label = result[0]['label']
confidence_score = result[0]['score']
print(f"Predicted Label: {predicted_label}")
print(f"Confidence Score: {confidence_score}")
```
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
adityagofi/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0228
- Accuracy: 0.2437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 2.0545 | 0.2062 |
| No log | 2.0 | 80 | 2.0342 | 0.2437 |
| No log | 3.0 | 120 | 2.0232 | 0.3375 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
nadyadtm/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6689
- Accuracy: 0.4062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8836 | 0.3375 |
| No log | 2.0 | 80 | 1.6596 | 0.4562 |
| No log | 3.0 | 120 | 1.6118 | 0.4125 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
raffel-22/emotion_classification_2_continue
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification_2_continue
This model is a fine-tuned version of [raffel-22/emotion_classification_2](https://huggingface.co/raffel-22/emotion_classification_2) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8978
- Accuracy: 0.725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 0.9714 | 0.7063 |
| No log | 2.0 | 40 | 0.9432 | 0.7188 |
| No log | 3.0 | 60 | 0.9633 | 0.7 |
| No log | 4.0 | 80 | 0.9322 | 0.7375 |
| No log | 5.0 | 100 | 0.8530 | 0.7063 |
| No log | 6.0 | 120 | 0.9063 | 0.7063 |
| No log | 7.0 | 140 | 0.8451 | 0.7125 |
| No log | 8.0 | 160 | 0.9672 | 0.6375 |
| No log | 9.0 | 180 | 0.9036 | 0.6937 |
| No log | 10.0 | 200 | 0.9261 | 0.6562 |
| No log | 11.0 | 220 | 0.8963 | 0.6937 |
| No log | 12.0 | 240 | 0.8852 | 0.7188 |
| No log | 13.0 | 260 | 0.8728 | 0.7063 |
| No log | 14.0 | 280 | 0.9559 | 0.6875 |
| No log | 15.0 | 300 | 0.9352 | 0.65 |
| No log | 16.0 | 320 | 0.8638 | 0.7 |
| No log | 17.0 | 340 | 0.9156 | 0.7 |
| No log | 18.0 | 360 | 1.0299 | 0.6687 |
| No log | 19.0 | 380 | 0.8983 | 0.675 |
| No log | 20.0 | 400 | 0.8858 | 0.7063 |
| No log | 21.0 | 420 | 0.9699 | 0.6937 |
| No log | 22.0 | 440 | 1.0603 | 0.625 |
| No log | 23.0 | 460 | 1.0404 | 0.6312 |
| No log | 24.0 | 480 | 0.8838 | 0.6937 |
| 0.4269 | 25.0 | 500 | 0.9280 | 0.6937 |
| 0.4269 | 26.0 | 520 | 0.9456 | 0.6937 |
| 0.4269 | 27.0 | 540 | 0.9640 | 0.6937 |
| 0.4269 | 28.0 | 560 | 0.9865 | 0.6937 |
| 0.4269 | 29.0 | 580 | 0.8900 | 0.7188 |
| 0.4269 | 30.0 | 600 | 0.9408 | 0.7063 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
kayleenp/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5552
- Accuracy: 0.4688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.7654 | 0.3125 |
| No log | 2.0 | 80 | 1.5370 | 0.4813 |
| No log | 3.0 | 120 | 1.4791 | 0.4813 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
awrysfab/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9328
- Accuracy: 0.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0637 | 1.0 | 10 | 2.0316 | 0.25 |
| 1.9805 | 2.0 | 20 | 1.9603 | 0.2687 |
| 1.9061 | 3.0 | 30 | 1.9404 | 0.3063 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"calling",
"clapping",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music"
] |
ri-xx/vit-base-patch16-224-in21k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6306
- Accuracy: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.2472 | 0.5312 |
| No log | 2.0 | 80 | 1.2878 | 0.5188 |
| No log | 3.0 | 120 | 1.3116 | 0.525 |
| No log | 4.0 | 160 | 1.2578 | 0.55 |
| No log | 5.0 | 200 | 1.2186 | 0.5563 |
| No log | 6.0 | 240 | 1.2680 | 0.5563 |
| No log | 7.0 | 280 | 1.3674 | 0.5 |
| No log | 8.0 | 320 | 1.3814 | 0.525 |
| No log | 9.0 | 360 | 1.4394 | 0.5 |
| No log | 10.0 | 400 | 1.3710 | 0.5437 |
| No log | 11.0 | 440 | 1.3721 | 0.5437 |
| No log | 12.0 | 480 | 1.4309 | 0.5563 |
| 0.4861 | 13.0 | 520 | 1.3424 | 0.575 |
| 0.4861 | 14.0 | 560 | 1.4617 | 0.525 |
| 0.4861 | 15.0 | 600 | 1.3964 | 0.5813 |
| 0.4861 | 16.0 | 640 | 1.4751 | 0.5687 |
| 0.4861 | 17.0 | 680 | 1.5296 | 0.55 |
| 0.4861 | 18.0 | 720 | 1.5887 | 0.5188 |
| 0.4861 | 19.0 | 760 | 1.5784 | 0.5312 |
| 0.4861 | 20.0 | 800 | 1.7036 | 0.5375 |
| 0.4861 | 21.0 | 840 | 1.6988 | 0.5188 |
| 0.4861 | 22.0 | 880 | 1.6070 | 0.5687 |
| 0.4861 | 23.0 | 920 | 1.7111 | 0.55 |
| 0.4861 | 24.0 | 960 | 1.6730 | 0.55 |
| 0.2042 | 25.0 | 1000 | 1.6559 | 0.55 |
| 0.2042 | 26.0 | 1040 | 1.7221 | 0.5563 |
| 0.2042 | 27.0 | 1080 | 1.6637 | 0.5813 |
| 0.2042 | 28.0 | 1120 | 1.6806 | 0.5625 |
| 0.2042 | 29.0 | 1160 | 1.5743 | 0.5938 |
| 0.2042 | 30.0 | 1200 | 1.7899 | 0.4938 |
| 0.2042 | 31.0 | 1240 | 1.7422 | 0.5312 |
| 0.2042 | 32.0 | 1280 | 1.7712 | 0.55 |
| 0.2042 | 33.0 | 1320 | 1.7480 | 0.5188 |
| 0.2042 | 34.0 | 1360 | 1.7964 | 0.5375 |
| 0.2042 | 35.0 | 1400 | 1.9687 | 0.5188 |
| 0.2042 | 36.0 | 1440 | 1.7412 | 0.5813 |
| 0.2042 | 37.0 | 1480 | 1.9312 | 0.4875 |
| 0.1342 | 38.0 | 1520 | 1.7944 | 0.525 |
| 0.1342 | 39.0 | 1560 | 1.8180 | 0.55 |
| 0.1342 | 40.0 | 1600 | 1.7720 | 0.5563 |
| 0.1342 | 41.0 | 1640 | 1.9014 | 0.5312 |
| 0.1342 | 42.0 | 1680 | 1.7519 | 0.55 |
| 0.1342 | 43.0 | 1720 | 1.9793 | 0.5 |
| 0.1342 | 44.0 | 1760 | 1.8642 | 0.55 |
| 0.1342 | 45.0 | 1800 | 1.7573 | 0.5875 |
| 0.1342 | 46.0 | 1840 | 1.8508 | 0.5125 |
| 0.1342 | 47.0 | 1880 | 1.9741 | 0.5625 |
| 0.1342 | 48.0 | 1920 | 1.9012 | 0.525 |
| 0.1342 | 49.0 | 1960 | 1.8771 | 0.5625 |
| 0.0926 | 50.0 | 2000 | 1.8728 | 0.5125 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
dima806/flowers_image_detection
|
Returns the flower type for a given image, with about 66% accuracy.
See https://www.kaggle.com/code/dima806/flowers-image-detection-vit for more details.
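The F1 score reported below is a macro average — the unweighted mean of per-class F1 — which is why it can sit below the overall accuracy when rare or hard classes score poorly. A minimal sketch of that averaging with made-up per-class numbers (three classes, not taken from the report):

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall; 0 when both are 0.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical (precision, recall) pairs for a 3-class toy example.
per_class = [(0.8, 1.0), (0.5, 0.25), (1.0, 0.5)]
f1_scores = [f1(p, r) for p, r in per_class]
macro_f1 = sum(f1_scores) / len(f1_scores)
print(round(macro_f1, 4))  # → 0.6296
```

The `weighted avg` row in the report instead weights each class's F1 by its support before averaging.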
```
Accuracy: 0.6663
F1 Score: 0.6248
Classification report:
precision recall f1-score support
Aeonium 'Emerald Ice' 0.6429 1.0000 0.7826 144
Aeonium 'Jolly Clusters' 0.8079 1.0000 0.8938 143
Aeonium 'Mardi Gras' 0.8477 0.8951 0.8707 143
Aeonium (Aeonium davidbramwellii 'Sunburst') 0.7705 0.3287 0.4608 143
Aeonium (Aeonium nobile) 0.6829 0.1944 0.3027 144
Aeonium castello-paivae 'Harry Mak' 0.8312 0.8889 0.8591 144
Aeoniums (Aeonium) 1.0000 0.0070 0.0139 143
African Blue Basil (Ocimum 'African Blue') 0.6190 0.4545 0.5242 143
Aloe 'Orange Marmalade' 0.7010 1.0000 0.8242 143
Aloes (Aloe) 0.1127 0.1111 0.1119 144
Alpine Strawberry (Fragaria vesca) 0.6859 0.7431 0.7133 144
Althea (Hibiscus syriacus Blueberry Smoothie™) 0.8136 1.0000 0.8972 144
Amazon Jungle Vine (Vitis amazonica) 0.8866 0.6014 0.7167 143
American Arborvitae (Thuja occidentalis 'Hetz Midget') 0.4828 0.0972 0.1618 144
American Arborvitae (Thuja occidentalis 'Rheingold') 0.4490 0.9231 0.6041 143
American Beautyberry (Callicarpa americana) 0.1026 0.0278 0.0437 144
American Cranberrybush Viburnum (Viburnum opulus var. americanum) 0.3889 0.1469 0.2132 143
American Wisteria (Wisteria frutescens 'Amethyst Falls') 0.9762 0.2867 0.4432 143
American Wisteria (Wisteria frutescens 'Blue Moon') 0.6716 0.3125 0.4265 144
Antelope Horns Milkweed (Asclepias asperula subsp. capricornu) 1.0000 0.3566 0.5258 143
Apple (Malus pumila 'Braeburn') 0.4815 0.7222 0.5778 144
Apple (Malus pumila 'Red Delicious') 0.7763 0.4126 0.5388 143
Apple (Malus pumila 'Red Rome') 0.9118 0.2153 0.3483 144
Apple (Malus pumila 'Sweet Bough') 0.7079 1.0000 0.8290 143
Apple (Malus pumila 'Winter Pearmain') 0.8425 0.7483 0.7926 143
Apple Mint (Mentha suaveolens) 1.0000 0.1667 0.2857 144
Apples (Malus) 0.0000 0.0000 0.0000 144
Apricot (Prunus armeniaca 'Gold Kist') 0.4444 1.0000 0.6154 144
Apricot (Prunus armeniaca 'GoldCot') 0.7891 0.7014 0.7426 144
Apricots (Prunus armeniaca) 1.0000 0.0979 0.1783 143
Arborvitae (Thuja 'Green Giant') 0.3821 0.3287 0.3534 143
Arborvitaes (Thuja) 0.7010 1.0000 0.8242 143
Arilbred Iris (Iris 'Stolon Ginger') 0.9796 1.0000 0.9897 144
Aromatic Aster (Symphyotrichum oblongifolium 'October Skies') 0.9565 0.1528 0.2635 144
Arrowwood Viburnum (Viburnum dentatum) 0.1275 0.1319 0.1297 144
Artichoke Agave (Agave parryi var. truncata) 0.4742 0.9650 0.6359 143
Artichokes (Cynara scolymus) 0.8000 0.3333 0.4706 144
Asparagus (Asparagus officinalis) 0.6237 0.4056 0.4915 143
Asparagus officinalis 'Mondeo' 0.8229 1.0000 0.9028 144
Aster (Aster x frikartii 'Monch') 0.2737 0.9301 0.4229 143
Aster (Aster x frikartii Wonder of Stafa) 0.9074 0.6806 0.7778 144
Asters (Aster) 0.8889 0.1667 0.2807 144
Astilbe 'Fanal' 0.5638 0.7413 0.6405 143
Astilbe 'Icecream' 0.8584 0.6736 0.7549 144
Astilbe 'Peach Blossom' 0.5693 0.7986 0.6647 144
Astilbe 'Rheinland' 0.5139 0.5175 0.5157 143
Astilbe 'Straussenfeder' 0.4857 0.9444 0.6415 144
Astilbes (Astilbe) 1.0000 0.0764 0.1419 144
Azalea (Rhododendron 'Blaney's Blue') 0.4881 1.0000 0.6560 143
Azalea (Rhododendron 'Irene Koster') 0.8667 1.0000 0.9286 143
Baby Burro's Tail (Sedum burrito) 0.9211 0.7343 0.8171 143
Baby's Breath (Gypsophila elegans 'Covent Garden') 0.9172 1.0000 0.9568 144
Baby's Breath (Gypsophila elegans 'Kermesina') 0.7826 1.0000 0.8780 144
Baby's Breaths (Gypsophila elegans) 0.8462 1.0000 0.9167 143
Baptisias (Baptisia) 0.5714 0.0278 0.0530 144
Basil (Ocimum basilicum 'Cardinal') 0.7769 0.7014 0.7372 144
Basil (Ocimum basilicum 'Emily') 0.4337 1.0000 0.6050 144
Basils (Ocimum) 0.0000 0.0000 0.0000 144
Beach Morning Glory (Ipomoea pes-caprae) 0.8354 0.4583 0.5919 144
Bean (Phaseolus vulgaris 'Cherokee Trail of Tears') 0.8372 1.0000 0.9114 144
Beardtongue (Penstemon Red Rocks®) 0.8495 0.5524 0.6695 143
Beautyberry (Callicarpa dichotoma 'Early Amethyst') 0.5183 0.6875 0.5910 144
Bee Balm (Monarda 'Blaustrumpf') 0.7222 0.7273 0.7247 143
Bee Balm (Monarda 'Purple Rooster') 0.9250 0.5139 0.6607 144
Bee Balm (Monarda 'Trinity Purple') 1.0000 1.0000 1.0000 143
Bee Balm (Monarda didyma 'Jacob Cline') 0.5509 0.8264 0.6611 144
Bee Balm (Monarda didyma) 0.5714 0.0280 0.0533 143
Beebalm (Monarda didyma 'Marshall's Delight') 0.6133 0.6389 0.6259 144
Beet (Beta vulgaris 'Boro') 0.6164 1.0000 0.7627 143
Beet (Beta vulgaris 'Bull's Blood') 0.9362 0.6111 0.7395 144
Beet (Beta vulgaris 'Camaro') 0.8807 0.6667 0.7589 144
Beet (Beta vulgaris 'Crosby's Egyptian') 0.9919 0.8542 0.9179 144
Beet (Beta vulgaris 'Moneta') 0.9524 0.6944 0.8032 144
Beet (Beta vulgaris 'Robin') 0.6976 1.0000 0.8218 143
Beet (Beta vulgaris 'Solo') 0.7701 1.0000 0.8701 144
Beet (Beta vulgaris 'Zeppo') 0.9051 1.0000 0.9502 143
Beet (Beta vulgaris var. vulgaris) 0.9597 1.0000 0.9795 143
Bellflower (Campanula Fancy Mee®) 0.8720 1.0000 0.9316 143
Bellflower (Campanula rapunculus subsp. rapunculus) 0.8125 1.0000 0.8966 143
Bellflower (Campanula scheuchzeri) 0.8796 0.6597 0.7540 144
Bellflower (Campanula x haylodgensis 'Blue Wonder') 0.4555 0.8951 0.6038 143
Bellflowers (Campanula) 0.9200 0.1597 0.2722 144
Betony (Stachys spathulata) 0.5714 1.0000 0.7273 144
Bigleaf Hydrangea (Hydrangea macrophylla 'Lanarth White') 0.8563 1.0000 0.9226 143
Bigleaf Hydrangea (Hydrangea macrophylla Gentian Dome) 0.5297 0.8681 0.6579 144
Bigleaf Hydrangea (Hydrangea macrophylla) 0.7500 0.0208 0.0405 144
Bitter Aloe (Aloe ferox) 0.5738 0.2431 0.3415 144
Biznaga de Isla Pichilingue (Mammillaria albicans subsp. fraileana) 0.8944 1.0000 0.9443 144
Biznaga de Otero (Mammillaria oteroi) 0.8041 0.8322 0.8179 143
Black Eyed Susan (Rudbeckia fulgida var. sullivantii 'Goldsturm') 0.6604 0.7343 0.6954 143
Black Eyed Susan (Rudbeckia hirta SmileyZ™ Happy) 0.7857 1.0000 0.8800 143
Black Eyed Susan (Rudbeckia hirta var. hirta) 0.9216 0.3264 0.4821 144
Black Eyed Susans (Rudbeckia) 0.0000 0.0000 0.0000 144
Black-eyed Susan (Rudbeckia hirta 'Autumn Colors') 0.4834 0.7133 0.5763 143
Black-eyed Susan (Rudbeckia hirta 'Cappuccino') 0.6455 0.8472 0.7327 144
Black-eyed Susan (Rudbeckia hirta 'Sputnik') 0.8421 1.0000 0.9143 144
Blackberry (Rubus 'Black Satin') 0.7795 0.6923 0.7333 143
Blanket Flower (Gaillardia 'Arizona Sun') 0.6323 0.6806 0.6555 144
Blanket Flower (Gaillardia Mesa™ Red) 0.8090 1.0000 0.8944 144
Blanket Flower (Gaillardia pulchella) 0.8182 0.1250 0.2169 144
Blanket Flower (Gaillardia) 1.0000 0.0347 0.0671 144
Blazing Star (Liatris spicata) 0.0000 0.0000 0.0000 144
Bleeding Heart (Dicentra 'Ivory Hearts') 0.7176 0.8531 0.7796 143
Bleeding Heart (Lamprocapnos spectabilis Valentine™) 0.6378 0.5625 0.5978 144
Bleeding Heart (Lamprocapnos spectabilis) 0.4502 0.7273 0.5561 143
Bleeding Hearts (Lamprocapnos) 0.7333 0.0764 0.1384 144
Blue Daisy (Felicia amelloides) 0.9510 0.6736 0.7886 144
Blue Sage (Salvia azurea) 0.7573 0.5417 0.6316 144
Blue Wild Indigo (Baptisia australis) 0.6792 0.2500 0.3655 144
Bok Choy (Brassica rapa subsp. chinensis 'Joi Choi') 1.0000 1.0000 1.0000 144
Bolivian Hummingbird Sage (Salvia oxyphora) 0.8205 0.6713 0.7385 143
Bradford Pear (Pyrus calleryana 'Bradford') 0.4516 0.1944 0.2718 144
Brassicas (Brassica) 0.8889 0.1111 0.1975 144
Bridalwreath Spiraea (Spiraea prunifolia) 0.4320 0.5069 0.4665 144
Bright Green Dudleya (Dudleya virens) 0.3704 0.0699 0.1176 143
Bulbocodium Daffodil (Narcissus 'Spoirot') 0.7871 0.8472 0.8161 144
Bumpy Convolvulaceae (Ipomoea tuberculata) 0.6333 0.6597 0.6463 144
Bush Bean (Phaseolus vulgaris 'Royal Burgundy') 0.7417 0.6224 0.6768 143
Bush Bean (Phaseolus vulgaris 'Topcrop') 0.7586 0.6154 0.6795 143
Butterfly Bush (Buddleja 'Orange Sceptre') 0.7297 0.9375 0.8207 144
Butterfly Bush (Buddleja Buzz™ Sky Blue) 1.0000 0.2308 0.3750 143
Butterfly Bush (Buddleja Humdinger™ Magenta Munchkin) 0.6698 1.0000 0.8022 144
Butterfly Bush (Buddleja davidii 'Asian Moon') 1.0000 0.3194 0.4842 144
Butterfly Bush (Buddleja davidii 'Black Knight') 0.8617 0.5625 0.6807 144
Butterfly Bush (Buddleja davidii 'Nanho Blue') 0.4169 1.0000 0.5885 143
Butterfly Bush (Buddleja davidii Buzz™ Ivory) 1.0000 0.8750 0.9333 144
Butterfly Milkweed (Asclepias tuberosa) 0.2462 0.1111 0.1531 144
Butterfly Weed (Asclepias tuberosa 'Gay Butterflies') 0.7778 0.0486 0.0915 144
Butterfly Weed (Asclepias tuberosa subsp. tuberosa) 0.4715 0.8611 0.6093 144
Butterhead Lettuce (Lactuca sativa 'Tom Thumb') 0.8563 1.0000 0.9226 143
Butternut Squash (Cucurbita moschata 'Waltham') 0.7937 0.6993 0.7435 143
Butterwort (Pinguicula 'Aphrodite') 0.9231 1.0000 0.9600 144
Butterwort (Pinguicula agnata) 0.8000 0.8333 0.8163 144
Butterwort (Pinguicula cyclosecta) 0.8938 0.7063 0.7891 143
Butterwort (Pinguicula esseriana) 1.0000 1.0000 1.0000 144
Butterwort (Pinguicula gigantea) 0.7150 1.0000 0.8338 143
Butterwort (Pinguicula moctezumae) 0.7200 1.0000 0.8372 144
Cabbage (Brassica oleracea var. capitata 'Deep Blue') 0.7044 1.0000 0.8266 143
Cabbage (Brassica oleracea var. capitata 'Red Jewel') 0.9662 1.0000 0.9828 143
Caladium bicolor 'Fiesta' 1.0000 0.3147 0.4787 143
Caladiums (Caladium) 0.8333 0.0694 0.1282 144
California Fishhook Cactus (Mammillaria dioica) 0.6241 0.5804 0.6014 143
Callery Pear (Pyrus calleryana Chanticleer®) 0.9118 0.2168 0.3503 143
Canna 'Annjee' 0.7956 1.0000 0.8862 144
Canna (Canna x generalis 'Maui Punch') 0.8623 1.0000 0.9260 144
Canna CannaSol™ Lily 0.9474 1.0000 0.9730 144
Canna Tropicanna® 0.7987 0.8881 0.8411 143
Cannas (Canna) 0.6364 0.0490 0.0909 143
Cantaloupe (Cucumis melo 'Ambrosia') 0.8552 0.8671 0.8611 143
Cantaloupe (Cucumis melo 'Orange Silverwave') 0.8229 1.0000 0.9028 144
Cantaloupes (Cucumis melo) 0.7153 0.7203 0.7178 143
Caraway Thyme (Thymus herba-barona) 0.5806 1.0000 0.7347 144
Carrot (Daucus carota subsp. sativus 'Atomic Red') 0.7416 0.4615 0.5690 143
Carrot (Daucus carota subsp. sativus 'Black Nebula') 0.5902 1.0000 0.7423 144
Carrot (Daucus carota subsp. sativus 'Burpees A#1') 0.0000 0.0000 0.0000 144
Carrot (Daucus carota subsp. sativus 'Envy') 0.7951 0.6736 0.7293 144
Carrot (Daucus carota subsp. sativus 'Purple 68') 0.9730 1.0000 0.9863 144
Carrot (Daucus carota subsp. sativus 'Sugarsnax 54') 0.9536 1.0000 0.9763 144
Carrot (Daucus carota subsp. sativus 'Ultimate Hybrid') 0.7371 1.0000 0.8487 143
Catmint (Nepeta Cat's Meow) 0.8182 0.3776 0.5167 143
Catmint (Nepeta x faassenii 'Walker's Low') 0.3636 0.0559 0.0970 143
Catmints (Nepeta) 0.3469 0.1181 0.1762 144
Catnip (Nepeta cataria) 0.2511 0.3889 0.3052 144
Cauliflower (Brassica oleracea var. botrytis 'Steady') 0.9470 1.0000 0.9728 143
Celeriac (Apium graveolens var. rapaceum 'Prague Giant') 0.8276 1.0000 0.9057 144
Celeriac (Apium graveolens var. rapaceum 'Prinz') 0.9114 1.0000 0.9536 144
Celery (Apium graveolens var. dulce 'Lathom Self Blanching Galaxy') 0.4218 1.0000 0.5934 143
Celery (Apium graveolens var. dulce 'Redventure') 0.4138 1.0000 0.5854 144
Celery (Apium graveolens var. dulce 'Tall Utah') 0.7908 0.8403 0.8148 144
Center Stripe Agave (Agave univittata 'Quadricolor') 0.5592 0.9514 0.7044 144
Chalk Rose (Dudleya candida) 0.5946 0.1528 0.2431 144
Cheddar Pink (Dianthus Dessert™ Raspberry Swirl) 0.8563 1.0000 0.9226 143
Cheddar Pink (Dianthus gratianopolitanus BluKiss™) 0.6890 1.0000 0.8159 144
Cherry Plum (Prunus cerasifera 'Thundercloud') 0.7907 0.7083 0.7473 144
Chinese Astilbe (Astilbe rubra) 0.5394 0.6181 0.5761 144
Chinese Dogwood (Cornus kousa subsp. chinensis 'Milky Way') 0.7935 0.5069 0.6186 144
Chinese Lanterns (Hibiscus schizopetalus) 0.8170 0.8741 0.8446 143
Chinese Pear (Pyrus pyrifolia 'Shinseiki') 0.8834 1.0000 0.9381 144
Chinese Rhubarb (Rheum tanguticum) 0.5820 0.7692 0.6627 143
Chinese Wisteria (Wisteria sinensis 'Prolific') 0.3157 1.0000 0.4799 143
Chinese Wisteria (Wisteria sinensis) 0.0000 0.0000 0.0000 144
Chinese rhubarb (Rheum palmatum 'Bowles Crimson') 0.6034 1.0000 0.7526 143
Chives (Allium schoenoprasum) 1.0000 0.2657 0.4199 143
Chocolate Mint (Mentha x piperita 'Chocolate') 0.4492 0.5874 0.5091 143
Cilantro (Coriandrum sativum 'Confetti') 0.6139 0.8671 0.7188 143
Cilantros (Coriandrum sativum) 0.7143 0.0347 0.0662 144
Citron (Citrus medica) 1.0000 0.1888 0.3176 143
Citrus Fruits (Citrus) 1.0000 0.1818 0.3077 143
Clustered Bellflower (Campanula glomerata) 0.9600 0.5035 0.6606 143
Coconino County Desert Beardtongue (Penstemon pseudospectabilis 'Coconino County') 0.7164 1.0000 0.8348 144
Colorado Narrowleaf Beardtongue (Penstemon linarioides) 1.0000 1.0000 1.0000 143
Columbine (Aquilegia Kirigami™ Rose & Pink) 0.6059 1.0000 0.7546 143
Columbine (Aquilegia coerulea Origami™ Blue & White) 0.6589 0.9792 0.7877 144
Columbine (Aquilegia vulgaris 'Adelaide Addison') 0.8994 1.0000 0.9470 143
Columbines (Aquilegia) 0.3636 0.0559 0.0970 143
Common Bean (Phaseolus vulgaris 'Contender') 0.7672 0.6224 0.6873 143
Common Fig (Ficus carica 'Brown Turkey') 0.8421 0.4444 0.5818 144
Common Fig (Ficus carica 'Chicago Hardy') 0.4764 0.8462 0.6096 143
Common Fig (Ficus carica 'Jolly Tiger') 0.8045 1.0000 0.8916 144
Common Fig (Ficus carica 'Violette de Bordeaux') 0.6558 0.7014 0.6779 144
Common Jujube (Ziziphus jujuba 'Lang') 0.8882 1.0000 0.9408 143
Common Jujube (Ziziphus jujuba 'Li') 1.0000 1.0000 1.0000 143
Common Lilac (Syringa vulgaris 'Arch McKean') 0.5878 1.0000 0.7404 144
Common Lilac (Syringa vulgaris 'Wonder Blue') 0.9765 0.5764 0.7249 144
Common Milkweed (Asclepias syriaca) 0.6667 0.0559 0.1032 143
Common Sage (Salvia officinalis 'Tricolor') 0.8994 1.0000 0.9470 143
Compact Queen Victoria Agave (Agave victoriae-reginae subsp. swobodae) 0.3575 1.0000 0.5267 143
Conchilinque (Mammillaria pectinifera) 0.8521 1.0000 0.9201 144
Concord Grape (Vitis labrusca 'Concord') 0.8873 0.4375 0.5860 144
Coneflower (Echinacea 'Virgin') 0.9290 1.0000 0.9632 144
Coneflower (Echinacea Big Sky™ Sundown) 0.3876 0.9161 0.5447 143
Coneflower (Echinacea Double Scoop™ Orangeberry) 1.0000 0.4097 0.5813 144
Coneflower (Echinacea Sombrero® Lemon Yellow Improved) 0.8276 1.0000 0.9057 144
Coneflower (Echinacea purpurea 'Green Twister') 0.7222 1.0000 0.8387 143
Confederate Rose (Hibiscus mutabilis) 0.5833 0.0486 0.0897 144
Coppertone Stonecrop (Sedum nussbaumerianum 'Shooting Stars') 0.6976 1.0000 0.8218 143
Coral Bells (Heuchera 'Amethyst Myst') 0.2936 1.0000 0.4540 143
Coral Bells (Heuchera 'Fire Alarm') 0.3644 0.5972 0.4526 144
Coral Bells (Heuchera 'Mahogany') 0.5184 0.9792 0.6779 144
Coral Bells (Heuchera 'Mega Caramel') 0.5760 0.8681 0.6925 144
Coral Bells (Heuchera 'Silver Scrolls') 0.9600 0.1678 0.2857 143
Coral Bells (Heuchera Dolce® Blackberry Ice) 0.8712 0.7986 0.8333 144
Coral Bells (Heuchera micrantha 'Palace Purple') 0.2128 0.0694 0.1047 144
Coral Bells (Heuchera sanguinea 'Ruby Bells') 0.6708 0.7552 0.7105 143
Coral Honeysuckle (Lonicera sempervirens 'Major Wheeler') 0.5474 0.3636 0.4370 143
Coral Honeysuckle (Lonicera sempervirens) 0.6044 0.3846 0.4701 143
Coreopsis Li'l Bang™ Darling Clementine 0.7566 1.0000 0.8614 143
Corn (Zea mays subsp. mays 'Jackpot') 0.4721 1.0000 0.6414 144
Corn (Zea mays subsp. mays) 0.0000 0.0000 0.0000 144
Cos Lettuce (Lactuca sativa 'Little Gem') 0.8276 0.8333 0.8304 144
Coulter's Mock Orange (Philadelphus coulteri) 0.8727 1.0000 0.9320 144
Crabapple (Malus 'Cardinal') 0.9728 1.0000 0.9862 143
Crabapple (Malus 'Prairie Fire') 0.6757 0.5208 0.5882 144
Cranesbill (Geranium Rozanne®) 1.0000 0.0769 0.1429 143
Cranesbill (Geranium platypetalum) 0.8363 1.0000 0.9108 143
Crape Myrtle (Lagerstroemia indica 'Hopi') 0.3025 1.0000 0.4645 144
Crape Myrtle (Lagerstroemia indica Red Rocket®) 0.5618 0.3497 0.4310 143
Creeping Phlox (Phlox subulata 'Emerald Blue') 0.4448 0.9021 0.5958 143
Creeping Phlox (Phlox subulata) 0.5000 0.0210 0.0403 143
Creeping Speedwell (Veronica teucrium) 0.8727 1.0000 0.9320 144
Crepe Myrtle (Lagerstroemia 'Ebony Flame') 0.9615 0.1748 0.2959 143
Crepe Myrtle (Lagerstroemia 'Natchez') 0.0000 0.0000 0.0000 144
Crepe Myrtle (Lagerstroemia 'Zuni') 0.8293 0.2361 0.3676 144
Crepe Myrtle (Lagerstroemia Pink Velour®) 0.4490 0.3077 0.3651 143
Crepe Myrtle (Lagerstroemia indica 'Peppermint Lace') 0.9062 0.6042 0.7250 144
Crinum 'Marisco' 0.8229 1.0000 0.9028 144
Crinum 'Milk and Wine' 0.4298 0.6853 0.5283 143
Crinum Lily (Crinum 'Stars and Stripes') 0.8444 0.7917 0.8172 144
Crinums (Crinum) 0.0000 0.0000 0.0000 144
Crocus 0.8846 0.4792 0.6216 144
Crocus 'Deep Water' 0.9000 1.0000 0.9474 144
Crocus (Crocus chrysanthus 'Ladykiller') 0.9057 1.0000 0.9505 144
Cucumber (Cucumis sativus 'Artist') 0.6085 1.0000 0.7566 143
Cucumber (Cucumis sativus 'Double Yield') 0.9022 0.5764 0.7034 144
Cucumber (Cucumis sativus 'Early Cluster') 0.8182 1.0000 0.9000 144
Cucumber (Cucumis sativus 'Lemon') 0.5833 0.9301 0.7170 143
Cucumber (Cucumis sativus 'Marketmore 76') 0.9098 0.7708 0.8346 144
Culinary Sages (Salvia officinalis) 0.4872 0.1329 0.2088 143
Curly Parsley (Petroselinum crispum var. crispum) 0.8333 0.6294 0.7171 143
Cutleaf Coneflower (Rudbeckia laciniata) 0.6446 0.5417 0.5887 144
Daffodil (Narcissus 'Lavender Bell') 0.7742 1.0000 0.8727 144
Dahlia 'AC Sadie' 0.8136 1.0000 0.8972 144
Dahlia 'Creme de Cassis' 0.7619 1.0000 0.8649 144
Dahlia 'Destiny's John Michael' 0.8727 1.0000 0.9320 144
Dahlia 'Firepot' 0.9597 1.0000 0.9795 143
Dahlia 'Formby Sunrise' 0.9351 1.0000 0.9664 144
Dahlia 'Hapet Champagne' 0.9172 1.0000 0.9568 144
Dahlia 'Kelsey Annie Joy' 0.8276 1.0000 0.9057 144
Dahlia 'Santa Claus' 0.9110 0.9236 0.9172 144
Dahlia 'Thomas A. Edison' 0.9213 0.8125 0.8635 144
Dahlias (Dahlia) 0.0000 0.0000 0.0000 143
Dalmatian Bellflower (Campanula portenschlagiana) 0.5217 1.0000 0.6857 144
Dark Opal Basil (Ocimum basilicum 'Purpurascens') 0.5939 0.6806 0.6343 144
Daylily (Hemerocallis 'Armed to the Teeth') 1.0000 1.0000 1.0000 143
Daylily (Hemerocallis 'Dearest Mahogany') 0.8421 1.0000 0.9143 144
Daylily (Hemerocallis 'Golden Hibiscus') 0.8521 1.0000 0.9201 144
Daylily (Hemerocallis 'Kathrine Carter') 1.0000 1.0000 1.0000 144
Daylily (Hemerocallis 'Put My Picture on the Cover') 0.8571 1.0000 0.9231 144
Daylily (Hemerocallis 'Quoting Hemingway') 0.5844 0.9375 0.7200 144
Daylily (Hemerocallis 'Soli Deo Gloria') 1.0000 0.2083 0.3448 144
Daylily (Hemerocallis 'Sons of Thunder') 0.6000 1.0000 0.7500 144
Daylily (Hemerocallis 'Vanishing Mist') 0.9697 0.4444 0.6095 144
Daylily (Hemerocallis 'Zollo Omega') 0.9351 1.0000 0.9664 144
Delphinium 'Blue Dawn' 0.9863 1.0000 0.9931 144
Delphinium 'Diamonds Blue' 0.7701 1.0000 0.8701 144
Delphinium 'Percival' 0.8462 1.0000 0.9167 143
Delphinium (Delphinium elatum New Millennium™ Royal Aspirations) 0.8133 0.4236 0.5571 144
Delphiniums (Delphinium) 0.0000 0.0000 0.0000 144
Dianthus 0.0000 0.0000 0.0000 144
Dianthus 'Gran's Favorite' 0.9664 1.0000 0.9829 144
Dianthus (Dianthus chinensis 'Black and White Minstrels') 0.8471 1.0000 0.9172 144
Dianthus (Dianthus longicalyx) 0.8571 1.0000 0.9231 144
Dianthus (Dianthus monspessulanus) 0.8614 1.0000 0.9256 143
Dill (Anethum graveolens 'Bouquet') 0.7452 0.8125 0.7774 144
Dill (Anethum graveolens 'Fernleaf') 0.4842 0.7483 0.5879 143
Dills (Anethum graveolens) 0.0000 0.0000 0.0000 144
Dogwoods (Cornus) 0.0000 0.0000 0.0000 143
Double Daffodil (Narcissus 'Ice King') 0.9500 0.6643 0.7819 143
Double Daffodil (Narcissus 'Tahiti') 0.8248 0.7902 0.8071 143
Double Japanese Wisteria (Wisteria floribunda Black Dragon) 0.6313 0.7847 0.6997 144
Double Reeves Spirea (Spiraea cantoniensis 'Lanceata') 0.8333 0.1042 0.1852 144
Drummond's Hedgenettle (Stachys drummondii) 0.7226 0.6875 0.7046 144
Dry Bean (Phaseolus vulgaris 'Good Mother Stallard') 1.0000 0.6923 0.8182 143
Dudleyas (Dudleya) 0.0000 0.0000 0.0000 144
Dune Aloe (Aloe thraskii) 0.9737 0.2569 0.4066 144
Dutch Hyacinth (Hyacinthus orientalis 'Delft Blue') 0.5579 0.3706 0.4454 143
Dutch Hyacinth (Hyacinthus orientalis 'Hollyhock') 0.4444 1.0000 0.6154 144
Dutch Hyacinth (Hyacinthus orientalis 'Splendid Cornelia') 0.3504 1.0000 0.5189 144
Dutchman's Breeches (Dicentra cucullaria) 0.9669 0.8125 0.8830 144
Dwarf Burford Holly (Ilex cornuta 'Burfordii Nana') 0.2689 0.8403 0.4074 144
Dwarf Caladium (Caladium humboldtii) 0.7742 1.0000 0.8727 144
Dwarf Chinese Astilbe (Astilbe rubra 'Pumila') 0.3856 0.4097 0.3973 144
Dwarf Coneflower (Echinacea Kismet® Red) 0.9931 1.0000 0.9965 143
Dwarf Mouse-ear Tickseed (Coreopsis auriculata 'Nana') 0.7551 0.2569 0.3834 144
Dwarf Peach (Prunus persica 'Bonanza') 0.0000 0.0000 0.0000 143
Eastern Dogwood (Cornus florida var. florida 'Rubra') 0.3226 0.1389 0.1942 144
Eastern Dogwood (Cornus florida var. florida Cherokee Brave™) 0.5448 0.5105 0.5271 143
Eastern Ninebark (Physocarpus opulifolius 'Center Glow') 0.6486 0.1667 0.2652 144
Eastern Ninebark (Physocarpus opulifolius 'Dart's Gold') 0.9857 0.4825 0.6479 143
Eastern Ninebark (Physocarpus opulifolius 'Luteus') 0.9536 1.0000 0.9763 144
Eastern Ninebark (Physocarpus opulifolius Coppertina™) 0.4286 0.1469 0.2188 143
Eastern Ninebark (Physocarpus opulifolius Diabolo®) 0.7857 0.0764 0.1392 144
Eastern Red Columbine (Aquilegia canadensis) 0.8732 0.4306 0.5767 144
Echeveria 'Afterglow' 0.5238 0.0769 0.1341 143
Echeveria 'Blue Wren' 0.7172 0.4931 0.5844 144
Echeveria 'Irish Mint' 0.5254 0.8611 0.6526 144
Echeveria 'Mauna Loa' 0.9338 0.8819 0.9071 144
Echeveria 'Perle von Nurnberg' 0.1604 0.8542 0.2700 144
Echeveria 'Rain Drops' 0.8333 0.3125 0.4545 144
Echeveria (Echeveria affinis 'Black Knight') 0.5204 0.3542 0.4215 144
Echeveria (Echeveria agavoides 'Love's Fire') 0.7423 1.0000 0.8521 144
Echeveria (Echeveria runyonii) 0.5000 0.0625 0.1111 144
Echeveria (Echeveria setosa var. minor) 0.9256 0.7832 0.8485 143
Eggplant (Solanum melongena 'Annina') 0.5584 0.9021 0.6898 143
Eggplant (Solanum melongena 'Black Beauty') 0.0000 0.0000 0.0000 144
Eggplant (Solanum melongena 'Bride') 1.0000 0.6597 0.7950 144
Eggplant (Solanum melongena 'Icicle') 0.9412 1.0000 0.9697 144
Eggplant (Solanum melongena 'Orient Express') 0.8372 1.0000 0.9114 144
Eggplant (Solanum melongena 'Orlando') 0.8421 1.0000 0.9143 144
Eggplant (Solanum melongena 'Southern Pink') 1.0000 0.7986 0.8880 144
Eggplant (Solanum melongena 'Violet King') 1.0000 0.6528 0.7899 144
Egyptian Walking Onion (Allium x proliferum) 0.3906 0.6319 0.4828 144
Elephant's Foot Plant (Pachypodium gracilius) 0.9730 1.0000 0.9863 144
Elephant's Trunk (Pachypodium namaquanum) 0.9524 0.1389 0.2424 144
Elfin Thyme (Thymus serpyllum 'Elfin') 0.7324 0.3611 0.4837 144
English Pea (Pisum sativum 'Alaska') 0.5257 1.0000 0.6892 143
English Pea (Pisum sativum 'Bistro') 0.6966 0.7014 0.6990 144
English Pea (Pisum sativum 'Green Arrow') 0.4876 0.4097 0.4453 144
English Pea (Pisum sativum 'Penelope') 0.6842 1.0000 0.8125 143
English Thyme (Thymus vulgaris 'Orange Balsam') 0.8783 0.7063 0.7829 143
European Cranberry Viburnum (Viburnum opulus) 0.7500 0.1042 0.1829 144
European Smoketree (Cotinus coggygria Winecraft Black®) 0.4832 1.0000 0.6516 144
European Snowball Bush (Viburnum opulus 'Roseum') 0.5600 0.6853 0.6164 143
Faassen's Catmint (Nepeta x faassenii 'Six Hills Giant') 0.2802 1.0000 0.4377 144
False Goat's Beard (Astilbe Younique Cerise™) 0.6598 0.8951 0.7596 143
Fancy-Leafed Caladium (Caladium bicolor) 0.8824 0.1049 0.1875 143
Fancy-leaf Caladium (Caladium 'Creamsickle') 0.8882 1.0000 0.9408 143
Fancy-leaf Caladium (Caladium 'Red Flash') 0.0000 0.0000 0.0000 143
Fancy-leaf Caladium (Caladium 'White Christmas') 0.7530 0.8681 0.8065 144
Fancy-leaf Caladium (Caladium Tapestry™) 0.2623 1.0000 0.4156 144
Feather Cactus (Mammillaria plumosa) 0.6985 0.9653 0.8105 144
Fern Leaf Peony (Paeonia tenuifolia) 0.9524 0.4167 0.5797 144
Figs (Ficus carica) 0.5000 0.0769 0.1333 143
Flat-Flowered Aloe (Aloe marlothii) 0.5500 0.3056 0.3929 144
Flint Corn (Zea mays subsp. mays 'Indian Ornamental') 0.6630 0.8531 0.7462 143
Flower of an Hour (Hibiscus trionum) 0.8909 0.3403 0.4925 144
Flowering Cabbage (Brassica oleracea var. viridis Pigeon™ White) 0.9008 0.8252 0.8613 143
Flowering Crabapple (Malus Golden Raindrops) 0.8780 1.0000 0.9351 144
Flowering Dogwood (Cornus Stellar Pink®) 0.5652 0.7273 0.6361 143
Flowering Dogwood (Cornus florida) 0.5000 0.0278 0.0526 144
Flowering Kale (Brassica oleracea 'Kamome White') 0.9462 0.8542 0.8978 144
Flowering Pear (Pyrus calleryana 'Cleveland Select') 0.6243 0.7500 0.6814 144
Foothill Beardtongue (Penstemon heterophyllus 'Electric Blue') 0.8321 0.7569 0.7927 144
Fox Grape (Vitis 'Valiant') 0.9703 0.6806 0.8000 144
Fox Grape (Vitis labrusca) 0.6373 0.4545 0.5306 143
Foxglove (Digitalis 'Honey Trumpet') 0.7222 1.0000 0.8387 143
Foxglove (Digitalis purpurea 'Dalmatian Peach') 0.5405 0.9722 0.6948 144
Foxglove (Digitalis purpurea) 0.5131 0.6806 0.5851 144
Foxgloves (Digitalis) 1.0000 0.0417 0.0800 144
Foxtail Agave (Agave attenuata) 0.7826 0.1250 0.2156 144
Fragaria vesca subsp. vesca 0.7448 1.0000 0.8537 143
French Lilac (Syringa vulgaris 'Michel Buchner') 1.0000 1.0000 1.0000 144
French Lilac (Syringa vulgaris 'Miss Ellen Willmott') 0.7869 1.0000 0.8807 144
French Tarragon (Artemisia dracunculus 'Sativa') 0.8899 0.6736 0.7668 144
Fuchsia Flowering Currant (Ribes speciosum) 1.0000 0.7778 0.8750 144
Gaillardia 'Punch Bowl' 0.5417 1.0000 0.7027 143
Garden Bells (Penstemon hartwegii Phoenix™ Pink) 0.5926 1.0000 0.7442 144
Garden Onion (Allium cepa 'Super Star') 0.7688 1.0000 0.8693 143
Garden Pea (Pisum sativum 'PLS 534') 1.0000 0.7083 0.8293 144
Garden Phlox (Phlox paniculata 'Blue Paradise') 0.9196 0.7203 0.8078 143
Garden Phlox (Phlox paniculata 'Mount Fuji') 0.6923 1.0000 0.8182 144
Garden Phlox (Phlox paniculata Volcano Pink White Eye) 0.8994 1.0000 0.9470 143
Garden Phlox (Phlox x arendsii 'Miss Mary') 0.6085 1.0000 0.7566 143
Garden Sage (Salvia officinalis 'Robert Grimm') 0.5161 1.0000 0.6809 144
Gardenia (Gardenia jasminoides 'August Beauty') 0.7564 0.4097 0.5315 144
Gardenia (Gardenia jasminoides 'Frostproof') 0.5916 0.7902 0.6766 143
Gardenia (Gardenia jasminoides 'Veitchii') 0.7869 1.0000 0.8807 144
Gardenia (Gardenia jasminoides 'White Gem') 0.6288 1.0000 0.7721 144
Gardenias (Gardenia) 1.0000 0.0556 0.1053 144
Garlic (Allium sativum 'Early Red Italian') 0.8611 0.4336 0.5767 143
Garlic (Allium sativum 'Georgian Crystal') 0.5314 1.0000 0.6940 144
Garlic (Allium sativum 'Russian Red') 0.7347 1.0000 0.8471 144
Garlic (Allium sativum) 0.0000 0.0000 0.0000 143
Gay Feather (Liatris spicata 'Floristan White') 0.8253 0.9514 0.8839 144
Genovese Basil (Ocimum basilicum 'Dolce Fresca') 0.8942 0.6503 0.7530 143
Gentian Speedwell (Veronica gentianoides) 0.9380 0.8403 0.8864 144
Georgia Sweet Vidalia Onion (Allium cepa 'Yellow Granex') 0.8671 0.8611 0.8641 144
Geranium (Geranium wallichianum 'Buxton's Variety') 0.5437 1.0000 0.7044 143
Geranium (Geranium wallichianum 'Crystal Lake') 0.8727 1.0000 0.9320 144
Geraniums (Geranium) 0.0000 0.0000 0.0000 144
Giant Chalk Dudleya (Dudleya brittonii) 0.1818 0.0694 0.1005 144
Gladiola (Gladiolus 'Vista') 0.8947 0.9444 0.9189 144
Gladiola (Gladiolus) 1.0000 0.0140 0.0276 143
Gladiolus 'Atom' 0.8976 0.7972 0.8444 143
Gladiolus 'Fiesta' 0.9474 1.0000 0.9730 144
Globe Artichoke (Cynara scolymus 'Green Globe') 0.6118 0.3636 0.4561 143
Globe Artichoke (Cynara scolymus 'Violet de Provence') 0.8720 1.0000 0.9316 143
Gloriosa Daisy (Rudbeckia hirta 'Prairie Sun') 0.8514 0.8811 0.8660 143
Golden Sage (Salvia officinalis 'Aurea') 0.6560 1.0000 0.7922 143
Gooseberry (Ribes uva-crispa 'Hinnonmaki Rod') 1.0000 1.0000 1.0000 144
Gooseberry (Ribes uva-crispa) 1.0000 0.7483 0.8560 143
Gourds, Squashes and Pumpkins (Cucurbita) 0.6818 0.5208 0.5906 144
Grape (Vitis vinifera 'Gamay') 0.5625 1.0000 0.7200 144
Grape (Vitis vinifera Cotton Candy®) 0.9862 1.0000 0.9931 143
Grapes (Vitis) 0.4734 0.6181 0.5361 144
Green Bean (Phaseolus vulgaris 'Trionfo Violetto') 0.6702 0.4375 0.5294 144
Greigii Tulip (Tulipa 'Fire of Love') 0.9386 0.7483 0.8327 143
Hairy Beardtongue (Penstemon hirsutus) 0.8243 0.8531 0.8385 143
Hardy Geranium (Geranium 'Phoebe Noble') 0.6875 1.0000 0.8148 143
Hardy Geranium (Geranium sanguineum 'Elke') 0.9752 0.8194 0.8906 144
Hardy Geranium (Geranium sanguineum var. striatum) 0.9459 0.7292 0.8235 144
Hardy Hibiscus (Hibiscus moscheutos 'Fireball') 0.6923 0.5664 0.6231 143
Hardy Hibiscus (Hibiscus moscheutos 'Kopper King') 0.2913 0.4167 0.3429 144
Hardy Hibiscus (Hibiscus moscheutos 'Tie Dye') 1.0000 0.3681 0.5381 144
Hardy Hibiscus (Hibiscus moscheutos Summerific™ Cherry Cheesecake) 0.9184 0.3125 0.4663 144
Hardy Hibiscus (Hibiscus moscheutos Summerific™ Starry Starry Night) 0.6667 0.0556 0.1026 144
Hardy Hibiscus Hybrid (Hibiscus 'Summer in Paradise') 0.3803 0.8112 0.5179 143
Heavenly Bamboo (Nandina domestica 'Moon Bay') 0.6250 0.8042 0.7034 143
Heavenly Bamboos (Nandina domestica) 0.4000 0.0139 0.0268 144
Hen and Chicks (Sempervivum 'Blaukraut') 0.1845 0.8611 0.3039 144
Hen and Chicks (Sempervivum 'Gold Nugget') 0.4286 0.2517 0.3172 143
Hen and Chicks (Sempervivum 'Larissa') 0.2157 0.1528 0.1789 144
Hen and Chicks (Sempervivum 'Lynn's Rose Gold') 0.3827 0.8611 0.5299 144
Hen and Chicks (Sempervivum 'Red Lion') 0.9167 0.3846 0.5419 143
Hen and Chicks (Sempervivum 'Space Dog') 0.8313 0.4792 0.6079 144
Hen and Chicks (Sempervivum calcareum) 0.3333 0.0280 0.0516 143
Hen and Chicks (Sempervivum tectorum 'Grammens') 0.4054 0.3147 0.3543 143
Hen and chicks (Sempervivum 'Dea') 0.9438 0.5874 0.7241 143
Henbit (Lamium amplexicaule) 0.6721 0.2847 0.4000 144
Hibiscus 1.0000 0.1944 0.3256 144
Hibiscus (Hibiscus moscheutos Summerific™ Cherry Choco Latte) 0.6311 0.4545 0.5285 143
Hibiscus (Hibiscus moscheutos Summerific™ Cranberry Crush) 0.9565 0.1528 0.2635 144
Hibiscus (Hibiscus moscheutos Summerific™ Summer Storm) 0.6349 0.2778 0.3865 144
Holly (Ilex 'Nellie R. Stevens') 0.0000 0.0000 0.0000 144
Holy Basil (Ocimum tenuiflorum 'Green Sacred') 0.3207 1.0000 0.4857 144
Honeysuckle (Lonicera 'Gold Flame') 0.8378 0.6458 0.7294 144
Hortulan Plum (Prunus hortulana) 0.7164 1.0000 0.8348 144
Hosta 'Blue Angel' 0.8131 0.6042 0.6932 144
Hosta 'Blue Mouse Ears' 0.6989 0.4514 0.5485 144
Hosta 'Curly Fries' 0.4099 0.8056 0.5433 144
Hosta 'Liberty' 0.7806 0.8403 0.8094 144
Hosta 'Popcorn' 0.9315 0.9510 0.9412 143
Hosta 'Tom Schmid' 0.4768 1.0000 0.6457 144
Hosta 'Whirlwind' 0.8024 0.9306 0.8617 144
Hosta 'White Feather' 0.8989 0.5594 0.6897 143
Hostas (Hosta) 0.0000 0.0000 0.0000 143
Hot Pepper (Capsicum annuum 'Petit Marseillais') 0.8079 1.0000 0.8938 143
Hot Pepper (Capsicum annuum 'Super Chili') 0.6875 1.0000 0.8148 143
Hot Pepper (Capsicum baccatum 'Brazilian Starfish') 0.9496 0.7902 0.8626 143
Hot Pepper (Capsicum sinense 'Black Naga') 0.8288 0.8462 0.8374 143
Hummingbird Sage (Salvia coccinea 'Coral Nymph') 0.8571 0.2500 0.3871 144
Hyacinth (Hyacinthus orientalis 'Blue Jacket') 0.7000 0.2937 0.4138 143
Hyacinth (Hyacinthus orientalis) 0.5000 0.0972 0.1628 144
Hyacinths (Hyacinthus) 0.2800 0.0490 0.0833 143
Hybrid Gladiola (Gladiolus 'Boone') 0.8045 1.0000 0.8916 144
Hybrid Gladiola (Gladiolus x gandavensis 'Priscilla') 0.6857 1.0000 0.8136 144
Hybrid Tickseed (Coreopsis 'Cherry Lemonade') 0.6000 1.0000 0.7500 144
Hydrangea (Hydrangea macrophylla 'Nightingale') 0.4864 1.0000 0.6545 143
Hydrangea (Hydrangea macrophylla L.A. Dreamin'™ Lindsey Ann) 0.8452 0.4965 0.6256 143
Hydrangea (Hydrangea quercifolia 'Munchkin') 0.7480 0.6434 0.6917 143
Hydrangeas (Hydrangea) 0.0000 0.0000 0.0000 144
Iceland Poppy (Papaver nudicaule 'Champagne Bubbles White') 0.9231 1.0000 0.9600 144
Iceland Poppy (Papaver nudicaule 'Meadow Pastels') 0.9597 1.0000 0.9795 143
Intersectional Peony (Paeonia 'All That Jazz') 0.6729 1.0000 0.8045 144
Italian Parsley (Petroselinum crispum 'Italian Flat Leaf') 0.4783 0.3077 0.3745 143
Itoh Peony (Paeonia 'Caroline Constabel') 1.0000 0.0350 0.0676 143
Japanese Crepe Myrtle (Lagerstroemia fauriei 'Fantasy') 0.5017 1.0000 0.6682 144
Japanese Cucumber (Cucumis sativus 'Southern Delight') 0.0000 0.0000 0.0000 144
Japanese Hardy Orange (Citrus trifoliata) 0.0000 0.0000 0.0000 144
Japanese Honeysuckle (Lonicera japonica 'Halliana') 0.9593 0.8194 0.8839 144
Japanese Morning Glory (Ipomoea nil 'Seiryu') 0.6085 1.0000 0.7566 143
Japanese Morning Glory (Ipomoea nil) 0.7097 0.1528 0.2514 144
Japanese Spirea (Spiraea japonica 'Magic Carpet') 0.6912 0.6528 0.6714 144
Japanese Spirea (Spiraea japonica 'Neon Flash') 0.6667 0.4306 0.5232 144
Japanese Wisteria (Wisteria floribunda 'Issai Perfect') 0.9536 1.0000 0.9763 144
Japanese Yellow Sage (Salvia koyamae) 0.5477 0.7622 0.6374 143
Jelly Bean (Sedum x rubrotinctum) 0.1429 0.1469 0.1448 143
Jerusalem Artichoke (Helianthus tuberosus 'Clearwater') 0.9000 1.0000 0.9474 144
Jerusalem Artichoke (Helianthus tuberosus 'Stampede') 0.9412 1.0000 0.9697 144
Jonquilla Narcissus (Narcissus 'Blushing Lady') 0.0000 0.0000 0.0000 144
Judd Viburnum (Viburnum carlesii var. bitchiuense) 0.4276 0.8403 0.5667 144
Jujube (Ziziphus jujuba 'Sherwood') 0.8571 1.0000 0.9231 144
Jujubes (Ziziphus jujuba) 0.7867 0.8252 0.8055 143
Kaibab Agave (Agave utahensis subsp. kaibabensis) 0.5556 0.3472 0.4274 144
Kale (Brassica oleracea var. viridis 'Redbor') 0.9355 0.6042 0.7342 144
Koreanspice Viburnum (Viburnum carlesii) 0.3529 0.0417 0.0745 144
Lacecap Hydrangea (Hydrangea macrophylla Endless Summer® Twist-n-Shout®) 1.0000 0.0559 0.1060 143
Lady Tulip (Tulipa clusiana) 0.8000 0.2238 0.3497 143
Lamb's Ears (Stachys) 0.7236 1.0000 0.8397 144
Lambs' Ears (Stachys byzantina) 0.5366 0.1538 0.2391 143
Large Speedwell (Veronica teucrium 'Crater Lake Blue') 0.3789 0.7500 0.5035 144
Large-cupped Daffodil (Narcissus 'Chromacolor') 0.2900 0.8681 0.4348 144
Larkspur (Delphinium 'Benary's Pacific Cameliard') 0.9108 1.0000 0.9533 143
Larkspur (Delphinium elatum 'Guardian Lavender') 0.4983 1.0000 0.6651 143
Larkspur (Delphinium elatum New Millennium™ Black Eyed Angels) 0.9780 0.6224 0.7607 143
Leek (Allium ampeloprasum 'Lancelot') 0.3989 1.0000 0.5703 144
Leek (Allium ampeloprasum 'Large American Flag') 0.0000 0.0000 0.0000 144
Leek (Allium ampeloprasum 'Zermatt') 1.0000 0.7273 0.8421 143
Leeks (Allium ampeloprasum) 0.6984 0.3077 0.4272 143
Lemoine's Mock Orange (Philadelphus 'Belle Etoile') 0.4815 0.0903 0.1520 144
Lemon (Citrus x limon) 0.4952 0.3611 0.4177 144
Lemon Bee Balm (Monarda citriodora) 0.3483 0.9167 0.5048 144
Lemon Thyme (Thymus x citriodorus) 0.7583 0.6319 0.6894 144
Lemon Tree (Citrus x limon 'Eureka') 0.5509 0.6434 0.5935 143
Lettuce (Lactuca sativa 'Parris Island') 0.7744 0.7153 0.7437 144
Lettuce (Lactuca sativa 'Red Romaine') 0.5902 0.5035 0.5434 143
Lettuce (Lactuca sativa 'Rouge d'Hiver') 0.9172 1.0000 0.9568 144
Lettuce (Lactuca sativa 'Yugoslavian Red Butterhead') 0.5950 1.0000 0.7461 144
Lettuces (Lactuca sativa) 0.1379 0.0278 0.0462 144
Lewis' Mockorange (Philadelphus lewisii) 0.3000 0.1458 0.1963 144
Lilac (Syringa First Editions® Virtual Violet™) 1.0000 0.5625 0.7200 144
Lilac (Syringa vulgaris 'Belle de Nancy') 0.4500 0.0629 0.1104 143
Lilac (Syringa vulgaris 'Sensation') 0.8812 0.6181 0.7265 144
Lilac (Syringa x hyacinthiflora 'Sweetheart') 0.4103 1.0000 0.5818 144
Lily (Lilium 'Corsage') 0.9606 0.8472 0.9004 144
Lily (Lilium 'Flavia') 0.9231 1.0000 0.9600 144
Lily (Lilium 'Fusion') 0.8000 0.8112 0.8056 143
Lily (Lilium 'Moonyeen') 0.9351 1.0000 0.9664 144
Lily (Lilium 'Ramona') 0.8090 1.0000 0.8944 144
Lily (Lilium 'Sunny Morning') 0.6745 1.0000 0.8056 143
Lily (Lilium 'Viva La Vida') 0.7784 1.0000 0.8754 144
Lily (Lilium auratum) 0.9296 0.9167 0.9231 144
Lily (Lilium pyrenaicum) 0.8448 0.3403 0.4851 144
Lily Flowering Tulip (Tulipa 'Claudia') 0.8324 1.0000 0.9085 144
Loose-leaf Lettuce (Lactuca sativa 'Salad Bowl') 0.9237 0.7622 0.8352 143
Madagascar Palm (Pachypodium geayi) 1.0000 0.1250 0.2222 144
Madagascar Palm (Pachypodium lamerei) 0.4839 0.2083 0.2913 144
Malagasy Tree Aloe (Aloe vaombe) 0.3662 0.1806 0.2419 144
Marjorams (Origanum laevigatum) 0.7487 1.0000 0.8563 143
Meadow Blazing Star (Liatris ligulistylis) 0.5922 0.8472 0.6971 144
Mealy Cup Sage (Salvia farinacea Cathedral® Shining Seas) 0.5630 1.0000 0.7204 143
Melon (Cucumis melo 'Charentais') 0.9076 0.7500 0.8213 144
Melon (Cucumis melo 'Kajari') 0.7117 0.5524 0.6220 143
Melon (Cucumis melo 'Tigger') 0.9179 0.8542 0.8849 144
Meserve Holly (Ilex 'Casanova') 0.8889 1.0000 0.9412 144
Mexican Butterwort; Mexican Ping (Pinguicula ibarrae) 0.9862 1.0000 0.9931 143
Mexican Dogwood (Cornus florida var. urbiniana) 0.8372 1.0000 0.9114 144
Mexican Plum (Prunus mexicana) 0.4742 0.3217 0.3833 143
Meyer's Lemon (Citrus x limon 'Improved Meyer') 0.5021 0.8182 0.6223 143
Milk and Wine Lily (Crinum fimbriatulum) 0.3280 1.0000 0.4940 143
Miniature Jonquilla Daffodil (Narcissus 'Pipit') 0.5281 0.3264 0.4034 144
Mints (Mentha) 0.3976 0.7014 0.5075 144
Mock Orange (Philadelphus 'Innocence') 0.2156 1.0000 0.3547 144
Mock Orange (Philadelphus 'Snow Dwarf') 0.4660 0.6713 0.5501 143
Moonflower (Ipomoea alba) 0.9559 0.4514 0.6132 144
Morning Glory (Ipomoea 'Split Second') 0.6857 1.0000 0.8136 144
Morning Glory (Ipomoea hederifolia 'Aurantia') 0.9167 1.0000 0.9565 143
Morning Glory (Ipomoea nil 'Kikyo Snowflakes') 0.6408 0.9231 0.7564 143
Morning Glory (Ipomoea purpurea 'Feringa') 0.8171 1.0000 0.8994 143
Morning Glory (Ipomoea tricolor 'Clarke's Heavenly Blue') 0.6792 1.0000 0.8090 144
Mountain Aloe (Aloe broomii) 0.6571 0.4792 0.5542 144
Nectarine (Prunus persica 'Arctic Glo') 0.6180 1.0000 0.7639 144
Nectarine (Prunus persica 'Early Rivers') 0.3538 1.0000 0.5227 144
Nepeta (Nepeta subsessilis) 0.7125 0.3986 0.5112 143
Nepeta (Nepeta x faassenii 'Select Blue') 0.4897 1.0000 0.6575 143
New England Aster (Symphyotrichum novae-angliae 'Andenken an Alma Pötschke') 0.7959 0.5417 0.6446 144
New England Aster (Symphyotrichum novae-angliae) 0.5000 0.0625 0.1111 144
Noble Rhubarb (Rheum nobile) 0.9057 1.0000 0.9505 144
Northern White Cedar (Thuja occidentalis Mr. Bowling Ball™) 0.2623 1.0000 0.4156 144
Okra (Abelmoschus esculentus 'Burmese') 0.7929 0.7762 0.7845 143
Okra (Abelmoschus esculentus 'Clemson Spineless') 0.3656 0.2361 0.2869 144
Okra (Abelmoschus esculentus 'Jambalaya') 0.8512 1.0000 0.9196 143
Okra (Abelmoschus esculentus 'Jing Orange') 0.3593 0.8392 0.5031 143
Okra (Abelmoschus esculentus 'Red Burgundy') 0.6927 0.8611 0.7678 144
Okra (Abelmoschus esculentus) 0.6875 0.1528 0.2500 144
Oleander (Nerium oleander 'Calypso') 0.4892 0.9444 0.6445 144
Oleander (Nerium oleander 'Hardy White') 0.9048 0.6597 0.7631 144
Oleander (Nerium oleander 'Red Cardinal') 0.5185 0.1944 0.2828 144
Onion (Allium cepa 'Red Hunter') 0.4696 0.8112 0.5949 143
Onion (Allium cepa 'Red River F1') 0.7044 1.0000 0.8266 143
Onion (Allium cepa 'Walla Walla Sweet') 0.7885 0.2847 0.4184 144
Onions (Allium cepa) 0.1438 0.1538 0.1486 143
Orange (Citrus reticulata 'Satsuma') 0.9474 1.0000 0.9730 144
Oreganos (Origanum vulgare) 0.0000 0.0000 0.0000 144
Oriental Radish (Raphanus sativus 'New White Spring') 0.3696 0.5944 0.4558 143
Ornamental Gourd (Cucurbita pepo 'Tennessee Dancing') 0.6825 1.0000 0.8113 144
Ornamental Oregano (Origanum laevigatum 'Herrenhausen') 0.4491 0.5208 0.4823 144
Ornamental Pepper (Capsicum annuum 'Black Pearl') 1.0000 0.5139 0.6789 144
Ornamental Pepper (Capsicum annuum 'Chilly Chili') 0.8521 1.0000 0.9201 144
Ornamental Sweet Potato (Ipomoea batatas 'Blackie') 0.5769 0.2083 0.3061 144
Ornamental Sweet Potato (Ipomoea batatas 'Margarita') 0.8276 0.3333 0.4752 144
Pachypodium (Pachypodium brevicaule) 0.6712 0.3403 0.4516 144
Pachypodium (Pachypodium sofiense) 0.8881 0.8881 0.8881 143
Pacific Coast Iris (Iris 'Big Waves') 0.9863 1.0000 0.9931 144
Pacific Coast Iris (Iris 'Caught in the Wind') 0.8780 1.0000 0.9351 144
Pacific Coast Iris (Iris 'Finger Pointing') 0.9862 1.0000 0.9931 143
Panicle Hydrangea (Hydrangea paniculata First Editions® Vanilla Strawberry™) 0.4841 0.9514 0.6417 144
Parsleys (Petroselinum crispum) 0.6783 0.5455 0.6047 143
Parsnip (Pastinaca sativa 'Harris Model') 0.9231 1.0000 0.9600 144
Parsnip (Pastinaca sativa 'Hollow Crown') 0.9533 1.0000 0.9761 143
Parsnip (Pastinaca sativa 'Javelin') 1.0000 1.0000 1.0000 143
Parsnips (Pastinaca sativa) 0.5692 1.0000 0.7254 144
Pea (Pisum sativum 'Spring Blush') 1.0000 1.0000 1.0000 143
Peach (Prunus persica 'Canadian Harmony') 0.4157 1.0000 0.5873 143
Peach (Prunus persica 'Elberta') 0.0000 0.0000 0.0000 143
Peach (Prunus persica Flamin' Fury® PF-24C) 0.5411 0.7778 0.6382 144
Peach-Leaved Bellflower (Campanula persicifolia) 0.9178 0.4685 0.6204 143
Peacock Orchid (Gladiolus murielae) 0.8393 0.3287 0.4724 143
Pear (Pyrus communis 'Early Seckel') 0.9040 0.7902 0.8433 143
Pencilled Cranesbill (Geranium versicolor) 0.9412 1.0000 0.9697 144
Penstemon Riding Hood Red 0.8544 0.6111 0.7126 144
Peonies (Paeonia) 0.1250 0.0139 0.0250 144
Peony (Paeonia 'Athena') 0.6711 0.6993 0.6849 143
Peony (Paeonia 'Pastelegance') 0.8675 1.0000 0.9290 144
Peony (Paeonia daurica subsp. coriifolia) 0.7566 1.0000 0.8614 143
Peony (Paeonia lactiflora 'Bowl of Beauty') 0.7231 0.6528 0.6861 144
Peony (Paeonia lactiflora 'Do Tell') 0.5708 0.8741 0.6906 143
Peony (Paeonia lactiflora 'Top Brass') 0.9021 0.9021 0.9021 143
Pepper (Capsicum 'Mad Hatter') 1.0000 0.7133 0.8327 143
Peppers (Capsicum) 0.9773 0.2986 0.4574 144
Persian Catmint (Nepeta racemosa 'Little Titch') 0.8750 0.5347 0.6638 144
Petunia Amore™ Queen of Hearts 0.7164 1.0000 0.8348 144
Petunia Crazytunia® Cosmic Pink 0.8125 1.0000 0.8966 143
Petunia Headliner™ Night Sky 0.9384 0.9580 0.9481 143
Petunia Midnight Gold 0.8324 1.0000 0.9085 144
Petunia Potunia® Purple Halo 0.8667 1.0000 0.9286 143
Petunia Sweetunia® Fiona Flash 0.6990 1.0000 0.8229 144
Petunias (Petunia) 0.5238 0.0764 0.1333 144
Phlox drummondii 'Sugar Stars' 0.9346 1.0000 0.9662 143
Pineberry (Fragaria x ananassa 'White Carolina') 0.8079 1.0000 0.8938 143
Pineleaf Beardtongue (Penstemon pinifolius Half Pint®) 0.4735 1.0000 0.6427 143
Pinks (Dianthus 'Little Maiden') 0.8521 1.0000 0.9201 144
Plains Coreopsis (Coreopsis tinctoria) 0.9348 0.2986 0.4526 144
Plumeria 'Queen Amber' 0.9536 1.0000 0.9763 144
Plumeria (Plumeria filifolia) 0.8300 0.5804 0.6831 143
Plumeria (Plumeria rubra 'Fireblast') 0.8944 1.0000 0.9443 144
Plumeria (Plumeria rubra 'Flaming Rock Dragon') 0.9580 0.7917 0.8669 144
Plumeria (Plumeria rubra 'J 105') 0.9408 1.0000 0.9695 143
Plumeria (Plumeria rubra 'Mary Helen Eggenberger') 1.0000 1.0000 1.0000 143
Plumeria (Plumeria rubra 'Mellow Yellow') 0.7660 1.0000 0.8675 144
Plumeria (Plumeria rubra 'Naples Sixteen') 0.7347 1.0000 0.8471 144
Plumeria (Plumeria rubra 'Sophie') 0.9730 1.0000 0.9863 144
Plumerias (Plumeria) 0.2500 0.0140 0.0265 143
Plums (Prunus umbellata) 0.7826 0.5035 0.6128 143
Popcorn (Zea mays subsp. mays 'Glass Gem') 0.7250 0.4028 0.5179 144
Poppies (Papaver) 0.8462 0.3056 0.4490 144
Poppy (Papaver 'Sugar Plum') 0.5608 1.0000 0.7186 143
Poppy (Papaver rhoeas 'Shirley Poppy') 0.6250 0.3147 0.4186 143
Possumhaw Holly (Ilex decidua) 0.4889 0.3056 0.3761 144
Potato (Solanum tuberosum 'Adirondack Blue') 0.8889 1.0000 0.9412 144
Potato (Solanum tuberosum 'Baltic Rose') 0.6990 1.0000 0.8229 144
Potato (Solanum tuberosum 'Bojar') 0.5125 1.0000 0.6776 144
Potato (Solanum tuberosum 'Kennebec') 0.7531 0.8531 0.8000 143
Potato (Solanum tuberosum 'Red Pontiac') 0.7292 0.2448 0.3665 143
Potato (Solanum tuberosum 'Vitelotte') 0.9795 1.0000 0.9896 143
Potatoes (Solanum tuberosum) 0.0000 0.0000 0.0000 144
Pumpkin (Cucurbita moschata 'Musquee de Provence') 0.5000 0.9097 0.6453 144
Pumpkin (Cucurbita pepo 'Styrian Hulless') 0.8020 0.5664 0.6639 143
Pumpkin (Cucurbita pepo 'Winter Luxury Pie') 0.9709 0.6993 0.8130 143
Purple Basil (Ocimum basilicum 'Purple Delight') 0.6886 0.7986 0.7395 144
Purple Cherry Plum (Prunus cerasifera 'Hollywood') 0.5872 0.8951 0.7091 143
Purple Coneflower (Echinacea purpurea 'Magnus') 0.0000 0.0000 0.0000 143
Purple Coneflower (Echinacea purpurea 'Rubinstern') 0.4297 0.7847 0.5553 144
Purple Coneflower (Echinacea purpurea) 0.3571 0.0694 0.1163 144
Purple Dead Nettle (Lamium purpureum) 0.5833 0.8811 0.7019 143
Purple Marjoram (Origanum laevigatum 'Hopley's') 0.7024 1.0000 0.8252 144
Purple-flowering raspberry (Rubus odoratus) 0.3298 0.8601 0.4767 143
Quiver Tree (Aloidendron dichotomum) 0.8276 0.3333 0.4752 144
Radish (Raphanus sativus 'Amethyst') 0.9000 1.0000 0.9474 144
Radish (Raphanus sativus 'Burpee Cherry Giant') 0.7024 1.0000 0.8252 144
Radish (Raphanus sativus 'Champion') 0.6636 1.0000 0.7978 144
Radish (Raphanus sativus 'Early Scarlet Globe') 0.5652 0.0909 0.1566 143
Radish (Raphanus sativus 'German Giant') 0.8045 1.0000 0.8916 144
Radishes (Raphanus sativus) 0.4324 0.1111 0.1768 144
Rainbow Carrot (Daucus carota subsp. sativus 'Rainbow') 0.4417 1.0000 0.6128 144
Rape (Brassica napus subsp. napus) 0.7742 1.0000 0.8727 144
Rapini (Brassica rapa subsp. rapa 'Early Fall') 0.3438 1.0000 0.5116 143
Raspberry (Rubus idaeus 'Joan J') 0.4689 1.0000 0.6384 143
Red Currant (Ribes rubrum 'Red Lake') 0.8038 0.8881 0.8439 143
Red Flowering Currant (Ribes sanguineum 'Brocklebankii') 0.9172 1.0000 0.9568 144
Red Table Grape (Vitis labrusca 'Vanessa') 1.0000 1.0000 1.0000 143
Red Twig Dogwood (Cornus sanguinea 'Anny's Winter Orange') 0.8314 1.0000 0.9079 143
Red Twig Dogwood (Cornus sericea) 0.4714 0.2308 0.3099 143
Red-Leaf Hibiscus (Hibiscus acetosella) 0.5200 0.0909 0.1548 143
Rhododendron 'Blue Peter' 0.8896 0.9514 0.9195 144
Rhododendron 'Inga' 0.6234 1.0000 0.7680 144
Rhododendron 'Mother of Pearl' 0.8471 1.0000 0.9172 144
Rhododendron 'Queen of England' 0.7500 1.0000 0.8571 144
Rhododendron 'Roseum Elegans' 1.0000 0.0839 0.1548 143
Rhododendrons (Rhododendron) 0.2174 0.0694 0.1053 144
Rhubarb (Rheum 'Glaskins Perpetual') 0.8741 0.8252 0.8489 143
Rhubarb (Rheum rhabarbarum 'Victoria') 0.9487 0.5175 0.6697 143
Rhubarb (Rheum rhabarbarum) 1.0000 0.2986 0.4599 144
Rhubarbs (Rheum) 0.8240 0.7203 0.7687 143
Rocky Mountain Beardtongue (Penstemon strictus) 1.0000 0.2917 0.4516 144
Rocky Mountain Columbine (Aquilegia coerulea) 0.9167 0.1538 0.2635 143
Romaine (Lactuca sativa 'Willow') 0.5902 1.0000 0.7423 144
Rose (Rosa 'Angel Face') 0.9783 0.3125 0.4737 144
Rose (Rosa 'Ebb Tide') 0.9697 0.6667 0.7901 144
Rose (Rosa 'Institut Lumiere') 0.9057 1.0000 0.9505 144
Rose (Rosa 'Lavender Crush') 0.5496 1.0000 0.7094 144
Rose (Rosa 'Sexy Rexy') 0.9333 0.1944 0.3218 144
Rose (Rosa 'The Pilgrim') 0.9060 0.9375 0.9215 144
Rose (Rosa 'Veilchenblau') 1.0000 0.4825 0.6509 143
Rose (Rosa 'Wife of Bath') 0.4511 1.0000 0.6217 143
Rose of Sharon (Hibiscus Pollypetite™) 0.9536 1.0000 0.9763 144
Rose of Sharon (Hibiscus syriacus 'Danica') 0.5690 0.9167 0.7021 144
Rose of Sharon (Hibiscus syriacus Blue Satin®) 0.8293 0.9444 0.8831 144
Rose of Sharon (Hibiscus syriacus Chateau™ de Chantilly) 0.3854 1.0000 0.5564 143
Roses of Sharon (Hibiscus syriacus) 0.0000 0.0000 0.0000 144
Russian Sage (Perovskia atriplicifolia) 0.5484 0.1189 0.1954 143
Russian Sages (Perovskia) 0.4364 0.7153 0.5421 144
Rusty Blackhaw Viburnum (Viburnum rufidulum) 0.9355 0.2014 0.3314 144
Saffron Crocus (Crocus sativus) 0.9898 0.6736 0.8017 144
Salvia (Salvia coerulea 'Sapphire Blue') 0.9913 0.7917 0.8803 144
Salvia (Salvia splendens 'Yvonne's Salvia') 0.5747 0.3472 0.4329 144
Salvia (Salvia x jamensis Heatwave™ Glimmer) 0.8605 0.5175 0.6463 143
Salvias (Salvia) 0.0000 0.0000 0.0000 143
San Gabriel Alumroot (Heuchera abramsii) 0.7079 1.0000 0.8290 143
Sand Lettuce (Dudleya caespitosa) 0.2240 1.0000 0.3659 144
Sand Pink (Dianthus arenarius) 0.8992 0.7483 0.8168 143
Sargent Viburnum (Viburnum sargentii 'Onondaga') 0.6537 0.9371 0.7701 143
Sargent's Crabapple (Malus sieboldii subsp. sieboldii 'Roselow') 0.7423 0.8462 0.7908 143
Saturn Peach (Prunus persica 'Saturn') 0.6588 0.3889 0.4891 144
Scallop Squash (Cucurbita pepo 'Early White Bush Scallop') 0.9746 0.8042 0.8812 143
Sedum (Sedum palmeri) 0.0000 0.0000 0.0000 144
Shallot (Allium cepa 'Creme Brulee') 0.8834 1.0000 0.9381 144
Shasta Daisies (Leucanthemum x superbum) 0.3000 0.0417 0.0732 144
Shasta Daisy (Leucanthemum x superbum 'Aglaya') 0.6300 1.0000 0.7730 143
Shasta Daisy (Leucanthemum x superbum 'Becky') 0.9231 0.0833 0.1529 144
Shasta Daisy (Leucanthemum x superbum 'Snehurka') 0.8358 0.7832 0.8087 143
Shasta Daisy (Leucanthemum x superbum 'Snowcap') 0.4970 0.5833 0.5367 144
Shasta Daisy (Leucanthemum x superbum 'White Breeze') 0.8079 1.0000 0.8938 143
Shasta Daisy (Leucanthemum x superbum Sweet Daisy™ Christine) 0.5353 1.0000 0.6973 144
Shirley Poppy (Papaver rhoeas 'Amazing Grey') 1.0000 0.9097 0.9527 144
Shirley Poppy (Papaver rhoeas 'Double Mixed') 0.5108 0.8194 0.6293 144
Siempreviva (Dudleya attenuata) 0.8763 0.5903 0.7054 144
Sierra Canelo Pincushion Cactus (Mammillaria standleyi) 0.8614 1.0000 0.9256 143
Sierra Leone Lily (Chlorophytum 'Fireflash') 0.8282 0.9375 0.8795 144
Silver Margined Holly (Ilex aquifolium 'Argentea Marginata') 0.7515 0.8671 0.8052 143
Slow Bolt Cilantro (Coriandrum sativum 'Santo') 0.4797 0.4097 0.4419 144
Smoke Tree (Cotinus coggygria 'Royal Purple') 0.5714 0.0280 0.0533 143
Smoketree (Cotinus coggygria Golden Spirit™) 0.6603 0.7203 0.6890 143
Smoketrees (Cotinus coggygria) 0.6842 0.5417 0.6047 144
Smooth Hydrangea (Hydrangea arborescens 'Annabelle') 0.9189 0.2378 0.3778 143
Snap Bean (String) (Phaseolus vulgaris 'Black Seeded Blue Lake') 0.6102 1.0000 0.7579 144
Snap Bean (String) (Phaseolus vulgaris 'Blue Lake Bush #274') 0.5071 1.0000 0.6729 143
Snap Bean (String) (Phaseolus vulgaris 'Wren's Egg') 0.6777 1.0000 0.8079 143
Soap Aloe (Aloe maculata) 0.1429 0.0347 0.0559 144
Softneck Garlic (Allium sativum 'Inchelium Red') 0.6413 1.0000 0.7814 143
Spearmint (Mentha spicata) 0.2917 0.0972 0.1458 144
Speedwell (Veronica oltensis) 0.8818 0.6783 0.7668 143
Speedwell (Veronica peduncularis 'Georgia Blue') 0.9737 0.5175 0.6758 143
Spider Plant (Chlorophytum comosum) 0.9286 0.0903 0.1646 144
Spike Speedwell (Veronica spicata Royal Candles) 0.5792 0.8889 0.7014 144
Spinach (Spinacia oleracea 'Alexandria') 0.9730 1.0000 0.9863 144
Spinach (Spinacia oleracea 'America') 0.4630 1.0000 0.6330 144
Spinach (Spinacia oleracea 'Ashley') 0.9231 1.0000 0.9600 144
Spinach (Spinacia oleracea 'Gigante d'Inverno') 0.6429 1.0000 0.7826 144
Spinach (Spinacia oleracea 'Red Kitten') 0.2487 1.0000 0.3983 144
Spinach (Spinacia oleracea 'Reflect') 0.9600 1.0000 0.9796 144
Spinach (Spinacia oleracea 'Seaside') 0.9051 1.0000 0.9502 143
Spinaches (Spinacia oleracea) 0.8750 0.7343 0.7985 143
Spiraeas (Spiraea) 0.6026 0.3264 0.4234 144
Spirea (Spiraea nipponica 'Snowmound') 0.7869 0.3357 0.4706 143
Spotted Beebalm (Monarda punctata var. punctata) 0.8000 0.0833 0.1509 144
Spotted Beebalm (Monarda punctata) 0.4615 0.5417 0.4984 144
Spotted Dead Nettle (Lamium maculatum 'Pink Pewter') 0.7448 1.0000 0.8537 143
Spotted Dead Nettle (Lamium maculatum) 0.8594 0.3846 0.5314 143
Spring Crocus (Crocus versicolor 'Picturatus') 0.8034 1.0000 0.8910 143
Squid Agave (Agave bracteosa) 0.5789 0.7639 0.6587 144
St. Christopher Lily (Crinum jagus) 0.9778 0.6111 0.7521 144
Strawberries (Fragaria) 1.0000 0.2292 0.3729 144
Strawberry (Fragaria x ananassa 'Chandler') 0.9114 1.0000 0.9536 144
Strawberry (Fragaria x ananassa) 0.8768 0.8403 0.8582 144
Strawberry Foxglove (Digitalis x mertonensis) 0.8627 0.3056 0.4513 144
Stringy Stonecrop (Sedum sarmentosum) 0.0408 0.0139 0.0207 144
Summer Squash-Crookneck (Cucurbita pepo 'Summer Crookneck') 0.8786 0.8601 0.8693 143
Sunroot (Helianthus tuberosus 'White Fuseau') 0.6729 1.0000 0.8045 144
Sunroots (Helianthus tuberosus) 0.4286 0.2308 0.3000 143
Swamp Milkweed (Asclepias incarnata) 0.9057 0.3333 0.4873 144
Sweet Basil (Ocimum basilicum) 0.3869 0.3681 0.3772 144
Sweet Cherries (Prunus avium) 0.0000 0.0000 0.0000 144
Sweet Cherry (Prunus avium 'Bing') 1.0000 0.6181 0.7639 144
Sweet Cherry (Prunus avium 'Black Tatarian') 0.9831 0.4028 0.5714 144
Sweet Cherry (Prunus avium 'Van') 0.8045 1.0000 0.8916 144
Sweet Corn (Zea mays 'Essence') 0.0000 0.0000 0.0000 143
Sweet Potato (Ipomoea batatas 'Carolina Ruby') 0.9068 0.7483 0.8199 143
Sweet Potato (Ipomoea batatas Sweet Caroline Sweetheart Jet Black™) 0.8647 0.8042 0.8333 143
Sweet Potato Vine (Ipomoea batatas 'Little Blackie') 0.3647 0.8951 0.5182 143
Sweet Potato Vine (Ipomoea batatas 'Pink Frost') 0.7784 1.0000 0.8754 144
Sweet Potatoes (Ipomoea batatas) 0.0000 0.0000 0.0000 144
Swiss Chard (Beta vulgaris subsp. cicla 'Bright Lights') 0.5165 0.3264 0.4000 144
Swiss Chard (Beta vulgaris subsp. cicla 'Rhubarb Chard') 0.4965 1.0000 0.6636 143
Swiss Chard (Beta vulgaris subsp. cicla 'Ruby Red') 0.7317 0.2083 0.3243 144
Tall Bearded Iris (Iris 'Blue Me Away') 0.7044 1.0000 0.8266 143
Tall Bearded Iris (Iris 'Lemon Cloud') 0.9796 1.0000 0.9897 144
Tall Bearded Iris (Iris 'Merchant Marine') 0.9176 0.5455 0.6842 143
Tall Bearded Iris (Iris 'Radiant Garnet') 0.8889 1.0000 0.9412 144
Tall Bearded Iris (Iris 'Serene Silence') 0.9470 1.0000 0.9728 143
Tall Bearded Iris (Iris 'Wonders Never Cease') 1.0000 1.0000 1.0000 143
Tall Phlox (Phlox paniculata) 0.6786 0.2657 0.3819 143
Tarragons (Artemisia dracunculus) 0.8738 0.6250 0.7287 144
Tasteless Stonecrop (Sedum sexangulare) 0.7850 0.5874 0.6720 143
Texas Nipple Cactus (Mammillaria prolifera subsp. texana) 0.9597 1.0000 0.9795 143
Texas Star (Hibiscus coccineus) 0.9722 0.4895 0.6512 143
Thimbleberry (Rubus nutkanus) 0.7059 0.0839 0.1500 143
Thornless Blackberry (Rubus 'Apache') 0.7500 0.7133 0.7312 143
Thornless Blackberry (Rubus 'Arapaho') 0.5714 0.1111 0.1860 144
Thornless Blackberry (Rubus 'Navaho') 0.6203 0.3427 0.4414 143
Thyme (Thymus praecox 'Highland Cream') 0.5106 1.0000 0.6761 144
Thyme (Thymus praecox) 1.0000 0.4514 0.6220 144
Thyme (Thymus serpyllum 'Roseum') 0.7423 1.0000 0.8521 144
Tiare (Gardenia taitensis) 0.7487 1.0000 0.8563 143
Tickseed (Coreopsis Cruizin'™ Main Street) 0.8623 1.0000 0.9260 144
Tickseed (Coreopsis Satin & Lace™ Red Chiffon) 0.9408 1.0000 0.9695 143
Tickseed (Coreopsis UpTick™ Yellow & Red) 0.5830 1.0000 0.7366 144
Tickseed (Coreopsis grandiflora 'Sunkiss') 0.7483 0.7431 0.7456 144
Tomato (Solanum lycopersicum 'Buffalo Steak') 0.6193 0.8531 0.7176 143
Tomato (Solanum lycopersicum 'Dark Galaxy') 1.0000 1.0000 1.0000 144
Tomato (Solanum lycopersicum 'Goldman's Italian-American') 0.9754 0.8322 0.8981 143
Tomato (Solanum lycopersicum 'Helsing Junction Blues') 0.8256 0.4931 0.6174 144
Tomato (Solanum lycopersicum 'Park's Whopper') 0.5107 1.0000 0.6761 143
Tomato (Solanum lycopersicum 'Pink Delicious') 0.8412 1.0000 0.9137 143
Tomato (Solanum lycopersicum 'Sungold') 0.8608 0.4722 0.6099 144
Tomato (Solanum lycopersicum 'Yellow Mortgage Lifter') 0.9597 1.0000 0.9795 143
Tomatoes (Solanum lycopersicum) 1.0000 0.1458 0.2545 144
Triandrus Daffodil (Narcissus 'Thalia') 0.7368 0.4895 0.5882 143
Triple Sweet Corn (Zea mays 'Alto') 0.5882 0.6993 0.6390 143
Triumph Tulip (Tulipa 'Aperitif') 0.7664 0.7292 0.7473 144
Triumph Tulip (Tulipa 'Jackpot') 0.9857 0.4792 0.6449 144
Tropical Milkweed (Asclepias curassavica 'Silky Gold') 0.7265 0.5944 0.6538 143
Tropical Milkweed (Asclepias curassavica) 0.9125 0.5105 0.6547 143
Trumpet Daffodil (Narcissus 'Marieke') 0.8050 0.8951 0.8477 143
Trumpet Narcissus (Narcissus 'Bravoure') 0.9375 0.2083 0.3409 144
Tulip (Tulipa 'Brown Sugar') 0.8045 1.0000 0.8916 144
Tulip (Tulipa 'Rasta Parrot') 0.9863 1.0000 0.9931 144
Turnip (Brassica rapa subsp. rapa 'Gold Ball') 0.7784 1.0000 0.8754 144
Turnip (Brassica rapa subsp. rapa 'Purple Top White Globe') 0.8372 1.0000 0.9114 144
Turnip (Brassica rapa subsp. rapa 'Round Red') 0.6745 1.0000 0.8056 143
Turnip (Brassica rapa subsp. rapa 'White Egg') 1.0000 0.1678 0.2874 143
Turnip (Brassica rapa subsp. rapa 'White Lady') 0.7956 1.0000 0.8862 144
Turnips (Brassica rapa subsp. rapa) 0.8773 1.0000 0.9346 143
Twin-Spined Cactus (Mammillaria geminispina) 0.9811 0.7273 0.8353 143
Van Houtte Spiraea (Spiraea x vanhouttei 'Pink Ice') 0.6923 1.0000 0.8182 144
Variegated Pinwheel (Aeonium haworthii 'Variegatum') 0.6714 1.0000 0.8034 143
Variegated Queen Victoria Century Plant (Agave victoriae-reginae 'Albomarginata') 0.7423 1.0000 0.8521 144
Veronica (Veronica longifolia) 0.6667 0.4306 0.5232 144
Vietnamese Gardenia (Gardenia vietnamensis) 0.9351 1.0000 0.9664 144
Waterlily Tulip (Tulipa kaufmanniana 'Corona') 0.8372 1.0000 0.9114 144
Waterlily Tulip (Tulipa kaufmanniana 'Scarlet Baby') 0.5195 0.9236 0.6650 144
Welsh Poppy (Papaver cambricum 'Flore Pleno') 0.9536 1.0000 0.9763 144
Western Red Cedar (Thuja plicata 'Whipcord') 0.5070 1.0000 0.6729 144
Western Red Cedar (Thuja plicata Forever Goldy®) 0.8182 1.0000 0.9000 144
Western Red Cedar (Thuja plicata) 0.8485 0.7832 0.8145 143
White Currant (Ribes rubrum 'White Versailles') 1.0000 0.4583 0.6286 144
White Dead Nettle (Lamium album) 1.0000 0.8112 0.8958 143
White Stonecrop (Sedum album 'Twickel Purple') 0.7129 1.0000 0.8324 144
White Texas Star Hibiscus (Hibiscus coccineus 'Alba') 0.8761 0.6875 0.7704 144
Wild Asparagus (Asparagus officinalis 'Jersey Knight') 0.3871 0.0833 0.1371 144
Wild Asparagus (Asparagus officinalis 'Mary Washington') 0.6441 0.2639 0.3744 144
Wild Bergamot (Monarda fistulosa) 0.0000 0.0000 0.0000 144
Wild Blackberry (Rubus cochinchinensis) 0.8824 0.3125 0.4615 144
Wild Blue Phlox (Phlox divaricata) 0.5000 0.0972 0.1628 144
Wild Indigo (Baptisia 'Brownie Points') 0.9226 1.0000 0.9597 143
Wild Indigo (Baptisia 'Lemon Meringue') 0.7941 0.9441 0.8626 143
Wild Indigo (Baptisia 'Pink Lemonade') 0.9172 1.0000 0.9568 144
Wild Thyme (Thymus serpyllum 'Pink Chintz') 0.4819 0.6458 0.5519 144
Willow Leaf Foxglove (Digitalis obscura) 0.7763 0.8194 0.7973 144
Winter Honeysuckle (Lonicera fragrantissima) 0.8095 0.3542 0.4928 144
Winter Radish (Raphanus sativus 'China Rose') 0.6857 1.0000 0.8136 144
Winter Squash (Cucurbita maxima 'Buttercup') 0.9541 0.7222 0.8221 144
Winterberry (Ilex verticillata) 0.3233 0.5208 0.3989 144
Winterberry Holly (Ilex verticillata 'Chrysocarpa') 0.7784 1.0000 0.8754 144
Winterberry Holly (Ilex verticillata 'Tiasquam') 0.3397 1.0000 0.5071 143
Winterberry Holly (Ilex verticillata 'Winter Red') 0.5909 0.2708 0.3714 144
Wisterias (Wisteria) 1.0000 0.0280 0.0544 143
Woolly Thyme (Thymus praecox subsp. polytrichus) 0.7333 0.5385 0.6210 143
Woolly Turkish Speedwell (Veronica bombycina) 0.9862 1.0000 0.9931 143
Yarrow (Achillea 'Moonshine') 0.7093 0.8472 0.7722 144
Yarrow (Achillea 'Summer Berries') 0.5574 0.2361 0.3317 144
Yarrow (Achillea millefolium 'Paprika') 1.0000 0.0278 0.0541 144
Yarrow (Achillea millefolium 'Sonoma Coast') 0.5697 1.0000 0.7259 143
Yarrow (Achillea millefolium 'Summer Pastels') 0.5294 0.5035 0.5161 143
Yarrow (Achillea millefolium New Vintage™ Rose) 0.2483 1.0000 0.3978 144
Yarrow (Achillea millefolium) 1.0000 0.0699 0.1307 143
Yarrows (Achillea) 0.0000 0.0000 0.0000 143
Yaupon Holly (Ilex vomitoria) 0.4444 0.2500 0.3200 144
Yellow Archangel (Lamium galeobdolon subsp. montanum 'Florentinum') 0.3165 1.0000 0.4808 144
rose 0.7727 0.8322 0.8013 143
accuracy 0.6663 129240
macro avg 0.6965 0.6664 0.6248 129240
weighted avg 0.6965 0.6663 0.6247 129240
```
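The per-class precision, recall and F1 values in the report above follow the standard definitions (precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = their harmonic mean, support = number of true instances). As a minimal sketch of how such a table is derived — using hypothetical toy labels, not this dataset's actual classes — the computation looks like this:

```python
def per_class_report(y_true, y_pred):
    """Precision, recall, F1 and support per class, as in the table above."""
    classes = sorted(set(y_true) | set(y_pred))
    report = {}
    for c in classes:
        # Count true positives, false positives and false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        report[c] = (precision, recall, f1, tp + fn)  # support = true count of c
    return report

# Toy labels for illustration only; the real evaluation uses the classes listed below.
y_true = ["rose", "rose", "tulip", "tulip", "daisy", "daisy"]
y_pred = ["rose", "tulip", "tulip", "tulip", "daisy", "rose"]
for name, (p, r, f, n) in per_class_report(y_true, y_pred).items():
    print(f"{name:<8} {p:.4f} {r:.4f} {f:.4f} {n}")
```

scikit-learn's `classification_report(y_true, y_pred, digits=4)` produces the same layout, including the accuracy, macro-average and weighted-average footer rows shown above.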
[
"aeonium 'emerald ice'",
"aeonium 'jolly clusters'",
"aeonium 'mardi gras'",
"aeonium (aeonium davidbramwellii 'sunburst')",
"aeonium (aeonium nobile)",
"aeonium castello-paivae 'harry mak'",
"aeoniums (aeonium)",
"african blue basil (ocimum 'african blue')",
"aloe 'orange marmalade'",
"aloes (aloe)",
"alpine strawberry (fragaria vesca)",
"althea (hibiscus syriacus blueberry smoothie™)",
"amazon jungle vine (vitis amazonica)",
"american arborvitae (thuja occidentalis 'hetz midget')",
"american arborvitae (thuja occidentalis 'rheingold')",
"american beautyberry (callicarpa americana)",
"american cranberrybush viburnum (viburnum opulus var. americanum)",
"american wisteria (wisteria frutescens 'amethyst falls')",
"american wisteria (wisteria frutescens 'blue moon')",
"antelope horns milkweed (asclepias asperula subsp. capricornu)",
"apple (malus pumila 'braeburn')",
"apple (malus pumila 'red delicious')",
"apple (malus pumila 'red rome')",
"apple (malus pumila 'sweet bough')",
"apple (malus pumila 'winter pearmain')",
"apple mint (mentha suaveolens)",
"apples (malus)",
"apricot (prunus armeniaca 'gold kist')",
"apricot (prunus armeniaca 'goldcot')",
"apricots (prunus armeniaca)",
"arborvitae (thuja 'green giant')",
"arborvitaes (thuja)",
"arilbred iris (iris 'stolon ginger')",
"aromatic aster (symphyotrichum oblongifolium 'october skies')",
"arrowwood viburnum (viburnum dentatum)",
"artichoke agave (agave parryi var. truncata)",
"artichokes (cynara scolymus)",
"asparagus (asparagus officinalis)",
"asparagus officinalis 'mondeo'",
"aster (aster x frikartii 'monch')",
"aster (aster x frikartii wonder of stafa)",
"asters (aster)",
"astilbe 'fanal'",
"astilbe 'icecream'",
"astilbe 'peach blossom'",
"astilbe 'rheinland'",
"astilbe 'straussenfeder'",
"astilbes (astilbe)",
"azalea (rhododendron 'blaney's blue')",
"azalea (rhododendron 'irene koster')",
"baby burro's tail (sedum burrito)",
"baby's breath (gypsophila elegans 'covent garden')",
"baby's breath (gypsophila elegans 'kermesina')",
"baby's breaths (gypsophila elegans)",
"baptisias (baptisia)",
"basil (ocimum basilicum 'cardinal')",
"basil (ocimum basilicum 'emily')",
"basils (ocimum)",
"beach morning glory (ipomoea pes-caprae)",
"bean (phaseolus vulgaris 'cherokee trail of tears')",
"beardtongue (penstemon red rocks®)",
"beautyberry (callicarpa dichotoma 'early amethyst')",
"bee balm (monarda 'blaustrumpf')",
"bee balm (monarda 'purple rooster')",
"bee balm (monarda 'trinity purple')",
"bee balm (monarda didyma 'jacob cline')",
"bee balm (monarda didyma)",
"beebalm (monarda didyma 'marshall's delight')",
"beet (beta vulgaris 'boro')",
"beet (beta vulgaris 'bull's blood')",
"beet (beta vulgaris 'camaro')",
"beet (beta vulgaris 'crosby's egyptian')",
"beet (beta vulgaris 'moneta')",
"beet (beta vulgaris 'robin')",
"beet (beta vulgaris 'solo')",
"beet (beta vulgaris 'zeppo')",
"beet (beta vulgaris var. vulgaris)",
"bellflower (campanula fancy mee®)",
"bellflower (campanula rapunculus subsp. rapunculus)",
"bellflower (campanula scheuchzeri)",
"bellflower (campanula x haylodgensis 'blue wonder')",
"bellflowers (campanula)",
"betony (stachys spathulata)",
"bigleaf hydrangea (hydrangea macrophylla 'lanarth white')",
"bigleaf hydrangea (hydrangea macrophylla gentian dome)",
"bigleaf hydrangea (hydrangea macrophylla)",
"bitter aloe (aloe ferox)",
"biznaga de isla pichilingue (mammillaria albicans subsp. fraileana)",
"biznaga de otero (mammillaria oteroi)",
"black eyed susan (rudbeckia fulgida var. sullivantii 'goldsturm')",
"black eyed susan (rudbeckia hirta smileyz™ happy)",
"black eyed susan (rudbeckia hirta var. hirta)",
"black eyed susans (rudbeckia)",
"black-eyed susan (rudbeckia hirta 'autumn colors')",
"black-eyed susan (rudbeckia hirta 'cappuccino')",
"black-eyed susan (rudbeckia hirta 'sputnik')",
"blackberry (rubus 'black satin')",
"blanket flower (gaillardia 'arizona sun')",
"blanket flower (gaillardia mesa™ red)",
"blanket flower (gaillardia pulchella)",
"blanket flower (gaillardia)",
"blazing star (liatris spicata)",
"bleeding heart (dicentra 'ivory hearts')",
"bleeding heart (lamprocapnos spectabilis valentine™)",
"bleeding heart (lamprocapnos spectabilis)",
"bleeding hearts (lamprocapnos)",
"blue daisy (felicia amelloides)",
"blue sage (salvia azurea)",
"blue wild indigo (baptisia australis)",
"bok choy (brassica rapa subsp. chinensis 'joi choi')",
"bolivian hummingbird sage (salvia oxyphora)",
"bradford pear (pyrus calleryana 'bradford')",
"brassicas (brassica)",
"bridalwreath spiraea (spiraea prunifolia)",
"bright green dudleya (dudleya virens)",
"bulbocodium daffodil (narcissus 'spoirot')",
"bumpy convolvulaceae (ipomoea tuberculata)",
"bush bean (phaseolus vulgaris 'royal burgundy')",
"bush bean (phaseolus vulgaris 'topcrop')",
"butterfly bush (buddleja 'orange sceptre')",
"butterfly bush (buddleja buzz™ sky blue)",
"butterfly bush (buddleja humdinger™ magenta munchkin)",
"butterfly bush (buddleja davidii 'asian moon')",
"butterfly bush (buddleja davidii 'black knight')",
"butterfly bush (buddleja davidii 'nanho blue')",
"butterfly bush (buddleja davidii buzz™ ivory)",
"butterfly milkweed (asclepias tuberosa)",
"butterfly weed (asclepias tuberosa 'gay butterflies')",
"butterfly weed (asclepias tuberosa subsp. tuberosa)",
"butterhead lettuce (lactuca sativa 'tom thumb')",
"butternut squash (cucurbita moschata 'waltham')",
"butterwort (pinguicula 'aphrodite')",
"butterwort (pinguicula agnata)",
"butterwort (pinguicula cyclosecta)",
"butterwort (pinguicula esseriana)",
"butterwort (pinguicula gigantea)",
"butterwort (pinguicula moctezumae)",
"cabbage (brassica oleracea var. capitata 'deep blue')",
"cabbage (brassica oleracea var. capitata 'red jewel')",
"caladium bicolor 'fiesta'",
"caladiums (caladium)",
"california fishhook cactus (mammillaria dioica)",
"callery pear (pyrus calleryana chanticleer®)",
"canna 'annjee'",
"canna (canna x generalis 'maui punch')",
"canna cannasol™ lily",
"canna tropicanna®",
"cannas (canna)",
"cantaloupe (cucumis melo 'ambrosia')",
"cantaloupe (cucumis melo 'orange silverwave')",
"cantaloupes (cucumis melo)",
"caraway thyme (thymus herba-barona)",
"carrot (daucus carota subsp. sativus 'atomic red')",
"carrot (daucus carota subsp. sativus 'black nebula')",
"carrot (daucus carota subsp. sativus 'burpees a#1')",
"carrot (daucus carota subsp. sativus 'envy')",
"carrot (daucus carota subsp. sativus 'purple 68')",
"carrot (daucus carota subsp. sativus 'sugarsnax 54')",
"carrot (daucus carota subsp. sativus 'ultimate hybrid')",
"catmint (nepeta cat's meow)",
"catmint (nepeta x faassenii 'walker's low')",
"catmints (nepeta)",
"catnip (nepeta cataria)",
"cauliflower (brassica oleracea var. botrytis 'steady')",
"celeriac (apium graveolens var. rapaceum 'prague giant')",
"celeriac (apium graveolens var. rapaceum 'prinz')",
"celery (apium graveolens var. dulce 'lathom self blanching galaxy')",
"celery (apium graveolens var. dulce 'redventure')",
"celery (apium graveolens var. dulce 'tall utah')",
"center stripe agave (agave univittata 'quadricolor')",
"chalk rose (dudleya candida)",
"cheddar pink (dianthus dessert™ raspberry swirl)",
"cheddar pink (dianthus gratianopolitanus blukiss™)",
"cherry plum (prunus cerasifera 'thundercloud')",
"chinese astilbe (astilbe rubra)",
"chinese dogwood (cornus kousa subsp. chinensis 'milky way')",
"chinese lanterns (hibiscus schizopetalus)",
"chinese pear (pyrus pyrifolia 'shinseiki')",
"chinese rhubarb (rheum tanguticum)",
"chinese wisteria (wisteria sinensis 'prolific')",
"chinese wisteria (wisteria sinensis)",
"chinese rhubarb (rheum palmatum 'bowles crimson')",
"chives (allium schoenoprasum)",
"chocolate mint (mentha x piperita 'chocolate')",
"cilantro (coriandrum sativum 'confetti')",
"cilantros (coriandrum sativum)",
"citron (citrus medica)",
"citrus fruits (citrus)",
"clustered bellflower (campanula glomerata)",
"coconino county desert beardtongue (penstemon pseudospectabilis 'coconino county')",
"colorado narrowleaf beardtongue (penstemon linarioides)",
"columbine (aquilegia kirigami™ rose & pink)",
"columbine (aquilegia coerulea origami™ blue & white)",
"columbine (aquilegia vulgaris 'adelaide addison')",
"columbines (aquilegia)",
"common bean (phaseolus vulgaris 'contender')",
"common fig (ficus carica 'brown turkey')",
"common fig (ficus carica 'chicago hardy')",
"common fig (ficus carica 'jolly tiger')",
"common fig (ficus carica 'violette de bordeaux')",
"common jujube (ziziphus jujuba 'lang')",
"common jujube (ziziphus jujuba 'li')",
"common lilac (syringa vulgaris 'arch mckean')",
"common lilac (syringa vulgaris 'wonder blue')",
"common milkweed (asclepias syriaca)",
"common sage (salvia officinalis 'tricolor')",
"compact queen victoria agave (agave victoriae-reginae subsp. swobodae)",
"conchilinque (mammillaria pectinifera)",
"concord grape (vitis labrusca 'concord')",
"coneflower (echinacea 'virgin')",
"coneflower (echinacea big sky™ sundown)",
"coneflower (echinacea double scoop™ orangeberry)",
"coneflower (echinacea sombrero® lemon yellow improved)",
"coneflower (echinacea purpurea 'green twister')",
"confederate rose (hibiscus mutabilis)",
"coppertone stonecrop (sedum nussbaumerianum 'shooting stars')",
"coral bells (heuchera 'amethyst myst')",
"coral bells (heuchera 'fire alarm')",
"coral bells (heuchera 'mahogany')",
"coral bells (heuchera 'mega caramel')",
"coral bells (heuchera 'silver scrolls')",
"coral bells (heuchera dolce® blackberry ice)",
"coral bells (heuchera micrantha 'palace purple')",
"coral bells (heuchera sanguinea 'ruby bells')",
"coral honeysuckle (lonicera sempervirens 'major wheeler')",
"coral honeysuckle (lonicera sempervirens)",
"coreopsis li'l bang™ darling clementine",
"corn (zea mays subsp. mays 'jackpot')",
"corn (zea mays subsp. mays)",
"cos lettuce (lactuca sativa 'little gem')",
"coulter's mock orange (philadelphus coulteri)",
"crabapple (malus 'cardinal')",
"crabapple (malus 'prairie fire')",
"cranesbill (geranium rozanne®)",
"cranesbill (geranium platypetalum)",
"crape myrtle (lagerstroemia indica 'hopi')",
"crape myrtle (lagerstroemia indica red rocket®)",
"creeping phlox (phlox subulata 'emerald blue')",
"creeping phlox (phlox subulata)",
"creeping speedwell (veronica teucrium)",
"crepe myrtle (lagerstroemia 'ebony flame')",
"crepe myrtle (lagerstroemia 'natchez')",
"crepe myrtle (lagerstroemia 'zuni')",
"crepe myrtle (lagerstroemia pink velour®)",
"crepe myrtle (lagerstroemia indica 'peppermint lace')",
"crinum 'marisco'",
"crinum 'milk and wine'",
"crinum lily (crinum 'stars and stripes')",
"crinums (crinum)",
"crocus",
"crocus 'deep water'",
"crocus (crocus chrysanthus 'ladykiller')",
"cucumber (cucumis sativus 'artist')",
"cucumber (cucumis sativus 'double yield')",
"cucumber (cucumis sativus 'early cluster')",
"cucumber (cucumis sativus 'lemon')",
"cucumber (cucumis sativus 'marketmore 76')",
"culinary sages (salvia officinalis)",
"curly parsley (petroselinum crispum var. crispum)",
"cutleaf coneflower (rudbeckia laciniata)",
"daffodil (narcissus 'lavender bell')",
"dahlia 'ac sadie'",
"dahlia 'creme de cassis'",
"dahlia 'destiny's john michael'",
"dahlia 'firepot'",
"dahlia 'formby sunrise'",
"dahlia 'hapet champagne'",
"dahlia 'kelsey annie joy'",
"dahlia 'santa claus'",
"dahlia 'thomas a. edison'",
"dahlias (dahlia)",
"dalmatian bellflower (campanula portenschlagiana)",
"dark opal basil (ocimum basilicum 'purpurascens')",
"daylily (hemerocallis 'armed to the teeth')",
"daylily (hemerocallis 'dearest mahogany')",
"daylily (hemerocallis 'golden hibiscus')",
"daylily (hemerocallis 'kathrine carter')",
"daylily (hemerocallis 'put my picture on the cover')",
"daylily (hemerocallis 'quoting hemingway')",
"daylily (hemerocallis 'soli deo gloria')",
"daylily (hemerocallis 'sons of thunder')",
"daylily (hemerocallis 'vanishing mist')",
"daylily (hemerocallis 'zollo omega')",
"delphinium 'blue dawn'",
"delphinium 'diamonds blue'",
"delphinium 'percival'",
"delphinium (delphinium elatum new millennium™ royal aspirations)",
"delphiniums (delphinium)",
"dianthus",
"dianthus 'gran's favorite'",
"dianthus (dianthus chinensis 'black and white minstrels')",
"dianthus (dianthus longicalyx)",
"dianthus (dianthus monspessulanus)",
"dill (anethum graveolens 'bouquet')",
"dill (anethum graveolens 'fernleaf')",
"dills (anethum graveolens)",
"dogwoods (cornus)",
"double daffodil (narcissus 'ice king')",
"double daffodil (narcissus 'tahiti')",
"double japanese wisteria (wisteria floribunda black dragon)",
"double reeves spirea (spiraea cantoniensis 'lanceata')",
"drummond's hedgenettle (stachys drummondii)",
"dry bean (phaseolus vulgaris 'good mother stallard')",
"dudleyas (dudleya)",
"dune aloe (aloe thraskii)",
"dutch hyacinth (hyacinthus orientalis 'delft blue')",
"dutch hyacinth (hyacinthus orientalis 'hollyhock')",
"dutch hyacinth (hyacinthus orientalis 'splendid cornelia')",
"dutchman's breeches (dicentra cucullaria)",
"dwarf burford holly (ilex cornuta 'burfordii nana')",
"dwarf caladium (caladium humboldtii)",
"dwarf chinese astilbe (astilbe rubra 'pumila')",
"dwarf coneflower (echinacea kismet® red)",
"dwarf mouse-ear tickseed (coreopsis auriculata 'nana')",
"dwarf peach (prunus persica 'bonanza')",
"eastern dogwood (cornus florida var. florida 'rubra')",
"eastern dogwood (cornus florida var. florida cherokee brave™)",
"eastern ninebark (physocarpus opulifolius 'center glow')",
"eastern ninebark (physocarpus opulifolius 'dart's gold')",
"eastern ninebark (physocarpus opulifolius 'luteus')",
"eastern ninebark (physocarpus opulifolius coppertina™)",
"eastern ninebark (physocarpus opulifolius diabolo®)",
"eastern red columbine (aquilegia canadensis)",
"echeveria 'afterglow'",
"echeveria 'blue wren'",
"echeveria 'irish mint'",
"echeveria 'mauna loa'",
"echeveria 'perle von nurnberg'",
"echeveria 'rain drops'",
"echeveria (echeveria affinis 'black knight')",
"echeveria (echeveria agavoides 'love's fire')",
"echeveria (echeveria runyonii)",
"echeveria (echeveria setosa var. minor)",
"eggplant (solanum melongena 'annina')",
"eggplant (solanum melongena 'black beauty')",
"eggplant (solanum melongena 'bride')",
"eggplant (solanum melongena 'icicle')",
"eggplant (solanum melongena 'orient express')",
"eggplant (solanum melongena 'orlando')",
"eggplant (solanum melongena 'southern pink')",
"eggplant (solanum melongena 'violet king')",
"egyptian walking onion (allium x proliferum)",
"elephant's foot plant (pachypodium gracilius)",
"elephant's trunk (pachypodium namaquanum)",
"elfin thyme (thymus serpyllum 'elfin')",
"english pea (pisum sativum 'alaska')",
"english pea (pisum sativum 'bistro')",
"english pea (pisum sativum 'green arrow')",
"english pea (pisum sativum 'penelope')",
"english thyme (thymus vulgaris 'orange balsam')",
"european cranberry viburnum (viburnum opulus)",
"european smoketree (cotinus coggygria winecraft black®)",
"european snowball bush (viburnum opulus 'roseum')",
"faassen's catmint (nepeta x faassenii 'six hills giant')",
"false goat's beard (astilbe younique cerise™)",
"fancy-leafed caladium (caladium bicolor)",
"fancy-leaf caladium (caladium 'creamsickle')",
"fancy-leaf caladium (caladium 'red flash')",
"fancy-leaf caladium (caladium 'white christmas')",
"fancy-leaf caladium (caladium tapestry™)",
"feather cactus (mammillaria plumosa)",
"fern leaf peony (paeonia tenuifolia)",
"figs (ficus carica)",
"flat-flowered aloe (aloe marlothii)",
"flint corn (zea mays subsp. mays 'indian ornamental')",
"flower of an hour (hibiscus trionum)",
"flowering cabbage (brassica oleracea var. viridis pigeon™ white)",
"flowering crabapple (malus golden raindrops)",
"flowering dogwood (cornus stellar pink®)",
"flowering dogwood (cornus florida)",
"flowering kale (brassica oleracea 'kamome white')",
"flowering pear (pyrus calleryana 'cleveland select')",
"foothill beardtongue (penstemon heterophyllus 'electric blue')",
"fox grape (vitis 'valiant')",
"fox grape (vitis labrusca)",
"foxglove (digitalis 'honey trumpet')",
"foxglove (digitalis purpurea 'dalmatian peach')",
"foxglove (digitalis purpurea)",
"foxgloves (digitalis)",
"foxtail agave (agave attenuata)",
"fragaria vesca subsp. vesca",
"french lilac (syringa vulgaris 'michel buchner')",
"french lilac (syringa vulgaris 'miss ellen willmott')",
"french tarragon (artemisia dracunculus 'sativa')",
"fuchsia flowering currant (ribes speciosum)",
"gaillardia 'punch bowl'",
"garden bells (penstemon hartwegii phoenix™ pink)",
"garden onion (allium cepa 'super star')",
"garden pea (pisum sativum 'pls 534')",
"garden phlox (phlox paniculata 'blue paradise')",
"garden phlox (phlox paniculata 'mount fuji')",
"garden phlox (phlox paniculata volcano pink white eye)",
"garden phlox (phlox x arendsii 'miss mary')",
"garden sage (salvia officinalis 'robert grimm')",
"gardenia (gardenia jasminoides 'august beauty')",
"gardenia (gardenia jasminoides 'frostproof')",
"gardenia (gardenia jasminoides 'veitchii')",
"gardenia (gardenia jasminoides 'white gem')",
"gardenias (gardenia)",
"garlic (allium sativum 'early red italian')",
"garlic (allium sativum 'georgian crystal')",
"garlic (allium sativum 'russian red')",
"garlic (allium sativum)",
"gay feather (liatris spicata 'floristan white')",
"genovese basil (ocimum basilicum 'dolce fresca')",
"gentian speedwell (veronica gentianoides)",
"georgia sweet vidalia onion (allium cepa 'yellow granex')",
"geranium (geranium wallichianum 'buxton's variety')",
"geranium (geranium wallichianum 'crystal lake')",
"geraniums (geranium)",
"giant chalk dudleya (dudleya brittonii)",
"gladiola (gladiolus 'vista')",
"gladiola (gladiolus)",
"gladiolus 'atom'",
"gladiolus 'fiesta'",
"globe artichoke (cynara scolymus 'green globe')",
"globe artichoke (cynara scolymus 'violet de provence')",
"gloriosa daisy (rudbeckia hirta 'prairie sun')",
"golden sage (salvia officinalis 'aurea')",
"gooseberry (ribes uva-crispa 'hinnonmaki rod')",
"gooseberry (ribes uva-crispa)",
"gourds, squashes and pumpkins (cucurbita)",
"grape (vitis vinifera 'gamay')",
"grape (vitis vinifera cotton candy®)",
"grapes (vitis)",
"green bean (phaseolus vulgaris 'trionfo violetto')",
"greigii tulip (tulipa 'fire of love')",
"hairy beardtongue (penstemon hirsutus)",
"hardy geranium (geranium 'phoebe noble')",
"hardy geranium (geranium sanguineum 'elke')",
"hardy geranium (geranium sanguineum var. striatum)",
"hardy hibiscus (hibiscus moscheutos 'fireball')",
"hardy hibiscus (hibiscus moscheutos 'kopper king')",
"hardy hibiscus (hibiscus moscheutos 'tie dye')",
"hardy hibiscus (hibiscus moscheutos summerific™ cherry cheesecake)",
"hardy hibiscus (hibiscus moscheutos summerific™ starry starry night)",
"hardy hibiscus hybrid (hibiscus 'summer in paradise')",
"heavenly bamboo (nandina domestica 'moon bay')",
"heavenly bamboos (nandina domestica)",
"hen and chicks (sempervivum 'blaukraut')",
"hen and chicks (sempervivum 'gold nugget')",
"hen and chicks (sempervivum 'larissa')",
"hen and chicks (sempervivum 'lynn's rose gold')",
"hen and chicks (sempervivum 'red lion')",
"hen and chicks (sempervivum 'space dog')",
"hen and chicks (sempervivum calcareum)",
"hen and chicks (sempervivum tectorum 'grammens')",
"hen and chicks (sempervivum 'dea')",
"henbit (lamium amplexicaule)",
"hibiscus",
"hibiscus (hibiscus moscheutos summerific™ cherry choco latte)",
"hibiscus (hibiscus moscheutos summerific™ cranberry crush)",
"hibiscus (hibiscus moscheutos summerific™ summer storm)",
"holly (ilex 'nellie r. stevens')",
"holy basil (ocimum tenuiflorum 'green sacred')",
"honeysuckle (lonicera 'gold flame')",
"hortulan plum (prunus hortulana)",
"hosta 'blue angel'",
"hosta 'blue mouse ears'",
"hosta 'curly fries'",
"hosta 'liberty'",
"hosta 'popcorn'",
"hosta 'tom schmid'",
"hosta 'whirlwind'",
"hosta 'white feather'",
"hostas (hosta)",
"hot pepper (capsicum annuum 'petit marseillais')",
"hot pepper (capsicum annuum 'super chili')",
"hot pepper (capsicum baccatum 'brazilian starfish')",
"hot pepper (capsicum sinense 'black naga')",
"hummingbird sage (salvia coccinea 'coral nymph')",
"hyacinth (hyacinthus orientalis 'blue jacket')",
"hyacinth (hyacinthus orientalis)",
"hyacinths (hyacinthus)",
"hybrid gladiola (gladiolus 'boone')",
"hybrid gladiola (gladiolus x gandavensis 'priscilla')",
"hybrid tickseed (coreopsis 'cherry lemonade')",
"hydrangea (hydrangea macrophylla 'nightingale')",
"hydrangea (hydrangea macrophylla l.a. dreamin'™ lindsey ann)",
"hydrangea (hydrangea quercifolia 'munchkin')",
"hydrangeas (hydrangea)",
"iceland poppy (papaver nudicaule 'champagne bubbles white')",
"iceland poppy (papaver nudicaule 'meadow pastels')",
"intersectional peony (paeonia 'all that jazz')",
"italian parsley (petroselinum crispum 'italian flat leaf')",
"itoh peony (paeonia 'caroline constabel')",
"japanese crepe myrtle (lagerstroemia fauriei 'fantasy')",
"japanese cucumber (cucumis sativus 'southern delight')",
"japanese hardy orange (citrus trifoliata)",
"japanese honeysuckle (lonicera japonica 'halliana')",
"japanese morning glory (ipomoea nil 'seiryu')",
"japanese morning glory (ipomoea nil)",
"japanese spirea (spiraea japonica 'magic carpet')",
"japanese spirea (spiraea japonica 'neon flash')",
"japanese wisteria (wisteria floribunda 'issai perfect')",
"japanese yellow sage (salvia koyamae)",
"jelly bean (sedum x rubrotinctum)",
"jerusalem artichoke (helianthus tuberosus 'clearwater')",
"jerusalem artichoke (helianthus tuberosus 'stampede')",
"jonquilla narcissus (narcissus 'blushing lady')",
"judd viburnum (viburnum carlesii var. bitchiuense)",
"jujube (ziziphus jujuba 'sherwood')",
"jujubes (ziziphus jujuba)",
"kaibab agave (agave utahensis subsp. kaibabensis)",
"kale (brassica oleracea var. viridis 'redbor')",
"koreanspice viburnum (viburnum carlesii)",
"lacecap hydrangea (hydrangea macrophylla endless summer® twist-n-shout®)",
"lady tulip (tulipa clusiana)",
"lamb's ears (stachys)",
"lambs' ears (stachys byzantina)",
"large speedwell (veronica teucrium 'crater lake blue')",
"large-cupped daffodil (narcissus 'chromacolor')",
"larkspur (delphinium 'benary's pacific cameliard')",
"larkspur (delphinium elatum 'guardian lavender')",
"larkspur (delphinium elatum new millennium™ black eyed angels)",
"leek (allium ampeloprasum 'lancelot')",
"leek (allium ampeloprasum 'large american flag')",
"leek (allium ampeloprasum 'zermatt')",
"leeks (allium ampeloprasum)",
"lemoine's mock orange (philadelphus 'belle etoile')",
"lemon (citrus x limon)",
"lemon bee balm (monarda citriodora)",
"lemon thyme (thymus x citriodorus)",
"lemon tree (citrus x limon 'eureka')",
"lettuce (lactuca sativa 'parris island')",
"lettuce (lactuca sativa 'red romaine')",
"lettuce (lactuca sativa 'rouge d'hiver')",
"lettuce (lactuca sativa 'yugoslavian red butterhead')",
"lettuces (lactuca sativa)",
"lewis' mockorange (philadelphus lewisii)",
"lilac (syringa first editions® virtual violet™)",
"lilac (syringa vulgaris 'belle de nancy')",
"lilac (syringa vulgaris 'sensation')",
"lilac (syringa x hyacinthiflora 'sweetheart')",
"lily (lilium 'corsage')",
"lily (lilium 'flavia')",
"lily (lilium 'fusion')",
"lily (lilium 'moonyeen')",
"lily (lilium 'ramona')",
"lily (lilium 'sunny morning')",
"lily (lilium 'viva la vida')",
"lily (lilium auratum)",
"lily (lilium pyrenaicum)",
"lily flowering tulip (tulipa 'claudia')",
"loose-leaf lettuce (lactuca sativa 'salad bowl')",
"madagascar palm (pachypodium geayi)",
"madagascar palm (pachypodium lamerei)",
"malagasy tree aloe (aloe vaombe)",
"marjorams (origanum laevigatum)",
"meadow blazing star (liatris ligulistylis)",
"mealy cup sage (salvia farinacea cathedral® shining seas)",
"melon (cucumis melo 'charentais')",
"melon (cucumis melo 'kajari')",
"melon (cucumis melo 'tigger')",
"meserve holly (ilex 'casanova')",
"mexican butterwort; mexican ping (pinguicula ibarrae)",
"mexican dogwood (cornus florida var. urbiniana)",
"mexican plum (prunus mexicana)",
"meyer's lemon (citrus x limon 'improved meyer')",
"milk and wine lily (crinum fimbriatulum)",
"miniature jonquilla daffodil (narcissus 'pipit')",
"mints (mentha)",
"mock orange (philadelphus 'innocence')",
"mock orange (philadelphus 'snow dwarf')",
"moonflower (ipomoea alba)",
"morning glory (ipomoea 'split second')",
"morning glory (ipomoea hederifolia 'aurantia')",
"morning glory (ipomoea nil 'kikyo snowflakes')",
"morning glory (ipomoea purpurea 'feringa')",
"morning glory (ipomoea tricolor 'clarke's heavenly blue')",
"mountain aloe (aloe broomii)",
"nectarine (prunus persica 'arctic glo')",
"nectarine (prunus persica 'early rivers')",
"nepeta (nepeta subsessilis)",
"nepeta (nepeta x faassenii 'select blue')",
"new england aster (symphyotrichum novae-angliae 'andenken an alma pötschke')",
"new england aster (symphyotrichum novae-angliae)",
"noble rhubarb (rheum nobile)",
"northern white cedar (thuja occidentalis mr. bowling ball™)",
"okra (abelmoschus esculentus 'burmese')",
"okra (abelmoschus esculentus 'clemson spineless')",
"okra (abelmoschus esculentus 'jambalaya')",
"okra (abelmoschus esculentus 'jing orange')",
"okra (abelmoschus esculentus 'red burgundy')",
"okra (abelmoschus esculentus)",
"oleander (nerium oleander 'calypso')",
"oleander (nerium oleander 'hardy white')",
"oleander (nerium oleander 'red cardinal')",
"onion (allium cepa 'red hunter')",
"onion (allium cepa 'red river f1')",
"onion (allium cepa 'walla walla sweet')",
"onions (allium cepa)",
"orange (citrus reticulata 'satsuma')",
"oreganos (origanum vulgare)",
"oriental radish (raphanus sativus 'new white spring')",
"ornamental gourd (cucurbita pepo 'tennessee dancing')",
"ornamental oregano (origanum laevigatum 'herrenhausen')",
"ornamental pepper (capsicum annuum 'black pearl')",
"ornamental pepper (capsicum annuum 'chilly chili')",
"ornamental sweet potato (ipomoea batatas 'blackie')",
"ornamental sweet potato (ipomoea batatas 'margarita')",
"pachypodium (pachypodium brevicaule)",
"pachypodium (pachypodium sofiense)",
"pacific coast iris (iris 'big waves')",
"pacific coast iris (iris 'caught in the wind')",
"pacific coast iris (iris 'finger pointing')",
"panicle hydrangea (hydrangea paniculata first editions® vanilla strawberry™)",
"parsleys (petroselinum crispum)",
"parsnip (pastinaca sativa 'harris model')",
"parsnip (pastinaca sativa 'hollow crown')",
"parsnip (pastinaca sativa 'javelin')",
"parsnips (pastinaca sativa)",
"pea (pisum sativum 'spring blush')",
"peach (prunus persica 'canadian harmony')",
"peach (prunus persica 'elberta')",
"peach (prunus persica flamin' fury® pf-24c)",
"peach-leaved bellflower (campanula persicifolia)",
"peacock orchid (gladiolus murielae)",
"pear (pyrus communis 'early seckel')",
"pencilled cranesbill (geranium versicolor)",
"penstemon riding hood red",
"peonies (paeonia)",
"peony (paeonia 'athena')",
"peony (paeonia 'pastelegance')",
"peony (paeonia daurica subsp. coriifolia)",
"peony (paeonia lactiflora 'bowl of beauty')",
"peony (paeonia lactiflora 'do tell')",
"peony (paeonia lactiflora 'top brass')",
"pepper (capsicum 'mad hatter')",
"peppers (capsicum)",
"persian catmint (nepeta racemosa 'little titch')",
"petunia amore™ queen of hearts",
"petunia crazytunia® cosmic pink",
"petunia headliner™ night sky",
"petunia midnight gold",
"petunia potunia® purple halo",
"petunia sweetunia® fiona flash",
"petunias (petunia)",
"phlox drummondii 'sugar stars'",
"pineberry (fragaria x ananassa 'white carolina')",
"pineleaf beardtongue (penstemon pinifolius half pint®)",
"pinks (dianthus 'little maiden')",
"plains coreopsis (coreopsis tinctoria)",
"plumeria 'queen amber'",
"plumeria (plumeria filifolia)",
"plumeria (plumeria rubra 'fireblast')",
"plumeria (plumeria rubra 'flaming rock dragon')",
"plumeria (plumeria rubra 'j 105')",
"plumeria (plumeria rubra 'mary helen eggenberger')",
"plumeria (plumeria rubra 'mellow yellow')",
"plumeria (plumeria rubra 'naples sixteen')",
"plumeria (plumeria rubra 'sophie')",
"plumerias (plumeria)",
"plums (prunus umbellata)",
"popcorn (zea mays subsp. mays 'glass gem')",
"poppies (papaver)",
"poppy (papaver 'sugar plum')",
"poppy (papaver rhoeas 'shirley poppy')",
"possumhaw holly (ilex decidua)",
"potato (solanum tuberosum 'adirondack blue')",
"potato (solanum tuberosum 'baltic rose')",
"potato (solanum tuberosum 'bojar')",
"potato (solanum tuberosum 'kennebec')",
"potato (solanum tuberosum 'red pontiac')",
"potato (solanum tuberosum 'vitelotte')",
"potatoes (solanum tuberosum)",
"pumpkin (cucurbita moschata 'musquee de provence')",
"pumpkin (cucurbita pepo 'styrian hulless')",
"pumpkin (cucurbita pepo 'winter luxury pie')",
"purple basil (ocimum basilicum 'purple delight')",
"purple cherry plum (prunus cerasifera 'hollywood')",
"purple coneflower (echinacea purpurea 'magnus')",
"purple coneflower (echinacea purpurea 'rubinstern')",
"purple coneflower (echinacea purpurea)",
"purple dead nettle (lamium purpureum)",
"purple marjoram (origanum laevigatum 'hopley's')",
"purple-flowering raspberry (rubus odoratus)",
"quiver tree (aloidendron dichotomum)",
"radish (raphanus sativus 'amethyst')",
"radish (raphanus sativus 'burpee cherry giant')",
"radish (raphanus sativus 'champion')",
"radish (raphanus sativus 'early scarlet globe')",
"radish (raphanus sativus 'german giant')",
"radishes (raphanus sativus)",
"rainbow carrot (daucus carota subsp. sativus 'rainbow')",
"rape (brassica napus subsp. napus)",
"rapini (brassica rapa subsp. rapa 'early fall')",
"raspberry (rubus idaeus 'joan j')",
"red currant (ribes rubrum 'red lake')",
"red flowering currant (ribes sanguineum 'brocklebankii')",
"red table grape (vitis labrusca 'vanessa')",
"red twig dogwood (cornus sanguinea 'anny's winter orange')",
"red twig dogwood (cornus sericea)",
"red-leaf hibiscus (hibiscus acetosella)",
"rhododendron 'blue peter'",
"rhododendron 'inga'",
"rhododendron 'mother of pearl'",
"rhododendron 'queen of england'",
"rhododendron 'roseum elegans'",
"rhododendrons (rhododendron)",
"rhubarb (rheum 'glaskins perpetual')",
"rhubarb (rheum rhabarbarum 'victoria')",
"rhubarb (rheum rhabarbarum)",
"rhubarbs (rheum)",
"rocky mountain beardtongue (penstemon strictus)",
"rocky mountain columbine (aquilegia coerulea)",
"romaine (lactuca sativa 'willow')",
"rose (rosa 'angel face')",
"rose (rosa 'ebb tide')",
"rose (rosa 'institut lumiere')",
"rose (rosa 'lavender crush')",
"rose (rosa 'sexy rexy')",
"rose (rosa 'the pilgrim')",
"rose (rosa 'veilchenblau')",
"rose (rosa 'wife of bath')",
"rose of sharon (hibiscus pollypetite™)",
"rose of sharon (hibiscus syriacus 'danica')",
"rose of sharon (hibiscus syriacus blue satin®)",
"rose of sharon (hibiscus syriacus chateau™ de chantilly)",
"roses of sharon (hibiscus syriacus)",
"russian sage (perovskia atriplicifolia)",
"russian sages (perovskia)",
"rusty blackhaw viburnum (viburnum rufidulum)",
"saffron crocus (crocus sativus)",
"salvia (salvia coerulea 'sapphire blue')",
"salvia (salvia splendens 'yvonne's salvia')",
"salvia (salvia x jamensis heatwave™ glimmer)",
"salvias (salvia)",
"san gabriel alumroot (heuchera abramsii)",
"sand lettuce (dudleya caespitosa)",
"sand pink (dianthus arenarius)",
"sargent viburnum (viburnum sargentii 'onondaga')",
"sargent's crabapple (malus sieboldii subsp. sieboldii 'roselow')",
"saturn peach (prunus persica 'saturn')",
"scallop squash (cucurbita pepo 'early white bush scallop')",
"sedum (sedum palmeri)",
"shallot (allium cepa 'creme brulee')",
"shasta daisies (leucanthemum x superbum)",
"shasta daisy (leucanthemum x superbum 'aglaya')",
"shasta daisy (leucanthemum x superbum 'becky')",
"shasta daisy (leucanthemum x superbum 'snehurka')",
"shasta daisy (leucanthemum x superbum 'snowcap')",
"shasta daisy (leucanthemum x superbum 'white breeze')",
"shasta daisy (leucanthemum x superbum sweet daisy™ christine)",
"shirley poppy (papaver rhoeas 'amazing grey')",
"shirley poppy (papaver rhoeas 'double mixed')",
"siempreviva (dudleya attenuata)",
"sierra canelo pincushion cactus (mammillaria standleyi)",
"sierra leone lily (chlorophytum 'fireflash')",
"silver margined holly (ilex aquifolium 'argentea marginata')",
"slow bolt cilantro (coriandrum sativum 'santo')",
"smoke tree (cotinus coggygria 'royal purple')",
"smoketree (cotinus coggygria golden spirit™)",
"smoketrees (cotinus coggygria)",
"smooth hydrangea (hydrangea arborescens 'annabelle')",
"snap bean (string) (phaseolus vulgaris 'black seeded blue lake')",
"snap bean (string) (phaseolus vulgaris 'blue lake bush #274')",
"snap bean (string) (phaseolus vulgaris 'wren's egg')",
"soap aloe (aloe maculata)",
"softneck garlic (allium sativum 'inchelium red')",
"spearmint (mentha spicata)",
"speedwell (veronica oltensis)",
"speedwell (veronica peduncularis 'georgia blue')",
"spider plant (chlorophytum comosum)",
"spike speedwell (veronica spicata royal candles)",
"spinach (spinacia oleracea 'alexandria')",
"spinach (spinacia oleracea 'america')",
"spinach (spinacia oleracea 'ashley')",
"spinach (spinacia oleracea 'gigante d'inverno')",
"spinach (spinacia oleracea 'red kitten')",
"spinach (spinacia oleracea 'reflect')",
"spinach (spinacia oleracea 'seaside')",
"spinaches (spinacia oleracea)",
"spiraeas (spiraea)",
"spirea (spiraea nipponica 'snowmound')",
"spotted beebalm (monarda punctata var. punctata)",
"spotted beebalm (monarda punctata)",
"spotted dead nettle (lamium maculatum 'pink pewter')",
"spotted dead nettle (lamium maculatum)",
"spring crocus (crocus versicolor 'picturatus')",
"squid agave (agave bracteosa)",
"st. christopher lily (crinum jagus)",
"strawberries (fragaria)",
"strawberry (fragaria x ananassa 'chandler')",
"strawberry (fragaria x ananassa)",
"strawberry foxglove (digitalis x mertonensis)",
"stringy stonecrop (sedum sarmentosum)",
"summer squash-crookneck (cucurbita pepo 'summer crookneck')",
"sunroot (helianthus tuberosus 'white fuseau')",
"sunroots (helianthus tuberosus)",
"swamp milkweed (asclepias incarnata)",
"sweet basil (ocimum basilicum)",
"sweet cherries (prunus avium)",
"sweet cherry (prunus avium 'bing')",
"sweet cherry (prunus avium 'black tatarian')",
"sweet cherry (prunus avium 'van')",
"sweet corn (zea mays 'essence')",
"sweet potato (ipomoea batatas 'carolina ruby')",
"sweet potato (ipomoea batatas sweet caroline sweetheart jet black™)",
"sweet potato vine (ipomoea batatas 'little blackie')",
"sweet potato vine (ipomoea batatas 'pink frost')",
"sweet potatoes (ipomoea batatas)",
"swiss chard (beta vulgaris subsp. cicla 'bright lights')",
"swiss chard (beta vulgaris subsp. cicla 'rhubarb chard')",
"swiss chard (beta vulgaris subsp. cicla 'ruby red')",
"tall bearded iris (iris 'blue me away')",
"tall bearded iris (iris 'lemon cloud')",
"tall bearded iris (iris 'merchant marine')",
"tall bearded iris (iris 'radiant garnet')",
"tall bearded iris (iris 'serene silence')",
"tall bearded iris (iris 'wonders never cease')",
"tall phlox (phlox paniculata)",
"tarragons (artemisia dracunculus)",
"tasteless stonecrop (sedum sexangulare)",
"texas nipple cactus (mammillaria prolifera subsp. texana)",
"texas star (hibiscus coccineus)",
"thimbleberry (rubus nutkanus)",
"thornless blackberry (rubus 'apache')",
"thornless blackberry (rubus 'arapaho')",
"thornless blackberry (rubus 'navaho')",
"thyme (thymus praecox 'highland cream')",
"thyme (thymus praecox)",
"thyme (thymus serpyllum 'roseum')",
"tiare (gardenia taitensis)",
"tickseed (coreopsis cruizin'™ main street)",
"tickseed (coreopsis satin & lace™ red chiffon)",
"tickseed (coreopsis uptick™ yellow & red)",
"tickseed (coreopsis grandiflora 'sunkiss')",
"tomato (solanum lycopersicum 'buffalo steak')",
"tomato (solanum lycopersicum 'dark galaxy')",
"tomato (solanum lycopersicum 'goldman's italian-american')",
"tomato (solanum lycopersicum 'helsing junction blues')",
"tomato (solanum lycopersicum 'park's whopper')",
"tomato (solanum lycopersicum 'pink delicious')",
"tomato (solanum lycopersicum 'sungold')",
"tomato (solanum lycopersicum 'yellow mortgage lifter')",
"tomatoes (solanum lycopersicum)",
"triandrus daffodil (narcissus 'thalia')",
"triple sweet corn (zea mays 'alto')",
"triumph tulip (tulipa 'aperitif')",
"triumph tulip (tulipa 'jackpot')",
"tropical milkweed (asclepias curassavica 'silky gold')",
"tropical milkweed (asclepias curassavica)",
"trumpet daffodil (narcissus 'marieke')",
"trumpet narcissus (narcissus 'bravoure')",
"tulip (tulipa 'brown sugar')",
"tulip (tulipa 'rasta parrot')",
"turnip (brassica rapa subsp. rapa 'gold ball')",
"turnip (brassica rapa subsp. rapa 'purple top white globe')",
"turnip (brassica rapa subsp. rapa 'round red')",
"turnip (brassica rapa subsp. rapa 'white egg')",
"turnip (brassica rapa subsp. rapa 'white lady')",
"turnips (brassica rapa subsp. rapa)",
"twin-spined cactus (mammillaria geminispina)",
"van houtte spiraea (spiraea x vanhouttei 'pink ice')",
"variegated pinwheel (aeonium haworthii 'variegatum')",
"variegated queen victoria century plant (agave victoriae-reginae 'albomarginata')",
"veronica (veronica longifolia)",
"vietnamese gardenia (gardenia vietnamensis)",
"waterlily tulip (tulipa kaufmanniana 'corona')",
"waterlily tulip (tulipa kaufmanniana 'scarlet baby')",
"welsh poppy (papaver cambricum 'flore pleno')",
"western red cedar (thuja plicata 'whipcord')",
"western red cedar (thuja plicata forever goldy®)",
"western red cedar (thuja plicata)",
"white currant (ribes rubrum 'white versailles')",
"white dead nettle (lamium album)",
"white stonecrop (sedum album 'twickel purple')",
"white texas star hibiscus (hibiscus coccineus 'alba')",
"wild asparagus (asparagus officinalis 'jersey knight')",
"wild asparagus (asparagus officinalis 'mary washington')",
"wild bergamot (monarda fistulosa)",
"wild blackberry (rubus cochinchinensis)",
"wild blue phlox (phlox divaricata)",
"wild indigo (baptisia 'brownie points')",
"wild indigo (baptisia 'lemon meringue')",
"wild indigo (baptisia 'pink lemonade')",
"wild thyme (thymus serpyllum 'pink chintz')",
"willow leaf foxglove (digitalis obscura)",
"winter honeysuckle (lonicera fragrantissima)",
"winter radish (raphanus sativus 'china rose')",
"winter squash (cucurbita maxima 'buttercup')",
"winterberry (ilex verticillata)",
"winterberry holly (ilex verticillata 'chrysocarpa')",
"winterberry holly (ilex verticillata 'tiasquam')",
"winterberry holly (ilex verticillata 'winter red')",
"wisterias (wisteria)",
"woolly thyme (thymus praecox subsp. polytrichus)",
"woolly turkish speedwell (veronica bombycina)",
"yarrow (achillea 'moonshine')",
"yarrow (achillea 'summer berries')",
"yarrow (achillea millefolium 'paprika')",
"yarrow (achillea millefolium 'sonoma coast')",
"yarrow (achillea millefolium 'summer pastels')",
"yarrow (achillea millefolium new vintageγäó rose)",
"yarrow (achillea millefolium)",
"yarrows (achillea)",
"yaupon holly (ilex vomitoria)",
"yellow archangel (lamium galeobdolon subsp. montanum 'florentinum')",
"rose"
] |
yahyapp/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4040
- Accuracy: 0.475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.6080 | 0.45 |
| No log | 2.0 | 40 | 1.4799 | 0.4875 |
| No log | 3.0 | 60 | 1.4764 | 0.425 |
| No log | 4.0 | 80 | 1.3875 | 0.5 |
| No log | 5.0 | 100 | 1.4627 | 0.4437 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
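The card does not yet include a usage sketch. As a minimal, hypothetical illustration of how this classifier's raw scores map to its eight emotion labels (argmax over the logits, as the Trainer's `id2label` mapping implies), consider:

```python
# The eight emotion classes this fine-tuned ViT predicts.
labels = ["anger", "contempt", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

# Hypothetical raw scores (logits) for one image; a real model forward pass
# would produce one score per class in the same order.
logits = [0.1, -0.3, 0.2, 0.0, 2.1, 0.4, -0.1, 0.3]

# The predicted label is the class with the highest score.
predicted = labels[max(range(len(logits)), key=logits.__getitem__)]
print(predicted)  # happy
```

With real inputs, the same mapping is what `transformers`' image-classification pipeline applies internally when it reports label names instead of class indices.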
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
nadyanvl/emotion_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3497
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
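The effective batch size listed above follows directly from the per-device batch size and the gradient accumulation steps; a one-line sketch of the relationship:

```python
# Gradient accumulation: each optimizer step sees
# train_batch_size * gradient_accumulation_steps examples.
train_batch_size = 16
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching the total_train_batch_size above
```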
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0823 | 1.0 | 10 | 2.0560 | 0.1625 |
| 2.0479 | 2.0 | 20 | 2.0218 | 0.2812 |
| 1.9636 | 3.0 | 30 | 1.8882 | 0.4062 |
| 1.7902 | 4.0 | 40 | 1.6881 | 0.4313 |
| 1.5792 | 5.0 | 50 | 1.6159 | 0.3688 |
| 1.4429 | 6.0 | 60 | 1.3871 | 0.5687 |
| 1.2854 | 7.0 | 70 | 1.2973 | 0.5437 |
| 1.1487 | 8.0 | 80 | 1.2303 | 0.6 |
| 1.0374 | 9.0 | 90 | 1.2661 | 0.5375 |
| 0.9584 | 10.0 | 100 | 1.1662 | 0.5563 |
| 0.8108 | 11.0 | 110 | 1.2135 | 0.5312 |
| 0.7402 | 12.0 | 120 | 1.2117 | 0.5813 |
| 0.6349 | 13.0 | 130 | 1.1176 | 0.6062 |
| 0.5674 | 14.0 | 140 | 1.1794 | 0.575 |
| 0.5103 | 15.0 | 150 | 1.0948 | 0.6375 |
| 0.4826 | 16.0 | 160 | 1.1833 | 0.5875 |
| 0.4128 | 17.0 | 170 | 1.2601 | 0.5375 |
| 0.3664 | 18.0 | 180 | 1.3378 | 0.55 |
| 0.3112 | 19.0 | 190 | 1.2789 | 0.5437 |
| 0.335 | 20.0 | 200 | 1.2913 | 0.5625 |
| 0.3261 | 21.0 | 210 | 1.1114 | 0.6 |
| 0.3443 | 22.0 | 220 | 1.2177 | 0.5938 |
| 0.2642 | 23.0 | 230 | 1.2299 | 0.5938 |
| 0.2895 | 24.0 | 240 | 1.2339 | 0.5813 |
| 0.266 | 25.0 | 250 | 1.2384 | 0.5875 |
| 0.2725 | 26.0 | 260 | 1.2100 | 0.6062 |
| 0.2725 | 27.0 | 270 | 1.3073 | 0.575 |
| 0.2637 | 28.0 | 280 | 1.3019 | 0.5875 |
| 0.2561 | 29.0 | 290 | 1.3597 | 0.5437 |
| 0.2375 | 30.0 | 300 | 1.3404 | 0.5563 |
| 0.2188 | 31.0 | 310 | 1.2922 | 0.5813 |
| 0.2141 | 32.0 | 320 | 1.3778 | 0.5312 |
| 0.198 | 33.0 | 330 | 1.3473 | 0.5875 |
| 0.1805 | 34.0 | 340 | 1.3984 | 0.5437 |
| 0.1888 | 35.0 | 350 | 1.3508 | 0.5813 |
| 0.1867 | 36.0 | 360 | 1.3531 | 0.575 |
| 0.1596 | 37.0 | 370 | 1.5846 | 0.4875 |
| 0.1564 | 38.0 | 380 | 1.3380 | 0.5687 |
| 0.1719 | 39.0 | 390 | 1.5206 | 0.5312 |
| 0.1678 | 40.0 | 400 | 1.2929 | 0.5875 |
| 0.136 | 41.0 | 410 | 1.5031 | 0.55 |
| 0.1602 | 42.0 | 420 | 1.3855 | 0.5625 |
| 0.174 | 43.0 | 430 | 1.4385 | 0.5875 |
| 0.179 | 44.0 | 440 | 1.3153 | 0.575 |
| 0.1284 | 45.0 | 450 | 1.4295 | 0.5875 |
| 0.1419 | 46.0 | 460 | 1.4126 | 0.575 |
| 0.1425 | 47.0 | 470 | 1.3760 | 0.5687 |
| 0.1602 | 48.0 | 480 | 1.4374 | 0.5875 |
| 0.1473 | 49.0 | 490 | 1.3126 | 0.5813 |
| 0.153 | 50.0 | 500 | 1.3497 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
syahid33/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4068
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.3074 | 0.5563 |
| No log | 2.0 | 80 | 1.4204 | 0.5312 |
| No log | 3.0 | 120 | 1.4447 | 0.525 |
| No log | 4.0 | 160 | 1.3472 | 0.5375 |
| No log | 5.0 | 200 | 1.3472 | 0.5437 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
ShinraC002/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Accuracy: 0.5687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.3484 | 0.5437 |
| No log | 2.0 | 80 | 1.3268 | 0.4875 |
| No log | 3.0 | 120 | 1.2463 | 0.5437 |
| No log | 4.0 | 160 | 1.2361 | 0.5563 |
| No log | 5.0 | 200 | 1.2089 | 0.5813 |
| No log | 6.0 | 240 | 1.2544 | 0.525 |
| No log | 7.0 | 280 | 1.1947 | 0.5563 |
| No log | 8.0 | 320 | 1.2502 | 0.5188 |
| No log | 9.0 | 360 | 1.3415 | 0.4938 |
| No log | 10.0 | 400 | 1.1336 | 0.6 |
| No log | 11.0 | 440 | 1.2716 | 0.5437 |
| No log | 12.0 | 480 | 1.4631 | 0.5 |
| 0.6882 | 13.0 | 520 | 1.3970 | 0.5563 |
| 0.6882 | 14.0 | 560 | 1.2654 | 0.5188 |
| 0.6882 | 15.0 | 600 | 1.2498 | 0.575 |
| 0.6882 | 16.0 | 640 | 1.2655 | 0.5938 |
| 0.6882 | 17.0 | 680 | 1.3577 | 0.55 |
| 0.6882 | 18.0 | 720 | 1.2711 | 0.5813 |
| 0.6882 | 19.0 | 760 | 1.3127 | 0.5687 |
| 0.6882 | 20.0 | 800 | 1.2478 | 0.575 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
fahmindra/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4050
- Accuracy: 0.4688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
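The linear scheduler with a 0.1 warmup ratio ramps the learning rate up over the first 10% of optimizer steps, then decays it linearly to zero. A rough sketch of that schedule (assuming 100 total steps, consistent with the step counts in the results table; this approximates, not reproduces, the Trainer's internal scheduler):

```python
def linear_lr(step, total_steps=100, warmup_ratio=0.1, base_lr=5e-5):
    """Approximate a linear-with-warmup learning-rate schedule."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear ramp-up
    # linear decay from the peak down to zero at the final step
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_lr(10))   # peak learning rate at the end of warmup
print(linear_lr(100))  # decayed to 0.0 at the end of training
```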
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8187 | 1.0 | 10 | 1.8406 | 0.3063 |
| 1.6795 | 2.0 | 20 | 1.6701 | 0.3688 |
| 1.5506 | 3.0 | 30 | 1.5578 | 0.45 |
| 1.4417 | 4.0 | 40 | 1.5077 | 0.4875 |
| 1.3707 | 5.0 | 50 | 1.4297 | 0.5062 |
| 1.3167 | 6.0 | 60 | 1.4157 | 0.4938 |
| 1.267 | 7.0 | 70 | 1.3779 | 0.525 |
| 1.2197 | 8.0 | 80 | 1.3784 | 0.5 |
| 1.191 | 9.0 | 90 | 1.3701 | 0.5188 |
| 1.1649 | 10.0 | 100 | 1.3611 | 0.4938 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
byrocuy/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3393
- Accuracy: 0.5312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.2359 | 0.5625 |
| No log | 2.0 | 80 | 1.2754 | 0.5625 |
| No log | 3.0 | 120 | 1.2272 | 0.5437 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
farhanyh/emotion-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2636
- Accuracy: 0.5125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.9736 | 0.225 |
| No log | 2.0 | 40 | 1.7481 | 0.2687 |
| No log | 3.0 | 60 | 1.6042 | 0.3187 |
| No log | 4.0 | 80 | 1.5067 | 0.4062 |
| No log | 5.0 | 100 | 1.4777 | 0.3875 |
| No log | 6.0 | 120 | 1.4160 | 0.4437 |
| No log | 7.0 | 140 | 1.3415 | 0.4875 |
| No log | 8.0 | 160 | 1.3274 | 0.4813 |
| No log | 9.0 | 180 | 1.3460 | 0.4938 |
| No log | 10.0 | 200 | 1.3201 | 0.5 |
| No log | 11.0 | 220 | 1.2853 | 0.5125 |
| No log | 12.0 | 240 | 1.2671 | 0.5312 |
| No log | 13.0 | 260 | 1.2979 | 0.5062 |
| No log | 14.0 | 280 | 1.2755 | 0.575 |
| No log | 15.0 | 300 | 1.2490 | 0.5312 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
fauzifadhi/image-classificaation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-classificaation
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
amaliaam/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.0915
- eval_accuracy: 0.0938
- eval_runtime: 10.0977
- eval_samples_per_second: 15.845
- eval_steps_per_second: 0.99
- step: 0
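The throughput figures above follow directly from the evaluation-set size and runtime: with 16-image batches, 160 samples take 10 batches. A quick sanity check of that arithmetic (the evaluation-set size of 160 is inferred from the reported numbers, not stated in the card):

```python
import math

eval_samples = 160          # inferred: 15.845 samples/s x 10.0977 s ~= 160
eval_runtime = 10.0977      # seconds, as reported above
eval_batch_size = 16

samples_per_second = eval_samples / eval_runtime
steps_per_second = math.ceil(eval_samples / eval_batch_size) / eval_runtime

print(round(samples_per_second, 3))  # 15.845
print(round(steps_per_second, 2))    # 0.99
```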
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
rdtm/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3541
- Accuracy: 0.4813
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4409 | 0.475 |
| No log | 2.0 | 80 | 1.3711 | 0.4813 |
| No log | 3.0 | 120 | 1.3471 | 0.5125 |
| No log | 4.0 | 160 | 1.3580 | 0.525 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
kausarme/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
arnaucas/wildfire-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wildfire classifier
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on the
[Kaggle Wildfire Dataset](https://www.kaggle.com/datasets/elmadafri/the-wildfire-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.2329
- Accuracy: 0.9202
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1208 | 1.28 | 100 | 0.2329 | 0.9202 |
| 0.0261 | 2.56 | 200 | 0.2469 | 0.9316 |
| 0.0007 | 3.85 | 300 | 0.2358 | 0.9392 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
### Additional resources
[Fine-tuning tutorial](https://huggingface.co/blog/fine-tune-vit)
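At inference time the classifier head outputs one logit per class; softmax turns these into probabilities, and the `id2label` mapping names the winner. A minimal sketch with made-up logits (a real forward pass through `ViTForImageClassification` would produce them from an image):

```python
import math

id2label = {0: "nofire", 1: "fire"}

def classify(logits):
    """Softmax over raw logits, then pick the most probable class."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]

# Made-up logits standing in for a real forward pass
label, prob = classify([-1.2, 2.3])
print(label, round(prob, 3))  # fire 0.971
```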
|
[
"nofire",
"fire"
] |
stbnlen/pokemon_classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pokemon_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pokemon-classification dataset (`full` configuration).
It achieves the following results on the evaluation set:
- Loss: 8.0935
- Accuracy: 0.0885
## Model description
This model, "PokemonClassifier," is a fine-tuned version of google/vit-base-patch16-224 on Pokémon classification datasets. Its objective is to identify which Pokémon appears in an input image; the loss and accuracy figures above summarize how well it does so on the evaluation set.
## Intended uses & limitations
This model is limited to the training data it was exposed to and can only identify the following Pokémon: Golbat, Machoke, Omastar, Diglett, Lapras, Kabuto,
Persian, Weepinbell, Golem, Dodrio, Raichu, Zapdos, Raticate, Magnemite, Ivysaur, Growlithe, Tangela, Drowzee, Rapidash, Venonat, Pidgeot, Nidorino, Porygon,
Lickitung, Rattata, Machop, Charmeleon, Slowbro, Parasect, Eevee, Starmie, Staryu, Psyduck, Dragonair, Magikarp, Vileplume, Marowak, Pidgeotto, Shellder, Mewtwo,
Farfetchd, Kingler, Seel, Kakuna, Doduo, Electabuzz, Charmander, Rhyhorn, Tauros, Dugtrio, Poliwrath, Gengar, Exeggutor, Dewgong, Jigglypuff, Geodude, Kadabra, Nidorina,
Sandshrew, Grimer, MrMime, Pidgey, Koffing, Ekans, Alolan Sandslash, Venusaur, Snorlax, Paras, Jynx, Chansey, Hitmonchan, Gastly, Kangaskhan, Oddish, Wigglytuff,
Graveler, Arcanine, Clefairy, Articuno, Poliwag, Abra, Squirtle, Voltorb, Ponyta, Moltres, Nidoqueen, Magmar, Onix, Vulpix, Butterfree, Krabby, Arbok, Clefable, Goldeen,
Magneton, Dratini, Caterpie, Jolteon, Nidoking, Alakazam, Dragonite, Fearow, Slowpoke, Weezing, Beedrill, Weedle, Cloyster, Vaporeon, Gyarados, Golduck, Machamp, Hitmonlee,
Primeape, Cubone, Sandslash, Scyther, Haunter, Metapod, Tentacruel, Aerodactyl, Kabutops, Ninetales, Zubat, Rhydon, Mew, Pinsir, Ditto, Victreebel, Omanyte, Horsea, Pikachu,
Blastoise, Venomoth, Charizard, Seadra, Muk, Spearow, Bulbasaur, Bellsprout, Electrode, Gloom, Poliwhirl, Flareon, Seaking, Hypno, Wartortle, Mankey, Tentacool, Exeggcute,
and Meowth.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0872 | 0.82 | 500 | 7.2669 | 0.0640 |
| 0.1581 | 1.64 | 1000 | 7.6072 | 0.0712 |
| 0.0536 | 2.46 | 1500 | 7.8952 | 0.0842 |
| 0.0169 | 3.28 | 2000 | 8.0935 | 0.0885 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"golbat",
"machoke",
"raichu",
"dragonite",
"fearow",
"slowpoke",
"weezing",
"beedrill",
"weedle",
"cloyster",
"vaporeon",
"gyarados",
"golduck",
"zapdos",
"machamp",
"hitmonlee",
"primeape",
"cubone",
"sandslash",
"scyther",
"haunter",
"metapod",
"tentacruel",
"aerodactyl",
"raticate",
"kabutops",
"ninetales",
"zubat",
"rhydon",
"mew",
"pinsir",
"ditto",
"victreebel",
"omanyte",
"horsea",
"magnemite",
"pikachu",
"blastoise",
"venomoth",
"charizard",
"seadra",
"muk",
"spearow",
"bulbasaur",
"bellsprout",
"electrode",
"ivysaur",
"gloom",
"poliwhirl",
"flareon",
"seaking",
"hypno",
"wartortle",
"mankey",
"tentacool",
"exeggcute",
"meowth",
"growlithe",
"tangela",
"drowzee",
"rapidash",
"venonat",
"omastar",
"pidgeot",
"nidorino",
"porygon",
"lickitung",
"rattata",
"machop",
"charmeleon",
"slowbro",
"parasect",
"eevee",
"diglett",
"starmie",
"staryu",
"psyduck",
"dragonair",
"magikarp",
"vileplume",
"marowak",
"pidgeotto",
"shellder",
"mewtwo",
"lapras",
"farfetchd",
"kingler",
"seel",
"kakuna",
"doduo",
"electabuzz",
"charmander",
"rhyhorn",
"tauros",
"dugtrio",
"kabuto",
"poliwrath",
"gengar",
"exeggutor",
"dewgong",
"jigglypuff",
"geodude",
"kadabra",
"nidorina",
"sandshrew",
"grimer",
"persian",
"mrmime",
"pidgey",
"koffing",
"ekans",
"alolan sandslash",
"venusaur",
"snorlax",
"paras",
"jynx",
"chansey",
"weepinbell",
"hitmonchan",
"gastly",
"kangaskhan",
"oddish",
"wigglytuff",
"graveler",
"arcanine",
"clefairy",
"articuno",
"poliwag",
"golem",
"abra",
"squirtle",
"voltorb",
"ponyta",
"moltres",
"nidoqueen",
"magmar",
"onix",
"vulpix",
"butterfree",
"dodrio",
"krabby",
"arbok",
"clefable",
"goldeen",
"magneton",
"dratini",
"caterpie",
"jolteon",
"nidoking",
"alakazam"
] |
rizepth/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6857
- Accuracy: 0.4062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8755 | 0.3125 |
| No log | 2.0 | 80 | 1.6801 | 0.4062 |
| No log | 3.0 | 120 | 1.6357 | 0.3812 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
dima806/medicinal_plants_image_detection
|
Detects the type of Indian medicinal plant from a plant/leaf image.
See https://www.kaggle.com/code/dima806/indian-medicinal-plants-image-detection-vit for more details.

```
Classification report:
precision recall f1-score support
Amla 1.0000 1.0000 1.0000 116
Curry 1.0000 1.0000 1.0000 115
Betel 0.9914 1.0000 0.9957 115
Bamboo 1.0000 1.0000 1.0000 116
Palak(Spinach) 1.0000 1.0000 1.0000 116
Coriender 1.0000 1.0000 1.0000 115
Ashoka 1.0000 1.0000 1.0000 115
Seethapala 1.0000 1.0000 1.0000 115
Lemon_grass 1.0000 1.0000 1.0000 116
Pappaya 1.0000 1.0000 1.0000 115
Curry_Leaf 1.0000 1.0000 1.0000 116
Lemon 1.0000 0.9913 0.9956 115
Nooni 1.0000 1.0000 1.0000 116
Henna 1.0000 1.0000 1.0000 116
Mango 1.0000 1.0000 1.0000 116
Doddpathre 1.0000 1.0000 1.0000 115
Amruta_Balli 1.0000 1.0000 1.0000 115
Betel_Nut 1.0000 1.0000 1.0000 116
Tulsi 0.9914 0.9914 0.9914 116
Pomegranate 1.0000 1.0000 1.0000 115
Castor 1.0000 1.0000 1.0000 116
Jackfruit 1.0000 1.0000 1.0000 116
Insulin 1.0000 1.0000 1.0000 116
Pepper 1.0000 1.0000 1.0000 116
Raktachandini 1.0000 1.0000 1.0000 116
Aloevera 1.0000 1.0000 1.0000 116
Jasmine 1.0000 1.0000 1.0000 116
Doddapatre 1.0000 1.0000 1.0000 115
Neem 1.0000 1.0000 1.0000 115
Geranium 1.0000 1.0000 1.0000 115
Rose 1.0000 1.0000 1.0000 115
Gauva 1.0000 1.0000 1.0000 116
Hibiscus 1.0000 1.0000 1.0000 116
Nithyapushpa 1.0000 1.0000 1.0000 116
Wood_sorel 1.0000 1.0000 1.0000 115
Tamarind 1.0000 1.0000 1.0000 116
Guava 1.0000 1.0000 1.0000 116
Bhrami 1.0000 1.0000 1.0000 115
Sapota 1.0000 1.0000 1.0000 116
Basale 1.0000 1.0000 1.0000 116
Avacado 1.0000 1.0000 1.0000 116
Ashwagandha 1.0000 1.0000 1.0000 116
Nagadali 0.9897 0.8348 0.9057 115
Arali 1.0000 1.0000 1.0000 115
Ekka 1.0000 1.0000 1.0000 116
Ganike 0.8582 0.9914 0.9200 116
Tulasi 0.9913 0.9913 0.9913 115
Honge 1.0000 1.0000 1.0000 115
Mint 1.0000 1.0000 1.0000 116
Catharanthus 1.0000 1.0000 1.0000 116
Papaya 1.0000 1.0000 1.0000 116
Brahmi 1.0000 1.0000 1.0000 116
accuracy 0.9962 6012
macro avg 0.9966 0.9962 0.9961 6012
weighted avg 0.9966 0.9962 0.9962 6012
```
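The macro average in the report above is the unweighted mean of the per-class scores, while the weighted average weights each class by its support. A minimal sketch of the two aggregations, using a tiny made-up set of per-class precisions rather than the full 52-class table:

```python
# (precision, support) per class -- made-up values for illustration
per_class = [(1.0, 116), (0.9914, 115), (0.8582, 116)]

# Macro: unweighted mean over classes
macro = sum(p for p, _ in per_class) / len(per_class)

# Weighted: each class contributes proportionally to its support
total_support = sum(s for _, s in per_class)
weighted = sum(p * s for p, s in per_class) / total_support

print(round(macro, 4), round(weighted, 4))  # 0.9499 0.9497
```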
|
[
"amla",
"curry",
"betel",
"bamboo",
"palak(spinach)",
"coriender",
"ashoka",
"seethapala",
"lemon_grass",
"pappaya",
"curry_leaf",
"lemon",
"nooni",
"henna",
"mango",
"doddpathre",
"amruta_balli",
"betel_nut",
"tulsi",
"pomegranate",
"castor",
"jackfruit",
"insulin",
"pepper",
"raktachandini",
"aloevera",
"jasmine",
"doddapatre",
"neem",
"geranium",
"rose",
"gauva",
"hibiscus",
"nithyapushpa",
"wood_sorel",
"tamarind",
"guava",
"bhrami",
"sapota",
"basale",
"avacado",
"ashwagandha",
"nagadali",
"arali",
"ekka",
"ganike",
"tulasi",
"honge",
"mint",
"catharanthus",
"papaya",
"brahmi"
] |
3sulton/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6601
- Accuracy: 0.4375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
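With `gradient_accumulation_steps: 4`, gradients from four micro-batches of 16 are averaged before each optimizer step, which is why the effective (total) train batch size is 64. A minimal numeric sketch of why the accumulated gradient matches a single large-batch gradient (plain Python with a scalar MSE gradient, no framework):

```python
# Gradient of mean((w - y)^2) w.r.t. scalar weight w: 2 * mean(w - y)
def grad(w, batch):
    return 2 * sum(w - y for y in batch) / len(batch)

w = 0.0
data = [float(i) for i in range(64)]                        # one "full" batch of 64 samples
micro_batches = [data[i:i + 16] for i in range(0, 64, 16)]  # 4 micro-batches of 16

# Accumulate: average the 4 micro-batch gradients before stepping
accumulated = sum(grad(w, mb) for mb in micro_batches) / len(micro_batches)
full = grad(w, data)

print(accumulated == full)  # True: same update as one batch of 64
```

This equality holds exactly here because the micro-batches are equal-sized, so the mean of their means equals the full-batch mean.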
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0289 | 1.0 | 10 | 1.9865 | 0.2812 |
| 1.9055 | 2.0 | 20 | 1.8493 | 0.3875 |
| 1.7613 | 3.0 | 30 | 1.7289 | 0.4625 |
| 1.6622 | 4.0 | 40 | 1.6590 | 0.4688 |
| 1.6224 | 5.0 | 50 | 1.6339 | 0.4688 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
DHEIVER/Modelo-Avancado-de-Ultrassom-de-Mama
|
- Loss: 0.0398
- Accuracy: 0.9882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.5059 | 1.0 | 199 | 0.9001 | 0.4826 |
| 0.2533 | 2.0 | 398 | 0.9515 | 0.2124 |
| 0.2358 | 3.0 | 597 | 0.9538 | 0.1543 |
| 0.2584 | 4.0 | 796 | 0.9642 | 0.1136 |
| 0.1085 | 5.0 | 995 | 0.9746 | 0.0891 |
| 0.1007 | 6.0 | 1194 | 0.9769 | 0.0725 |
| 0.1463 | 7.0 | 1393 | 0.9840 | 0.0541 |
| 0.3564 | 8.0 | 1592 | 0.9802 | 0.0880 |
| 0.0957 | 9.0 | 1791 | 0.9656 | 0.1375 |
| 0.1481 | 10.0 | 1990 | 0.9873 | 0.0511 |
| 0.1536 | 11.0 | 2189 | 0.9713 | 0.0827 |
| 0.0458 | 12.0 | 2388 | 0.9882 | 0.0398 |
| 0.4956 | 13.0 | 2587 | 0.8643 | 0.3474 |
| 0.0801 | 14.0 | 2786 | 0.9797 | 0.0850 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
[
"benign",
"malignant"
] |
krismp/emotion_recognition
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_recognition
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3469
- Accuracy: 0.175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 2.0721 | 0.125 |
| No log | 2.0 | 20 | 2.0633 | 0.125 |
| No log | 3.0 | 30 | 2.0038 | 0.125 |
| No log | 4.0 | 40 | 1.9097 | 0.125 |
| No log | 5.0 | 50 | 1.7412 | 0.125 |
| No log | 6.0 | 60 | 1.6189 | 0.05 |
| No log | 7.0 | 70 | 1.5343 | 0.0375 |
| No log | 8.0 | 80 | 1.4746 | 0.0688 |
| No log | 9.0 | 90 | 1.4330 | 0.0938 |
| No log | 10.0 | 100 | 1.4130 | 0.15 |
| No log | 11.0 | 110 | 1.3735 | 0.1062 |
| No log | 12.0 | 120 | 1.3516 | 0.1062 |
| No log | 13.0 | 130 | 1.2838 | 0.1375 |
| No log | 14.0 | 140 | 1.3058 | 0.1187 |
| No log | 15.0 | 150 | 1.3116 | 0.1 |
| No log | 16.0 | 160 | 1.3269 | 0.1313 |
| No log | 17.0 | 170 | 1.2624 | 0.1062 |
| No log | 18.0 | 180 | 1.3285 | 0.1187 |
| No log | 19.0 | 190 | 1.3490 | 0.1437 |
| No log | 20.0 | 200 | 1.2592 | 0.1375 |
| No log | 21.0 | 210 | 1.3600 | 0.0938 |
| No log | 22.0 | 220 | 1.2835 | 0.1313 |
| No log | 23.0 | 230 | 1.2842 | 0.1375 |
| No log | 24.0 | 240 | 1.2840 | 0.1 |
| No log | 25.0 | 250 | 1.2456 | 0.1313 |
| No log | 26.0 | 260 | 1.2960 | 0.1562 |
| No log | 27.0 | 270 | 1.3208 | 0.1375 |
| No log | 28.0 | 280 | 1.3207 | 0.1375 |
| No log | 29.0 | 290 | 1.2892 | 0.175 |
| No log | 30.0 | 300 | 1.2837 | 0.1812 |
| No log | 31.0 | 310 | 1.3548 | 0.1562 |
| No log | 32.0 | 320 | 1.4371 | 0.1437 |
| No log | 33.0 | 330 | 1.4219 | 0.1562 |
| No log | 34.0 | 340 | 1.4033 | 0.1875 |
| No log | 35.0 | 350 | 1.4505 | 0.1437 |
| No log | 36.0 | 360 | 1.2975 | 0.1562 |
| No log | 37.0 | 370 | 1.3906 | 0.1562 |
| No log | 38.0 | 380 | 1.3547 | 0.1688 |
| No log | 39.0 | 390 | 1.4706 | 0.1938 |
| No log | 40.0 | 400 | 1.3595 | 0.1625 |
| No log | 41.0 | 410 | 1.4236 | 0.1625 |
| No log | 42.0 | 420 | 1.4180 | 0.1812 |
| No log | 43.0 | 430 | 1.3993 | 0.1562 |
| No log | 44.0 | 440 | 1.4066 | 0.1625 |
| No log | 45.0 | 450 | 1.3760 | 0.175 |
| No log | 46.0 | 460 | 1.4221 | 0.1812 |
| No log | 47.0 | 470 | 1.3772 | 0.1625 |
| No log | 48.0 | 480 | 1.4265 | 0.2 |
| No log | 49.0 | 490 | 1.4716 | 0.1625 |
| 0.6962 | 50.0 | 500 | 1.3917 | 0.1625 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |