| model_id (string, length 7–105) | model_card (string, length 1–130k) | model_labels (list, length 2–80k) |
|---|---|---|
gilangr2/image_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2573
- Accuracy: 0.525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
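For reference, a minimal sketch of how a hyperparameter list like this maps onto the `transformers` `Trainer` API (a sketch under assumptions: the preprocessed `train_ds`/`eval_ds` splits are assumed to exist, and this is not the author's exact training script):
```python
from transformers import AutoModelForImageClassification, TrainingArguments, Trainer

# Base checkpoint named in the card; 8 labels matching this model's label list
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=8,
)

# Mirrors the hyperparameters above; Adam with betas=(0.9, 0.999),
# epsilon=1e-08 and a linear LR schedule are the Trainer defaults
args = TrainingArguments(
    output_dir="image_classification",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # assumed: preprocessed imagefolder splits
    eval_dataset=eval_ds,
)
trainer.train()
```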
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.3032 | 0.5125 |
| No log | 2.0 | 80 | 1.2982 | 0.4875 |
| No log | 3.0 | 120 | 1.2802 | 0.55 |
| No log | 4.0 | 160 | 1.2181 | 0.55 |
| No log | 5.0 | 200 | 1.1645 | 0.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
lossless/autotrain-vertigo-actors-01-90060144093
|
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 90060144093
- CO2 Emissions (in grams): 0.7630
## Validation Metrics
- Loss: 0.105
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
|
[
"james-stewart",
"kim-novak"
] |
lossless/autotrain-vertigo-actors-02-90066144103
|
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90066144103
- CO2 Emissions (in grams): 0.8491
## Validation Metrics
- Loss: 0.590
- Accuracy: 0.833
- Macro F1: 0.833
- Micro F1: 0.833
- Weighted F1: 0.833
- Macro Precision: 0.833
- Micro Precision: 0.833
- Weighted Precision: 0.833
- Macro Recall: 0.833
- Micro Recall: 0.833
- Weighted Recall: 0.833
|
[
"james-stewart",
"kim-novak",
"other"
] |
touchtech/fashion-images-perspectives-vit-large-patch16-224-in21k-v4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fashion-images-perspectives-vit-large-patch16-224-in21k-v4
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the touchtech/fashion-images-perspectives dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2203
- Accuracy: 0.9434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4275 | 1.0 | 3081 | 0.3064 | 0.9011 |
| 0.3555 | 2.0 | 6162 | 0.3097 | 0.9103 |
| 0.3069 | 3.0 | 9243 | 0.3036 | 0.9106 |
| 0.2449 | 4.0 | 12324 | 0.2268 | 0.9377 |
| 0.2339 | 5.0 | 15405 | 0.2203 | 0.9434 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"model-back-close",
"model-back-full",
"pack-detail",
"pack-front",
"pack-side",
"pack-top",
"model-detail",
"model-front-close",
"model-front-full",
"model-side-close",
"model-side-full",
"pack-angled",
"pack-back",
"pack-bottom"
] |
fsuarez/autotrain-logo-identifier-90194144191
|
# 📒 logo-identifier-model
This model was trained on a dataset called "LogoIdentifier" for multi-class classification of logos from 57 well-known brands and companies. These brands span a wide range of industries and levels of recognition, from global giants such as Coca-Cola, Coleman, Google, IBM, Nike, and Pepsi to many others. Each brand is organized into its own subfolder containing a set of logo images for classification.
# 🧪 Dataset Content
- The dataset includes logos from various brands and companies.
- The dataset is organized into subfolders, each corresponding to a specific brand or company.
- It contains a wide range of brand logos, including Acer, Acura, Adidas, Samsung, Lenovo, McDonald's, Java, and many more.
- Each brand or company in the dataset is listed below with the number of images available for it.
The model has been trained to recognize and classify logos into their respective brand categories based on the images provided in the dataset.
| Company | Quantity of images |
| ----------------- | ------------------ |
| Acer | 67 |
| Acura | 74 |
| Addidas | 90 |
| Ades | 36 |
| Adio | 63 |
| Cadillac | 69 |
| CalvinKlein | 65 |
| Canon | 59 |
| Cocacola | 40 |
| CocaColaZero | 91 |
| Coleman | 57 |
| Converse | 60 |
| CornFlakes | 62 |
| DominossPizza | 99 |
| Excel | 88 |
| Gillette | 86 |
| GMC | 75 |
| Google | 93 |
| HardRockCafe | 93 |
| HBO | 103 |
| Heineken | 84 |
| HewlettPackard | 81 |
| Hp | 87 |
| Huawei | 84 |
| Hyundai | 84 |
| IBM | 84 |
| Java | 62 |
| KFC | 84 |
| Kia | 76 |
| Kingston | 79 |
| Lenovo | 82 |
| LG | 95 |
| Lipton | 94 |
| Mattel | 77 |
| McDonalds | 98 |
| MercedesBenz | 94 |
| Motorola | 86 |
| Nestle | 94 |
| Nickelodeon | 74 |
| Nike | 50 |
| Pennzoil | 82 |
| Pepsi | 93 |
| Peugeot | 60 |
| Porsche | 71 |
| Samsung | 96 |
| SchneiderElectric | 42 |
| Shell | 58 |
To use this model for brand logo identification, load it with the Hugging Face Transformers library using its repository ID. You can then input an image of a brand logo, and the model will predict the brand it belongs to based on its training.
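A minimal sketch of that workflow with the `transformers` image-classification pipeline (the image path is a placeholder, not from the original card):
```python
from transformers import pipeline

# Load the fine-tuned logo classifier from the Hub by repository ID
classifier = pipeline(
    "image-classification",
    model="fsuarez/autotrain-logo-identifier-90194144191",
)

# Predict the brand for a local logo image (placeholder path)
for prediction in classifier("logo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```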
# 🤗 Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90194144191
- CO2 Emissions (in grams): 0.0608
## 📐 Validation Metrics
- Loss: 0.300
- Accuracy: 0.924
- Macro F1: 0.924
- Micro F1: 0.924
- Weighted F1: 0.922
- Macro Precision: 0.930
- Micro Precision: 0.924
- Weighted Precision: 0.928
- Macro Recall: 0.924
- Micro Recall: 0.924
- Weighted Recall: 0.924
|
[
"acer",
"acura",
"converse",
"cornflakes",
"dominospizza",
"excel",
"gmc",
"gillette",
"google",
"hardrockcafe",
"heineken",
"hewlettpackard",
"addidas",
"hp",
"huawei",
"hyundai",
"ibm",
"java",
"kfc",
"kia",
"kingston",
"lg",
"lenovo",
"ades",
"lipton",
"mattel",
"mcdonalds",
"mercedesbenz",
"motorola",
"nestle",
"nickelodeon",
"nike",
"pennzoil",
"pepsi",
"adio",
"peugeot",
"porsche",
"samsung",
"schneiderelectric",
"shell",
"cocacola",
"cadillac",
"calvinklein",
"canon",
"cocacolazero",
"coleman"
] |
xlagor/swin-tiny-patch4-window7-224-finetuned-fit
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-fit
This model is a fine-tuned version of [xlagor/swin-tiny-patch4-window7-224-finetuned-fit](https://huggingface.co/xlagor/swin-tiny-patch4-window7-224-finetuned-fit) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0711
- Accuracy: 0.9772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 120
- eval_batch_size: 120
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 480
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3727 | 0.99 | 62 | 0.1103 | 0.9680 |
| 0.3551 | 1.99 | 125 | 0.1018 | 0.9701 |
| 0.3258 | 3.0 | 188 | 0.0995 | 0.9706 |
| 0.3008 | 4.0 | 251 | 0.0939 | 0.9712 |
| 0.2896 | 4.99 | 313 | 0.0872 | 0.9730 |
| 0.2612 | 5.99 | 376 | 0.0829 | 0.9739 |
| 0.2275 | 7.0 | 439 | 0.0815 | 0.9748 |
| 0.2358 | 8.0 | 502 | 0.0839 | 0.9739 |
| 0.2191 | 8.99 | 564 | 0.0778 | 0.9775 |
| 0.2096 | 9.99 | 627 | 0.0759 | 0.9769 |
| 0.2063 | 11.0 | 690 | 0.0749 | 0.9778 |
| 0.1916 | 12.0 | 753 | 0.0735 | 0.9775 |
| 0.2002 | 12.99 | 815 | 0.0732 | 0.9781 |
| 0.1905 | 13.99 | 878 | 0.0713 | 0.9784 |
| 0.1835 | 15.0 | 941 | 0.0707 | 0.9784 |
| 0.1949 | 15.81 | 992 | 0.0711 | 0.9772 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"birds",
"bottles",
"breads",
"butterflies",
"cakes",
"cats",
"chickens",
"cows",
"dogs",
"ducks",
"elephants",
"fishes",
"handguns",
"horses",
"lions",
"lipsticks",
"seals",
"snakes",
"spiders",
"vases"
] |
Jayanth2002/dinov2-base-finetuned-SkinDisease
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base-finetuned-SkinDisease
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the Custom dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1321
- Accuracy: 0.9557
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained on a large collection of images in a self-supervised fashion.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
Through pre-training, the model learns an inner representation of images that can then be used to extract features useful for downstream tasks: given a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
## How to use
```python
import torch
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

repo_name = "Jayanth2002/dinov2-base-finetuned-SkinDisease"
image_processor = AutoImageProcessor.from_pretrained(repo_name)
model = AutoModelForImageClassification.from_pretrained(repo_name)

# Load and preprocess the test image
image_path = "/content/img_416.jpg"
image = Image.open(image_path)
encoding = image_processor(image.convert("RGB"), return_tensors="pt")

# Make a prediction
with torch.no_grad():
    outputs = model(**encoding)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()

# Map the predicted index to its class name
class_names = ['Basal Cell Carcinoma', 'Darier_s Disease', 'Epidermolysis Bullosa Pruriginosa', 'Hailey-Hailey Disease', 'Herpes Simplex', 'Impetigo', 'Larva Migrans', 'Leprosy Borderline', 'Leprosy Lepromatous', 'Leprosy Tuberculoid', 'Lichen Planus', 'Lupus Erythematosus Chronicus Discoides', 'Melanoma', 'Molluscum Contagiosum', 'Mycosis Fungoides', 'Neurofibromatosis', 'Papilomatosis Confluentes And Reticulate', 'Pediculosis Capitis', 'Pityriasis Rosea', 'Porokeratosis Actinic', 'Psoriasis', 'Tinea Corporis', 'Tinea Nigra', 'Tungiasis', 'actinic keratosis', 'dermatofibroma', 'nevus', 'pigmented benign keratosis', 'seborrheic keratosis', 'squamous cell carcinoma', 'vascular lesion']
predicted_class_name = class_names[predicted_class_idx]
print(predicted_class_name)
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9599 | 1.0 | 282 | 0.6866 | 0.7811 |
| 0.6176 | 2.0 | 565 | 0.4806 | 0.8399 |
| 0.4614 | 3.0 | 847 | 0.3092 | 0.8934 |
| 0.3976 | 4.0 | 1130 | 0.2620 | 0.9141 |
| 0.3606 | 5.0 | 1412 | 0.2514 | 0.9208 |
| 0.3075 | 6.0 | 1695 | 0.1968 | 0.9320 |
| 0.2152 | 7.0 | 1977 | 0.2004 | 0.9377 |
| 0.2194 | 8.0 | 2260 | 0.1627 | 0.9442 |
| 0.1706 | 9.0 | 2542 | 0.1449 | 0.9500 |
| 0.172 | 9.98 | 2820 | 0.1321 | 0.9557 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
## Kindly Cite Our Work
```bibtex
@article{mohan2024enhancing,
title={Enhancing skin disease classification leveraging transformer-based deep learning architectures and explainable ai},
author={Mohan, Jayanth and Sivasubramanian, Arrun and Sowmya, V and Vinayakumar, Ravi},
journal={arXiv preprint arXiv:2407.14757},
year={2024}
}
```
|
[
"basal cell carcinoma",
"darier_s disease",
"epidermolysis bullosa pruriginosa",
"hailey-hailey disease",
"herpes simplex",
"impetigo",
"larva migrans",
"leprosy borderline",
"leprosy lepromatous",
"leprosy tuberculoid",
"lichen planus",
"lupus erythematosus chronicus discoides",
"melanoma",
"molluscum contagiosum",
"mycosis fungoides",
"neurofibromatosis",
"papilomatosis confluentes and reticulate",
"pediculosis capitis",
"pityriasis rosea",
"porokeratosis actinic",
"psoriasis",
"tinea corporis",
"tinea nigra",
"tungiasis",
"actinic keratosis",
"dermatofibroma",
"nevus",
"pigmented benign keratosis",
"seborrheic keratosis",
"squamous cell carcinoma",
"vascular lesion"
] |
Augusto777/vit-base-patch16-224-MSC-dmae
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-MSC-dmae
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6300
- Accuracy: 0.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.67 | 1 | 1.2258 | 0.5 |
| No log | 2.0 | 3 | 1.0536 | 0.7 |
| No log | 2.67 | 4 | 0.9143 | 0.75 |
| No log | 4.0 | 6 | 0.6899 | 0.9 |
| No log | 4.67 | 7 | 0.6300 | 0.95 |
| No log | 6.0 | 9 | 0.5069 | 0.9 |
| 0.8554 | 6.67 | 10 | 0.4671 | 0.9 |
| 0.8554 | 8.0 | 12 | 0.4312 | 0.9 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"avanzada",
"leve",
"moderada",
"no amd"
] |
shadowlilac/aesthetic-shadow
|
# Aesthetic Shadow
Aesthetic Shadow is a 1.1B-parameter vision transformer designed to evaluate the quality of anime images. It accepts high-resolution 1024x1024 images as input and outputs a prediction score that quantifies the aesthetic appeal of the artwork. The model is designed to discern fine details, proportions, and overall visual coherence in anime illustrations.
**If you use this model in public projects, attribution would be appreciated :)**
## How to Use
See the Jupyter notebook in the repository files.
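In the meantime, a minimal sketch of scoring an image (an assumption that the checkpoint works with the standard image-classification pipeline; the file name is a placeholder):
```python
from transformers import pipeline

# Load the aesthetic classifier; it labels images as "hq" (high quality)
# or "lq" (low quality)
scorer = pipeline("image-classification", model="shadowlilac/aesthetic-shadow")

# Score a 1024x1024 anime illustration (placeholder path)
for result in scorer("artwork.png"):
    print(result["label"], round(result["score"], 4))
```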
## Disclosure
This model is not intended to be offensive towards any artist, and it may not output an accurate label for every image. A potential use case is filtering low-quality images out of image datasets.
|
[
"hq",
"lq"
] |
savioratharv/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1312
- Accuracy: 0.9795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9594 | 1.0 | 70 | 3.8779 | 0.6189 |
| 3.0869 | 1.99 | 140 | 3.0415 | 0.8549 |
| 2.471 | 2.99 | 210 | 2.4433 | 0.9270 |
| 2.0406 | 4.0 | 281 | 2.0261 | 0.9501 |
| 1.7238 | 5.0 | 351 | 1.7346 | 0.9581 |
| 1.4513 | 5.99 | 421 | 1.4902 | 0.9671 |
| 1.3131 | 6.99 | 491 | 1.3221 | 0.9786 |
| 1.1752 | 8.0 | 562 | 1.2230 | 0.9768 |
| 1.1007 | 9.0 | 632 | 1.1619 | 0.9795 |
| 1.0682 | 9.96 | 700 | 1.1312 | 0.9795 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"alpinia galanga (rasna)",
"amaranthus viridis (arive-dantu)",
"big caltrops.zip",
"black-honey shrub.zip",
"brassica juncea (indian mustard)",
"bristly wild grape.zip",
"butterfly pea.zip",
"cape gooseberry.zip",
"carissa carandas (karanda)",
"citrus limon (lemon)",
"common wireweed.zip",
"country mallow.zip",
"artocarpus heterophyllus (jackfruit)",
"crown flower.zip",
"ficus auriculata (roxburgh fig)",
"ficus religiosa (peepal tree)",
"green chireta.zip",
"hibiscus rosa-sinensis",
"holy basil.zip",
"indian copperleaf.zip",
"indian jujube.zip",
"indian sarsaparilla.zip",
"indian stinging nettle.zip",
"asthma plant.zip",
"indian thornapple.zip",
"indian wormwood.zip",
"ivy gourd.zip",
"jasminum (jasmine)",
"kokilaksha.zip",
"land caltrops (bindii).zip",
"madagascar periwinkle.zip",
"madras pea pumpkin.zip",
"malabar catmint.zip",
"mangifera indica (mango)",
"avaram.zip",
"mentha (mint)",
"mexican mint.zip",
"mexican prickly poppy.zip",
"moringa oleifera (drumstick)",
"mountain knotgrass.zip",
"muntingia calabura (jamaica cherry-gasagase)",
"murraya koenigii (curry)",
"nalta jute.zip",
"nerium oleander (oleander)",
"night blooming cereus.zip",
"azadirachta indica (neem)",
"nyctanthes arbor-tristis (parijata)",
"ocimum tenuiflorum (tulsi)",
"panicled foldwing.zip",
"piper betle (betel)",
"plectranthus amboinicus (mexican mint)",
"pongamia pinnata (indian beech)",
"prickly chaff flower.zip",
"psidium guajava (guava)",
"punarnava.zip",
"punica granatum (pomegranate)",
"balloon vine.zip",
"purple fruited pea eggplant.zip",
"purple tephrosia.zip",
"rosary pea.zip",
"santalum album (sandalwood)",
"shaggy button weed.zip",
"small water clover.zip",
"spiderwisp.zip",
"square stalked vine.zip",
"stinking passionflower.zip",
"sweet basil.zip",
"basella alba (basale)",
"sweet flag.zip",
"syzygium cumini (jamun)",
"syzygium jambos (rose apple)",
"tabernaemontana divaricata (crape jasmine)",
"tinnevelly senna.zip",
"trellis vine.zip",
"trigonella foenum-graecum (fenugreek)",
"velvet bean.zip",
"coatbuttons.zip",
"heart-leaved moonseed.zip",
"bellyache bush (green).zip",
"benghal dayflower.zip"
] |
bgoldfe2/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3674
- Accuracy: 0.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9983 | 1.0 | 17 | 0.8032 | 0.9323 |
| 0.6984 | 2.0 | 34 | 0.5943 | 0.9549 |
| 0.5056 | 3.0 | 51 | 0.4566 | 0.9624 |
| 0.4601 | 4.0 | 68 | 0.3892 | 0.9624 |
| 0.3883 | 5.0 | 85 | 0.3674 | 0.9699 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
dima806/flowers_16_types_image_detection
|
Returns the flower type for a given image with about 99.5% accuracy.
See https://www.kaggle.com/code/dima806/flowers-16-types-image-detection-vit for more details.
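A minimal sketch of querying the model for its top predictions (placeholder image path; assumes the checkpoint loads with the standard auto classes):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "dima806/flowers_16_types_image_detection"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("flower.jpg").convert("RGB")  # placeholder path
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Report the three most likely flower types
top = probs.topk(3)
for score, idx in zip(top.values, top.indices):
    print(model.config.id2label[idx.item()], f"{score.item():.4f}")
```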
```
Classification report:
precision recall f1-score support
calendula 0.9928 0.9786 0.9856 421
coreopsis 0.9882 0.9905 0.9893 421
rose 0.9976 0.9953 0.9964 422
black_eyed_susan 0.9976 0.9976 0.9976 422
water_lily 0.9953 1.0000 0.9976 421
california_poppy 0.9905 0.9929 0.9917 422
dandelion 1.0000 0.9976 0.9988 422
magnolia 0.9952 0.9858 0.9905 422
astilbe 0.9976 0.9976 0.9976 421
sunflower 0.9976 1.0000 0.9988 422
tulip 0.9976 1.0000 0.9988 422
bellflower 0.9952 0.9905 0.9929 422
iris 1.0000 1.0000 1.0000 421
common_daisy 0.9882 0.9952 0.9917 421
daffodil 0.9976 0.9976 0.9976 422
carnation 0.9859 0.9976 0.9918 422
accuracy 0.9948 6746
macro avg 0.9948 0.9948 0.9948 6746
weighted avg 0.9948 0.9948 0.9948 6746
```
|
[
"calendula",
"coreopsis",
"rose",
"black_eyed_susan",
"water_lily",
"california_poppy",
"dandelion",
"magnolia",
"astilbe",
"sunflower",
"tulip",
"bellflower",
"iris",
"common_daisy",
"daffodil",
"carnation"
] |
DataBindu/swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes-merged
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes-merged
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8626
- Accuracy: 0.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8355 | 0.98 | 15 | 2.5831 | 0.3333 |
| 1.9292 | 1.97 | 30 | 1.6850 | 0.5046 |
| 1.4121 | 2.95 | 45 | 1.2324 | 0.5972 |
| 1.0121 | 4.0 | 61 | 1.0345 | 0.6852 |
| 0.854 | 4.98 | 76 | 0.9663 | 0.6806 |
| 0.701 | 5.97 | 91 | 0.9587 | 0.6991 |
| 0.5956 | 6.95 | 106 | 0.8626 | 0.7269 |
| 0.5713 | 7.87 | 120 | 0.8645 | 0.7222 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
[
"actinomycetes_mycolata",
"anaerobic type b",
"anaerobic suflur bacteria",
"beggiatoa",
"flexibacter",
"fungi",
"gao",
"haliscomenobacter",
"hyphomicrobium",
"microscrilla",
"microthrix",
"nitrosomonas",
"nostocoida limicola",
"pao",
"sphaerotilus-type 1701",
"spirilla",
"spirochaetes",
"tetrads",
"thiothrix",
"type 0041_0675",
"type 0092",
"type 021n",
"type 0411",
"type 0581_chloroflexi",
"type 0914_0803",
"type 0961",
"type 1863",
"zoogloea",
"bristleworm",
"crustacean",
"dead filament",
"elevated polysaccharide",
"ferric iron",
"fibrous material",
"filament impact on floc structure high",
"filament impact on floc structure low",
"filament impact on floc structure moderate",
"flagellate",
"floc open_diffuse",
"floc strong",
"floc weak",
"free swimming ciliate",
"gastrotrich",
"grease",
"inert",
"iron sulfide",
"irregular growth formations",
"naked amoebae",
"nematode",
"normal polysaccharide",
"oil",
"rotifer",
"stalked ciliate",
"testate amoebae",
"type 1851",
"water bear",
"yeast"
] |
dima806/marvel_heroes_image_detection
|
Returns the Marvel hero for a given image with about 88% accuracy.
See https://www.kaggle.com/code/dima806/marvel-heroes-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
captain america 0.8519 0.8519 0.8519 162
black widow 0.8634 0.8528 0.8580 163
spider-man 0.9571 0.9630 0.9600 162
thanos 0.8917 0.8589 0.8750 163
ironman 0.8614 0.8827 0.8720 162
hulk 0.8889 0.8395 0.8635 162
loki 0.8957 0.8957 0.8957 163
doctor strange 0.8629 0.9264 0.8935 163
accuracy 0.8838 1300
macro avg 0.8841 0.8838 0.8837 1300
weighted avg 0.8841 0.8838 0.8837 1300
```
|
[
"captain america",
"black widow",
"spider-man",
"thanos",
"ironman",
"hulk",
"loki",
"doctor strange"
] |
Audi24/fire_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Audi24/fire_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1936
- Validation Loss: 0.1743
- Train Accuracy: 0.9889
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1755, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
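That serialized optimizer config matches what `transformers.create_optimizer` produces for TensorFlow training; a minimal sketch reconstructing it from the values above (an assumption about how it was built, not taken from the card):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (power=1.0) PolynomialDecay schedule from
# 3e-05 down to 0.0 over 1755 steps, with weight decay rate 0.01
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,
    num_train_steps=1755,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```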
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0088 | 0.8898 | 0.8667 | 0 |
| 0.7325 | 0.6165 | 0.9333 | 1 |
| 0.4620 | 0.3794 | 0.9444 | 2 |
| 0.3100 | 0.2546 | 0.9667 | 3 |
| 0.1936 | 0.1743 | 0.9889 | 4 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"fire",
"high",
"low"
] |
onceiapp/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0340
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1854 | 0.99 | 21 | 0.0688 | 0.9800 |
| 0.0438 | 1.98 | 42 | 0.0410 | 0.9817 |
| 0.0194 | 2.96 | 63 | 0.0340 | 0.9850 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"real",
"spoof"
] |
Jayanth2002/vit_base_patch16_224-finetuned-SkinDisease
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_base_patch16_224-finetuned-SkinDisease
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1992
- Accuracy: 0.9343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9099 | 1.0 | 282 | 0.8248 | 0.7647 |
| 0.5848 | 2.0 | 565 | 0.4236 | 0.8748 |
| 0.3952 | 3.0 | 847 | 0.3154 | 0.9021 |
| 0.3957 | 4.0 | 1130 | 0.2695 | 0.9106 |
| 0.3146 | 5.0 | 1412 | 0.2381 | 0.9198 |
| 0.2883 | 6.0 | 1695 | 0.2407 | 0.9218 |
| 0.2264 | 7.0 | 1977 | 0.2160 | 0.9278 |
| 0.2339 | 8.0 | 2260 | 0.2121 | 0.9283 |
| 0.1966 | 9.0 | 2542 | 0.2044 | 0.9303 |
| 0.2366 | 9.98 | 2820 | 0.1992 | 0.9343 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"basal cell carcinoma",
"darier_s disease",
"epidermolysis bullosa pruriginosa",
"hailey-hailey disease",
"herpes simplex",
"impetigo",
"larva migrans",
"leprosy borderline",
"leprosy lepromatous",
"leprosy tuberculoid",
"lichen planus",
"lupus erythematosus chronicus discoides",
"melanoma",
"molluscum contagiosum",
"mycosis fungoides",
"neurofibromatosis",
"papilomatosis confluentes and reticulate",
"pediculosis capitis",
"pityriasis rosea",
"porokeratosis actinic",
"psoriasis",
"tinea corporis",
"tinea nigra",
"tungiasis",
"actinic keratosis",
"dermatofibroma",
"nevus",
"pigmented benign keratosis",
"seborrheic keratosis",
"squamous cell carcinoma",
"vascular lesion"
] |
MohanaPriyaa/image_classification
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MohanaPriyaa/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2925
- Validation Loss: 0.2284
- Train Accuracy: 0.909
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2925 | 0.2284 | 0.909 | 0 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"bleached_corals",
"healthy_corals"
] |
lossless/autotrain-vertigo-actors-03-90426144283
|
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90426144283
- CO2 Emissions (in grams): 0.1230
## Validation Metrics
- Loss: 0.414
- Accuracy: 0.800
- Macro F1: 0.711
- Micro F1: 0.800
- Weighted F1: 0.773
- Macro Precision: 0.889
- Micro Precision: 0.800
- Weighted Precision: 0.867
- Macro Recall: 0.708
- Micro Recall: 0.800
- Weighted Recall: 0.800
|
[
"james-stewart",
"kim-novak",
"other"
] |
lossless/autotrain-vertigo-actors-03-90426144282
|
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90426144282
- CO2 Emissions (in grams): 0.0252
## Validation Metrics
- Loss: 0.550
- Accuracy: 0.850
- Macro F1: 0.798
- Micro F1: 0.850
- Weighted F1: 0.844
- Macro Precision: 0.815
- Micro Precision: 0.850
- Weighted Precision: 0.844
- Macro Recall: 0.792
- Micro Recall: 0.850
- Weighted Recall: 0.850
|
[
"james-stewart",
"kim-novak",
"other"
] |
lossless/autotrain-vertigo-actors-03-90426144285
|
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90426144285
- CO2 Emissions (in grams): 0.1327
## Validation Metrics
- Loss: 0.397
- Accuracy: 0.800
- Macro F1: 0.750
- Micro F1: 0.800
- Weighted F1: 0.800
- Macro Precision: 0.750
- Micro Precision: 0.800
- Weighted Precision: 0.800
- Macro Recall: 0.750
- Micro Recall: 0.800
- Weighted Recall: 0.800
|
[
"james-stewart",
"kim-novak",
"other"
] |
MohanaPriyaa/Coral_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MohanaPriyaa/Coral_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3025
- Validation Loss: 0.2241
- Train Accuracy: 0.92
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.3025 | 0.2241 | 0.92 | 0 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"bleached_corals",
"healthy_corals"
] |
Jayanth2002/swin-base-patch4-window7-224-rawdata-finetuned-SkinDisease
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-rawdata-finetuned-SkinDisease
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3867
- Accuracy: 0.8819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7301 | 0.98 | 34 | 2.0665 | 0.3910 |
| 1.3672 | 1.99 | 69 | 1.0139 | 0.6660 |
| 0.7673 | 2.99 | 104 | 0.7393 | 0.7760 |
| 0.605 | 4.0 | 139 | 0.6480 | 0.7841 |
| 0.5142 | 4.98 | 173 | 0.5229 | 0.8248 |
| 0.4081 | 5.99 | 208 | 0.4561 | 0.8615 |
| 0.3966 | 6.99 | 243 | 0.4206 | 0.8656 |
| 0.3247 | 8.0 | 278 | 0.4001 | 0.8717 |
| 0.3235 | 8.98 | 312 | 0.3867 | 0.8819 |
| 0.2788 | 9.78 | 340 | 0.3801 | 0.8737 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"basal cell carcinoma",
"darier_s disease",
"epidermolysis bullosa pruriginosa",
"hailey-hailey disease",
"herpes simplex",
"impetigo",
"larva migrans",
"leprosy borderline",
"leprosy lepromatous",
"leprosy tuberculoid",
"lichen planus",
"lupus erythematosus chronicus discoides",
"melanoma",
"molluscum contagiosum",
"mycosis fungoides",
"neurofibromatosis",
"papilomatosis confluentes and reticulate",
"pediculosis capitis",
"pityriasis rosea",
"porokeratosis actinic",
"psoriasis",
"tinea corporis",
"tinea nigra",
"tungiasis",
"actinic keratosis",
"dermatofibroma",
"nevus",
"pigmented benign keratosis",
"seborrheic keratosis",
"squamous cell carcinoma",
"vascular lesion"
] |
FelipeMedina16/vit-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0297
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1385 | 3.85 | 500 | 0.0297 | 0.9925 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
awrysfab/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2383
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0769 | 1.0 | 10 | 2.0617 | 0.1812 |
| 2.0383 | 2.0 | 20 | 2.0104 | 0.3 |
| 1.9423 | 3.0 | 30 | 1.8932 | 0.425 |
| 1.7923 | 4.0 | 40 | 1.7442 | 0.475 |
| 1.6547 | 5.0 | 50 | 1.6047 | 0.4875 |
| 1.5297 | 6.0 | 60 | 1.5184 | 0.5437 |
| 1.4345 | 7.0 | 70 | 1.4392 | 0.5625 |
| 1.337 | 8.0 | 80 | 1.3847 | 0.5875 |
| 1.2722 | 9.0 | 90 | 1.3442 | 0.55 |
| 1.217 | 10.0 | 100 | 1.3058 | 0.5625 |
| 1.1497 | 11.0 | 110 | 1.2914 | 0.55 |
| 1.0977 | 12.0 | 120 | 1.2377 | 0.6125 |
| 1.0507 | 13.0 | 130 | 1.2253 | 0.5687 |
| 1.0268 | 14.0 | 140 | 1.2269 | 0.5938 |
| 0.967 | 15.0 | 150 | 1.2260 | 0.5938 |
| 0.9269 | 16.0 | 160 | 1.2421 | 0.5687 |
| 0.9102 | 17.0 | 170 | 1.2218 | 0.5687 |
| 0.8883 | 18.0 | 180 | 1.2207 | 0.5687 |
| 0.8633 | 19.0 | 190 | 1.1933 | 0.6062 |
| 0.8557 | 20.0 | 200 | 1.1830 | 0.575 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
hyeongjin99/vit_base_aihub_model_py
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_base_aihub_model_py
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0228
- Accuracy: 0.9978
- Precision: 0.9981
- Recall: 0.9974
- F1: 0.9978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1415 | 1.0 | 149 | 0.1286 | 0.9712 | 0.9788 | 0.9623 | 0.9700 |
| 0.0671 | 2.0 | 299 | 0.0463 | 0.9948 | 0.9917 | 0.9946 | 0.9932 |
| 0.0423 | 3.0 | 448 | 0.0356 | 0.9952 | 0.9970 | 0.9908 | 0.9939 |
| 0.0383 | 4.0 | 598 | 0.0242 | 0.9976 | 0.9980 | 0.9972 | 0.9976 |
| 0.033 | 4.98 | 745 | 0.0228 | 0.9978 | 0.9981 | 0.9974 | 0.9978 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
[
"cloudy",
"normal",
"rainy",
"snowy"
] |
dima806/horse_breeds_image_detection
|
Returns the horse breed for a given image with about 91% accuracy.
See https://www.kaggle.com/code/dima806/horse-breed-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
Friesian 0.8889 1.0000 0.9412 24
Arabian 0.8571 0.9600 0.9057 25
Percheron 1.0000 0.6400 0.7805 25
Orlov Trotter 0.7931 0.9200 0.8519 25
Akhal-Teke 1.0000 0.9200 0.9583 25
Vladimir Heavy Draft 0.9200 0.9583 0.9388 24
Appaloosa 1.0000 1.0000 1.0000 25
accuracy 0.9133 173
macro avg 0.9227 0.9140 0.9109 173
weighted avg 0.9229 0.9133 0.9106 173
```
|
[
"friesian",
"arabian",
"percheron",
"orlov trotter",
"akhal-teke",
"vladimir heavy draft",
"appaloosa"
] |
randomstate42/vit_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pikachu_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1405
- Accuracy: 0.9786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9745 | 1.0 | 70 | 3.8989 | 0.5574 |
| 3.0708 | 1.99 | 140 | 3.0319 | 0.8415 |
| 2.4196 | 2.99 | 210 | 2.4623 | 0.9225 |
| 1.9768 | 4.0 | 281 | 2.0344 | 0.9492 |
| 1.6809 | 5.0 | 351 | 1.7300 | 0.9715 |
| 1.4707 | 5.99 | 421 | 1.4962 | 0.9742 |
| 1.2854 | 6.99 | 491 | 1.3465 | 0.9724 |
| 1.1553 | 8.0 | 562 | 1.2592 | 0.9742 |
| 1.0859 | 9.0 | 632 | 1.1849 | 0.9724 |
| 1.0657 | 9.96 | 700 | 1.1405 | 0.9786 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"alpinia galanga (rasna)",
"amaranthus viridis (arive-dantu)",
"big caltrops.zip",
"black-honey shrub.zip",
"brassica juncea (indian mustard)",
"bristly wild grape.zip",
"butterfly pea.zip",
"cape gooseberry.zip",
"carissa carandas (karanda)",
"citrus limon (lemon)",
"common wireweed.zip",
"country mallow.zip",
"artocarpus heterophyllus (jackfruit)",
"crown flower.zip",
"ficus auriculata (roxburgh fig)",
"ficus religiosa (peepal tree)",
"green chireta.zip",
"hibiscus rosa-sinensis",
"holy basil.zip",
"indian copperleaf.zip",
"indian jujube.zip",
"indian sarsaparilla.zip",
"indian stinging nettle.zip",
"asthma plant.zip",
"indian thornapple.zip",
"indian wormwood.zip",
"ivy gourd.zip",
"jasminum (jasmine)",
"kokilaksha.zip",
"land caltrops (bindii).zip",
"madagascar periwinkle.zip",
"madras pea pumpkin.zip",
"malabar catmint.zip",
"mangifera indica (mango)",
"avaram.zip",
"mentha (mint)",
"mexican mint.zip",
"mexican prickly poppy.zip",
"moringa oleifera (drumstick)",
"mountain knotgrass.zip",
"muntingia calabura (jamaica cherry-gasagase)",
"murraya koenigii (curry)",
"nalta jute.zip",
"nerium oleander (oleander)",
"night blooming cereus.zip",
"azadirachta indica (neem)",
"nyctanthes arbor-tristis (parijata)",
"ocimum tenuiflorum (tulsi)",
"panicled foldwing.zip",
"piper betle (betel)",
"plectranthus amboinicus (mexican mint)",
"pongamia pinnata (indian beech)",
"prickly chaff flower.zip",
"psidium guajava (guava)",
"punarnava.zip",
"punica granatum (pomegranate)",
"balloon vine.zip",
"purple fruited pea eggplant.zip",
"purple tephrosia.zip",
"rosary pea.zip",
"santalum album (sandalwood)",
"shaggy button weed.zip",
"small water clover.zip",
"spiderwisp.zip",
"square stalked vine.zip",
"stinking passionflower.zip",
"sweet basil.zip",
"basella alba (basale)",
"sweet flag.zip",
"syzygium cumini (jamun)",
"syzygium jambos (rose apple)",
"tabernaemontana divaricata (crape jasmine)",
"tinnevelly senna.zip",
"trellis vine.zip",
"trigonella foenum-graecum (fenugreek)",
"velvet bean.zip",
"coatbuttons.zip",
"heart-leaved moonseed.zip",
"bellyache bush (green).zip",
"benghal dayflower.zip"
] |
yashika0998/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
grelade/mmx-feature-extraction
|
# ResNet
ResNet model trained on imagenet-1k. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) and first released in [this repository](https://github.com/KaimingHe/deep-residual-networks).
Disclaimer: The team releasing ResNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
ResNet introduced residual connections, which allow training networks with an unprecedented number of layers (up to 1,000). ResNet won the 2015 ILSVRC & COCO competitions, an important milestone in deep computer vision.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, ResNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-18")
>>> model = ResNetForImageClassification.from_pretrained("microsoft/resnet-18")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tiger cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/resnet).
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
mmunoz96/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
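A minimal sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` value is an assumption based on the repo id:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="results",           # assumed; taken from the model id
    learning_rate=2e-4,             # learning_rate: 0.0002
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,                        # seed: 42
    lr_scheduler_type="linear",     # lr_scheduler_type: linear
    num_train_epochs=4,             # num_epochs: 4
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
```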
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
dima806/tesla_car_model_image_detection
|
Returns the Tesla car model for a given image, with about 85% accuracy.
See https://www.kaggle.com/code/dima806/tesla-car-model-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
Model_Y 0.8679 0.8364 0.8519 55
Model_E 0.8462 0.8800 0.8627 100
Model_S 0.8293 0.8095 0.8193 42
Model_X 0.8519 0.8364 0.8440 55
accuracy 0.8492 252
macro avg 0.8488 0.8406 0.8445 252
weighted avg 0.8493 0.8492 0.8490 252
```
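A minimal inference sketch, assuming the checkpoint is available under this repo id (the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned ViT classifier; repo id taken from this card.
classifier = pipeline(
    "image-classification",
    model="dima806/tesla_car_model_image_detection",
)

# "tesla.jpg" is a placeholder path to a local photo of a Tesla.
predictions = classifier("tesla.jpg")
print(predictions)  # e.g. [{'label': 'model_y', 'score': ...}, ...]
```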
|
[
"model_y",
"model_e",
"model_s",
"model_x"
] |
iasolutionss/model_beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- Accuracy: 0.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0693 | 3.85 | 500 | 0.1358 | 0.9699 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
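A minimal usage sketch, assuming the fine-tuned checkpoint is published as `iasolutionss/model_beans` (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Repo id taken from this card; "leaf.jpg" is a placeholder local image.
processor = AutoImageProcessor.from_pretrained("iasolutionss/model_beans")
model = AutoModelForImageClassification.from_pretrained("iasolutionss/model_beans")

image = Image.open("leaf.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to one of the three bean-leaf labels.
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # e.g. "healthy"
```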
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
Niraya666/swin-tiny-patch4-window7-224-finetuned-ADC-4cls-0922
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-ADC-4cls-0922
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8947
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 200
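A short sanity check of the derived values above (the step counts come from the results table below):

```python
# Effective batch size: per-device batch size times gradient accumulation steps.
train_batch_size = 64
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 256

# With 2 optimizer steps per epoch (see the table below) and 200 epochs,
# training runs for 400 steps; a warmup ratio of 0.2 then means the linear
# scheduler warms up over the first 80 steps.
total_steps = 2 * 200
warmup_steps = int(0.2 * total_steps)
assert warmup_steps == 80
```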
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.9655 | 0.6714 |
| No log | 2.0 | 4 | 0.9654 | 0.6571 |
| No log | 3.0 | 6 | 0.9651 | 0.6571 |
| No log | 4.0 | 8 | 0.9647 | 0.6571 |
| 1.0064 | 5.0 | 10 | 0.9641 | 0.6571 |
| 1.0064 | 6.0 | 12 | 0.9635 | 0.6571 |
| 1.0064 | 7.0 | 14 | 0.9629 | 0.6571 |
| 1.0064 | 8.0 | 16 | 0.9623 | 0.6571 |
| 1.0064 | 9.0 | 18 | 0.9617 | 0.6571 |
| 0.9821 | 10.0 | 20 | 0.9611 | 0.6571 |
| 0.9821 | 11.0 | 22 | 0.9607 | 0.6571 |
| 0.9821 | 12.0 | 24 | 0.9604 | 0.6714 |
| 0.9821 | 13.0 | 26 | 0.9601 | 0.6714 |
| 0.9821 | 14.0 | 28 | 0.9597 | 0.6714 |
| 1.0278 | 15.0 | 30 | 0.9592 | 0.6714 |
| 1.0278 | 16.0 | 32 | 0.9581 | 0.6714 |
| 1.0278 | 17.0 | 34 | 0.9567 | 0.6714 |
| 1.0278 | 18.0 | 36 | 0.9551 | 0.6714 |
| 1.0278 | 19.0 | 38 | 0.9534 | 0.6714 |
| 0.9986 | 20.0 | 40 | 0.9514 | 0.6571 |
| 0.9986 | 21.0 | 42 | 0.9493 | 0.6571 |
| 0.9986 | 22.0 | 44 | 0.9472 | 0.6429 |
| 0.9986 | 23.0 | 46 | 0.9452 | 0.6429 |
| 0.9986 | 24.0 | 48 | 0.9434 | 0.6429 |
| 0.9973 | 25.0 | 50 | 0.9420 | 0.6429 |
| 0.9973 | 26.0 | 52 | 0.9405 | 0.6429 |
| 0.9973 | 27.0 | 54 | 0.9387 | 0.6286 |
| 0.9973 | 28.0 | 56 | 0.9376 | 0.6286 |
| 0.9973 | 29.0 | 58 | 0.9368 | 0.6429 |
| 0.9936 | 30.0 | 60 | 0.9362 | 0.6429 |
| 0.9936 | 31.0 | 62 | 0.9361 | 0.6571 |
| 0.9936 | 32.0 | 64 | 0.9364 | 0.6714 |
| 0.9936 | 33.0 | 66 | 0.9371 | 0.6714 |
| 0.9936 | 34.0 | 68 | 0.9380 | 0.6429 |
| 0.9746 | 35.0 | 70 | 0.9380 | 0.6571 |
| 0.9746 | 36.0 | 72 | 0.9375 | 0.6714 |
| 0.9746 | 37.0 | 74 | 0.9380 | 0.6714 |
| 0.9746 | 38.0 | 76 | 0.9375 | 0.6714 |
| 0.9746 | 39.0 | 78 | 0.9370 | 0.6714 |
| 1.0113 | 40.0 | 80 | 0.9362 | 0.6714 |
| 1.0113 | 41.0 | 82 | 0.9341 | 0.6714 |
| 1.0113 | 42.0 | 84 | 0.9301 | 0.6857 |
| 1.0113 | 43.0 | 86 | 0.9260 | 0.6714 |
| 1.0113 | 44.0 | 88 | 0.9224 | 0.6571 |
| 0.9756 | 45.0 | 90 | 0.9190 | 0.6714 |
| 0.9756 | 46.0 | 92 | 0.9154 | 0.6714 |
| 0.9756 | 47.0 | 94 | 0.9123 | 0.6714 |
| 0.9756 | 48.0 | 96 | 0.9091 | 0.6571 |
| 0.9756 | 49.0 | 98 | 0.9071 | 0.6571 |
| 0.9721 | 50.0 | 100 | 0.9056 | 0.6571 |
| 0.9721 | 51.0 | 102 | 0.9047 | 0.6571 |
| 0.9721 | 52.0 | 104 | 0.9039 | 0.6571 |
| 0.9721 | 53.0 | 106 | 0.9031 | 0.6714 |
| 0.9721 | 54.0 | 108 | 0.9025 | 0.6714 |
| 0.9698 | 55.0 | 110 | 0.9023 | 0.6714 |
| 0.9698 | 56.0 | 112 | 0.9012 | 0.6714 |
| 0.9698 | 57.0 | 114 | 0.8997 | 0.6714 |
| 0.9698 | 58.0 | 116 | 0.8982 | 0.6714 |
| 0.9698 | 59.0 | 118 | 0.8970 | 0.6714 |
| 0.9341 | 60.0 | 120 | 0.8957 | 0.6857 |
| 0.9341 | 61.0 | 122 | 0.8947 | 0.7 |
| 0.9341 | 62.0 | 124 | 0.8940 | 0.7 |
| 0.9341 | 63.0 | 126 | 0.8941 | 0.6714 |
| 0.9341 | 64.0 | 128 | 0.8934 | 0.6714 |
| 0.9717 | 65.0 | 130 | 0.8917 | 0.6714 |
| 0.9717 | 66.0 | 132 | 0.8898 | 0.6857 |
| 0.9717 | 67.0 | 134 | 0.8884 | 0.6857 |
| 0.9717 | 68.0 | 136 | 0.8870 | 0.6857 |
| 0.9717 | 69.0 | 138 | 0.8854 | 0.6857 |
| 0.9655 | 70.0 | 140 | 0.8840 | 0.6857 |
| 0.9655 | 71.0 | 142 | 0.8827 | 0.6857 |
| 0.9655 | 72.0 | 144 | 0.8814 | 0.6857 |
| 0.9655 | 73.0 | 146 | 0.8805 | 0.6857 |
| 0.9655 | 74.0 | 148 | 0.8803 | 0.6857 |
| 0.9458 | 75.0 | 150 | 0.8802 | 0.6857 |
| 0.9458 | 76.0 | 152 | 0.8797 | 0.6714 |
| 0.9458 | 77.0 | 154 | 0.8794 | 0.6714 |
| 0.9458 | 78.0 | 156 | 0.8796 | 0.6714 |
| 0.9458 | 79.0 | 158 | 0.8808 | 0.6714 |
| 0.9094 | 80.0 | 160 | 0.8817 | 0.6714 |
| 0.9094 | 81.0 | 162 | 0.8828 | 0.6714 |
| 0.9094 | 82.0 | 164 | 0.8836 | 0.6714 |
| 0.9094 | 83.0 | 166 | 0.8830 | 0.6714 |
| 0.9094 | 84.0 | 168 | 0.8821 | 0.6571 |
| 0.8719 | 85.0 | 170 | 0.8813 | 0.6571 |
| 0.8719 | 86.0 | 172 | 0.8804 | 0.6714 |
| 0.8719 | 87.0 | 174 | 0.8798 | 0.6571 |
| 0.8719 | 88.0 | 176 | 0.8787 | 0.6571 |
| 0.8719 | 89.0 | 178 | 0.8770 | 0.6571 |
| 0.9288 | 90.0 | 180 | 0.8752 | 0.6857 |
| 0.9288 | 91.0 | 182 | 0.8722 | 0.6857 |
| 0.9288 | 92.0 | 184 | 0.8694 | 0.6714 |
| 0.9288 | 93.0 | 186 | 0.8670 | 0.6714 |
| 0.9288 | 94.0 | 188 | 0.8645 | 0.6857 |
| 0.9039 | 95.0 | 190 | 0.8624 | 0.6857 |
| 0.9039 | 96.0 | 192 | 0.8603 | 0.6714 |
| 0.9039 | 97.0 | 194 | 0.8584 | 0.6857 |
| 0.9039 | 98.0 | 196 | 0.8566 | 0.6857 |
| 0.9039 | 99.0 | 198 | 0.8553 | 0.6857 |
| 0.9081 | 100.0 | 200 | 0.8550 | 0.6857 |
| 0.9081 | 101.0 | 202 | 0.8551 | 0.6857 |
| 0.9081 | 102.0 | 204 | 0.8556 | 0.6857 |
| 0.9081 | 103.0 | 206 | 0.8558 | 0.6857 |
| 0.9081 | 104.0 | 208 | 0.8554 | 0.6857 |
| 0.9142 | 105.0 | 210 | 0.8551 | 0.6857 |
| 0.9142 | 106.0 | 212 | 0.8553 | 0.6857 |
| 0.9142 | 107.0 | 214 | 0.8551 | 0.6857 |
| 0.9142 | 108.0 | 216 | 0.8549 | 0.6857 |
| 0.9142 | 109.0 | 218 | 0.8549 | 0.6857 |
| 0.9347 | 110.0 | 220 | 0.8551 | 0.6714 |
| 0.9347 | 111.0 | 222 | 0.8554 | 0.6714 |
| 0.9347 | 112.0 | 224 | 0.8548 | 0.6714 |
| 0.9347 | 113.0 | 226 | 0.8538 | 0.6714 |
| 0.9347 | 114.0 | 228 | 0.8525 | 0.6714 |
| 0.8922 | 115.0 | 230 | 0.8512 | 0.6857 |
| 0.8922 | 116.0 | 232 | 0.8505 | 0.6857 |
| 0.8922 | 117.0 | 234 | 0.8495 | 0.6857 |
| 0.8922 | 118.0 | 236 | 0.8484 | 0.6857 |
| 0.8922 | 119.0 | 238 | 0.8472 | 0.6857 |
| 0.8897 | 120.0 | 240 | 0.8456 | 0.6857 |
| 0.8897 | 121.0 | 242 | 0.8440 | 0.6857 |
| 0.8897 | 122.0 | 244 | 0.8426 | 0.6714 |
| 0.8897 | 123.0 | 246 | 0.8412 | 0.6857 |
| 0.8897 | 124.0 | 248 | 0.8396 | 0.6857 |
| 0.8829 | 125.0 | 250 | 0.8384 | 0.6857 |
| 0.8829 | 126.0 | 252 | 0.8373 | 0.6857 |
| 0.8829 | 127.0 | 254 | 0.8365 | 0.6857 |
| 0.8829 | 128.0 | 256 | 0.8360 | 0.6857 |
| 0.8829 | 129.0 | 258 | 0.8353 | 0.6857 |
| 0.8744 | 130.0 | 260 | 0.8344 | 0.6857 |
| 0.8744 | 131.0 | 262 | 0.8337 | 0.6714 |
| 0.8744 | 132.0 | 264 | 0.8329 | 0.6857 |
| 0.8744 | 133.0 | 266 | 0.8325 | 0.6857 |
| 0.8744 | 134.0 | 268 | 0.8318 | 0.6857 |
| 0.8657 | 135.0 | 270 | 0.8312 | 0.6857 |
| 0.8657 | 136.0 | 272 | 0.8306 | 0.6714 |
| 0.8657 | 137.0 | 274 | 0.8300 | 0.6714 |
| 0.8657 | 138.0 | 276 | 0.8296 | 0.6714 |
| 0.8657 | 139.0 | 278 | 0.8294 | 0.6714 |
| 0.9421 | 140.0 | 280 | 0.8292 | 0.6714 |
| 0.9421 | 141.0 | 282 | 0.8291 | 0.6714 |
| 0.9421 | 142.0 | 284 | 0.8290 | 0.6714 |
| 0.9421 | 143.0 | 286 | 0.8290 | 0.6857 |
| 0.9421 | 144.0 | 288 | 0.8289 | 0.6857 |
| 0.9066 | 145.0 | 290 | 0.8287 | 0.6857 |
| 0.9066 | 146.0 | 292 | 0.8290 | 0.6857 |
| 0.9066 | 147.0 | 294 | 0.8293 | 0.6857 |
| 0.9066 | 148.0 | 296 | 0.8294 | 0.6857 |
| 0.9066 | 149.0 | 298 | 0.8295 | 0.6857 |
| 0.9068 | 150.0 | 300 | 0.8295 | 0.6857 |
| 0.9068 | 151.0 | 302 | 0.8294 | 0.6857 |
| 0.9068 | 152.0 | 304 | 0.8293 | 0.6857 |
| 0.9068 | 153.0 | 306 | 0.8293 | 0.6857 |
| 0.9068 | 154.0 | 308 | 0.8290 | 0.6857 |
| 0.8715 | 155.0 | 310 | 0.8287 | 0.6857 |
| 0.8715 | 156.0 | 312 | 0.8283 | 0.6857 |
| 0.8715 | 157.0 | 314 | 0.8277 | 0.6857 |
| 0.8715 | 158.0 | 316 | 0.8274 | 0.6857 |
| 0.8715 | 159.0 | 318 | 0.8269 | 0.6857 |
| 0.8921 | 160.0 | 320 | 0.8266 | 0.6857 |
| 0.8921 | 161.0 | 322 | 0.8264 | 0.6857 |
| 0.8921 | 162.0 | 324 | 0.8261 | 0.6857 |
| 0.8921 | 163.0 | 326 | 0.8260 | 0.6857 |
| 0.8921 | 164.0 | 328 | 0.8258 | 0.6857 |
| 0.8768 | 165.0 | 330 | 0.8252 | 0.6857 |
| 0.8768 | 166.0 | 332 | 0.8248 | 0.6857 |
| 0.8768 | 167.0 | 334 | 0.8243 | 0.6857 |
| 0.8768 | 168.0 | 336 | 0.8237 | 0.6857 |
| 0.8768 | 169.0 | 338 | 0.8231 | 0.6857 |
| 0.8519 | 170.0 | 340 | 0.8227 | 0.6857 |
| 0.8519 | 171.0 | 342 | 0.8223 | 0.6857 |
| 0.8519 | 172.0 | 344 | 0.8221 | 0.6857 |
| 0.8519 | 173.0 | 346 | 0.8220 | 0.6857 |
| 0.8519 | 174.0 | 348 | 0.8218 | 0.6857 |
| 0.92 | 175.0 | 350 | 0.8215 | 0.6857 |
| 0.92 | 176.0 | 352 | 0.8211 | 0.7 |
| 0.92 | 177.0 | 354 | 0.8207 | 0.7 |
| 0.92 | 178.0 | 356 | 0.8204 | 0.7 |
| 0.92 | 179.0 | 358 | 0.8200 | 0.7 |
| 0.879 | 180.0 | 360 | 0.8197 | 0.7 |
| 0.879 | 181.0 | 362 | 0.8194 | 0.7 |
| 0.879 | 182.0 | 364 | 0.8191 | 0.6857 |
| 0.879 | 183.0 | 366 | 0.8187 | 0.6857 |
| 0.879 | 184.0 | 368 | 0.8185 | 0.7 |
| 0.8893 | 185.0 | 370 | 0.8182 | 0.7 |
| 0.8893 | 186.0 | 372 | 0.8180 | 0.7 |
| 0.8893 | 187.0 | 374 | 0.8177 | 0.7 |
| 0.8893 | 188.0 | 376 | 0.8176 | 0.7 |
| 0.8893 | 189.0 | 378 | 0.8175 | 0.7 |
| 0.8501 | 190.0 | 380 | 0.8173 | 0.7 |
| 0.8501 | 191.0 | 382 | 0.8171 | 0.7 |
| 0.8501 | 192.0 | 384 | 0.8170 | 0.7 |
| 0.8501 | 193.0 | 386 | 0.8169 | 0.7 |
| 0.8501 | 194.0 | 388 | 0.8169 | 0.7 |
| 0.8611 | 195.0 | 390 | 0.8168 | 0.7 |
| 0.8611 | 196.0 | 392 | 0.8168 | 0.7 |
| 0.8611 | 197.0 | 394 | 0.8168 | 0.7 |
| 0.8611 | 198.0 | 396 | 0.8168 | 0.7 |
| 0.8611 | 199.0 | 398 | 0.8168 | 0.7 |
| 0.8881 | 200.0 | 400 | 0.8168 | 0.7 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"color",
"pattern_fail",
"residue",
"tiny"
] |
Niraya666/swin-tiny-patch4-window7-224-finetuned-ADC-3cls-0922
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-ADC-3cls-0922
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6771
- Accuracy: 0.8286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 200
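The accuracy column in the results table below is the kind of value a `compute_metrics` hook returns to the Trainer; a minimal sketch (the hook itself is an assumption, not taken from this card):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred packs model logits and ground-truth labels for the eval set.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```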
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6875 | 0.8143 |
| No log | 2.0 | 4 | 0.6874 | 0.8143 |
| No log | 3.0 | 6 | 0.6873 | 0.8143 |
| No log | 4.0 | 8 | 0.6871 | 0.8143 |
| 0.7555 | 5.0 | 10 | 0.6869 | 0.8143 |
| 0.7555 | 6.0 | 12 | 0.6866 | 0.8143 |
| 0.7555 | 7.0 | 14 | 0.6862 | 0.8143 |
| 0.7555 | 8.0 | 16 | 0.6858 | 0.8143 |
| 0.7555 | 9.0 | 18 | 0.6853 | 0.8143 |
| 0.7576 | 10.0 | 20 | 0.6848 | 0.8143 |
| 0.7576 | 11.0 | 22 | 0.6842 | 0.8143 |
| 0.7576 | 12.0 | 24 | 0.6836 | 0.8143 |
| 0.7576 | 13.0 | 26 | 0.6830 | 0.8143 |
| 0.7576 | 14.0 | 28 | 0.6823 | 0.8143 |
| 0.769 | 15.0 | 30 | 0.6816 | 0.8 |
| 0.769 | 16.0 | 32 | 0.6808 | 0.8 |
| 0.769 | 17.0 | 34 | 0.6800 | 0.8143 |
| 0.769 | 18.0 | 36 | 0.6791 | 0.8143 |
| 0.769 | 19.0 | 38 | 0.6781 | 0.8143 |
| 0.7564 | 20.0 | 40 | 0.6771 | 0.8286 |
| 0.7564 | 21.0 | 42 | 0.6760 | 0.8143 |
| 0.7564 | 22.0 | 44 | 0.6748 | 0.8143 |
| 0.7564 | 23.0 | 46 | 0.6737 | 0.8 |
| 0.7564 | 24.0 | 48 | 0.6725 | 0.8 |
| 0.7508 | 25.0 | 50 | 0.6713 | 0.8143 |
| 0.7508 | 26.0 | 52 | 0.6701 | 0.8143 |
| 0.7508 | 27.0 | 54 | 0.6689 | 0.8143 |
| 0.7508 | 28.0 | 56 | 0.6674 | 0.8143 |
| 0.7508 | 29.0 | 58 | 0.6660 | 0.8143 |
| 0.747 | 30.0 | 60 | 0.6646 | 0.8143 |
| 0.747 | 31.0 | 62 | 0.6631 | 0.8143 |
| 0.747 | 32.0 | 64 | 0.6616 | 0.8143 |
| 0.747 | 33.0 | 66 | 0.6601 | 0.8143 |
| 0.747 | 34.0 | 68 | 0.6586 | 0.8143 |
| 0.7343 | 35.0 | 70 | 0.6570 | 0.8143 |
| 0.7343 | 36.0 | 72 | 0.6553 | 0.8143 |
| 0.7343 | 37.0 | 74 | 0.6536 | 0.8143 |
| 0.7343 | 38.0 | 76 | 0.6517 | 0.8143 |
| 0.7343 | 39.0 | 78 | 0.6499 | 0.8143 |
| 0.7532 | 40.0 | 80 | 0.6480 | 0.8143 |
| 0.7532 | 41.0 | 82 | 0.6461 | 0.8143 |
| 0.7532 | 42.0 | 84 | 0.6442 | 0.8143 |
| 0.7532 | 43.0 | 86 | 0.6423 | 0.8143 |
| 0.7532 | 44.0 | 88 | 0.6405 | 0.8143 |
| 0.7239 | 45.0 | 90 | 0.6387 | 0.8143 |
| 0.7239 | 46.0 | 92 | 0.6368 | 0.8143 |
| 0.7239 | 47.0 | 94 | 0.6352 | 0.8143 |
| 0.7239 | 48.0 | 96 | 0.6337 | 0.8143 |
| 0.7239 | 49.0 | 98 | 0.6321 | 0.8286 |
| 0.7085 | 50.0 | 100 | 0.6307 | 0.8286 |
| 0.7085 | 51.0 | 102 | 0.6294 | 0.8286 |
| 0.7085 | 52.0 | 104 | 0.6278 | 0.8286 |
| 0.7085 | 53.0 | 106 | 0.6263 | 0.8286 |
| 0.7085 | 54.0 | 108 | 0.6248 | 0.8143 |
| 0.7203 | 55.0 | 110 | 0.6233 | 0.8143 |
| 0.7203 | 56.0 | 112 | 0.6219 | 0.8143 |
| 0.7203 | 57.0 | 114 | 0.6205 | 0.8143 |
| 0.7203 | 58.0 | 116 | 0.6191 | 0.8143 |
| 0.7203 | 59.0 | 118 | 0.6179 | 0.8143 |
| 0.7136 | 60.0 | 120 | 0.6167 | 0.8143 |
| 0.7136 | 61.0 | 122 | 0.6157 | 0.8143 |
| 0.7136 | 62.0 | 124 | 0.6148 | 0.8 |
| 0.7136 | 63.0 | 126 | 0.6138 | 0.8 |
| 0.7136 | 64.0 | 128 | 0.6125 | 0.8 |
| 0.7123 | 65.0 | 130 | 0.6111 | 0.8 |
| 0.7123 | 66.0 | 132 | 0.6096 | 0.8143 |
| 0.7123 | 67.0 | 134 | 0.6083 | 0.8143 |
| 0.7123 | 68.0 | 136 | 0.6070 | 0.8143 |
| 0.7123 | 69.0 | 138 | 0.6057 | 0.8143 |
| 0.7076 | 70.0 | 140 | 0.6046 | 0.8143 |
| 0.7076 | 71.0 | 142 | 0.6035 | 0.8143 |
| 0.7076 | 72.0 | 144 | 0.6023 | 0.8143 |
| 0.7076 | 73.0 | 146 | 0.6011 | 0.8143 |
| 0.7076 | 74.0 | 148 | 0.5999 | 0.8143 |
| 0.6878 | 75.0 | 150 | 0.5988 | 0.8143 |
| 0.6878 | 76.0 | 152 | 0.5975 | 0.8143 |
| 0.6878 | 77.0 | 154 | 0.5964 | 0.8143 |
| 0.6878 | 78.0 | 156 | 0.5953 | 0.8143 |
| 0.6878 | 79.0 | 158 | 0.5942 | 0.8143 |
| 0.6657 | 80.0 | 160 | 0.5932 | 0.8143 |
| 0.6657 | 81.0 | 162 | 0.5923 | 0.8143 |
| 0.6657 | 82.0 | 164 | 0.5914 | 0.8143 |
| 0.6657 | 83.0 | 166 | 0.5906 | 0.8143 |
| 0.6657 | 84.0 | 168 | 0.5897 | 0.8143 |
| 0.6434 | 85.0 | 170 | 0.5888 | 0.8143 |
| 0.6434 | 86.0 | 172 | 0.5878 | 0.8143 |
| 0.6434 | 87.0 | 174 | 0.5868 | 0.8143 |
| 0.6434 | 88.0 | 176 | 0.5859 | 0.8143 |
| 0.6434 | 89.0 | 178 | 0.5851 | 0.8143 |
| 0.6825 | 90.0 | 180 | 0.5843 | 0.8143 |
| 0.6825 | 91.0 | 182 | 0.5836 | 0.8143 |
| 0.6825 | 92.0 | 184 | 0.5828 | 0.8143 |
| 0.6825 | 93.0 | 186 | 0.5823 | 0.8143 |
| 0.6825 | 94.0 | 188 | 0.5817 | 0.8286 |
| 0.6695 | 95.0 | 190 | 0.5809 | 0.8143 |
| 0.6695 | 96.0 | 192 | 0.5801 | 0.8143 |
| 0.6695 | 97.0 | 194 | 0.5793 | 0.8143 |
| 0.6695 | 98.0 | 196 | 0.5787 | 0.8143 |
| 0.6695 | 99.0 | 198 | 0.5780 | 0.8143 |
| 0.6672 | 100.0 | 200 | 0.5772 | 0.8143 |
| 0.6672 | 101.0 | 202 | 0.5762 | 0.8143 |
| 0.6672 | 102.0 | 204 | 0.5754 | 0.8143 |
| 0.6672 | 103.0 | 206 | 0.5746 | 0.8143 |
| 0.6672 | 104.0 | 208 | 0.5738 | 0.8143 |
| 0.6569 | 105.0 | 210 | 0.5731 | 0.8143 |
| 0.6569 | 106.0 | 212 | 0.5724 | 0.8143 |
| 0.6569 | 107.0 | 214 | 0.5716 | 0.8143 |
| 0.6569 | 108.0 | 216 | 0.5708 | 0.8143 |
| 0.6569 | 109.0 | 218 | 0.5701 | 0.8143 |
| 0.6748 | 110.0 | 220 | 0.5694 | 0.8143 |
| 0.6748 | 111.0 | 222 | 0.5687 | 0.8143 |
| 0.6748 | 112.0 | 224 | 0.5680 | 0.8143 |
| 0.6748 | 113.0 | 226 | 0.5674 | 0.8143 |
| 0.6748 | 114.0 | 228 | 0.5668 | 0.8143 |
| 0.6388 | 115.0 | 230 | 0.5662 | 0.8143 |
| 0.6388 | 116.0 | 232 | 0.5657 | 0.8143 |
| 0.6388 | 117.0 | 234 | 0.5652 | 0.8143 |
| 0.6388 | 118.0 | 236 | 0.5648 | 0.8286 |
| 0.6388 | 119.0 | 238 | 0.5645 | 0.8286 |
| 0.6551 | 120.0 | 240 | 0.5641 | 0.8286 |
| 0.6551 | 121.0 | 242 | 0.5636 | 0.8143 |
| 0.6551 | 122.0 | 244 | 0.5631 | 0.8143 |
| 0.6551 | 123.0 | 246 | 0.5627 | 0.8143 |
| 0.6551 | 124.0 | 248 | 0.5624 | 0.8143 |
| 0.6452 | 125.0 | 250 | 0.5622 | 0.8143 |
| 0.6452 | 126.0 | 252 | 0.5620 | 0.8143 |
| 0.6452 | 127.0 | 254 | 0.5618 | 0.8143 |
| 0.6452 | 128.0 | 256 | 0.5615 | 0.8143 |
| 0.6452 | 129.0 | 258 | 0.5613 | 0.8143 |
| 0.645 | 130.0 | 260 | 0.5611 | 0.8143 |
| 0.645 | 131.0 | 262 | 0.5608 | 0.8143 |
| 0.645 | 132.0 | 264 | 0.5606 | 0.8143 |
| 0.645 | 133.0 | 266 | 0.5602 | 0.8143 |
| 0.645 | 134.0 | 268 | 0.5596 | 0.8143 |
| 0.629 | 135.0 | 270 | 0.5590 | 0.8143 |
| 0.629 | 136.0 | 272 | 0.5582 | 0.8143 |
| 0.629 | 137.0 | 274 | 0.5576 | 0.8143 |
| 0.629 | 138.0 | 276 | 0.5571 | 0.8143 |
| 0.629 | 139.0 | 278 | 0.5568 | 0.8143 |
| 0.7126 | 140.0 | 280 | 0.5565 | 0.8143 |
| 0.7126 | 141.0 | 282 | 0.5563 | 0.8143 |
| 0.7126 | 142.0 | 284 | 0.5561 | 0.8143 |
| 0.7126 | 143.0 | 286 | 0.5559 | 0.8143 |
| 0.7126 | 144.0 | 288 | 0.5555 | 0.8143 |
| 0.669 | 145.0 | 290 | 0.5552 | 0.8143 |
| 0.669 | 146.0 | 292 | 0.5547 | 0.8143 |
| 0.669 | 147.0 | 294 | 0.5542 | 0.8143 |
| 0.669 | 148.0 | 296 | 0.5538 | 0.8143 |
| 0.669 | 149.0 | 298 | 0.5534 | 0.8143 |
| 0.6481 | 150.0 | 300 | 0.5530 | 0.8143 |
| 0.6481 | 151.0 | 302 | 0.5526 | 0.8143 |
| 0.6481 | 152.0 | 304 | 0.5522 | 0.8143 |
| 0.6481 | 153.0 | 306 | 0.5519 | 0.8143 |
| 0.6481 | 154.0 | 308 | 0.5515 | 0.8143 |
| 0.6211 | 155.0 | 310 | 0.5510 | 0.8143 |
| 0.6211 | 156.0 | 312 | 0.5506 | 0.8143 |
| 0.6211 | 157.0 | 314 | 0.5502 | 0.8143 |
| 0.6211 | 158.0 | 316 | 0.5499 | 0.8143 |
| 0.6211 | 159.0 | 318 | 0.5496 | 0.8143 |
| 0.6458 | 160.0 | 320 | 0.5492 | 0.8286 |
| 0.6458 | 161.0 | 322 | 0.5490 | 0.8143 |
| 0.6458 | 162.0 | 324 | 0.5488 | 0.8143 |
| 0.6458 | 163.0 | 326 | 0.5486 | 0.8143 |
| 0.6458 | 164.0 | 328 | 0.5484 | 0.8143 |
| 0.6317 | 165.0 | 330 | 0.5481 | 0.8143 |
| 0.6317 | 166.0 | 332 | 0.5479 | 0.8286 |
| 0.6317 | 167.0 | 334 | 0.5476 | 0.8286 |
| 0.6317 | 168.0 | 336 | 0.5473 | 0.8286 |
| 0.6317 | 169.0 | 338 | 0.5471 | 0.8286 |
| 0.6154 | 170.0 | 340 | 0.5470 | 0.8286 |
| 0.6154 | 171.0 | 342 | 0.5468 | 0.8286 |
| 0.6154 | 172.0 | 344 | 0.5466 | 0.8286 |
| 0.6154 | 173.0 | 346 | 0.5464 | 0.8286 |
| 0.6154 | 174.0 | 348 | 0.5462 | 0.8286 |
| 0.6323 | 175.0 | 350 | 0.5460 | 0.8286 |
| 0.6323 | 176.0 | 352 | 0.5459 | 0.8286 |
| 0.6323 | 177.0 | 354 | 0.5457 | 0.8286 |
| 0.6323 | 178.0 | 356 | 0.5456 | 0.8286 |
| 0.6323 | 179.0 | 358 | 0.5455 | 0.8286 |
| 0.6331 | 180.0 | 360 | 0.5453 | 0.8286 |
| 0.6331 | 181.0 | 362 | 0.5452 | 0.8286 |
| 0.6331 | 182.0 | 364 | 0.5451 | 0.8286 |
| 0.6331 | 183.0 | 366 | 0.5449 | 0.8286 |
| 0.6331 | 184.0 | 368 | 0.5448 | 0.8286 |
| 0.6333 | 185.0 | 370 | 0.5447 | 0.8286 |
| 0.6333 | 186.0 | 372 | 0.5447 | 0.8286 |
| 0.6333 | 187.0 | 374 | 0.5446 | 0.8286 |
| 0.6333 | 188.0 | 376 | 0.5445 | 0.8286 |
| 0.6333 | 189.0 | 378 | 0.5445 | 0.8286 |
| 0.608 | 190.0 | 380 | 0.5444 | 0.8286 |
| 0.608 | 191.0 | 382 | 0.5444 | 0.8286 |
| 0.608 | 192.0 | 384 | 0.5443 | 0.8286 |
| 0.608 | 193.0 | 386 | 0.5443 | 0.8286 |
| 0.608 | 194.0 | 388 | 0.5442 | 0.8286 |
| 0.6155 | 195.0 | 390 | 0.5442 | 0.8286 |
| 0.6155 | 196.0 | 392 | 0.5442 | 0.8286 |
| 0.6155 | 197.0 | 394 | 0.5442 | 0.8286 |
| 0.6155 | 198.0 | 396 | 0.5441 | 0.8286 |
| 0.6155 | 199.0 | 398 | 0.5441 | 0.8286 |
| 0.6272 | 200.0 | 400 | 0.5441 | 0.8286 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"color",
"pattern_fail",
"residue"
] |
ziauldin/swin-tiny-patch4-window7-224-finetuned-vit
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-vit
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5516
- Accuracy: 0.8061

| Class | Precision | Recall | F1-score | Support |
|:------------------------:|:---------:|:------:|:--------:|:-------:|
| Crack | 0.5750 | 0.7188 | 0.6389 | 32 |
| Environment - ground | 0.9714 | 0.9714 | 0.9714 | 35 |
| Environment - other | 0.8571 | 0.8889 | 0.8727 | 27 |
| Environment - sky | 0.9762 | 0.9318 | 0.9535 | 44 |
| Environment - vegetation | 0.9792 | 0.9792 | 0.9792 | 48 |
| Joint defect | 0.9167 | 0.7097 | 0.8000 | 31 |
| Loss of section | 0.0 | 0.0 | 0.0 | 2 |
| Spalling | 0.6042 | 0.6042 | 0.6042 | 48 |
| Vegetation | 0.8310 | 0.8939 | 0.8613 | 66 |
| Wall - grafitti | 0.7000 | 0.9545 | 0.8077 | 22 |
| Wall - normal | 0.6977 | 0.7317 | 0.7143 | 41 |
| Wall - other | 0.7910 | 0.7794 | 0.7852 | 68 |
| Wall - stain | 0.8222 | 0.6491 | 0.7255 | 57 |
| Macro avg | 0.7478 | 0.7548 | 0.7472 | 521 |
| Weighted avg | 0.8108 | 0.8061 | 0.8050 | 521 |
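Per-class metrics in this shape are typically produced with scikit-learn's `classification_report(..., output_dict=True)`; a small sketch with placeholder label arrays:

```python
from sklearn.metrics import classification_report

# y_true / y_pred are placeholders standing in for the evaluation labels
# and model predictions over the 521 validation images.
y_true = ["crack", "spalling", "vegetation", "crack"]
y_pred = ["crack", "crack", "vegetation", "crack"]

report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
# Each class maps to {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...},
# matching the per-class entries listed above.
print(report["crack"])
```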
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Crack | Environment - ground | Environment - other | Environment - sky | Environment - vegetation | Joint defect | Loss of section | Spalling | Vegetation | Wall - grafitti | Wall - normal | Wall - other | Wall - stain | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------:|:---------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------:|
| 0.9193 | 1.0 | 146 | 0.7596 | {'precision': 0.5681818181818182, 'recall': 0.78125, 'f1-score': 0.6578947368421052, 'support': 32} | {'precision': 0.9444444444444444, 'recall': 0.9714285714285714, 'f1-score': 0.9577464788732395, 'support': 35} | {'precision': 0.8846153846153846, 'recall': 0.8518518518518519, 'f1-score': 0.8679245283018868, 'support': 27} | {'precision': 0.9736842105263158, 'recall': 0.8409090909090909, 'f1-score': 0.9024390243902439, 'support': 44} | {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 48} | {'precision': 0.7419354838709677, 'recall': 0.7419354838709677, 'f1-score': 0.7419354838709677, 'support': 31} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 2} | {'precision': 0.5769230769230769, 'recall': 0.3125, 'f1-score': 0.4054054054054054, 'support': 48} | {'precision': 0.75, 'recall': 0.9090909090909091, 'f1-score': 0.821917808219178, 'support': 66} | {'precision': 0.5142857142857142, 'recall': 0.8181818181818182, 'f1-score': 0.6315789473684209, 'support': 22} | {'precision': 0.7692307692307693, 'recall': 0.4878048780487805, 'f1-score': 0.5970149253731344, 'support': 41} | {'precision': 0.7540983606557377, 'recall': 0.6764705882352942, 'f1-score': 0.7131782945736433, 'support': 68} | {'precision': 0.6428571428571429, 'recall': 0.7894736842105263, 'f1-score': 0.7086614173228346, 'support': 57} | 0.7562 | {'precision': 0.7015581850454902, 'recall': 0.7062228366021391, 'f1-score': 0.692745926964697, 'support': 521} | {'precision': 0.7618631381912654, 'recall': 0.7562380038387716, 'f1-score': 0.7479524876767193, 'support': 521} |
| 0.7347 | 2.0 | 293 | 0.6495 | {'precision': 0.5526315789473685, 'recall': 0.65625, 'f1-score': 0.6, 'support': 32} | {'precision': 1.0, 'recall': 0.9714285714285714, 'f1-score': 0.9855072463768115, 'support': 35} | {'precision': 0.8461538461538461, 'recall': 0.8148148148148148, 'f1-score': 0.830188679245283, 'support': 27} | {'precision': 0.9761904761904762, 'recall': 0.9318181818181818, 'f1-score': 0.9534883720930233, 'support': 44} | {'precision': 0.9591836734693877, 'recall': 0.9791666666666666, 'f1-score': 0.9690721649484536, 'support': 48} | {'precision': 0.9130434782608695, 'recall': 0.6774193548387096, 'f1-score': 0.7777777777777777, 'support': 31} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 2} | {'precision': 0.5306122448979592, 'recall': 0.5416666666666666, 'f1-score': 0.5360824742268041, 'support': 48} | {'precision': 0.7058823529411765, 'recall': 0.9090909090909091, 'f1-score': 0.794701986754967, 'support': 66} | {'precision': 0.6333333333333333, 'recall': 0.8636363636363636, 'f1-score': 0.7307692307692307, 'support': 22} | {'precision': 0.5510204081632653, 'recall': 0.6585365853658537, 'f1-score': 0.6, 'support': 41} | {'precision': 0.8095238095238095, 'recall': 0.75, 'f1-score': 0.7786259541984734, 'support': 68} | {'precision': 0.9393939393939394, 'recall': 0.543859649122807, 'f1-score': 0.688888888888889, 'support': 57} | 0.7678 | {'precision': 0.7243822416365717, 'recall': 0.7152067510345803, 'f1-score': 0.7111617519445933, 'support': 521} | {'precision': 0.7869554245446998, 'recall': 0.7677543186180422, 'f1-score': 0.7672943491004631, 'support': 521} |
| 0.7515 | 2.99 | 438 | 0.5516 | {'precision': 0.575, 'recall': 0.71875, 'f1-score': 0.6388888888888888, 'support': 32} | {'precision': 0.9714285714285714, 'recall': 0.9714285714285714, 'f1-score': 0.9714285714285714, 'support': 35} | {'precision': 0.8571428571428571, 'recall': 0.8888888888888888, 'f1-score': 0.8727272727272727, 'support': 27} | {'precision': 0.9761904761904762, 'recall': 0.9318181818181818, 'f1-score': 0.9534883720930233, 'support': 44} | {'precision': 0.9791666666666666, 'recall': 0.9791666666666666, 'f1-score': 0.9791666666666666, 'support': 48} | {'precision': 0.9166666666666666, 'recall': 0.7096774193548387, 'f1-score': 0.7999999999999999, 'support': 31} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 2} | {'precision': 0.6041666666666666, 'recall': 0.6041666666666666, 'f1-score': 0.6041666666666666, 'support': 48} | {'precision': 0.8309859154929577, 'recall': 0.8939393939393939, 'f1-score': 0.8613138686131386, 'support': 66} | {'precision': 0.7, 'recall': 0.9545454545454546, 'f1-score': 0.8076923076923077, 'support': 22} | {'precision': 0.6976744186046512, 'recall': 0.7317073170731707, 'f1-score': 0.7142857142857143, 'support': 41} | {'precision': 0.7910447761194029, 'recall': 0.7794117647058824, 'f1-score': 0.7851851851851852, 'support': 68} | {'precision': 0.8222222222222222, 'recall': 0.6491228070175439, 'f1-score': 0.7254901960784313, 'support': 57} | 0.8061 | {'precision': 0.7478222490154723, 'recall': 0.754817164008097, 'f1-score': 0.7472179777173742, 'support': 521} | {'precision': 0.8107856771401473, 'recall': 0.8061420345489443, 'f1-score': 0.8050072232872345, 'support': 521} |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"crack",
"environment - ground",
"environment - other",
"environment - sky",
"environment - vegetation",
"joint defect",
"loss of section",
"spalling",
"vegetation",
"wall - grafitti",
"wall - normal",
"wall - other",
"wall - stain"
] |
HorcruxNo13/pvt-tiny-224
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pvt-tiny-224
This model is a fine-tuned version of [Zetatech/pvt-tiny-224](https://huggingface.co/Zetatech/pvt-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4869
- Accuracy: 0.7833
- Precision: 0.7681
- Recall: 0.7833
- F1 Score: 0.7632
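A hedged sketch of how such weighted summary metrics can be computed with scikit-learn (the label arrays below are placeholders for the binary normal/abnormal eval set):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder ground truth and predictions; 0 = normal, 1 = abnormal.
y_true = [0, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 1]

acc = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(acc, precision, recall, f1)
```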
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| No log | 1.0 | 4 | 0.5984 | 0.7333 | 0.5378 | 0.7333 | 0.6205 |
| No log | 2.0 | 8 | 0.6103 | 0.7333 | 0.5378 | 0.7333 | 0.6205 |
| No log | 3.0 | 12 | 0.5861 | 0.7333 | 0.5378 | 0.7333 | 0.6205 |
| No log | 4.0 | 16 | 0.5478 | 0.7333 | 0.5378 | 0.7333 | 0.6205 |
| No log | 5.0 | 20 | 0.5961 | 0.725 | 0.7119 | 0.725 | 0.7171 |
| No log | 6.0 | 24 | 0.5317 | 0.7542 | 0.7261 | 0.7542 | 0.7159 |
| No log | 7.0 | 28 | 0.5620 | 0.7458 | 0.7289 | 0.7458 | 0.7342 |
| 0.5878 | 8.0 | 32 | 0.5281 | 0.7542 | 0.7316 | 0.7542 | 0.6973 |
| 0.5878 | 9.0 | 36 | 0.5434 | 0.7625 | 0.7395 | 0.7625 | 0.7368 |
| 0.5878 | 10.0 | 40 | 0.5236 | 0.775 | 0.7658 | 0.775 | 0.7321 |
| 0.5878 | 11.0 | 44 | 0.5411 | 0.7542 | 0.7382 | 0.7542 | 0.7429 |
| 0.5878 | 12.0 | 48 | 0.5186 | 0.7708 | 0.7507 | 0.7708 | 0.7460 |
| 0.5878 | 13.0 | 52 | 0.5194 | 0.7667 | 0.7500 | 0.7667 | 0.7533 |
| 0.5878 | 14.0 | 56 | 0.5049 | 0.7875 | 0.7739 | 0.7875 | 0.7621 |
| 0.4973 | 15.0 | 60 | 0.5125 | 0.7833 | 0.7691 | 0.7833 | 0.7709 |
| 0.4973 | 16.0 | 64 | 0.5000 | 0.7917 | 0.7804 | 0.7917 | 0.7656 |
| 0.4973 | 17.0 | 68 | 0.5137 | 0.7583 | 0.7560 | 0.7583 | 0.7571 |
| 0.4973 | 18.0 | 72 | 0.4833 | 0.8 | 0.788 | 0.8 | 0.7833 |
| 0.4973 | 19.0 | 76 | 0.4929 | 0.7917 | 0.7816 | 0.7917 | 0.7843 |
| 0.4973 | 20.0 | 80 | 0.4858 | 0.8042 | 0.7930 | 0.8042 | 0.7887 |
| 0.4973 | 21.0 | 84 | 0.4900 | 0.7917 | 0.7777 | 0.7917 | 0.7743 |
| 0.4973 | 22.0 | 88 | 0.4886 | 0.7958 | 0.7829 | 0.7958 | 0.7815 |
| 0.439 | 23.0 | 92 | 0.4841 | 0.7917 | 0.7778 | 0.7917 | 0.7723 |
| 0.439 | 24.0 | 96 | 0.4855 | 0.8 | 0.7883 | 0.8 | 0.7885 |
| 0.439 | 25.0 | 100 | 0.4856 | 0.8 | 0.7879 | 0.8 | 0.7869 |
| 0.439 | 26.0 | 104 | 0.4839 | 0.8 | 0.7879 | 0.8 | 0.7869 |
| 0.439 | 27.0 | 108 | 0.4811 | 0.8 | 0.7879 | 0.8 | 0.7869 |
| 0.439 | 28.0 | 112 | 0.4834 | 0.8 | 0.7889 | 0.8 | 0.7901 |
| 0.439 | 29.0 | 116 | 0.4839 | 0.8 | 0.7889 | 0.8 | 0.7901 |
| 0.4092 | 30.0 | 120 | 0.4838 | 0.8 | 0.7889 | 0.8 | 0.7901 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"normal",
"abnormal"
] |
HorcruxNo13/swiftformer-xs
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swiftformer-xs
This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6833
- Accuracy: 0.57
- Precision: 0.5995
- Recall: 0.57
- F1 Score: 0.5828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| No log | 1.0 | 4 | 0.6713 | 0.6292 | 0.6454 | 0.6292 | 0.6365 |
| No log | 2.0 | 8 | 0.7142 | 0.475 | 0.6155 | 0.475 | 0.5020 |
| No log | 3.0 | 12 | 0.7298 | 0.425 | 0.6026 | 0.425 | 0.4435 |
| No log | 4.0 | 16 | 0.7389 | 0.4792 | 0.6408 | 0.4792 | 0.5023 |
| No log | 5.0 | 20 | 0.7427 | 0.4792 | 0.6408 | 0.4792 | 0.5023 |
| No log | 6.0 | 24 | 0.7235 | 0.5083 | 0.6424 | 0.5083 | 0.5348 |
| No log | 7.0 | 28 | 0.6893 | 0.5875 | 0.6687 | 0.5875 | 0.6107 |
| 0.6981 | 8.0 | 32 | 0.6816 | 0.6042 | 0.6847 | 0.6042 | 0.6264 |
| 0.6981 | 9.0 | 36 | 0.6866 | 0.6042 | 0.6888 | 0.6042 | 0.6266 |
| 0.6981 | 10.0 | 40 | 0.7005 | 0.575 | 0.6751 | 0.575 | 0.5996 |
| 0.6981 | 11.0 | 44 | 0.7127 | 0.525 | 0.6554 | 0.525 | 0.5510 |
| 0.6981 | 12.0 | 48 | 0.7098 | 0.5333 | 0.6595 | 0.5333 | 0.5593 |
| 0.6981 | 13.0 | 52 | 0.7126 | 0.5208 | 0.6579 | 0.5208 | 0.5463 |
| 0.6981 | 14.0 | 56 | 0.7114 | 0.5292 | 0.6575 | 0.5292 | 0.5551 |
| 0.6656 | 15.0 | 60 | 0.6908 | 0.5667 | 0.6712 | 0.5667 | 0.5917 |
| 0.6656 | 16.0 | 64 | 0.6804 | 0.5833 | 0.6749 | 0.5833 | 0.6073 |
| 0.6656 | 17.0 | 68 | 0.6806 | 0.5958 | 0.6808 | 0.5958 | 0.6188 |
| 0.6656 | 18.0 | 72 | 0.6884 | 0.5583 | 0.6629 | 0.5583 | 0.5838 |
| 0.6656 | 19.0 | 76 | 0.6821 | 0.5708 | 0.6647 | 0.5708 | 0.5955 |
| 0.6656 | 20.0 | 80 | 0.6663 | 0.6042 | 0.6806 | 0.6042 | 0.6261 |
| 0.6656 | 21.0 | 84 | 0.6717 | 0.6 | 0.6787 | 0.6 | 0.6223 |
| 0.6656 | 22.0 | 88 | 0.6682 | 0.6083 | 0.6826 | 0.6083 | 0.6299 |
| 0.6443 | 23.0 | 92 | 0.6683 | 0.6167 | 0.6946 | 0.6167 | 0.6381 |
| 0.6443 | 24.0 | 96 | 0.6733 | 0.6 | 0.6911 | 0.6 | 0.6230 |
| 0.6443 | 25.0 | 100 | 0.6647 | 0.6083 | 0.6866 | 0.6083 | 0.6302 |
| 0.6443 | 26.0 | 104 | 0.6729 | 0.6083 | 0.6907 | 0.6083 | 0.6305 |
| 0.6443 | 27.0 | 108 | 0.6740 | 0.6042 | 0.6930 | 0.6042 | 0.6268 |
| 0.6443 | 28.0 | 112 | 0.6809 | 0.5917 | 0.6916 | 0.5917 | 0.6153 |
| 0.6443 | 29.0 | 116 | 0.6778 | 0.6042 | 0.7017 | 0.6042 | 0.6270 |
| 0.6313 | 30.0 | 120 | 0.6794 | 0.5958 | 0.6935 | 0.5958 | 0.6192 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"normal",
"abnormal"
] |
nagyrobert97/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0977
- Accuracy: 0.9644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5012 | 1.0 | 351 | 0.1447 | 0.9502 |
| 0.3732 | 2.0 | 703 | 0.1068 | 0.9626 |
| 0.3398 | 2.99 | 1053 | 0.0977 | 0.9644 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Tokenizers 0.14.0
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
hilmansw/resnet18-catdog-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model description
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on a [custom](https://www.kaggle.com/datasets/samuelcortinhas/cats-and-dogs-image-classification) dataset: the "Cats & Dogs Classification" dataset obtained from Kaggle. The model was built with the PyTorch framework by fine-tuning a pre-trained ResNet-18 on this dataset.
## Training results
| Epoch | Accuracy |
|:-----:|:--------:|
| 1.0 | 0.9357 |
| 2.0 | 0.9786 |
| 3.0 | 0.9000 |
| 4.0 | 0.9214 |
| 5.0 | 0.9143 |
| 6.0 | 0.9429 |
| 7.0 | 0.9714 |
| 8.0 | 0.9929 |
| 9.0 | 0.9714 |
| 10.0 | 0.9714 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- loss_function = CrossEntropyLoss
- optimizer = AdamW
- learning_rate: 0.0001
- batch_size: 16
- num_epochs: 10
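A minimal sketch of the fine-tuning setup described above. The data path, transforms, and the use of torchvision's ImageNet weights as a stand-in for the pre-trained ResNet-18 are assumptions for illustration, not the original training script.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Sketch only: pre-trained ResNet-18 fine-tuned with CrossEntropyLoss,
# AdamW, lr=1e-4, batch_size=16, 10 epochs, as reported above.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_dataset = datasets.ImageFolder("data/train", transform=transform)  # assumed layout
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # stand-in weights
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cats, dogs

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```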
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"cats",
"dogs"
] |
jennyc/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9786
- Accuracy: 0.828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9923 | 0.99 | 62 | 2.9786 | 0.828 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
wuru330/378A1_results_384_4cate_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 378A1_results_384_4cate_1
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4707
- Accuracy: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8756 | 1.0 | 37 | 0.5714 | 0.7908 |
| 0.4508 | 2.0 | 74 | 0.3688 | 0.8418 |
| 0.2344 | 3.0 | 111 | 0.3064 | 0.8741 |
| 0.1445 | 4.0 | 148 | 0.2948 | 0.8946 |
| 0.0774 | 5.0 | 185 | 0.3461 | 0.8793 |
| 0.0393 | 6.0 | 222 | 0.3229 | 0.8997 |
| 0.0164 | 7.0 | 259 | 0.3441 | 0.9048 |
| 0.0222 | 8.0 | 296 | 0.4192 | 0.9099 |
| 0.0125 | 9.0 | 333 | 0.4443 | 0.8810 |
| 0.0029 | 10.0 | 370 | 0.4007 | 0.9116 |
| 0.0014 | 11.0 | 407 | 0.4277 | 0.9150 |
| 0.0003 | 12.0 | 444 | 0.4445 | 0.9014 |
| 0.0002 | 13.0 | 481 | 0.4437 | 0.9031 |
| 0.0002 | 14.0 | 518 | 0.4481 | 0.9048 |
| 0.0002 | 15.0 | 555 | 0.4512 | 0.9031 |
| 0.0002 | 16.0 | 592 | 0.4537 | 0.9014 |
| 0.0002 | 17.0 | 629 | 0.4562 | 0.9014 |
| 0.0002 | 18.0 | 666 | 0.4583 | 0.9014 |
| 0.0001 | 19.0 | 703 | 0.4594 | 0.9014 |
| 0.0001 | 20.0 | 740 | 0.4615 | 0.9031 |
| 0.0001 | 21.0 | 777 | 0.4635 | 0.9031 |
| 0.0001 | 22.0 | 814 | 0.4652 | 0.9031 |
| 0.0001 | 23.0 | 851 | 0.4659 | 0.9031 |
| 0.0001 | 24.0 | 888 | 0.4679 | 0.8997 |
| 0.0001 | 25.0 | 925 | 0.4681 | 0.9014 |
| 0.0001 | 26.0 | 962 | 0.4688 | 0.8997 |
| 0.0001 | 27.0 | 999 | 0.4695 | 0.8997 |
| 0.0001 | 28.0 | 1036 | 0.4701 | 0.8997 |
| 0.0001 | 29.0 | 1073 | 0.4706 | 0.8997 |
| 0.0001 | 30.0 | 1110 | 0.4707 | 0.8997 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
[
"label_0",
"label_1",
"label_2",
"label_3"
] |
890mari/practica2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# practica2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0348
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1299 | 3.85 | 500 | 0.0348 | 0.9850 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
alantaquito6/PRACTICAVIT
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PRACTICAVIT
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0177
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.129 | 3.85 | 500 | 0.0177 | 0.9925 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
purabp1249/swin-tiny-patch4-window7-224-finetuned-herbify
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-herbify
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0378
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 35
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.94 | 4 | 1.8723 | 0.2787 |
| No log | 1.88 | 8 | 1.5899 | 0.6885 |
| 1.8465 | 2.82 | 12 | 1.1661 | 0.8197 |
| 1.8465 | 4.0 | 17 | 0.5156 | 0.9508 |
| 0.9675 | 4.94 | 21 | 0.2177 | 0.9836 |
| 0.9675 | 5.88 | 25 | 0.0929 | 0.9836 |
| 0.9675 | 6.82 | 29 | 0.0378 | 1.0 |
| 0.2342 | 8.0 | 34 | 0.0128 | 1.0 |
| 0.2342 | 8.94 | 38 | 0.0075 | 1.0 |
| 0.1022 | 9.88 | 42 | 0.0053 | 1.0 |
| 0.1022 | 10.82 | 46 | 0.0049 | 1.0 |
| 0.0553 | 12.0 | 51 | 0.0032 | 1.0 |
| 0.0553 | 12.94 | 55 | 0.0022 | 1.0 |
| 0.0553 | 13.88 | 59 | 0.0017 | 1.0 |
| 0.0278 | 14.82 | 63 | 0.0018 | 1.0 |
| 0.0278 | 16.0 | 68 | 0.0012 | 1.0 |
| 0.0266 | 16.94 | 72 | 0.0011 | 1.0 |
| 0.0266 | 17.88 | 76 | 0.0006 | 1.0 |
| 0.046 | 18.82 | 80 | 0.0007 | 1.0 |
| 0.046 | 20.0 | 85 | 0.0007 | 1.0 |
| 0.046 | 20.94 | 89 | 0.0012 | 1.0 |
| 0.0245 | 21.88 | 93 | 0.0015 | 1.0 |
| 0.0245 | 22.82 | 97 | 0.0011 | 1.0 |
| 0.0249 | 24.0 | 102 | 0.0007 | 1.0 |
| 0.0249 | 24.94 | 106 | 0.0006 | 1.0 |
| 0.0201 | 25.88 | 110 | 0.0005 | 1.0 |
| 0.0201 | 26.82 | 114 | 0.0005 | 1.0 |
| 0.0201 | 28.0 | 119 | 0.0004 | 1.0 |
| 0.0208 | 28.94 | 123 | 0.0004 | 1.0 |
| 0.0208 | 29.88 | 127 | 0.0004 | 1.0 |
| 0.0122 | 30.82 | 131 | 0.0004 | 1.0 |
| 0.0122 | 32.0 | 136 | 0.0004 | 1.0 |
| 0.0222 | 32.94 | 140 | 0.0004 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"aloevera",
"amla",
"amruthaballi",
"arali",
"astma_weed",
"badipala",
"ashoka"
] |
Niraya666/swin-large-patch4-window12-384-in22k-finetuned-ADC-4cls-0923
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-large-patch4-window12-384-in22k-finetuned-ADC-4cls-0923
This model is a fine-tuned version of [microsoft/swin-large-patch4-window12-384-in22k](https://huggingface.co/microsoft/swin-large-patch4-window12-384-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5621
- eval_accuracy: 0.8571
- eval_runtime: 6.0148
- eval_samples_per_second: 11.638
- eval_steps_per_second: 0.499
- epoch: 26.4
- step: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 200
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"color",
"pattern_fail",
"residue",
"tiny"
] |
zitrone44/vit-base-tm
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-tm
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4170
- eval_accuracy: 0.9062
- eval_runtime: 207.7695
- eval_samples_per_second: 152.78
- eval_steps_per_second: 19.098
- epoch: 6.79
- step: 12447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"up",
"up-left",
"up-right"
] |
dima806/mushrooms_image_detection
|
Returns the mushroom type given an image.

See https://www.kaggle.com/code/dima806/mushrooms-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
Urnula craterium 0.9804 0.9863 0.9833 2335
Leccinum albostipitatum 0.7755 0.9054 0.8354 2335
Lactarius deliciosus 0.9284 0.8163 0.8687 2335
Clitocybe nebularis 0.9409 0.9550 0.9479 2335
Hypholoma fasciculare 0.8962 0.8176 0.8551 2335
Lactarius torminosus 0.8862 0.9306 0.9078 2334
Lycoperdon perlatum 0.9459 0.9653 0.9555 2335
Verpa bohemica 0.9927 0.9957 0.9942 2335
Schizophyllum commune 0.9346 0.9666 0.9503 2335
Leccinum aurantiacum 0.7167 0.4887 0.5811 2335
Phellinus igniarius 0.8414 0.8338 0.8376 2335
Suillus luteus 0.7222 0.3362 0.4588 2335
Coltricia perennis 0.9756 0.9422 0.9586 2335
Cetraria islandica 0.9851 0.9910 0.9880 2335
Amanita muscaria 0.9956 0.9764 0.9859 2335
Pholiota aurivella 0.9295 0.9486 0.9389 2334
Trichaptum biforme 0.8943 0.8587 0.8761 2335
Artomyces pyxidatus 0.9987 0.9936 0.9961 2335
Calocera viscosa 1.0000 0.9983 0.9991 2335
Sarcosoma globosum 0.9713 0.9700 0.9706 2335
Evernia prunastri 0.8245 0.8934 0.8576 2335
Laetiporus sulphureus 0.9613 0.9782 0.9696 2335
Lobaria pulmonaria 0.9720 0.9820 0.9770 2335
Bjerkandera adusta 0.8449 0.8073 0.8257 2335
Vulpicida pinastri 0.9771 0.9880 0.9825 2335
Imleria badia 0.7537 0.8099 0.7808 2335
Evernia mesomorpha 0.9160 0.9015 0.9087 2335
Physcia adscendens 0.8479 0.8043 0.8255 2335
Coprinellus micaceus 0.9189 0.8985 0.9086 2334
Armillaria borealis 0.9301 0.6444 0.7613 2334
Trametes ochracea 0.7924 0.6737 0.7282 2335
Cantharellus cibarius 0.9110 0.9773 0.9430 2335
Pseudevernia furfuracea 0.8943 0.8373 0.8649 2335
Tremella mesenterica 0.9966 0.9927 0.9946 2335
Gyromitra infula 0.9682 0.9516 0.9598 2335
Leccinum versipelle 0.7239 0.7850 0.7532 2335
Mutinus ravenelii 0.9974 1.0000 0.9987 2335
Pholiota squarrosa 0.8284 0.9285 0.8756 2335
Amanita rubescens 0.8616 0.9062 0.8833 2335
Amanita pantherina 0.9391 0.8723 0.9045 2334
Sarcoscypha austriaca 0.9936 0.9914 0.9925 2334
Boletus edulis 0.5996 0.9336 0.7302 2334
Coprinus comatus 0.9641 0.9897 0.9768 2335
Merulius tremellosus 0.8698 0.9272 0.8976 2335
Stropharia aeruginosa 0.9871 0.9842 0.9856 2335
Cladonia fimbriata 0.9746 0.9854 0.9800 2334
Suillus grevillei 0.8932 0.4981 0.6395 2335
Apioperdon pyriforme 0.9200 0.9499 0.9347 2335
Cerioporus squamosus 0.9427 0.9657 0.9541 2335
Leccinum scabrum 0.7482 0.9152 0.8233 2335
Rhytisma acerinum 1.0000 0.9949 0.9974 2335
Hypholoma lateritium 0.8445 0.9092 0.8756 2335
Flammulina velutipes 0.8947 0.9028 0.8987 2335
Tricholomopsis rutilans 0.9374 0.8587 0.8963 2335
Coprinopsis atramentaria 0.9285 0.9345 0.9315 2335
Trametes versicolor 0.8279 0.8946 0.8600 2334
Graphis scripta 0.9783 0.9871 0.9827 2334
Ganoderma applanatum 0.9162 0.9550 0.9352 2335
Phellinus tremulae 0.9149 0.8514 0.8820 2335
Peltigera aphthosa 0.9888 0.9863 0.9876 2335
Parmelia sulcata 0.8994 0.9229 0.9110 2335
Fomitopsis betulina 0.8678 0.9675 0.9149 2335
Pleurotus pulmonarius 0.8910 0.9139 0.9023 2335
Fomitopsis pinicola 0.9453 0.9615 0.9533 2335
Daedaleopsis confragosa 0.7665 0.8518 0.8069 2335
Hericium coralloides 0.9906 0.9897 0.9901 2334
Trametes hirsuta 0.8239 0.8518 0.8376 2334
Coprinellus disseminatus 0.9406 0.9490 0.9448 2335
Kuehneromyces mutabilis 0.7731 0.9208 0.8405 2335
Pleurotus ostreatus 0.7244 0.8994 0.8024 2335
Phlebia radiata 0.9601 0.9589 0.9595 2335
Boletus reticulatus 0.9405 0.2775 0.4286 2335
Phallus impudicus 0.9956 0.9649 0.9800 2335
Macrolepiota procera 0.9818 0.9923 0.9870 2334
Fomes fomentarius 0.9058 0.9267 0.9161 2334
Suillus granulatus 0.4872 0.9276 0.6388 2335
Gyromitra esculenta 0.9380 0.9465 0.9422 2335
Xanthoria parietina 0.9657 0.9645 0.9651 2335
Nectria cinnabarina 0.9882 0.9704 0.9793 2335
Sarcomyxa serotina 0.9546 0.4411 0.6034 2335
Inonotus obliquus 0.9568 0.9970 0.9765 2334
Panellus stipticus 0.8756 0.8385 0.8566 2334
Hypogymnia physodes 0.8739 0.9327 0.9024 2334
Hygrophoropsis aurantiaca 0.9132 0.9195 0.9163 2334
Cladonia rangiferina 0.9404 0.9195 0.9298 2335
Platismatia glauca 0.9523 0.9567 0.9545 2335
Calycina citrina 0.9822 0.9949 0.9885 2335
Cladonia stellaris 0.9377 0.9610 0.9492 2334
Amanita citrina 0.9392 0.9799 0.9591 2334
Lepista nuda 0.9778 0.9820 0.9799 2335
Gyromitra gigas 0.9701 0.9576 0.9638 2335
Crucibulum laeve 0.9226 0.9602 0.9410 2335
Daedaleopsis tricolor 0.8988 0.8176 0.8562 2335
Stereum hirsutum 0.9009 0.8604 0.8802 2335
Paxillus involutus 0.7496 0.9075 0.8210 2335
Lactarius turpis 0.9355 0.8942 0.9144 2335
Chlorociboria aeruginascens 1.0000 0.9949 0.9974 2335
Chondrostereum purpureum 0.9353 0.8976 0.9161 2335
Phaeophyscia orbicularis 0.8864 0.8424 0.8639 2335
Peltigera praetextata 0.9847 0.9679 0.9762 2335
accuracy 0.8990 233480
macro avg 0.9057 0.8990 0.8960 233480
weighted avg 0.9057 0.8990 0.8960 233480
```
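A minimal inference sketch using the `transformers` pipeline API, assuming the checkpoint loads by its Hub id; the image path is a placeholder.
```python
from transformers import pipeline

# Sketch only: load the classifier by its Hub id and predict on a local image.
classifier = pipeline("image-classification", model="dima806/mushrooms_image_detection")
predictions = classifier("mushroom.jpg")  # placeholder path
print(predictions[0]["label"], predictions[0]["score"])
```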
|
[
"urnula craterium",
"leccinum albostipitatum",
"lactarius deliciosus",
"clitocybe nebularis",
"hypholoma fasciculare",
"lactarius torminosus",
"lycoperdon perlatum",
"verpa bohemica",
"schizophyllum commune",
"leccinum aurantiacum",
"phellinus igniarius",
"suillus luteus",
"coltricia perennis",
"cetraria islandica",
"amanita muscaria",
"pholiota aurivella",
"trichaptum biforme",
"artomyces pyxidatus",
"calocera viscosa",
"sarcosoma globosum",
"evernia prunastri",
"laetiporus sulphureus",
"lobaria pulmonaria",
"bjerkandera adusta",
"vulpicida pinastri",
"imleria badia",
"evernia mesomorpha",
"physcia adscendens",
"coprinellus micaceus",
"armillaria borealis",
"trametes ochracea",
"cantharellus cibarius",
"pseudevernia furfuracea",
"tremella mesenterica",
"gyromitra infula",
"leccinum versipelle",
"mutinus ravenelii",
"pholiota squarrosa",
"amanita rubescens",
"amanita pantherina",
"sarcoscypha austriaca",
"boletus edulis",
"coprinus comatus",
"merulius tremellosus",
"stropharia aeruginosa",
"cladonia fimbriata",
"suillus grevillei",
"apioperdon pyriforme",
"cerioporus squamosus",
"leccinum scabrum",
"rhytisma acerinum",
"hypholoma lateritium",
"flammulina velutipes",
"tricholomopsis rutilans",
"coprinopsis atramentaria",
"trametes versicolor",
"graphis scripta",
"ganoderma applanatum",
"phellinus tremulae",
"peltigera aphthosa",
"parmelia sulcata",
"fomitopsis betulina",
"pleurotus pulmonarius",
"fomitopsis pinicola",
"daedaleopsis confragosa",
"hericium coralloides",
"trametes hirsuta",
"coprinellus disseminatus",
"kuehneromyces mutabilis",
"pleurotus ostreatus",
"phlebia radiata",
"boletus reticulatus",
"phallus impudicus",
"macrolepiota procera",
"fomes fomentarius",
"suillus granulatus",
"gyromitra esculenta",
"xanthoria parietina",
"nectria cinnabarina",
"sarcomyxa serotina",
"inonotus obliquus",
"panellus stipticus",
"hypogymnia physodes",
"hygrophoropsis aurantiaca",
"cladonia rangiferina",
"platismatia glauca",
"calycina citrina",
"cladonia stellaris",
"amanita citrina",
"lepista nuda",
"gyromitra gigas",
"crucibulum laeve",
"daedaleopsis tricolor",
"stereum hirsutum",
"paxillus involutus",
"lactarius turpis",
"chlorociboria aeruginascens",
"chondrostereum purpureum",
"phaeophyscia orbicularis",
"peltigera praetextata"
] |
dyaminda/pneumonia-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pneumonia-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0288
- Accuracy: 0.9923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1574 | 0.99 | 52 | 0.0976 | 0.9726 |
| 0.0643 | 2.0 | 105 | 0.0535 | 0.9845 |
| 0.0189 | 2.99 | 157 | 0.0490 | 0.9821 |
| 0.0208 | 4.0 | 210 | 0.0484 | 0.9881 |
| 0.0096 | 4.95 | 260 | 0.0463 | 0.9881 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
[
"normal",
"pneumonia"
] |
HorcruxNo13/swin-tiny-patch4-window7-224
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5534
- Accuracy: 0.7433
- Precision: 0.7306
- Recall: 0.7433
- F1 Score: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| No log | 1.0 | 4 | 0.7306 | 0.4 | 0.6521 | 0.4 | 0.3821 |
| No log | 2.0 | 8 | 0.5815 | 0.7333 | 0.8050 | 0.7333 | 0.6286 |
| No log | 3.0 | 12 | 0.5700 | 0.725 | 0.5256 | 0.725 | 0.6094 |
| No log | 4.0 | 16 | 0.5635 | 0.725 | 0.5256 | 0.725 | 0.6094 |
| No log | 5.0 | 20 | 0.5509 | 0.7292 | 0.8028 | 0.7292 | 0.6191 |
| No log | 6.0 | 24 | 0.5356 | 0.7417 | 0.7438 | 0.7417 | 0.6589 |
| No log | 7.0 | 28 | 0.5353 | 0.75 | 0.7360 | 0.75 | 0.6895 |
| No log | 8.0 | 32 | 0.5299 | 0.7375 | 0.7090 | 0.7375 | 0.6668 |
| No log | 9.0 | 36 | 0.5335 | 0.7667 | 0.7509 | 0.7667 | 0.7310 |
| No log | 10.0 | 40 | 0.5344 | 0.7417 | 0.7315 | 0.7417 | 0.6644 |
| No log | 11.0 | 44 | 0.5297 | 0.7458 | 0.7279 | 0.7458 | 0.6821 |
| No log | 12.0 | 48 | 0.5202 | 0.75 | 0.7360 | 0.75 | 0.6895 |
| 0.5942 | 13.0 | 52 | 0.5325 | 0.7542 | 0.7411 | 0.7542 | 0.7452 |
| 0.5942 | 14.0 | 56 | 0.5139 | 0.7583 | 0.7505 | 0.7583 | 0.7039 |
| 0.5942 | 15.0 | 60 | 0.5528 | 0.7417 | 0.7347 | 0.7417 | 0.7377 |
| 0.5942 | 16.0 | 64 | 0.5070 | 0.7625 | 0.7437 | 0.7625 | 0.7277 |
| 0.5942 | 17.0 | 68 | 0.5193 | 0.775 | 0.7594 | 0.775 | 0.7592 |
| 0.5942 | 18.0 | 72 | 0.5090 | 0.7583 | 0.7448 | 0.7583 | 0.7487 |
| 0.5942 | 19.0 | 76 | 0.5189 | 0.7792 | 0.7847 | 0.7792 | 0.7816 |
| 0.5942 | 20.0 | 80 | 0.5214 | 0.775 | 0.7795 | 0.775 | 0.7770 |
| 0.5942 | 21.0 | 84 | 0.5188 | 0.775 | 0.7710 | 0.775 | 0.7728 |
| 0.5942 | 22.0 | 88 | 0.5029 | 0.7667 | 0.7526 | 0.7667 | 0.7557 |
| 0.5942 | 23.0 | 92 | 0.5061 | 0.7833 | 0.7734 | 0.7833 | 0.7761 |
| 0.5942 | 24.0 | 96 | 0.5350 | 0.7667 | 0.7713 | 0.7667 | 0.7687 |
| 0.4829 | 25.0 | 100 | 0.5149 | 0.7542 | 0.7330 | 0.7542 | 0.7337 |
| 0.4829 | 26.0 | 104 | 0.5283 | 0.7583 | 0.7737 | 0.7583 | 0.7641 |
| 0.4829 | 27.0 | 108 | 0.5109 | 0.7792 | 0.7647 | 0.7792 | 0.7646 |
| 0.4829 | 28.0 | 112 | 0.5258 | 0.775 | 0.7729 | 0.775 | 0.7739 |
| 0.4829 | 29.0 | 116 | 0.5207 | 0.7625 | 0.745 | 0.7625 | 0.7468 |
| 0.4829 | 30.0 | 120 | 0.5306 | 0.75 | 0.7357 | 0.75 | 0.7400 |
| 0.4829 | 31.0 | 124 | 0.5455 | 0.75 | 0.7375 | 0.75 | 0.7417 |
| 0.4829 | 32.0 | 128 | 0.5653 | 0.7458 | 0.7380 | 0.7458 | 0.7412 |
| 0.4829 | 33.0 | 132 | 0.5565 | 0.7417 | 0.7212 | 0.7417 | 0.7256 |
| 0.4829 | 34.0 | 136 | 0.5468 | 0.7708 | 0.7658 | 0.7708 | 0.7679 |
| 0.4829 | 35.0 | 140 | 0.5268 | 0.7833 | 0.7723 | 0.7833 | 0.7747 |
| 0.4829 | 36.0 | 144 | 0.5260 | 0.775 | 0.7710 | 0.775 | 0.7728 |
| 0.4829 | 37.0 | 148 | 0.5281 | 0.775 | 0.7659 | 0.775 | 0.7689 |
| 0.3846 | 38.0 | 152 | 0.5385 | 0.7708 | 0.7742 | 0.7708 | 0.7724 |
| 0.3846 | 39.0 | 156 | 0.5253 | 0.7708 | 0.7623 | 0.7708 | 0.7653 |
| 0.3846 | 40.0 | 160 | 0.5319 | 0.7708 | 0.7719 | 0.7708 | 0.7714 |
| 0.3846 | 41.0 | 164 | 0.5311 | 0.775 | 0.7631 | 0.775 | 0.7660 |
| 0.3846 | 42.0 | 168 | 0.5325 | 0.7792 | 0.7683 | 0.7792 | 0.7711 |
| 0.3846 | 43.0 | 172 | 0.5254 | 0.7667 | 0.7606 | 0.7667 | 0.7631 |
| 0.3846 | 44.0 | 176 | 0.5232 | 0.7708 | 0.7623 | 0.7708 | 0.7653 |
| 0.3846 | 45.0 | 180 | 0.5291 | 0.7708 | 0.7640 | 0.7708 | 0.7667 |
| 0.3846 | 46.0 | 184 | 0.5356 | 0.7708 | 0.7607 | 0.7708 | 0.7639 |
| 0.3846 | 47.0 | 188 | 0.5400 | 0.7708 | 0.7607 | 0.7708 | 0.7639 |
| 0.3846 | 48.0 | 192 | 0.5409 | 0.7667 | 0.7540 | 0.7667 | 0.7573 |
| 0.3846 | 49.0 | 196 | 0.5403 | 0.7667 | 0.7540 | 0.7667 | 0.7573 |
| 0.3353 | 50.0 | 200 | 0.5397 | 0.7708 | 0.7592 | 0.7708 | 0.7624 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"normal",
"abnormal"
] |
grelade/mmx-resnet-18
|
# ResNet
ResNet model trained on imagenet-1k. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) and first released in [this repository](https://github.com/KaimingHe/deep-residual-networks).
Disclaimer: The team releasing ResNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
ResNet introduced residual connections, which allow training networks with a previously unseen number of layers (up to 1,000). ResNet won the 2015 ILSVRC & COCO competitions, an important milestone in deep computer vision.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, ResNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-18")
>>> model = ResNetForImageClassification.from_pretrained("microsoft/resnet-18")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tiger cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/resnet).
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
dennisjooo/Birds-Classifier-EfficientNetB2
|
# Bird Classifier EfficientNet-B2
## Model Description
Have you ever looked at a bird and said, "Boahh, if only I knew what bird that is"?
Unless you're an avid bird spotter (or just love birds in general), it's hard to tell some bird species apart.
Well, you're in luck: it turns out you can use an image classifier to identify bird species!
This model is a fine-tuned version of [google/efficientnet-b2](https://huggingface.co/google/efficientnet-b2)
on the [gpiosenka/100-bird-species](https://www.kaggle.com/datasets/gpiosenka/100-bird-species) dataset available on Kaggle.
The dataset used to train the model was downloaded on September 24th, 2023.
The original model itself was trained on ImageNet-1K, so it might still have some useful features for identifying creatures like birds.
In theory, the accuracy of a random guess on this dataset is 0.0019047619 (i.e., 1/525).
The model performed well on all three splits, with the following accuracies:
- **Training**: 0.999480
- **Validation**: 0.985904
- **Test**: 0.991238
## Intended Uses
You can use the raw model for image classification.
Here is an example of the model in action on a picture of a bird:
```python
# Importing the libraries needed
import torch
import urllib.request
from PIL import Image
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification
# Determining the file URL
url = 'some url'
# Opening the image using PIL
img = Image.open(urllib.request.urlretrieve(url)[0])
# Loading the model and preprocessor from HuggingFace
preprocessor = EfficientNetImageProcessor.from_pretrained("dennisjooo/Birds-Classifier-EfficientNetB2")
model = EfficientNetForImageClassification.from_pretrained("dennisjooo/Birds-Classifier-EfficientNetB2")
# Preprocessing the input
inputs = preprocessor(img, return_tensors="pt")
# Running the inference
with torch.no_grad():
logits = model(**inputs).logits
# Getting the predicted label
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
Alternatively, you can streamline this with Hugging Face's `pipeline` API:
```python
# Importing the libraries needed
import urllib.request
from PIL import Image
from transformers import pipeline
# Determining the file URL
url = 'some url'
# Opening the image using PIL
img = Image.open(urllib.request.urlretrieve(url)[0])
# Loading the model and preprocessor using Pipeline
pipe = pipeline("image-classification", model="dennisjooo/Birds-Classifier-EfficientNetB2")
# Running the inference
result = pipe(img)[0]
# Printing the result label
print(result['label'])
```
## Training and Evaluation
### Data
The dataset was taken from [gpiosenka/100-bird-species](https://www.kaggle.com/datasets/gpiosenka/100-bird-species) on Kaggle.
It contains 525 bird species, with 84,635 training images and 2,625 images each in the validation and test sets.
Every image in the dataset is a 224 by 224 RGB image.
The training process used the same split provided by the author.
For more details, please refer to the [author's Kaggle page](https://www.kaggle.com/datasets/gpiosenka/100-bird-species).
### Training Procedure
The training was done in PyTorch on Kaggle's free P100 GPU, using the Lightning and TorchMetrics libraries.
### Preprocessing
Each image is preprocessed according to the original model's [config](https://huggingface.co/google/efficientnet-b2/blob/main/preprocessor_config.json).
The training set was also augmented, as sketched below, using:
- Random rotation of up to 10 degrees, applied with 50% probability
- Random horizontal flipping with 50% probability
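A minimal sketch of these augmentations, assuming standard torchvision transforms (the exact pipeline used in training is not published in this card):
```python
from torchvision import transforms

# Hypothetical reconstruction of the augmentations described above
train_augmentations = transforms.Compose([
    # Rotate by up to 10 degrees, applied to roughly half of the images
    transforms.RandomApply([transforms.RandomRotation(degrees=10)], p=0.5),
    # Mirror the image horizontally with 50% probability
    transforms.RandomHorizontalFlip(p=0.5),
])
```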
### Training Hyperparameters
The following are the hyperparameters used for training (a Lightning sketch of the optimizer setup follows the list):
- **Training regime:** fp32
- **Loss:** Cross entropy
- **Optimizer:** Adam with default betas (0.9, 0.999)
- **Learning rate:** 1e-3
- **Learning rate scheduler:** Reduce on plateau, monitoring validation loss with a patience of 2 and a decay factor of 0.1
- **Batch size:** 64
- **Early stopping:** Monitors validation accuracy with a patience of 10
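A hedged sketch of how these settings could be wired up in Lightning; the metric names `val_loss` and `val_acc` are assumptions, since the training script is not published:
```python
import torch
import pytorch_lightning as pl

class BirdClassifier(pl.LightningModule):
    # (model definition and training/validation steps omitted)

    def configure_optimizers(self):
        # Adam with default betas (0.9, 0.999) and a 1e-3 learning rate
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        # Decay the LR by 10x when validation loss plateaus for 2 epochs
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, mode="min", factor=0.1, patience=2
        )
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
        }

# Stop training when validation accuracy stops improving for 10 epochs
early_stopping = pl.callbacks.EarlyStopping(monitor="val_acc", mode="max", patience=10)
```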
### Results
The plot below shows the result of the training process on both the training and validation sets:

|
[
"abbotts babbler",
"abbotts booby",
"abyssinian ground hornbill",
"african crowned crane",
"african emerald cuckoo",
"african firefinch",
"african oyster catcher",
"african pied hornbill",
"african pygmy goose",
"albatross",
"alberts towhee",
"alexandrine parakeet",
"alpine chough",
"altamira yellowthroat",
"american avocet",
"american bittern",
"american coot",
"american dipper",
"american flamingo",
"american goldfinch",
"american kestrel",
"american pipit",
"american redstart",
"american robin",
"american wigeon",
"amethyst woodstar",
"andean goose",
"andean lapwing",
"andean siskin",
"anhinga",
"anianiau",
"annas hummingbird",
"antbird",
"antillean euphonia",
"apapane",
"apostlebird",
"araripe manakin",
"ashy storm petrel",
"ashy thrushbird",
"asian crested ibis",
"asian dollard bird",
"asian green bee eater",
"asian openbill stork",
"auckland shaq",
"austral canastero",
"australasian figbird",
"avadavat",
"azaras spinetail",
"azure breasted pitta",
"azure jay",
"azure tanager",
"azure tit",
"baikal teal",
"bald eagle",
"bald ibis",
"bali starling",
"baltimore oriole",
"bananaquit",
"band tailed guan",
"banded broadbill",
"banded pita",
"banded stilt",
"bar-tailed godwit",
"barn owl",
"barn swallow",
"barred puffbird",
"barrows goldeneye",
"bay-breasted warbler",
"bearded barbet",
"bearded bellbird",
"bearded reedling",
"belted kingfisher",
"bird of paradise",
"black and yellow broadbill",
"black baza",
"black breasted puffbird",
"black cockato",
"black faced spoonbill",
"black francolin",
"black headed caique",
"black necked stilt",
"black skimmer",
"black swan",
"black tail crake",
"black throated bushtit",
"black throated huet",
"black throated warbler",
"black vented shearwater",
"black vulture",
"black-capped chickadee",
"black-necked grebe",
"black-throated sparrow",
"blackburniam warbler",
"blonde crested woodpecker",
"blood pheasant",
"blue coau",
"blue dacnis",
"blue gray gnatcatcher",
"blue grosbeak",
"blue grouse",
"blue heron",
"blue malkoha",
"blue throated piping guan",
"blue throated toucanet",
"bobolink",
"bornean bristlehead",
"bornean leafbird",
"bornean pheasant",
"brandt cormarant",
"brewers blackbird",
"brown crepper",
"brown headed cowbird",
"brown noody",
"brown thrasher",
"bufflehead",
"bulwers pheasant",
"burchells courser",
"bush turkey",
"caatinga cacholote",
"cabots tragopan",
"cactus wren",
"california condor",
"california gull",
"california quail",
"campo flicker",
"canary",
"canvasback",
"cape glossy starling",
"cape longclaw",
"cape may warbler",
"cape rock thrush",
"capped heron",
"capuchinbird",
"carmine bee-eater",
"caspian tern",
"cassowary",
"cedar waxwing",
"cerulean warbler",
"chara de collar",
"chattering lory",
"chestnet bellied euphonia",
"chestnut winged cuckoo",
"chinese bamboo partridge",
"chinese pond heron",
"chipping sparrow",
"chucao tapaculo",
"chukar partridge",
"cinnamon attila",
"cinnamon flycatcher",
"cinnamon teal",
"clarks grebe",
"clarks nutcracker",
"cock of the rock",
"cockatoo",
"collared aracari",
"collared crescentchest",
"common firecrest",
"common grackle",
"common house martin",
"common iora",
"common loon",
"common poorwill",
"common starling",
"coppersmith barbet",
"coppery tailed coucal",
"crab plover",
"crane hawk",
"cream colored woodpecker",
"crested auklet",
"crested caracara",
"crested coua",
"crested fireback",
"crested kingfisher",
"crested nuthatch",
"crested oropendola",
"crested serpent eagle",
"crested shriketit",
"crested wood partridge",
"crimson chat",
"crimson sunbird",
"crow",
"cuban tody",
"cuban trogon",
"curl crested aracuri",
"d-arnauds barbet",
"dalmatian pelican",
"darjeeling woodpecker",
"dark eyed junco",
"daurian redstart",
"demoiselle crane",
"double barred finch",
"double brested cormarant",
"double eyed fig parrot",
"downy woodpecker",
"dunlin",
"dusky lory",
"dusky robin",
"eared pita",
"eastern bluebird",
"eastern bluebonnet",
"eastern golden weaver",
"eastern meadowlark",
"eastern rosella",
"eastern towee",
"eastern wip poor will",
"eastern yellow robin",
"ecuadorian hillstar",
"egyptian goose",
"elegant trogon",
"elliots pheasant",
"emerald tanager",
"emperor penguin",
"emu",
"enggano myna",
"eurasian bullfinch",
"eurasian golden oriole",
"eurasian magpie",
"european goldfinch",
"european turtle dove",
"evening grosbeak",
"fairy bluebird",
"fairy penguin",
"fairy tern",
"fan tailed widow",
"fasciated wren",
"fiery minivet",
"fiordland penguin",
"fire tailled myzornis",
"flame bowerbird",
"flame tanager",
"forest wagtail",
"frigate",
"frill back pigeon",
"gambels quail",
"gang gang cockatoo",
"gila woodpecker",
"gilded flicker",
"glossy ibis",
"go away bird",
"gold wing warbler",
"golden bower bird",
"golden cheeked warbler",
"golden chlorophonia",
"golden eagle",
"golden parakeet",
"golden pheasant",
"golden pipit",
"gouldian finch",
"grandala",
"gray catbird",
"gray kingbird",
"gray partridge",
"great argus",
"great gray owl",
"great jacamar",
"great kiskadee",
"great potoo",
"great tinamou",
"great xenops",
"greater pewee",
"greater prairie chicken",
"greator sage grouse",
"green broadbill",
"green jay",
"green magpie",
"green winged dove",
"grey cuckooshrike",
"grey headed chachalaca",
"grey headed fish eagle",
"grey plover",
"groved billed ani",
"guinea turaco",
"guineafowl",
"gurneys pitta",
"gyrfalcon",
"hamerkop",
"harlequin duck",
"harlequin quail",
"harpy eagle",
"hawaiian goose",
"hawfinch",
"helmet vanga",
"hepatic tanager",
"himalayan bluetail",
"himalayan monal",
"hoatzin",
"hooded merganser",
"hoopoes",
"horned guan",
"horned lark",
"horned sungem",
"house finch",
"house sparrow",
"hyacinth macaw",
"iberian magpie",
"ibisbill",
"imperial shaq",
"inca tern",
"indian bustard",
"indian pitta",
"indian roller",
"indian vulture",
"indigo bunting",
"indigo flycatcher",
"inland dotterel",
"ivory billed aracari",
"ivory gull",
"iwi",
"jabiru",
"jack snipe",
"jacobin pigeon",
"jandaya parakeet",
"japanese robin",
"java sparrow",
"jocotoco antpitta",
"kagu",
"kakapo",
"killdear",
"king eider",
"king vulture",
"kiwi",
"knob billed duck",
"kookaburra",
"lark bunting",
"laughing gull",
"lazuli bunting",
"lesser adjutant",
"lilac roller",
"limpkin",
"little auk",
"loggerhead shrike",
"long-eared owl",
"looney birds",
"lucifer hummingbird",
"magpie goose",
"malabar hornbill",
"malachite kingfisher",
"malagasy white eye",
"maleo",
"mallard duck",
"mandrin duck",
"mangrove cuckoo",
"marabou stork",
"masked bobwhite",
"masked booby",
"masked lapwing",
"mckays bunting",
"merlin",
"mikado pheasant",
"military macaw",
"mourning dove",
"myna",
"nicobar pigeon",
"noisy friarbird",
"northern beardless tyrannulet",
"northern cardinal",
"northern flicker",
"northern fulmar",
"northern gannet",
"northern goshawk",
"northern jacana",
"northern mockingbird",
"northern parula",
"northern red bishop",
"northern shoveler",
"ocellated turkey",
"oilbird",
"okinawa rail",
"orange breasted trogon",
"orange brested bunting",
"oriental bay owl",
"ornate hawk eagle",
"osprey",
"ostrich",
"ovenbird",
"oyster catcher",
"painted bunting",
"palila",
"palm nut vulture",
"paradise tanager",
"parakett auklet",
"parus major",
"patagonian sierra finch",
"peacock",
"peregrine falcon",
"phainopepla",
"philippine eagle",
"pink robin",
"plush crested jay",
"pomarine jaeger",
"puffin",
"puna teal",
"purple finch",
"purple gallinule",
"purple martin",
"purple swamphen",
"pygmy kingfisher",
"pyrrhuloxia",
"quetzal",
"rainbow lorikeet",
"razorbill",
"red bearded bee eater",
"red bellied pitta",
"red billed tropicbird",
"red browed finch",
"red crossbill",
"red faced cormorant",
"red faced warbler",
"red fody",
"red headed duck",
"red headed woodpecker",
"red knot",
"red legged honeycreeper",
"red naped trogon",
"red shouldered hawk",
"red tailed hawk",
"red tailed thrush",
"red winged blackbird",
"red wiskered bulbul",
"regent bowerbird",
"ring-necked pheasant",
"roadrunner",
"rock dove",
"rose breasted cockatoo",
"rose breasted grosbeak",
"roseate spoonbill",
"rosy faced lovebird",
"rough leg buzzard",
"royal flycatcher",
"ruby crowned kinglet",
"ruby throated hummingbird",
"ruddy shelduck",
"rudy kingfisher",
"rufous kingfisher",
"rufous trepe",
"rufuos motmot",
"samatran thrush",
"sand martin",
"sandhill crane",
"satyr tragopan",
"says phoebe",
"scarlet crowned fruit dove",
"scarlet faced liocichla",
"scarlet ibis",
"scarlet macaw",
"scarlet tanager",
"shoebill",
"short billed dowitcher",
"smiths longspur",
"snow goose",
"snow partridge",
"snowy egret",
"snowy owl",
"snowy plover",
"snowy sheathbill",
"sora",
"spangled cotinga",
"splendid wren",
"spoon biled sandpiper",
"spotted catbird",
"spotted whistling duck",
"squacco heron",
"sri lanka blue magpie",
"steamer duck",
"stork billed kingfisher",
"striated caracara",
"striped owl",
"stripped manakin",
"stripped swallow",
"sunbittern",
"superb starling",
"surf scoter",
"swinhoes pheasant",
"tailorbird",
"taiwan magpie",
"takahe",
"tasmanian hen",
"tawny frogmouth",
"teal duck",
"tit mouse",
"touchan",
"townsends warbler",
"tree swallow",
"tricolored blackbird",
"tropical kingbird",
"trumpter swan",
"turkey vulture",
"turquoise motmot",
"umbrella bird",
"varied thrush",
"veery",
"venezuelian troupial",
"verdin",
"vermilion flycather",
"victoria crowned pigeon",
"violet backed starling",
"violet cuckoo",
"violet green swallow",
"violet turaco",
"visayan hornbill",
"vulturine guineafowl",
"wall creaper",
"wattled curassow",
"wattled lapwing",
"whimbrel",
"white breasted waterhen",
"white browed crake",
"white cheeked turaco",
"white crested hornbill",
"white eared hummingbird",
"white necked raven",
"white tailed tropic",
"white throated bee eater",
"wild turkey",
"willow ptarmigan",
"wilsons bird of paradise",
"wood duck",
"wood thrush",
"woodland kingfisher",
"wrentit",
"yellow bellied flowerpecker",
"yellow breasted chat",
"yellow cacique",
"yellow headed blackbird",
"zebra dove"
] |
platzi/platzi-vit-model-eloi-campeny
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-eloi-campeny
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0479
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.2
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
ferno22/vit-beans-finetuned
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-finetuned-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1157
- Accuracy: 0.9712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.193 | 1.0 | 117 | 0.1099 | 0.9808 |
| 0.0462 | 2.0 | 234 | 0.0857 | 0.9808 |
| 0.0171 | 3.0 | 351 | 0.1237 | 0.9712 |
| 0.0123 | 4.0 | 468 | 0.1088 | 0.9712 |
| 0.0095 | 5.0 | 585 | 0.1135 | 0.9712 |
| 0.0081 | 6.0 | 702 | 0.1162 | 0.9712 |
| 0.0073 | 7.0 | 819 | 0.1158 | 0.9712 |
| 0.0066 | 8.0 | 936 | 0.1152 | 0.9712 |
| 0.0061 | 9.0 | 1053 | 0.1160 | 0.9712 |
| 0.0061 | 10.0 | 1170 | 0.1157 | 0.9712 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
mbehbooei/vit-base-patch16-224-in21k-finetuned-smoking
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-smoking
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3136
- Accuracy: 0.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
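As a rough sketch, these settings map onto Hugging Face `TrainingArguments` as follows; the output directory is an assumption, and options not listed above are left at their defaults:
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; note that
# total_train_batch_size = 32 (per device) * 4 (accumulation steps) = 128
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-finetuned-smoking",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```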
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6723 | 0.94 | 12 | 0.5164 | 0.93 |
| 0.5034 | 1.96 | 25 | 0.3136 | 0.95 |
| 0.3964 | 2.82 | 36 | 0.2732 | 0.95 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"notsmoking",
"smoking"
] |
Arya-Bastani23/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4211
- Accuracy: 0.7944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8112 | 0.94 | 12 | 0.6080 | 0.75 |
| 0.6849 | 1.96 | 25 | 0.5325 | 0.7889 |
| 0.6835 | 2.98 | 38 | 0.5046 | 0.7778 |
| 0.6253 | 4.0 | 51 | 0.4427 | 0.8056 |
| 0.6203 | 4.94 | 63 | 0.4305 | 0.8222 |
| 0.559 | 5.96 | 76 | 0.4347 | 0.7833 |
| 0.5664 | 6.59 | 84 | 0.4211 | 0.7944 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"class_0",
"class_1",
"class_2",
"class_3",
"class_4"
] |
mbehbooei/vit-base-patch16-224-in21k-finetuned-middle
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-middle
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6069
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6585 | 0.6667 |
| No log | 2.0 | 4 | 0.6069 | 0.75 |
| No log | 3.0 | 6 | 0.5801 | 0.75 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"middle finger",
"safe"
] |
fmagot01/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1329 | 1.54 | 100 | 0.0408 | 0.9925 |
| 0.0169 | 3.08 | 200 | 0.0622 | 0.9850 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
aviandito/vit-dunham-carbonate-classifier
|
# vit-dunham-carbonate-classifier
## Model description
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [Lokier & Al Junaibi (2016)](https://onlinelibrary.wiley.com/doi/10.1111/sed.12293) data S1.
The model captures the expertise of 177 volunteers from 33 countries, with a combined 3,270 years of academic and industry experience, who classified 14 carbonate thin-section samples using the classical [Dunham (1962)](https://en.wikipedia.org/wiki/Dunham_classification) carbonate classification.

([Source](https://commons.wikimedia.org/wiki/File:Dunham_classification_EN.svg))
In the original paper, the authors set out to objectively analyze whether these volunteers applied the Dunham classification to the same standards.
## Intended uses & limitations
- Input: a carbonate thin section image, either parallel-polarized (PPL) or cross-polarized (XPL)
- Output: a Dunham classification (Mudstone/Wackestone/Packstone/Grainstone/Boundstone/Crystalline) and its probability
- Limitation: the original dataset contains no Boundstone samples, hence the model cannot classify a Boundstone.
Sample image source: [Grainstone - Wikipedia](https://en.wikipedia.org/wiki/Grainstone)

## Training and evaluation data
Source: [Lokier & Al Junaibi (2016), Data S1](https://onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1111%2Fsed.12293&file=sed12293-sup-0001-SupInfo.zip)
The data consist of 14 samples. Each sample has 3 magnifications (×2, ×4, and ×10) and is imaged in both PPL and XPL. Hence, there are 14 samples × 3 magnifications × 2 polarizations = 84 images in the training dataset.
The classification for each sample is the most common respondent response in Table 7.
- Sample 1: Packstone
- Sample 2: Grainstone
- Sample 3: Wackestone
- Sample 4: Packstone
- Sample 5: Wackestone
- Sample 6: Packstone
- Sample 7: Packstone
- Sample 8: Mudstone
- Sample 9: Crystalline
- Sample 10: Grainstone
- Sample 11: Wackestone
- Sample 12: Grainstone
- Sample 13: Grainstone
- Sample 14: Mudstone
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5764 | 1.0 | 5 | 1.5329 | 0.4444 |
| 1.3991 | 2.0 | 10 | 1.4253 | 0.5556 |
| 1.2792 | 3.0 | 15 | 1.2851 | 0.7778 |
| 1.0119 | 4.0 | 20 | 1.1625 | 0.8889 |
| 0.9916 | 5.0 | 25 | 1.0471 | 0.8889 |
| 0.9202 | 6.0 | 30 | 0.9836 | 0.7778 |
| 0.6994 | 7.0 | 35 | 0.8649 | 0.8889 |
| 0.526 | 8.0 | 40 | 0.7110 | 1.0 |
| 0.5383 | 9.0 | 45 | 0.6127 | 1.0 |
| 0.5128 | 10.0 | 50 | 0.5337 | 1.0 |
| 0.4312 | 11.0 | 55 | 0.4887 | 1.0 |
| 0.3827 | 12.0 | 60 | 0.4365 | 1.0 |
| 0.3452 | 13.0 | 65 | 0.3891 | 1.0 |
| 0.3164 | 14.0 | 70 | 0.3677 | 1.0 |
| 0.2899 | 15.0 | 75 | 0.3555 | 1.0 |
| 0.2878 | 16.0 | 80 | 0.3197 | 1.0 |
| 0.2884 | 17.0 | 85 | 0.3056 | 1.0 |
| 0.2633 | 18.0 | 90 | 0.3107 | 1.0 |
| 0.2669 | 19.0 | 95 | 0.3164 | 1.0 |
| 0.2465 | 20.0 | 100 | 0.2949 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"crystalline",
"grainstone",
"mudstone",
"packstone",
"wackestone"
] |
lincyaw/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5173
- Accuracy: 0.8822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 224
- eval_batch_size: 224
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 896
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0945 | 1.0 | 10 | 1.3185 | 0.5569 |
| 1.1055 | 2.0 | 20 | 0.6962 | 0.8379 |
| 0.6974 | 3.0 | 30 | 0.5173 | 0.8822 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"battery",
"biological",
"brown-glass",
"cardboard",
"clothes",
"green-glass",
"metal",
"paper",
"plastic",
"shoes",
"trash",
"white-glass"
] |
tvganesh/identify_stroke
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# identify_stroke
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1127
- Accuracy: 1.0
## Model description
The model identifies the cricket stroke being played: front drive, hook shot, or sweep shot.
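A minimal inference sketch, assuming a local image of a batsman mid-stroke (the file name is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Loading the processor and model from the Hub
processor = AutoImageProcessor.from_pretrained("tvganesh/identify_stroke")
model = AutoModelForImageClassification.from_pretrained("tvganesh/identify_stroke")

# Placeholder path to a frame showing the stroke
image = Image.open("stroke.jpg").convert("RGB")

# Preprocessing and inference
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Mapping the highest logit to front drive, hook shot, or sweep shot
print(model.config.id2label[logits.argmax(-1).item()])
```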
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.4345 | 1.0 |
| No log | 2.0 | 8 | 0.3883 | 1.0 |
| 0.3612 | 3.0 | 12 | 0.4099 | 0.8889 |
| 0.3612 | 4.0 | 16 | 0.2452 | 1.0 |
| 0.2934 | 5.0 | 20 | 0.1969 | 1.0 |
| 0.2934 | 6.0 | 24 | 0.1679 | 1.0 |
| 0.2934 | 7.0 | 28 | 0.1403 | 1.0 |
| 0.203 | 8.0 | 32 | 0.1530 | 1.0 |
| 0.203 | 9.0 | 36 | 0.1161 | 1.0 |
| 0.1505 | 10.0 | 40 | 0.1292 | 1.0 |
| 0.1505 | 11.0 | 44 | 0.1031 | 1.0 |
| 0.1505 | 12.0 | 48 | 0.1084 | 1.0 |
| 0.1388 | 13.0 | 52 | 0.1078 | 1.0 |
| 0.1388 | 14.0 | 56 | 0.0937 | 1.0 |
| 0.1076 | 15.0 | 60 | 0.1008 | 1.0 |
| 0.1076 | 16.0 | 64 | 0.1131 | 1.0 |
| 0.1076 | 17.0 | 68 | 0.1007 | 1.0 |
| 0.1047 | 18.0 | 72 | 0.1775 | 0.8889 |
| 0.1047 | 19.0 | 76 | 0.0844 | 1.0 |
| 0.0902 | 20.0 | 80 | 0.1127 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"front drive",
"hook shot",
"sweep shot"
] |
gcperk20/swin-tiny-patch4-window7-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-piid
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5715
- Accuracy: 0.7854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2088 | 0.98 | 20 | 1.1661 | 0.4521 |
| 0.7545 | 2.0 | 41 | 0.8866 | 0.6073 |
| 0.6281 | 2.98 | 61 | 0.7788 | 0.6849 |
| 0.5939 | 4.0 | 82 | 0.6443 | 0.7397 |
| 0.5254 | 4.98 | 102 | 0.5097 | 0.7808 |
| 0.5583 | 6.0 | 123 | 0.5715 | 0.7854 |
| 0.3463 | 6.98 | 143 | 0.6163 | 0.7352 |
| 0.3878 | 8.0 | 164 | 0.5671 | 0.7671 |
| 0.3653 | 8.98 | 184 | 0.5690 | 0.7580 |
| 0.3529 | 10.0 | 205 | 0.5940 | 0.7580 |
| 0.301 | 10.98 | 225 | 0.6303 | 0.7626 |
| 0.2639 | 12.0 | 246 | 0.5725 | 0.7763 |
| 0.2847 | 12.98 | 266 | 0.6280 | 0.7717 |
| 0.25 | 14.0 | 287 | 0.5975 | 0.7717 |
| 0.2472 | 14.98 | 307 | 0.5821 | 0.7671 |
| 0.1676 | 16.0 | 328 | 0.6456 | 0.7626 |
| 0.1327 | 16.98 | 348 | 0.6117 | 0.7671 |
| 0.1977 | 18.0 | 369 | 0.6988 | 0.7489 |
| 0.1602 | 18.98 | 389 | 0.6448 | 0.7671 |
| 0.1785 | 19.51 | 400 | 0.6333 | 0.7717 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"1",
"2",
"3",
"4"
] |
LucyintheSky/pose-estimation-crop-uncrop
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Crop vs Full Body
## Model description
This model predicts whether the person in the image is **cropped** or **full body**. It is trained on [Lucy in the Sky](https://www.lucyinthesky.com/shop) images.
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
## Training and evaluation data
It achieves the following results on the evaluation set:
- Loss: 0.1513
- Accuracy: 0.9649
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"crop",
"uncrop"
] |
erikD12/ErikDL
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ErikDL
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0467
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1333 | 3.85 | 500 | 0.0467 | 0.9925 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
100rab25/swin-tiny-patch4-window7-224-spa_saloon_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-spa_saloon_classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Accuracy: 0.9798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.337 | 1.0 | 205 | 0.2108 | 0.9175 |
| 0.196 | 2.0 | 411 | 0.1137 | 0.9620 |
| 0.1502 | 3.0 | 616 | 0.1030 | 0.9668 |
| 0.1476 | 4.0 | 822 | 0.0815 | 0.9736 |
| 0.1532 | 5.0 | 1027 | 0.0815 | 0.9760 |
| 0.1311 | 6.0 | 1233 | 0.0667 | 0.9805 |
| 0.1212 | 7.0 | 1438 | 0.0675 | 0.9805 |
| 0.1637 | 8.0 | 1644 | 0.0697 | 0.9798 |
| 0.116 | 9.0 | 1849 | 0.0638 | 0.9812 |
| 0.085 | 9.98 | 2050 | 0.0639 | 0.9798 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"ambience",
"hair_style",
"manicure",
"massage_room",
"others",
"pedicure"
] |
TirathP/fine-tuned
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the custom-huggingface dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3529
- Accuracy: 0.0596
- F1: 0.0075
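For reference, a hedged sketch of a `compute_metrics` function that would produce accuracy and F1 figures like those above (the weighted averaging is an assumption; the card does not state how F1 was aggregated):
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the HF Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        # Averaging method is an assumption; the card does not specify it
        "f1": f1_score(labels, predictions, average="weighted"),
    }
```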
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3773 | 2.54 | 1000 | 7.3529 | 0.0596 | 0.0075 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"calling",
"clapping",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music"
] |
yhyan/resnet-50-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5331
- Accuracy: 0.852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6163 | 1.0 | 351 | 1.3104 | 0.665 |
| 1.0927 | 2.0 | 703 | 0.6382 | 0.8286 |
| 1.0099 | 2.99 | 1053 | 0.5331 | 0.852 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
yaojiapeng/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0861
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3095 | 1.0 | 130 | 0.2102 | 0.9774 |
| 0.2114 | 2.0 | 260 | 0.1360 | 0.9624 |
| 0.1861 | 3.0 | 390 | 0.1154 | 0.9699 |
| 0.0827 | 4.0 | 520 | 0.1022 | 0.9774 |
| 0.1281 | 5.0 | 650 | 0.0861 | 0.9850 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
Abhiram4/vit-base-patch16-224-abhi1-finetuned
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-abhi1-finetuned
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1858
- Accuracy: 0.1663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.9292 | 0.99 | 17 | 4.6892 | 0.0380 |
| 4.5033 | 1.97 | 34 | 4.3391 | 0.1191 |
| 4.1992 | 2.96 | 51 | 4.1858 | 0.1663 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
[
"abigail_williams_(fate)",
"aegis_(persona)",
"aisaka_taiga",
"albedo",
"anastasia_(idolmaster)",
"aqua_(konosuba)",
"arcueid_brunestud",
"asia_argento",
"astolfo_(fate)",
"asuna_(sao)",
"atago_(azur_lane)",
"ayanami_rei",
"belfast_(azur_lane)",
"bremerton_(azur_lane)",
"c.c",
"chitanda_eru",
"chloe_von_einzbern",
"cleveland_(azur_lane)",
"d.va_(overwatch)",
"dido_(azur_lane)",
"emilia_rezero",
"enterprise_(azur_lane)",
"formidable_(azur_lane)",
"fubuki_(one-punch_man)",
"fujibayashi_kyou",
"fujiwara_chika",
"furukawa_nagisa",
"gawr_gura",
"gilgamesh",
"giorno_giovanna",
"hanekawa_tsubasa",
"hatsune_miku",
"hayasaka_ai",
"hirasawa_yui",
"hyuuga_hinata",
"ichigo_(darling_in_the_franxx)",
"illyasviel_von_einzbern",
"irisviel_von_einzbern",
"ishtar_(fate_grand_order)",
"isshiki_iroha",
"jonathan_joestar",
"kamado_nezuko",
"kaname_madoka",
"kanbaru_suruga",
"karin_(blue_archive)",
"karna_(fate)",
"katsuragi_misato",
"keqing_(genshin_impact)",
"kirito",
"kiryu_coco",
"kizuna_ai",
"kochou_shinobu",
"komi_shouko",
"laffey_(azur_lane)",
"lancer",
"makise_kurisu",
"mash_kyrielight",
"matou_sakura",
"megumin",
"mei_(pokemon)",
"meltlilith",
"minato_aqua",
"misaka_mikoto",
"miyazono_kawori",
"mori_calliope",
"nagato_yuki",
"nakano_azusa",
"nakano_itsuki",
"nakano_miku",
"nakano_nino",
"nakano_yotsuba",
"nami_(one_piece)",
"nekomata_okayu",
"nico_robin",
"ninomae_ina'nis",
"nishikino_maki",
"okita_souji_(fate)",
"ookami_mio",
"oshino_ougi",
"oshino_shinobu",
"ouro_kronii",
"paimon_(genshin_impact)",
"platelet_(hataraku_saibou)",
"ram_rezero",
"raphtalia",
"rem_rezero",
"rias_gremory",
"rider",
"ryougi_shiki",
"sakura_futaba",
"sakurajima_mai",
"sakurauchi_riko",
"satonaka_chie",
"semiramis_(fate)",
"sengoku_nadeko",
"senjougahara_hitagi",
"shidare_hotaru",
"shinomiya_kaguya",
"shirakami_fubuki",
"shirogane_naoto",
"shirogane_noel",
"shishiro_botan",
"shuten_douji_(fate)",
"sinon",
"souryuu_asuka_langley",
"st_ar-15_(girls_frontline)",
"super_sonico",
"suzuhara_lulu",
"suzumiya_haruhi",
"taihou_(azur_lane)",
"takagi-san",
"takamaki_anne",
"takanashi_rikka",
"takao_(azur_lane)",
"takarada_rikka",
"takimoto_hifumi",
"tokoyami_towa",
"toosaka_rin",
"toujou_nozomi",
"tsushima_yoshiko",
"unicorn_(azur_lane)",
"usada_pekora",
"utsumi_erise",
"watson_amelia",
"waver_velvet",
"xenovia_(high_school_dxd)",
"yui_(angel_beats!)",
"yuigahama_yui",
"yukinoshita_yukino",
"zero_two_(darling_in_the_franxx)"
] |
sdgroeve/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0702
- Accuracy: 0.9770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2433 | 1.0 | 190 | 0.1255 | 0.9585 |
| 0.1721 | 2.0 | 380 | 0.0852 | 0.9704 |
| 0.1388 | 3.0 | 570 | 0.0702 | 0.9770 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
platzi/platzi-vit-model-Carlos-Moreno
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-Carlos-Moreno
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0368
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.144 | 3.85 | 500 | 0.0368 | 0.9850 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
gchabcou/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8834
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6073 | 0.99 | 62 | 3.3725 | 0.818 |
| 2.2956 | 2.0 | 125 | 2.1579 | 0.854 |
| 1.7042 | 2.99 | 187 | 1.6201 | 0.887 |
| 1.3278 | 4.0 | 250 | 1.3513 | 0.89 |
| 1.1314 | 4.99 | 312 | 1.1549 | 0.908 |
| 1.007 | 6.0 | 375 | 1.0737 | 0.889 |
| 0.905 | 6.99 | 437 | 0.9600 | 0.906 |
| 0.8227 | 8.0 | 500 | 0.9113 | 0.912 |
| 0.7948 | 8.99 | 562 | 0.8908 | 0.909 |
| 0.7598 | 9.92 | 620 | 0.8834 | 0.9 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
TirathP/cifar10-lt
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cifar10-lt
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the cifar10-lt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1132
- Accuracy: 0.9659
- F1: 0.9660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
DamarJati/plastic-recycling-codes
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plastic-recycling-codes
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.847501 | 0.260870 |
| 1.9354 | 2.0 | 10 | 1.729485 | 0.333333 |
| 1.9354 | 3.0 | 15 | 1.681863 | 0.391304 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"1_polyethylene_pet",
"2_high_density_polyethylene_pe-hd",
"3_polyvinylchloride_pvc",
"4_low_density_polyethylene_pe-ld",
"5_polypropylene_pp",
"6_polystyrene_ps",
"7_other_resins",
"8_no_plastic"
] |
tejp/finetuned-cifar10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the finetuned-cifar10-lt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0976
- Accuracy: 0.971
- F1: 0.9711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
tejp/human-actions
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# human-actions
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Human_Action_Recognition dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1747
- Accuracy: 0.0676
- F1: 0.0084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3842 | 2.54 | 1000 | 7.1747 | 0.0676 | 0.0084 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"calling",
"clapping",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music"
] |
navradio/swin-tiny-patch4-window7-224-finetuned-200k
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-200k
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4347
- Accuracy: 0.7961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.634 | 0.99 | 36 | 0.6243 | 0.6262 |
| 0.5551 | 1.99 | 72 | 0.5186 | 0.7250 |
| 0.5183 | 2.98 | 108 | 0.4826 | 0.7673 |
| 0.4854 | 4.0 | 145 | 0.5640 | 0.7261 |
| 0.4645 | 4.99 | 181 | 0.4598 | 0.7817 |
| 0.4655 | 5.99 | 217 | 0.4787 | 0.7786 |
| 0.4582 | 6.98 | 253 | 0.4483 | 0.7899 |
| 0.4415 | 8.0 | 290 | 0.4709 | 0.7765 |
| 0.4546 | 8.99 | 326 | 0.4717 | 0.7817 |
| 0.4566 | 9.99 | 362 | 0.4538 | 0.7951 |
| 0.4675 | 10.98 | 398 | 0.4491 | 0.7817 |
| 0.4449 | 12.0 | 435 | 0.4992 | 0.7652 |
| 0.4349 | 12.99 | 471 | 0.4627 | 0.7817 |
| 0.4253 | 13.99 | 507 | 0.4492 | 0.7858 |
| 0.4278 | 14.98 | 543 | 0.4442 | 0.7951 |
| 0.4567 | 16.0 | 580 | 0.4362 | 0.7899 |
| 0.4205 | 16.99 | 616 | 0.4550 | 0.7889 |
| 0.4233 | 17.99 | 652 | 0.4336 | 0.7909 |
| 0.4014 | 18.98 | 688 | 0.4565 | 0.7889 |
| 0.4176 | 20.0 | 725 | 0.4323 | 0.7940 |
| 0.411 | 20.99 | 761 | 0.4348 | 0.7951 |
| 0.4128 | 21.99 | 797 | 0.4378 | 0.7971 |
| 0.4045 | 22.98 | 833 | 0.4317 | 0.7951 |
| 0.4001 | 24.0 | 870 | 0.4452 | 0.7868 |
| 0.4061 | 24.99 | 906 | 0.4286 | 0.7920 |
| 0.4033 | 25.99 | 942 | 0.4306 | 0.7951 |
| 0.3953 | 26.98 | 978 | 0.4320 | 0.7920 |
| 0.3924 | 28.0 | 1015 | 0.4338 | 0.7940 |
| 0.4056 | 28.99 | 1051 | 0.4329 | 0.7930 |
| 0.4032 | 29.79 | 1080 | 0.4347 | 0.7961 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"abnormal",
"normal"
] |
twm213/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# twm213/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3748
- Validation Loss: 0.3432
- Train Accuracy: 0.914
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the optimizer sketch after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
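The optimizer dictionary above corresponds to the AdamWeightDecay setup that `transformers.create_optimizer` produces; a hedged reconstruction (the card does not include the actual training script):
```python
# Hypothetical reconstruction of the AdamWeightDecay + linear PolynomialDecay
# configuration listed above; only the optimizer is shown, not the Keras
# training loop. Requires TensorFlow to be installed.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,          # initial_learning_rate from the config
    num_train_steps=20000,  # decay_steps: LR decays linearly to 0
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```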
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7859 | 1.6483 | 0.799 | 0 |
| 1.2220 | 0.9133 | 0.842 | 1 |
| 0.7054 | 0.5449 | 0.898 | 2 |
| 0.4945 | 0.4446 | 0.892 | 3 |
| 0.3748 | 0.3432 | 0.914 | 4 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.9.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
mbehbooei/vit-base-patch16-224-in21k-finetuned-moderation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-moderation
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2400
- Accuracy: 0.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1295 | 1.0 | 2863 | 0.3140 | 0.8736 |
| 0.1181 | 2.0 | 5726 | 0.2400 | 0.9043 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"nude",
"safe",
"sexy"
] |
DamarJati/GreenLabel-Waste-Types
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GreenLabel-Waste-Types
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0150
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5566 | 0.98 | 11 | 0.1846 | 0.975 |
| 0.1031 | 1.96 | 22 | 0.0150 | 1.0 |
| 0.0345 | 2.93 | 33 | 0.0031 | 1.0 |
| 0.0117 | 4.0 | 45 | 0.0008 | 1.0 |
| 0.0256 | 4.98 | 56 | 0.0008 | 1.0 |
| 0.0136 | 5.87 | 66 | 0.0007 | 1.0 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"o",
"r"
] |
dima806/pokemon_types_image_detection
|
Returns the Pokemon species given an image.
See https://www.kaggle.com/code/dima806/pokemon-common-types-image-detection-vit for more details.
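For quick inference, the checkpoint can be loaded with the `transformers` image-classification pipeline; a minimal sketch (`pokemon.jpg` is a placeholder path):
```python
# Minimal inference sketch; assumes `transformers` and `Pillow` are installed
# and "pokemon.jpg" is a local image of a single Pokemon.
from transformers import pipeline

classifier = pipeline("image-classification", model="dima806/pokemon_types_image_detection")
print(classifier("pokemon.jpg", top_k=3))  # top-3 labels with confidence scores
```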
```
Accuracy: 0.9588
F1 Score: 0.9459
Classification report:
precision recall f1-score support
Wartortle 0.9615 0.9615 0.9615 26
Arcanine 1.0000 1.0000 1.0000 27
Staryu 1.0000 1.0000 1.0000 27
Arbok 1.0000 1.0000 1.0000 26
Butterfree 0.0000 0.0000 0.0000 26
Geodude 1.0000 1.0000 1.0000 27
Seaking 1.0000 1.0000 1.0000 26
Diglett 1.0000 1.0000 1.0000 27
Jynx 1.0000 1.0000 1.0000 26
Sandslash 0.9286 1.0000 0.9630 26
Magneton 1.0000 1.0000 1.0000 27
Scyther 1.0000 1.0000 1.0000 27
Kabuto 1.0000 1.0000 1.0000 26
Cubone 0.8276 0.9231 0.8727 26
Golem 1.0000 1.0000 1.0000 26
Dewgong 0.9630 1.0000 0.9811 26
Pidgey 1.0000 0.9259 0.9615 27
Kadabra 0.5200 1.0000 0.6842 26
Ditto 1.0000 1.0000 1.0000 26
Venomoth 0.5400 1.0000 0.7013 27
Rattata 1.0000 1.0000 1.0000 27
Alakazam 0.0000 0.0000 0.0000 26
Machoke 1.0000 0.9615 0.9804 26
Farfetchd 1.0000 1.0000 1.0000 27
Omastar 1.0000 0.9615 0.9804 26
Machamp 0.9630 1.0000 0.9811 26
Jigglypuff 1.0000 1.0000 1.0000 27
Dragonite 1.0000 1.0000 1.0000 26
Weepinbell 1.0000 1.0000 1.0000 26
Sandshrew 1.0000 1.0000 1.0000 26
Dugtrio 1.0000 1.0000 1.0000 27
Mankey 0.8276 0.8889 0.8571 27
Hitmonchan 0.8667 1.0000 0.9286 26
Spearow 1.0000 1.0000 1.0000 26
Caterpie 1.0000 1.0000 1.0000 27
Dratini 0.0000 0.0000 0.0000 26
Bulbasaur 1.0000 1.0000 1.0000 26
Tentacool 1.0000 1.0000 1.0000 26
Gengar 1.0000 1.0000 1.0000 26
Machop 0.9643 1.0000 0.9818 27
Raichu 1.0000 1.0000 1.0000 26
Alolan Sandslash 0.0000 0.0000 0.0000 26
Eevee 1.0000 1.0000 1.0000 27
Abra 1.0000 1.0000 1.0000 27
Haunter 1.0000 1.0000 1.0000 27
Metapod 1.0000 1.0000 1.0000 27
Fearow 0.9630 1.0000 0.9811 26
Nidorina 0.8966 1.0000 0.9455 26
Zapdos 1.0000 1.0000 1.0000 27
Ninetales 1.0000 0.9630 0.9811 27
Chansey 1.0000 1.0000 1.0000 27
Kangaskhan 0.9630 1.0000 0.9811 26
Poliwrath 1.0000 0.9630 0.9811 27
Gyarados 1.0000 1.0000 1.0000 27
Charmeleon 1.0000 1.0000 1.0000 26
Vulpix 1.0000 1.0000 1.0000 26
Pidgeot 1.0000 0.8846 0.9388 26
Blastoise 0.9630 1.0000 0.9811 26
Porygon 1.0000 1.0000 1.0000 26
Psyduck 0.9643 1.0000 0.9818 27
Dragonair 0.5400 1.0000 0.7013 27
Raticate 0.9630 1.0000 0.9811 26
Squirtle 1.0000 0.9615 0.9804 26
Charizard 1.0000 1.0000 1.0000 26
Electrode 1.0000 0.9615 0.9804 26
Flareon 1.0000 1.0000 1.0000 26
Exeggutor 0.9643 1.0000 0.9818 27
Pikachu 1.0000 1.0000 1.0000 26
Wigglytuff 1.0000 1.0000 1.0000 26
Venusaur 1.0000 0.9615 0.9804 26
Mewtwo 1.0000 1.0000 1.0000 26
Clefable 1.0000 1.0000 1.0000 27
Oddish 1.0000 1.0000 1.0000 26
Ekans 1.0000 1.0000 1.0000 26
Shellder 1.0000 1.0000 1.0000 26
Marowak 0.9130 0.8077 0.8571 26
Kakuna 1.0000 1.0000 1.0000 27
Rapidash 1.0000 0.9615 0.9804 26
Rhydon 1.0000 0.9630 0.9811 27
Ivysaur 1.0000 1.0000 1.0000 26
Slowpoke 1.0000 1.0000 1.0000 26
Lapras 1.0000 1.0000 1.0000 27
Clefairy 1.0000 1.0000 1.0000 26
Hitmonlee 1.0000 1.0000 1.0000 26
Jolteon 1.0000 1.0000 1.0000 26
Growlithe 1.0000 1.0000 1.0000 27
Gastly 1.0000 1.0000 1.0000 27
Aerodactyl 1.0000 1.0000 1.0000 27
Weedle 1.0000 1.0000 1.0000 26
Tauros 1.0000 1.0000 1.0000 27
Seel 0.8929 0.9615 0.9259 26
Zubat 1.0000 1.0000 1.0000 26
Meowth 0.0000 0.0000 0.0000 26
Persian 0.6341 1.0000 0.7761 26
Articuno 0.9310 1.0000 0.9643 27
Weezing 0.9643 1.0000 0.9818 27
Magnemite 1.0000 1.0000 1.0000 27
Omanyte 0.9630 1.0000 0.9811 26
Mew 1.0000 1.0000 1.0000 26
Vileplume 1.0000 1.0000 1.0000 27
Nidoqueen 0.9615 0.9259 0.9434 27
Vaporeon 0.9000 1.0000 0.9474 27
Ponyta 0.9630 1.0000 0.9811 26
Moltres 1.0000 1.0000 1.0000 27
Voltorb 0.9630 1.0000 0.9811 26
Magikarp 1.0000 1.0000 1.0000 27
Beedrill 1.0000 1.0000 1.0000 26
Nidoking 1.0000 1.0000 1.0000 27
Paras 1.0000 1.0000 1.0000 26
Grimer 1.0000 0.9615 0.9804 26
Dodrio 1.0000 1.0000 1.0000 26
Charmander 1.0000 1.0000 1.0000 26
Muk 1.0000 0.9615 0.9804 26
Primeape 0.8966 0.9630 0.9286 27
Victreebel 1.0000 1.0000 1.0000 26
Golbat 1.0000 1.0000 1.0000 26
Horsea 1.0000 1.0000 1.0000 27
Goldeen 1.0000 1.0000 1.0000 27
Pidgeotto 0.8966 1.0000 0.9455 26
Koffing 0.9630 1.0000 0.9811 26
Seadra 0.5870 1.0000 0.7397 27
Tentacruel 1.0000 1.0000 1.0000 26
Pinsir 1.0000 1.0000 1.0000 26
Cloyster 1.0000 1.0000 1.0000 26
Gloom 1.0000 1.0000 1.0000 26
Graveler 1.0000 1.0000 1.0000 26
Magmar 1.0000 1.0000 1.0000 27
Krabby 0.9286 1.0000 0.9630 26
Electabuzz 1.0000 1.0000 1.0000 27
Poliwhirl 0.9643 1.0000 0.9818 27
Golduck 0.9310 1.0000 0.9643 27
Onix 1.0000 1.0000 1.0000 27
Nidorino 1.0000 1.0000 1.0000 27
Snorlax 0.9630 1.0000 0.9811 26
Starmie 1.0000 1.0000 1.0000 27
Slowbro 1.0000 1.0000 1.0000 26
MrMime 1.0000 1.0000 1.0000 26
Venonat 1.0000 1.0000 1.0000 27
Kabutops 1.0000 1.0000 1.0000 26
Drowzee 1.0000 1.0000 1.0000 26
Rhyhorn 1.0000 1.0000 1.0000 26
Tangela 1.0000 1.0000 1.0000 27
Doduo 1.0000 1.0000 1.0000 27
Exeggcute 1.0000 1.0000 1.0000 26
Poliwag 1.0000 1.0000 1.0000 27
Lickitung 1.0000 1.0000 1.0000 26
Hypno 0.9286 1.0000 0.9630 26
Bellsprout 1.0000 1.0000 1.0000 27
Parasect 1.0000 1.0000 1.0000 26
Kingler 1.0000 0.9231 0.9600 26
accuracy 0.9588 3960
macro avg 0.9382 0.9583 0.9459 3960
weighted avg 0.9386 0.9588 0.9463 3960
```
|
[
"wartortle",
"arcanine",
"staryu",
"arbok",
"butterfree",
"geodude",
"seaking",
"diglett",
"jynx",
"sandslash",
"magneton",
"scyther",
"kabuto",
"cubone",
"golem",
"dewgong",
"pidgey",
"kadabra",
"ditto",
"venomoth",
"rattata",
"alakazam",
"machoke",
"farfetchd",
"omastar",
"machamp",
"jigglypuff",
"dragonite",
"weepinbell",
"sandshrew",
"dugtrio",
"mankey",
"hitmonchan",
"spearow",
"caterpie",
"dratini",
"bulbasaur",
"tentacool",
"gengar",
"machop",
"raichu",
"alolan sandslash",
"eevee",
"abra",
"haunter",
"metapod",
"fearow",
"nidorina",
"zapdos",
"ninetales",
"chansey",
"kangaskhan",
"poliwrath",
"gyarados",
"charmeleon",
"vulpix",
"pidgeot",
"blastoise",
"porygon",
"psyduck",
"dragonair",
"raticate",
"squirtle",
"charizard",
"electrode",
"flareon",
"exeggutor",
"pikachu",
"wigglytuff",
"venusaur",
"mewtwo",
"clefable",
"oddish",
"ekans",
"shellder",
"marowak",
"kakuna",
"rapidash",
"rhydon",
"ivysaur",
"slowpoke",
"lapras",
"clefairy",
"hitmonlee",
"jolteon",
"growlithe",
"gastly",
"aerodactyl",
"weedle",
"tauros",
"seel",
"zubat",
"meowth",
"persian",
"articuno",
"weezing",
"magnemite",
"omanyte",
"mew",
"vileplume",
"nidoqueen",
"vaporeon",
"ponyta",
"moltres",
"voltorb",
"magikarp",
"beedrill",
"nidoking",
"paras",
"grimer",
"dodrio",
"charmander",
"muk",
"primeape",
"victreebel",
"golbat",
"horsea",
"goldeen",
"pidgeotto",
"koffing",
"seadra",
"tentacruel",
"pinsir",
"cloyster",
"gloom",
"graveler",
"magmar",
"krabby",
"electabuzz",
"poliwhirl",
"golduck",
"onix",
"nidorino",
"snorlax",
"starmie",
"slowbro",
"mrmime",
"venonat",
"kabutops",
"drowzee",
"rhyhorn",
"tangela",
"doduo",
"exeggcute",
"poliwag",
"lickitung",
"hypno",
"bellsprout",
"parasect",
"kingler"
] |
navradio/swinv2-tiny-patch4-window8-256-finetuned-PE
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-PE
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3083
- Accuracy: 0.8720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.00025
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
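These values map directly onto `transformers.TrainingArguments`; a minimal, hypothetical sketch (the output directory name is assumed, and data/model loading is omitted):
```python
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments;
# the actual training script is not published with this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-finetuned-PE",  # assumed name
    learning_rate=2.5e-4,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    gradient_accumulation_steps=4,   # effective batch size 1024
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```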
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 9 | 0.6391 | 0.6690 |
| 0.6873 | 1.95 | 19 | 0.5293 | 0.7376 |
| 0.6233 | 2.97 | 29 | 0.6385 | 0.6853 |
| 0.5976 | 4.0 | 39 | 0.4447 | 0.7970 |
| 0.5552 | 4.92 | 48 | 0.4029 | 0.8266 |
| 0.552 | 5.95 | 58 | 0.3675 | 0.8429 |
| 0.5055 | 6.97 | 68 | 0.3409 | 0.8581 |
| 0.4816 | 8.0 | 78 | 0.3322 | 0.8615 |
| 0.455 | 8.92 | 87 | 0.3166 | 0.8639 |
| 0.4428 | 9.95 | 97 | 0.3100 | 0.8662 |
| 0.4398 | 10.97 | 107 | 0.3713 | 0.8365 |
| 0.4318 | 12.0 | 117 | 0.4019 | 0.8284 |
| 0.4431 | 12.92 | 126 | 0.3074 | 0.8714 |
| 0.4437 | 13.95 | 136 | 0.3156 | 0.8656 |
| 0.4482 | 14.97 | 146 | 0.3516 | 0.8476 |
| 0.4353 | 16.0 | 156 | 0.3162 | 0.8598 |
| 0.4218 | 16.92 | 165 | 0.3018 | 0.8685 |
| 0.4111 | 17.95 | 175 | 0.3143 | 0.8650 |
| 0.4224 | 18.97 | 185 | 0.3146 | 0.8592 |
| 0.4114 | 20.0 | 195 | 0.3097 | 0.8691 |
| 0.4103 | 20.92 | 204 | 0.3038 | 0.8703 |
| 0.3989 | 21.95 | 214 | 0.2893 | 0.8796 |
| 0.3908 | 22.97 | 224 | 0.2956 | 0.8755 |
| 0.3923 | 24.0 | 234 | 0.3041 | 0.8685 |
| 0.3842 | 24.92 | 243 | 0.2876 | 0.8749 |
| 0.3808 | 25.95 | 253 | 0.2907 | 0.8767 |
| 0.382 | 26.97 | 263 | 0.3018 | 0.8738 |
| 0.3816 | 28.0 | 273 | 0.2812 | 0.8825 |
| 0.379 | 28.92 | 282 | 0.2960 | 0.8633 |
| 0.3858 | 29.95 | 292 | 0.2960 | 0.8743 |
| 0.3546 | 30.97 | 302 | 0.2850 | 0.8807 |
| 0.3656 | 32.0 | 312 | 0.2905 | 0.8784 |
| 0.3707 | 32.92 | 321 | 0.2926 | 0.8743 |
| 0.3651 | 33.95 | 331 | 0.2941 | 0.8796 |
| 0.3584 | 34.97 | 341 | 0.3133 | 0.8615 |
| 0.36 | 36.0 | 351 | 0.3181 | 0.8679 |
| 0.3496 | 36.92 | 360 | 0.3036 | 0.8685 |
| 0.3458 | 37.95 | 370 | 0.2939 | 0.8732 |
| 0.3431 | 38.97 | 380 | 0.3062 | 0.8703 |
| 0.3512 | 40.0 | 390 | 0.2914 | 0.8755 |
| 0.3512 | 40.92 | 399 | 0.3164 | 0.8674 |
| 0.3403 | 41.95 | 409 | 0.3063 | 0.8679 |
| 0.3423 | 42.97 | 419 | 0.3018 | 0.8720 |
| 0.3312 | 44.0 | 429 | 0.3094 | 0.8697 |
| 0.3365 | 44.92 | 438 | 0.3062 | 0.8755 |
| 0.3319 | 45.95 | 448 | 0.3081 | 0.8720 |
| 0.3409 | 46.15 | 450 | 0.3083 | 0.8720 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"non_pe",
"pe"
] |
gcperk20/deit-small-patch16-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-small-patch16-224-finetuned-piid
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5615
- Accuracy: 0.7945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1803 | 0.98 | 20 | 1.0233 | 0.5753 |
| 0.706 | 2.0 | 41 | 0.7299 | 0.7078 |
| 0.6016 | 2.98 | 61 | 0.6877 | 0.7123 |
| 0.4903 | 4.0 | 82 | 0.6139 | 0.7671 |
| 0.4692 | 4.98 | 102 | 0.5667 | 0.7626 |
| 0.374 | 6.0 | 123 | 0.5146 | 0.8037 |
| 0.2995 | 6.98 | 143 | 0.5596 | 0.7534 |
| 0.2905 | 8.0 | 164 | 0.5313 | 0.7534 |
| 0.2612 | 8.98 | 184 | 0.5328 | 0.7900 |
| 0.2499 | 10.0 | 205 | 0.5369 | 0.7991 |
| 0.185 | 10.98 | 225 | 0.5754 | 0.7808 |
| 0.1927 | 12.0 | 246 | 0.5886 | 0.7717 |
| 0.1446 | 12.98 | 266 | 0.5160 | 0.7991 |
| 0.155 | 14.0 | 287 | 0.5353 | 0.8082 |
| 0.1577 | 14.98 | 307 | 0.5848 | 0.7808 |
| 0.1243 | 16.0 | 328 | 0.5572 | 0.7991 |
| 0.1038 | 16.98 | 348 | 0.5859 | 0.7763 |
| 0.1305 | 18.0 | 369 | 0.5752 | 0.7900 |
| 0.0868 | 18.98 | 389 | 0.5616 | 0.8037 |
| 0.1364 | 19.51 | 400 | 0.5615 | 0.7945 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"1",
"2",
"3",
"4"
] |
gcperk20/deit-tiny-patch16-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-tiny-patch16-224-finetuned-piid
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5426
- Accuracy: 0.7626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2274 | 0.98 | 20 | 1.1185 | 0.4658 |
| 0.8485 | 2.0 | 41 | 0.8690 | 0.6119 |
| 0.6793 | 2.98 | 61 | 0.8749 | 0.6073 |
| 0.6028 | 4.0 | 82 | 0.6864 | 0.6804 |
| 0.5693 | 4.98 | 102 | 0.5618 | 0.7717 |
| 0.5092 | 6.0 | 123 | 0.5958 | 0.7260 |
| 0.3788 | 6.98 | 143 | 0.6444 | 0.7352 |
| 0.4106 | 8.0 | 164 | 0.5277 | 0.7443 |
| 0.3716 | 8.98 | 184 | 0.6081 | 0.7352 |
| 0.3466 | 10.0 | 205 | 0.4976 | 0.7580 |
| 0.3587 | 10.98 | 225 | 0.5429 | 0.7443 |
| 0.2661 | 12.0 | 246 | 0.4933 | 0.7763 |
| 0.2628 | 12.98 | 266 | 0.5078 | 0.7671 |
| 0.2473 | 14.0 | 287 | 0.5264 | 0.7945 |
| 0.2633 | 14.98 | 307 | 0.5262 | 0.7671 |
| 0.2017 | 16.0 | 328 | 0.5509 | 0.7763 |
| 0.1861 | 16.98 | 348 | 0.5513 | 0.7443 |
| 0.2031 | 18.0 | 369 | 0.5516 | 0.7580 |
| 0.1604 | 18.98 | 389 | 0.5430 | 0.7671 |
| 0.2346 | 19.51 | 400 | 0.5426 | 0.7626 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"1",
"2",
"3",
"4"
] |
gcperk20/convnext-small-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-small-224-finetuned-piid
This model is a fine-tuned version of [facebook/convnext-small-224](https://huggingface.co/facebook/convnext-small-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5651
- Accuracy: 0.7626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3405 | 0.98 | 20 | 1.3201 | 0.4155 |
| 1.1715 | 2.0 | 41 | 1.1362 | 0.5708 |
| 0.9231 | 2.98 | 61 | 0.9255 | 0.6438 |
| 0.7128 | 4.0 | 82 | 0.7558 | 0.6986 |
| 0.6204 | 4.98 | 102 | 0.7056 | 0.7534 |
| 0.5322 | 6.0 | 123 | 0.6610 | 0.7397 |
| 0.4403 | 6.98 | 143 | 0.6639 | 0.7443 |
| 0.4388 | 8.0 | 164 | 0.6472 | 0.7306 |
| 0.3901 | 8.98 | 184 | 0.6684 | 0.7352 |
| 0.4202 | 10.0 | 205 | 0.5934 | 0.7397 |
| 0.3784 | 10.98 | 225 | 0.5651 | 0.7626 |
| 0.2973 | 12.0 | 246 | 0.6439 | 0.7580 |
| 0.3614 | 12.98 | 266 | 0.5844 | 0.7534 |
| 0.2795 | 14.0 | 287 | 0.6015 | 0.7306 |
| 0.2825 | 14.98 | 307 | 0.6031 | 0.7626 |
| 0.2364 | 16.0 | 328 | 0.6249 | 0.7534 |
| 0.2162 | 16.98 | 348 | 0.6248 | 0.7626 |
| 0.2455 | 18.0 | 369 | 0.6153 | 0.7489 |
| 0.2314 | 18.98 | 389 | 0.6113 | 0.7580 |
| 0.248 | 19.51 | 400 | 0.6131 | 0.7580 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"1",
"2",
"3",
"4"
] |
juniorjukeko/swin-tiny-patch4-window7-224_ft_mango_leaf_disease
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224_ft_mango_leaf_disease
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0089
- Accuracy: 0.9986
## Model description
A multiclass image classification model based on [swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) and fine-tuned on the Mango🥭 Leaf🍃🍂 Disease Dataset.
The model was trained on 8 classes of mango leaf health:
Anthracnose, Bacterial Canker, Cutting Weevil, Die Back, Gall Midge, Powdery Mildew, Sooty Mould, and Healthy.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training and evaluation data come from the Kaggle dataset [Mango🥭 Leaf🍃🍂 Disease Dataset](https://www.kaggle.com/datasets/aryashah2k/mango-leaf-disease-dataset).
90% of the total images were used (3,600 of 4,000; 450 images from each class).
## Training procedure
Dataset split: 75% train, 20% validation, 5% test.
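A hypothetical reconstruction of this split with the `datasets` library (the exact seed and any stratification used are not stated in the card):
```python
# Hedged sketch of the 75/20/5 split; "data_dir" and the seed are assumptions.
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="mango_leaf_disease")["train"]
split = ds.train_test_split(test_size=0.25, seed=143)               # 75% train / 25% held out
held_out = split["test"].train_test_split(test_size=0.2, seed=143)  # 20% val / 5% test
train_ds, val_ds, test_ds = split["train"], held_out["train"], held_out["test"]
```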
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 143
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.93 | 10 | 0.1208 | 0.9931 |
| 0.1082 | 1.95 | 21 | 0.0551 | 0.9958 |
| 0.1082 | 2.98 | 32 | 0.0297 | 0.9958 |
| 0.0342 | 4.0 | 43 | 0.0189 | 0.9986 |
| 0.0342 | 4.93 | 53 | 0.0156 | 0.9972 |
| 0.0164 | 5.95 | 64 | 0.0122 | 0.9972 |
| 0.0164 | 6.98 | 75 | 0.0100 | 0.9986 |
| 0.0099 | 8.0 | 86 | 0.0096 | 0.9986 |
| 0.0099 | 8.93 | 96 | 0.0090 | 0.9986 |
| 0.0085 | 9.3 | 100 | 0.0089 | 0.9986 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"anthracnose",
"bacterial canker",
"cutting weevil",
"die back",
"gall midge",
"healthy",
"powdery mildew",
"sooty mould"
] |
gcperk20/convnextv2-tiny-22k-224-finetuned-piid
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-tiny-22k-224-finetuned-piid
This model is a fine-tuned version of [facebook/convnextv2-tiny-22k-224](https://huggingface.co/facebook/convnextv2-tiny-22k-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6118
- Accuracy: 0.7854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2083 | 0.98 | 20 | 1.0137 | 0.6027 |
| 0.6826 | 2.0 | 41 | 0.6901 | 0.6895 |
| 0.5161 | 2.98 | 61 | 0.6377 | 0.7078 |
| 0.4475 | 4.0 | 82 | 0.5423 | 0.7215 |
| 0.4325 | 4.98 | 102 | 0.5165 | 0.7671 |
| 0.3433 | 6.0 | 123 | 0.5916 | 0.7763 |
| 0.2677 | 6.98 | 143 | 0.5866 | 0.7534 |
| 0.2498 | 8.0 | 164 | 0.5146 | 0.7900 |
| 0.2387 | 8.98 | 184 | 0.5631 | 0.7580 |
| 0.2132 | 10.0 | 205 | 0.5320 | 0.7991 |
| 0.2178 | 10.98 | 225 | 0.5833 | 0.7854 |
| 0.1474 | 12.0 | 246 | 0.5902 | 0.7900 |
| 0.1627 | 12.98 | 266 | 0.6142 | 0.7808 |
| 0.1651 | 14.0 | 287 | 0.6063 | 0.7808 |
| 0.158 | 14.98 | 307 | 0.6130 | 0.7808 |
| 0.126 | 16.0 | 328 | 0.6647 | 0.7671 |
| 0.0821 | 16.98 | 348 | 0.5972 | 0.7808 |
| 0.1062 | 18.0 | 369 | 0.5975 | 0.7945 |
| 0.1031 | 18.98 | 389 | 0.6129 | 0.7808 |
| 0.1268 | 19.51 | 400 | 0.6118 | 0.7854 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"1",
"2",
"3",
"4"
] |
dima806/flower_groups_image_detection
|
Returns flower group given an image.
See https://www.kaggle.com/code/dima806/flower-groups-image-detection-vit for more details.

```
Classification report:
precision recall f1-score support
tarragon 0.0000 0.0000 0.0000 247
blanketflowers 0.9868 0.9109 0.9474 247
coralbells 0.8791 0.9717 0.9231 247
tulips 0.9741 0.9150 0.9436 247
daffodils 0.8719 0.9919 0.9280 247
peas 0.8972 0.9190 0.9080 247
garlic 0.0000 0.0000 0.0000 247
sunroots 1.0000 0.0486 0.0927 247
milkweed 0.8509 0.9474 0.8966 247
celery 0.0000 0.0000 0.0000 247
dill 0.4819 0.9717 0.6443 247
phlox 0.9137 0.9433 0.9283 247
peonies 0.5545 0.9879 0.7103 247
coneflowers 0.9679 0.9757 0.9718 247
beets 0.7526 0.8745 0.8090 247
beans 0.8824 0.9756 0.9266 246
onions 0.3012 0.9231 0.4542 247
bellflowers 0.9865 0.8907 0.9362 247
delphiniums 0.9955 0.8866 0.9379 247
oleanders 0.9875 0.9595 0.9733 247
roseofsharon 0.9727 0.4350 0.6011 246
cantaloupes 0.9329 0.6194 0.7445 247
deadnettles 0.9534 0.9109 0.9317 247
viburnums 0.5501 0.8664 0.6730 247
dianthus 0.8298 0.9512 0.8864 246
peaches 0.8902 0.5911 0.7105 247
aloes 0.7724 0.9757 0.8623 247
parsley 0.3561 0.9717 0.5212 247
penstemon 0.9782 0.9106 0.9432 246
thyme 0.6685 0.9879 0.7974 247
citrus 0.8479 0.9028 0.8745 247
bleeding-hearts 0.9679 0.9757 0.9718 247
dogwoods 0.5442 0.9231 0.6847 247
black-eyed-susans 0.5501 0.9555 0.6982 247
petunias 0.9790 0.9472 0.9628 246
jujubes 0.0000 0.0000 0.0000 247
arborvitaes 0.8880 0.8664 0.8770 247
lilies 0.9783 0.9109 0.9434 247
crinums 0.7704 0.8421 0.8046 247
catmints 0.6334 0.8745 0.7347 247
astilbe 0.9597 0.9636 0.9616 247
beautyberries 0.7500 0.8988 0.8177 247
beebalms 0.8484 0.9514 0.8969 247
foxgloves 0.9713 0.9595 0.9654 247
gladiolus 0.9048 0.9231 0.9138 247
plums 0.8571 0.4615 0.6000 247
vitis 1.0000 0.5466 0.7068 247
ninebarks 1.0000 0.0445 0.0853 247
lettuces 0.7921 0.8947 0.8403 247
poppies 0.9679 0.9757 0.9718 247
smoketrees 0.9202 0.8866 0.9031 247
irises 1.0000 0.9960 0.9980 247
cilantro 0.9600 0.0972 0.1765 247
artichokes 1.0000 0.7895 0.8824 247
lambsears 0.6519 0.7764 0.7087 246
butterworts 0.9286 0.2105 0.3432 247
babysbreath 1.0000 0.1700 0.2907 247
cucurbits 0.5658 0.9959 0.7216 246
plumerias 0.8051 0.8902 0.8456 246
liatris 0.9720 0.8455 0.9043 246
carrots 0.6364 0.5407 0.5846 246
crepe-myrtles 0.9710 0.9474 0.9590 247
oregano 0.6372 0.2927 0.4011 246
ilex 0.5610 0.9676 0.7103 247
butterflybushes 0.9726 0.8623 0.9142 247
sage 0.4910 0.4413 0.4648 247
baptisia 0.9744 0.7692 0.8597 247
sempervivum 0.9910 0.8943 0.9402 246
asparagus 0.9610 0.3008 0.4582 246
radishes 0.5153 0.7490 0.6106 247
parsnips 1.0000 0.1174 0.2101 247
hibiscus 0.4605 0.9715 0.6248 246
rhododendrons 0.8918 0.9676 0.9282 247
potatoes 1.0000 0.4130 0.5845 247
hydrangeas 0.9504 0.9350 0.9426 246
swisschard 0.8154 0.9878 0.8934 246
cannas 0.9360 0.9474 0.9416 247
brassicas 0.6437 0.8740 0.7414 246
rubus 0.8631 0.8421 0.8525 247
columbines 0.9717 0.9717 0.9717 247
echeverias 0.6384 0.9150 0.7521 247
okra 0.9901 0.8138 0.8933 247
aeoniums 0.5124 0.9190 0.6580 247
yarrows 0.7126 0.9636 0.8193 247
roses 0.9880 0.9960 0.9919 247
basil 0.6419 0.9433 0.7639 247
spiraeas 0.5897 0.9717 0.7339 247
caladiums 0.7804 0.9352 0.8508 247
spinach 0.8947 0.2753 0.4211 247
wisterias 0.9609 0.8947 0.9266 247
cherries 1.0000 0.1862 0.3140 247
marjoram 1.0000 0.3927 0.5640 247
hyacinths 0.9711 0.9514 0.9611 247
rhubarbs 0.9651 0.8947 0.9286 247
tickseeds 0.8588 0.8866 0.8725 247
perovskia 0.7869 0.5830 0.6698 247
crocus 0.9789 0.9431 0.9607 246
mints 0.6088 0.9514 0.7425 247
heavenly-bamboos 0.9493 0.8340 0.8879 247
agaves 0.9025 0.8623 0.8820 247
pears 0.3087 0.4575 0.3687 247
dudleyas 0.8291 0.5304 0.6469 247
pachypodiums 0.8820 0.6356 0.7388 247
mockoranges 0.9958 0.9676 0.9815 247
asters 0.9957 0.9512 0.9730 246
geraniums 0.9750 0.9474 0.9610 247
mammillarias 0.9447 0.9715 0.9579 246
cucumbers 1.0000 0.6235 0.7681 247
veronicas 0.9368 0.9595 0.9480 247
turnips 0.0000 0.0000 0.0000 247
peppers 0.8053 0.9919 0.8889 246
hardyhibiscuses 1.0000 0.4593 0.6295 246
morning-glories 0.8316 0.9595 0.8910 247
gardenias 0.9954 0.8785 0.9333 247
ribes 0.9837 0.7358 0.8419 246
loniceras 0.9540 0.9231 0.9383 247
eggplants 0.9837 0.9798 0.9817 247
hostas 0.8167 0.9919 0.8958 247
chlorophytums 0.9709 0.6761 0.7971 247
chives 0.7029 0.9676 0.8143 247
tomatoes 0.6619 0.9352 0.7752 247
lilacs 1.0000 0.9595 0.9793 247
leeks 0.0000 0.0000 0.0000 246
shastadaisies 0.9592 0.9514 0.9553 247
apricots 1.0000 0.5830 0.7366 247
apples 0.4027 0.9636 0.5680 247
strawberries 0.8897 0.9798 0.9326 247
salvias 0.4479 0.9393 0.6065 247
sedums 0.7639 0.9472 0.8457 246
corn 0.9129 0.8907 0.9016 247
daylilies 1.0000 0.9960 0.9980 247
figs 0.9711 0.9553 0.9631 246
dahlias 0.9757 0.9757 0.9757 247
sweetpotatoes 0.7183 0.9393 0.8140 247
accuracy 0.7785 33072
macro avg 0.8044 0.7785 0.7529 33072
weighted avg 0.8044 0.7785 0.7528 33072
```
|
[
"tarragon",
"blanketflowers",
"coralbells",
"tulips",
"daffodils",
"peas",
"garlic",
"sunroots",
"milkweed",
"celery",
"dill",
"phlox",
"peonies",
"coneflowers",
"beets",
"beans",
"onions",
"bellflowers",
"delphiniums",
"oleanders",
"roseofsharon",
"cantaloupes",
"deadnettles",
"viburnums",
"dianthus",
"peaches",
"aloes",
"parsley",
"penstemon",
"thyme",
"citrus",
"bleeding-hearts",
"dogwoods",
"black-eyed-susans",
"petunias",
"jujubes",
"arborvitaes",
"lilies",
"crinums",
"catmints",
"astilbe",
"beautyberries",
"beebalms",
"foxgloves",
"gladiolus",
"plums",
"vitis",
"ninebarks",
"lettuces",
"poppies",
"smoketrees",
"irises",
"cilantro",
"artichokes",
"lambsears",
"butterworts",
"babysbreath",
"cucurbits",
"plumerias",
"liatris",
"carrots",
"crepe-myrtles",
"oregano",
"ilex",
"butterflybushes",
"sage",
"baptisia",
"sempervivum",
"asparagus",
"radishes",
"parsnips",
"hibiscus",
"rhododendrons",
"potatoes",
"hydrangeas",
"swisschard",
"cannas",
"brassicas",
"rubus",
"columbines",
"echeverias",
"okra",
"aeoniums",
"yarrows",
"roses",
"basil",
"spiraeas",
"caladiums",
"spinach",
"wisterias",
"cherries",
"marjoram",
"hyacinths",
"rhubarbs",
"tickseeds",
"perovskia",
"crocus",
"mints",
"heavenly-bamboos",
"agaves",
"pears",
"dudleyas",
"pachypodiums",
"mockoranges",
"asters",
"geraniums",
"mammillarias",
"cucumbers",
"veronicas",
"turnips",
"peppers",
"hardyhibiscuses",
"morning-glories",
"gardenias",
"ribes",
"loniceras",
"eggplants",
"hostas",
"chlorophytums",
"chives",
"tomatoes",
"lilacs",
"leeks",
"shastadaisies",
"apricots",
"apples",
"strawberries",
"salvias",
"sedums",
"corn",
"daylilies",
"figs",
"dahlias",
"sweetpotatoes"
] |
dima806/lemon_quality_image_detection
|
Returns lemon quality given an image.
See https://www.kaggle.com/code/dima806/lemon-quality-image-detection-vit for more details.

```
Classification report:
precision recall f1-score support
good_quality 1.0000 1.0000 1.0000 450
empty_background 1.0000 1.0000 1.0000 450
bad_quality 1.0000 1.0000 1.0000 450
accuracy 1.0000 1350
macro avg 1.0000 1.0000 1.0000 1350
weighted avg 1.0000 1.0000 1.0000 1350
```
|
[
"good_quality",
"empty_background",
"bad_quality"
] |
amrul-hzz/watermark_detector
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# watermark_detector
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6014
- Accuracy: 0.6574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6492 | 1.0 | 1139 | 0.6375 | 0.6262 |
| 0.6172 | 2.0 | 2278 | 0.6253 | 0.6438 |
| 0.578 | 3.0 | 3417 | 0.6110 | 0.6508 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"no_watermark",
"watermark"
] |
purabp1249/swin-tiny-patch4-window7-224-finetuned-herbify2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-herbify2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0655
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6649 | 0.97 | 18 | 0.5193 | 0.9242 |
| 0.4002 | 2.0 | 37 | 0.0655 | 1.0 |
| 0.1095 | 2.97 | 55 | 0.0249 | 1.0 |
| 0.0486 | 3.89 | 72 | 0.0154 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"aloevera",
"amruthaballi",
"badipala",
"bamboo",
"beans",
"ashoka"
] |
bdpc/vit-base_rvl_tobacco-tiny_tobacco3482_hint_b0.0_dit-tiny_test
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_tobacco-tiny_tobacco3482_hint_b0.0_dit-tiny_test
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 200 | 64.8970 | 0.17 | 0.8761 | 7.1182 | 0.17 | 0.0466 | 0.2430 | 0.7785 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
canadianjosieharrison/swinv2-large-patch4-window12-192-22k-finetuned-ethzurich
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12-192-22k-finetuned-ethzurich
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on the Urban Resource Cadastre dataset created by Deepika Raghu, Martin Juan José Bucher, and Catherine De Wolf (https://github.com/raghudeepika/urban-resource-cadastre-repository).
It achieves the following results on the evaluation set:
- Loss: 0.6083
- Accuracy: 0.8295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 1.2578 | 0.6364 |
| 1.6142 | 1.92 | 12 | 0.7696 | 0.75 |
| 1.6142 | 2.88 | 18 | 0.6083 | 0.8295 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"brick",
"metal",
"null",
"other",
"rustication",
"siding",
"stucco",
"wood"
] |
bryandts/image_classification_food_indian
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification_food_indian
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 333 | 0.4028 | 0.8969 |
| 0.6617 | 2.0 | 666 | 0.3750 | 0.9044 |
| 0.6617 | 3.0 | 999 | 0.3231 | 0.9224 |
| 0.1215 | 4.0 | 1332 | 0.3105 | 0.9277 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"burger",
"butter_naan",
"kaathi_rolls",
"kadai_paneer",
"kulfi",
"masala_dosa",
"momos",
"paani_puri",
"pakode",
"pav_bhaji",
"pizza",
"samosa",
"chai",
"chapati",
"chole_bhature",
"dal_makhani",
"dhokla",
"fried_rice",
"idli",
"jalebi"
] |
hansin91/scene_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scene_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indoor-scene-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6106
- Accuracy: 0.8492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3172 | 1.0 | 341 | 2.8572 | 0.5109 |
| 2.2254 | 2.0 | 682 | 2.1453 | 0.6329 |
| 1.6202 | 3.0 | 1023 | 1.6283 | 0.7336 |
| 1.2313 | 4.0 | 1364 | 1.3402 | 0.7599 |
| 0.9576 | 5.0 | 1705 | 1.1237 | 0.8010 |
| 0.7654 | 6.0 | 2046 | 1.0270 | 0.8023 |
| 0.6416 | 7.0 | 2387 | 0.8848 | 0.8171 |
| 0.5353 | 8.0 | 2728 | 0.8381 | 0.8087 |
| 0.4516 | 9.0 | 3069 | 0.7570 | 0.8254 |
| 0.3925 | 10.0 | 3410 | 0.6667 | 0.8524 |
| 0.3453 | 11.0 | 3751 | 0.7583 | 0.8164 |
| 0.2944 | 12.0 | 4092 | 0.6783 | 0.8350 |
| 0.294 | 13.0 | 4433 | 0.7128 | 0.8312 |
| 0.2507 | 14.0 | 4774 | 0.6632 | 0.8331 |
| 0.2355 | 15.0 | 5115 | 0.6730 | 0.8421 |
| 0.2267 | 16.0 | 5456 | 0.6572 | 0.8357 |
| 0.2032 | 17.0 | 5797 | 0.7058 | 0.8280 |
| 0.1908 | 18.0 | 6138 | 0.6374 | 0.8485 |
| 0.1857 | 19.0 | 6479 | 0.6831 | 0.8312 |
| 0.1727 | 20.0 | 6820 | 0.6961 | 0.8254 |
| 0.1692 | 21.0 | 7161 | 0.6306 | 0.8402 |
| 0.1642 | 22.0 | 7502 | 0.6291 | 0.8485 |
| 0.1618 | 23.0 | 7843 | 0.6058 | 0.8582 |
| 0.1593 | 24.0 | 8184 | 0.6780 | 0.8389 |
| 0.1399 | 25.0 | 8525 | 0.6330 | 0.8485 |
| 0.1373 | 26.0 | 8866 | 0.6550 | 0.8408 |
| 0.1334 | 27.0 | 9207 | 0.6857 | 0.8421 |
| 0.1388 | 28.0 | 9548 | 0.6338 | 0.8415 |
| 0.1423 | 29.0 | 9889 | 0.6272 | 0.8517 |
| 0.1288 | 30.0 | 10230 | 0.6409 | 0.8556 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"meeting_room",
"cloister",
"computerroom",
"grocerystore",
"hospitalroom",
"buffet",
"office",
"warehouse",
"garage",
"bookstore",
"florist",
"locker_room",
"stairscase",
"inside_bus",
"subway",
"fastfood_restaurant",
"auditorium",
"studiomusic",
"airport_inside",
"pantry",
"restaurant_kitchen",
"casino",
"movietheater",
"restaurant",
"kitchen",
"waitingroom",
"artstudio",
"toystore",
"kindergarden",
"trainstation",
"bedroom",
"mall",
"corridor",
"bar",
"hairsalon",
"classroom",
"shoeshop",
"dentaloffice",
"videostore",
"laboratorywet",
"tv_studio",
"church_inside",
"operating_room",
"jewelleryshop",
"bathroom",
"children_room",
"clothingstore",
"closet",
"winecellar",
"livingroom",
"nursery",
"gameroom",
"inside_subway",
"deli",
"bakery",
"library",
"dining_room",
"prisoncell",
"gym",
"concert_hall",
"greenhouse",
"elevator",
"poolinside",
"bowling",
"lobby",
"museum",
"laundromat"
] |
bdpc/resnet101-base_tobacco
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco
This model is a fine-tuned version of [microsoft/resnet-101](https://huggingface.co/microsoft/resnet-101) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6332
- Accuracy: 0.435
- Brier Loss: 0.6886 (see the sketch after this list)
- Nll: 4.4967
- F1 Micro: 0.435
- F1 Macro: 0.2876
- Ece: 0.2482
- Aurc: 0.3432
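Assuming the standard multiclass definition, the Brier loss reported above is the mean squared distance between the predicted probability vector and the one-hot label; a minimal sketch:
```python
# Multiclass Brier score sketch (standard definition; the card's exact
# implementation is not published).
import numpy as np

def brier_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean squared distance between predicted probabilities and one-hot targets."""
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

# Example: a confident correct prediction scores near 0; a uniform prediction
# over K classes scores 1 - 1/K.
print(brier_loss(np.array([[0.9, 0.05, 0.05]]), np.array([0])))  # 0.015
```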
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 13 | 2.3065 | 0.06 | 0.9008 | 7.5257 | 0.06 | 0.0563 | 0.1444 | 0.9505 |
| No log | 2.0 | 26 | 2.3098 | 0.075 | 0.9014 | 8.6176 | 0.075 | 0.0468 | 0.1535 | 0.9485 |
| No log | 3.0 | 39 | 2.3082 | 0.09 | 0.9011 | 7.8490 | 0.09 | 0.0647 | 0.1662 | 0.9336 |
| No log | 4.0 | 52 | 2.3056 | 0.12 | 0.9006 | 7.6932 | 0.12 | 0.0809 | 0.1814 | 0.8887 |
| No log | 5.0 | 65 | 2.3004 | 0.125 | 0.8995 | 7.1356 | 0.125 | 0.0750 | 0.1841 | 0.8198 |
| No log | 6.0 | 78 | 2.2921 | 0.155 | 0.8979 | 5.9637 | 0.155 | 0.0706 | 0.2036 | 0.7930 |
| No log | 7.0 | 91 | 2.2917 | 0.165 | 0.8978 | 5.7926 | 0.165 | 0.0785 | 0.2139 | 0.8056 |
| No log | 8.0 | 104 | 2.2842 | 0.185 | 0.8963 | 4.7947 | 0.185 | 0.0595 | 0.2244 | 0.8344 |
| No log | 9.0 | 117 | 2.2742 | 0.215 | 0.8942 | 4.4573 | 0.2150 | 0.0830 | 0.2424 | 0.7961 |
| No log | 10.0 | 130 | 2.2638 | 0.2 | 0.8921 | 4.8564 | 0.2000 | 0.0554 | 0.2376 | 0.7663 |
| No log | 11.0 | 143 | 2.2530 | 0.215 | 0.8898 | 5.0772 | 0.2150 | 0.0740 | 0.2467 | 0.7908 |
| No log | 12.0 | 156 | 2.2479 | 0.19 | 0.8888 | 5.3276 | 0.19 | 0.0421 | 0.2220 | 0.7856 |
| No log | 13.0 | 169 | 2.2406 | 0.18 | 0.8873 | 5.2973 | 0.18 | 0.0308 | 0.2248 | 0.8007 |
| No log | 14.0 | 182 | 2.2202 | 0.285 | 0.8826 | 5.4657 | 0.285 | 0.1167 | 0.2855 | 0.6743 |
| No log | 15.0 | 195 | 2.2085 | 0.29 | 0.8801 | 5.7797 | 0.29 | 0.1154 | 0.2909 | 0.6660 |
| No log | 16.0 | 208 | 2.1850 | 0.305 | 0.8742 | 5.7600 | 0.305 | 0.1194 | 0.3063 | 0.4897 |
| No log | 17.0 | 221 | 2.2017 | 0.18 | 0.8789 | 5.7405 | 0.18 | 0.0306 | 0.2309 | 0.7654 |
| No log | 18.0 | 234 | 2.1998 | 0.18 | 0.8784 | 5.8985 | 0.18 | 0.0305 | 0.2377 | 0.7525 |
| No log | 19.0 | 247 | 2.1429 | 0.285 | 0.8640 | 5.9614 | 0.285 | 0.1117 | 0.2970 | 0.5007 |
| No log | 20.0 | 260 | 2.1240 | 0.315 | 0.8587 | 5.9916 | 0.315 | 0.1232 | 0.3057 | 0.4288 |
| No log | 21.0 | 273 | 2.0986 | 0.305 | 0.8513 | 5.9764 | 0.305 | 0.1166 | 0.3001 | 0.4526 |
| No log | 22.0 | 286 | 2.0909 | 0.315 | 0.8494 | 5.9914 | 0.315 | 0.1234 | 0.3062 | 0.4385 |
| No log | 23.0 | 299 | 2.0451 | 0.295 | 0.8313 | 6.1078 | 0.295 | 0.1115 | 0.2901 | 0.4619 |
| No log | 24.0 | 312 | 2.0662 | 0.3 | 0.8413 | 6.1029 | 0.3 | 0.1168 | 0.3014 | 0.4544 |
| No log | 25.0 | 325 | 2.0235 | 0.3 | 0.8238 | 6.1798 | 0.3 | 0.1156 | 0.2885 | 0.4553 |
| No log | 26.0 | 338 | 2.0669 | 0.305 | 0.8439 | 6.2056 | 0.305 | 0.1207 | 0.3046 | 0.4579 |
| No log | 27.0 | 351 | 2.0223 | 0.315 | 0.8256 | 6.1083 | 0.315 | 0.1232 | 0.2860 | 0.4308 |
| No log | 28.0 | 364 | 2.1075 | 0.185 | 0.8574 | 6.0867 | 0.185 | 0.0370 | 0.2317 | 0.7416 |
| No log | 29.0 | 377 | 1.9127 | 0.295 | 0.7709 | 6.1567 | 0.295 | 0.1155 | 0.2464 | 0.4630 |
| No log | 30.0 | 390 | 1.9407 | 0.315 | 0.7889 | 6.1398 | 0.315 | 0.1283 | 0.2696 | 0.4244 |
| No log | 31.0 | 403 | 1.9099 | 0.305 | 0.7737 | 6.1311 | 0.305 | 0.1216 | 0.2626 | 0.4441 |
| No log | 32.0 | 416 | 1.9071 | 0.31 | 0.7731 | 6.1004 | 0.31 | 0.1237 | 0.2803 | 0.4387 |
| No log | 33.0 | 429 | 1.9097 | 0.31 | 0.7774 | 6.1658 | 0.31 | 0.1212 | 0.2701 | 0.4328 |
| No log | 34.0 | 442 | 1.9008 | 0.3 | 0.7724 | 6.2049 | 0.3 | 0.1180 | 0.2415 | 0.4452 |
| No log | 35.0 | 455 | 2.0340 | 0.275 | 0.8382 | 5.8659 | 0.275 | 0.1095 | 0.2873 | 0.6352 |
| No log | 36.0 | 468 | 1.9324 | 0.315 | 0.7937 | 6.0328 | 0.315 | 0.1248 | 0.2865 | 0.4177 |
| No log | 37.0 | 481 | 2.0698 | 0.18 | 0.8483 | 6.1172 | 0.18 | 0.0306 | 0.2448 | 0.7024 |
| No log | 38.0 | 494 | 1.8436 | 0.3 | 0.7492 | 6.1508 | 0.3 | 0.1192 | 0.2461 | 0.4406 |
| 2.0752 | 39.0 | 507 | 1.8504 | 0.31 | 0.7556 | 6.0528 | 0.31 | 0.1222 | 0.2696 | 0.4355 |
| 2.0752 | 40.0 | 520 | 1.8523 | 0.315 | 0.7582 | 6.0492 | 0.315 | 0.1245 | 0.2522 | 0.4341 |
| 2.0752 | 41.0 | 533 | 1.8858 | 0.305 | 0.7785 | 6.1136 | 0.305 | 0.1244 | 0.2756 | 0.4559 |
| 2.0752 | 42.0 | 546 | 1.8466 | 0.305 | 0.7594 | 5.9124 | 0.305 | 0.1205 | 0.2739 | 0.4469 |
| 2.0752 | 43.0 | 559 | 1.9921 | 0.195 | 0.8300 | 5.6106 | 0.195 | 0.0490 | 0.2368 | 0.7141 |
| 2.0752 | 44.0 | 572 | 1.8133 | 0.31 | 0.7447 | 5.6505 | 0.31 | 0.1242 | 0.2708 | 0.4189 |
| 2.0752 | 45.0 | 585 | 1.8022 | 0.32 | 0.7397 | 5.6263 | 0.32 | 0.1324 | 0.2557 | 0.4213 |
| 2.0752 | 46.0 | 598 | 1.8361 | 0.32 | 0.7599 | 5.6068 | 0.32 | 0.1281 | 0.2719 | 0.4239 |
| 2.0752 | 47.0 | 611 | 1.7972 | 0.32 | 0.7376 | 5.8954 | 0.32 | 0.1306 | 0.2418 | 0.4311 |
| 2.0752 | 48.0 | 624 | 1.7850 | 0.325 | 0.7357 | 5.8208 | 0.325 | 0.1397 | 0.2528 | 0.3984 |
| 2.0752 | 49.0 | 637 | 1.7808 | 0.315 | 0.7332 | 5.5883 | 0.315 | 0.1325 | 0.2551 | 0.4255 |
| 2.0752 | 50.0 | 650 | 1.7838 | 0.31 | 0.7338 | 5.6850 | 0.31 | 0.1314 | 0.2530 | 0.4247 |
| 2.0752 | 51.0 | 663 | 1.7767 | 0.305 | 0.7316 | 5.4974 | 0.305 | 0.1241 | 0.2515 | 0.4253 |
| 2.0752 | 52.0 | 676 | 1.7607 | 0.32 | 0.7263 | 5.3077 | 0.32 | 0.1321 | 0.2458 | 0.4148 |
| 2.0752 | 53.0 | 689 | 1.7486 | 0.32 | 0.7224 | 5.1734 | 0.32 | 0.1355 | 0.2510 | 0.4190 |
| 2.0752 | 54.0 | 702 | 1.7693 | 0.33 | 0.7323 | 5.1578 | 0.33 | 0.1446 | 0.2638 | 0.3970 |
| 2.0752 | 55.0 | 715 | 1.7476 | 0.325 | 0.7235 | 5.1481 | 0.325 | 0.1602 | 0.2285 | 0.4140 |
| 2.0752 | 56.0 | 728 | 1.7384 | 0.31 | 0.7189 | 5.3248 | 0.31 | 0.1507 | 0.2295 | 0.4202 |
| 2.0752 | 57.0 | 741 | 1.7454 | 0.32 | 0.7228 | 5.2669 | 0.32 | 0.1575 | 0.2602 | 0.4218 |
| 2.0752 | 58.0 | 754 | 1.8063 | 0.33 | 0.7551 | 5.0652 | 0.33 | 0.1574 | 0.2835 | 0.4092 |
| 2.0752 | 59.0 | 767 | 1.7466 | 0.34 | 0.7237 | 4.9430 | 0.34 | 0.1783 | 0.2729 | 0.4124 |
| 2.0752 | 60.0 | 780 | 1.7240 | 0.345 | 0.7166 | 5.0165 | 0.345 | 0.1776 | 0.2397 | 0.4118 |
| 2.0752 | 61.0 | 793 | 1.7105 | 0.325 | 0.7126 | 5.0261 | 0.325 | 0.1647 | 0.2564 | 0.4149 |
| 2.0752 | 62.0 | 806 | 1.7078 | 0.345 | 0.7157 | 5.0160 | 0.345 | 0.1797 | 0.2612 | 0.4013 |
| 2.0752 | 63.0 | 819 | 1.7982 | 0.305 | 0.7575 | 4.9876 | 0.305 | 0.1614 | 0.2733 | 0.4650 |
| 2.0752 | 64.0 | 832 | 1.8072 | 0.33 | 0.7635 | 5.0080 | 0.33 | 0.1954 | 0.2928 | 0.4487 |
| 2.0752 | 65.0 | 845 | 1.7201 | 0.35 | 0.7180 | 4.8708 | 0.35 | 0.2071 | 0.2445 | 0.4114 |
| 2.0752 | 66.0 | 858 | 1.7131 | 0.335 | 0.7167 | 4.9248 | 0.335 | 0.1936 | 0.2531 | 0.4223 |
| 2.0752 | 67.0 | 871 | 1.7071 | 0.345 | 0.7138 | 4.8657 | 0.345 | 0.1948 | 0.2664 | 0.4128 |
| 2.0752 | 68.0 | 884 | 1.7022 | 0.36 | 0.7128 | 4.7996 | 0.36 | 0.2147 | 0.2443 | 0.4023 |
| 2.0752 | 69.0 | 897 | 1.6859 | 0.37 | 0.7055 | 4.7318 | 0.37 | 0.2296 | 0.2577 | 0.3909 |
| 2.0752 | 70.0 | 910 | 1.6860 | 0.37 | 0.7038 | 4.8293 | 0.37 | 0.2314 | 0.2594 | 0.3894 |
| 2.0752 | 71.0 | 923 | 1.6823 | 0.36 | 0.7038 | 4.7070 | 0.36 | 0.2170 | 0.2485 | 0.3934 |
| 2.0752 | 72.0 | 936 | 1.7656 | 0.335 | 0.7457 | 4.8009 | 0.335 | 0.2035 | 0.2760 | 0.4503 |
| 2.0752 | 73.0 | 949 | 1.8235 | 0.32 | 0.7754 | 4.7280 | 0.32 | 0.2028 | 0.2752 | 0.5244 |
| 2.0752 | 74.0 | 962 | 1.6878 | 0.37 | 0.7073 | 4.7660 | 0.37 | 0.2290 | 0.2455 | 0.3996 |
| 2.0752 | 75.0 | 975 | 1.6717 | 0.365 | 0.7003 | 4.7709 | 0.3650 | 0.2209 | 0.2404 | 0.3906 |
| 2.0752 | 76.0 | 988 | 1.6610 | 0.365 | 0.6972 | 4.6921 | 0.3650 | 0.2223 | 0.2640 | 0.3910 |
| 1.6288 | 77.0 | 1001 | 1.6740 | 0.4 | 0.7016 | 4.6791 | 0.4000 | 0.2519 | 0.2794 | 0.3693 |
| 1.6288 | 78.0 | 1014 | 1.6792 | 0.385 | 0.7048 | 4.7411 | 0.3850 | 0.2434 | 0.2594 | 0.3913 |
| 1.6288 | 79.0 | 1027 | 1.6752 | 0.395 | 0.7030 | 4.5595 | 0.395 | 0.2608 | 0.2906 | 0.3887 |
| 1.6288 | 80.0 | 1040 | 1.6554 | 0.395 | 0.6951 | 4.5213 | 0.395 | 0.2653 | 0.2696 | 0.3821 |
| 1.6288 | 81.0 | 1053 | 1.6688 | 0.385 | 0.7013 | 4.5993 | 0.3850 | 0.2441 | 0.2614 | 0.3886 |
| 1.6288 | 82.0 | 1066 | 1.6892 | 0.35 | 0.7121 | 4.6296 | 0.35 | 0.2187 | 0.2701 | 0.4067 |
| 1.6288 | 83.0 | 1079 | 1.6691 | 0.4 | 0.7031 | 4.5448 | 0.4000 | 0.2570 | 0.2845 | 0.3756 |
| 1.6288 | 84.0 | 1092 | 1.6544 | 0.39 | 0.6946 | 4.6295 | 0.39 | 0.2357 | 0.2522 | 0.3806 |
| 1.6288 | 85.0 | 1105 | 1.6592 | 0.395 | 0.6983 | 4.4632 | 0.395 | 0.2515 | 0.2793 | 0.3815 |
| 1.6288 | 86.0 | 1118 | 1.6526 | 0.4 | 0.6945 | 4.5685 | 0.4000 | 0.2579 | 0.2527 | 0.3781 |
| 1.6288 | 87.0 | 1131 | 1.6558 | 0.4 | 0.6968 | 4.5767 | 0.4000 | 0.2623 | 0.2435 | 0.3804 |
| 1.6288 | 88.0 | 1144 | 1.6507 | 0.395 | 0.6961 | 4.5355 | 0.395 | 0.2390 | 0.2554 | 0.3710 |
| 1.6288 | 89.0 | 1157 | 1.6462 | 0.4 | 0.6941 | 4.5278 | 0.4000 | 0.2525 | 0.2406 | 0.3704 |
| 1.6288 | 90.0 | 1170 | 1.6490 | 0.39 | 0.6954 | 4.5513 | 0.39 | 0.2430 | 0.2497 | 0.3700 |
| 1.6288 | 91.0 | 1183 | 1.6568 | 0.405 | 0.6980 | 4.5792 | 0.405 | 0.2545 | 0.2584 | 0.3675 |
| 1.6288 | 92.0 | 1196 | 1.6421 | 0.41 | 0.6909 | 4.5731 | 0.41 | 0.2666 | 0.2527 | 0.3609 |
| 1.6288 | 93.0 | 1209 | 1.6489 | 0.405 | 0.6952 | 4.3408 | 0.405 | 0.2695 | 0.2738 | 0.3716 |
| 1.6288 | 94.0 | 1222 | 1.6440 | 0.41 | 0.6933 | 4.3845 | 0.41 | 0.2713 | 0.2629 | 0.3619 |
| 1.6288 | 95.0 | 1235 | 1.6411 | 0.435 | 0.6919 | 4.4244 | 0.435 | 0.2878 | 0.2634 | 0.3516 |
| 1.6288 | 96.0 | 1248 | 1.6391 | 0.41 | 0.6918 | 4.4251 | 0.41 | 0.2628 | 0.2655 | 0.3743 |
| 1.6288 | 97.0 | 1261 | 1.6341 | 0.42 | 0.6893 | 4.4415 | 0.4200 | 0.2761 | 0.2549 | 0.3598 |
| 1.6288 | 98.0 | 1274 | 1.6476 | 0.415 | 0.6952 | 4.5149 | 0.415 | 0.2778 | 0.2385 | 0.3639 |
| 1.6288 | 99.0 | 1287 | 1.6463 | 0.42 | 0.6939 | 4.5027 | 0.4200 | 0.2792 | 0.2806 | 0.3593 |
| 1.6288 | 100.0 | 1300 | 1.6332 | 0.435 | 0.6886 | 4.4967 | 0.435 | 0.2876 | 0.2482 | 0.3432 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
bdpc/resnet101_rvl-cdip
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip
This model is a fine-tuned version of [microsoft/resnet-101](https://huggingface.co/microsoft/resnet-101) on an unspecified dataset.
It achieves the following results on the evaluation set (a sketch of the calibration metrics follows the list):
- Loss: 0.6158
- Accuracy: 0.8210
- Brier Loss: 0.2556
- Nll: 1.7696
- F1 Micro: 0.8210
- F1 Macro: 0.8209
- Ece: 0.0176
- Aurc: 0.0418
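Brier loss and ECE are calibration metrics that the card reports but does not define. The sketch below is a minimal illustration of one standard way to compute both from softmax probabilities; it is not the evaluation script actually used here, and the bin count is an assumption.

```python
import numpy as np

def brier_score(probs: np.ndarray, labels: np.ndarray) -> float:
    # Mean squared distance between the predicted distribution and the one-hot label.
    onehot = np.zeros_like(probs)
    onehot[np.arange(len(labels)), labels] = 1.0
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray,
                               n_bins: int = 15) -> float:
    # Bin samples by top-1 confidence; average |accuracy - confidence| per bin,
    # weighted by the fraction of samples falling in that bin.
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)
```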
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
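As a hedged reconstruction only, the list above maps onto `transformers` `TrainingArguments` roughly as follows; the `output_dir` is a placeholder, and the Adam betas/epsilon listed in the card are already the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="resnet101_rvl-cdip",  # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```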
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 1.3521 | 1.0 | 5000 | 1.2626 | 0.6133 | 0.5108 | 2.7262 | 0.6133 | 0.6042 | 0.0455 | 0.1644 |
| 0.942 | 2.0 | 10000 | 0.9005 | 0.7318 | 0.3723 | 2.2139 | 0.7318 | 0.7293 | 0.0174 | 0.0862 |
| 0.7983 | 3.0 | 15000 | 0.7691 | 0.7723 | 0.3198 | 2.0444 | 0.7723 | 0.7714 | 0.0139 | 0.0641 |
| 0.7167 | 4.0 | 20000 | 0.7048 | 0.7924 | 0.2931 | 1.9414 | 0.7924 | 0.7931 | 0.0135 | 0.0541 |
| 0.6656 | 5.0 | 25000 | 0.6658 | 0.8052 | 0.2770 | 1.8581 | 0.8052 | 0.8056 | 0.0108 | 0.0486 |
| 0.6252 | 6.0 | 30000 | 0.6415 | 0.8117 | 0.2670 | 1.8157 | 0.8117 | 0.8112 | 0.0128 | 0.0455 |
| 0.6038 | 7.0 | 35000 | 0.6269 | 0.8176 | 0.2607 | 1.7833 | 0.8176 | 0.8180 | 0.0144 | 0.0432 |
| 0.5784 | 8.0 | 40000 | 0.6217 | 0.8195 | 0.2583 | 1.7723 | 0.8195 | 0.8195 | 0.0151 | 0.0425 |
| 0.5583 | 9.0 | 45000 | 0.6150 | 0.8214 | 0.2553 | 1.7719 | 0.8214 | 0.8214 | 0.0164 | 0.0415 |
| 0.5519 | 10.0 | 50000 | 0.6158 | 0.8210 | 0.2556 | 1.7696 | 0.8210 | 0.8209 | 0.0176 | 0.0418 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"letter",
"form",
"email",
"handwritten",
"advertisement",
"scientific_report",
"scientific_publication",
"specification",
"file_folder",
"news_article",
"budget",
"invoice",
"presentation",
"questionnaire",
"resume",
"memo"
] |
frncscp/dinotron
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dinotron
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0265
- Accuracy: 0.9932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a gradient-accumulation sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
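Note that the effective batch size of 128 comes from gradient accumulation: 32 samples per forward pass times 4 accumulated steps. A minimal, self-contained sketch of the mechanism (the tiny model and random data are stand-ins, not the DINOv2 fine-tune itself):

```python
import torch
from torch import nn

model = nn.Linear(16, 2)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
loader = [(torch.randn(32, 16), torch.randint(0, 2, (32,))) for _ in range(8)]
accum_steps = 4  # 32 samples/step * 4 steps = 128 effective batch, as in the card

for step, (x, y) in enumerate(loader):
    loss = nn.functional.cross_entropy(model(x), y)
    (loss / accum_steps).backward()  # scale so accumulated gradients average
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```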
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.1146 | 0.9638 |
| 0.3773 | 2.0 | 14 | 0.0336 | 0.9932 |
| 0.0541 | 3.0 | 21 | 0.0402 | 0.9887 |
| 0.0541 | 4.0 | 28 | 0.0463 | 0.9887 |
| 0.0476 | 5.0 | 35 | 0.0594 | 0.9819 |
| 0.1408 | 6.0 | 42 | 0.1296 | 0.9570 |
| 0.1408 | 7.0 | 49 | 0.0872 | 0.9729 |
| 0.0898 | 8.0 | 56 | 0.2245 | 0.9344 |
| 0.216 | 9.0 | 63 | 0.1444 | 0.9570 |
| 0.076 | 10.0 | 70 | 0.0316 | 0.9887 |
| 0.076 | 11.0 | 77 | 0.0411 | 0.9864 |
| 0.0369 | 12.0 | 84 | 0.0275 | 0.9887 |
| 0.0505 | 13.0 | 91 | 0.1610 | 0.9638 |
| 0.0505 | 14.0 | 98 | 0.0513 | 0.9910 |
| 0.0274 | 15.0 | 105 | 0.2366 | 0.9615 |
| 0.0735 | 16.0 | 112 | 0.0738 | 0.9796 |
| 0.0735 | 17.0 | 119 | 0.0529 | 0.9819 |
| 0.0334 | 18.0 | 126 | 0.1024 | 0.9661 |
| 0.0347 | 19.0 | 133 | 0.0919 | 0.9819 |
| 0.0206 | 20.0 | 140 | 0.0851 | 0.9864 |
| 0.0206 | 21.0 | 147 | 0.1004 | 0.9796 |
| 0.0516 | 22.0 | 154 | 0.1706 | 0.9638 |
| 0.0418 | 23.0 | 161 | 0.0505 | 0.9910 |
| 0.0418 | 24.0 | 168 | 0.0939 | 0.9774 |
| 0.0173 | 25.0 | 175 | 0.0553 | 0.9842 |
| 0.0239 | 26.0 | 182 | 0.1255 | 0.9796 |
| 0.0239 | 27.0 | 189 | 0.2256 | 0.9661 |
| 0.0286 | 28.0 | 196 | 0.0943 | 0.9751 |
| 0.0502 | 29.0 | 203 | 0.0937 | 0.9751 |
| 0.0102 | 30.0 | 210 | 0.0910 | 0.9842 |
| 0.0102 | 31.0 | 217 | 0.0336 | 0.9887 |
| 0.0182 | 32.0 | 224 | 0.0870 | 0.9796 |
| 0.0126 | 33.0 | 231 | 0.0565 | 0.9842 |
| 0.0126 | 34.0 | 238 | 0.0541 | 0.9842 |
| 0.0157 | 35.0 | 245 | 0.0591 | 0.9932 |
| 0.0059 | 36.0 | 252 | 0.0985 | 0.9819 |
| 0.0059 | 37.0 | 259 | 0.0813 | 0.9819 |
| 0.0092 | 38.0 | 266 | 0.0239 | 0.9955 |
| 0.0225 | 39.0 | 273 | 0.0982 | 0.9706 |
| 0.0105 | 40.0 | 280 | 0.0113 | 0.9955 |
| 0.0105 | 41.0 | 287 | 0.0127 | 0.9977 |
| 0.007 | 42.0 | 294 | 0.0760 | 0.9887 |
| 0.0032 | 43.0 | 301 | 0.0196 | 0.9932 |
| 0.0032 | 44.0 | 308 | 0.0171 | 0.9932 |
| 0.0206 | 45.0 | 315 | 0.0501 | 0.9910 |
| 0.0001 | 46.0 | 322 | 0.0925 | 0.9842 |
| 0.0001 | 47.0 | 329 | 0.0318 | 0.9910 |
| 0.0017 | 48.0 | 336 | 0.0612 | 0.9864 |
| 0.0023 | 49.0 | 343 | 0.0685 | 0.9864 |
| 0.0013 | 50.0 | 350 | 0.0265 | 0.9932 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"patacon-false",
"patacon-true"
] |
agustin228/pokemon_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pokemon_classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7861
- Accuracy: 0.8927
## Model description
More information needed
## Intended uses & limitations
More information needed
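As an illustration only (assuming the fine-tuned checkpoint is published under this repo id), inference could look like:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="agustin228/pokemon_classification")
print(classifier("some_pokemon.jpg"))  # path is a placeholder; returns label/score dicts
```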
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 240 | 2.0497 | 0.7542 |
| No log | 2.0 | 480 | 0.9561 | 0.8760 |
| 2.3345 | 3.0 | 720 | 0.7754 | 0.8917 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
[
"golbat",
"machoke",
"raichu",
"dragonite",
"fearow",
"slowpoke",
"weezing",
"beedrill",
"weedle",
"cloyster",
"vaporeon",
"gyarados",
"golduck",
"zapdos",
"machamp",
"hitmonlee",
"primeape",
"cubone",
"sandslash",
"scyther",
"haunter",
"metapod",
"tentacruel",
"aerodactyl",
"raticate",
"kabutops",
"ninetales",
"zubat",
"rhydon",
"mew",
"pinsir",
"ditto",
"victreebel",
"omanyte",
"horsea",
"magnemite",
"pikachu",
"blastoise",
"venomoth",
"charizard",
"seadra",
"muk",
"spearow",
"bulbasaur",
"bellsprout",
"electrode",
"ivysaur",
"gloom",
"poliwhirl",
"flareon",
"seaking",
"hypno",
"wartortle",
"mankey",
"tentacool",
"exeggcute",
"meowth",
"growlithe",
"tangela",
"drowzee",
"rapidash",
"venonat",
"omastar",
"pidgeot",
"nidorino",
"porygon",
"lickitung",
"rattata",
"machop",
"charmeleon",
"slowbro",
"parasect",
"eevee",
"diglett",
"starmie",
"staryu",
"psyduck",
"dragonair",
"magikarp",
"vileplume",
"marowak",
"pidgeotto",
"shellder",
"mewtwo",
"lapras",
"farfetchd",
"kingler",
"seel",
"kakuna",
"doduo",
"electabuzz",
"charmander",
"rhyhorn",
"tauros",
"dugtrio",
"kabuto",
"poliwrath",
"gengar",
"exeggutor",
"dewgong",
"jigglypuff",
"geodude",
"kadabra",
"nidorina",
"sandshrew",
"grimer",
"persian",
"mrmime",
"pidgey",
"koffing",
"ekans",
"alolan sandslash",
"venusaur",
"snorlax",
"paras",
"jynx",
"chansey",
"weepinbell",
"hitmonchan",
"gastly",
"kangaskhan",
"oddish",
"wigglytuff",
"graveler",
"arcanine",
"clefairy",
"articuno",
"poliwag",
"golem",
"abra",
"squirtle",
"voltorb",
"ponyta",
"moltres",
"nidoqueen",
"magmar",
"onix",
"vulpix",
"butterfree",
"dodrio",
"krabby",
"arbok",
"clefable",
"goldeen",
"magneton",
"dratini",
"caterpie",
"jolteon",
"nidoking",
"alakazam"
] |
stevanojs/my_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3033
- Accuracy: 0.7277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a scheduler sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
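With 175 optimizer steps per epoch and 10 epochs, the linear scheduler decays the learning rate from 2e-5 to 0 over 1750 steps. A sketch using the library's scheduler helper (the one-parameter optimizer is a stand-in; only the learning rate, step count, and scheduler type come from the card):

```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=175 * 10
)
scheduler.step()  # after one step the lr is 2e-5 * 1749/1750
print(optimizer.param_groups[0]["lr"])
```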
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.7973 | 1.0 | 175 | 4.2373 | 0.1537 |
| 3.3114 | 2.0 | 350 | 2.8087 | 0.4224 |
| 1.68 | 3.0 | 525 | 1.9823 | 0.5983 |
| 0.7776 | 4.0 | 700 | 1.6113 | 0.6648 |
| 0.3974 | 5.0 | 875 | 1.4166 | 0.6962 |
| 0.1666 | 6.0 | 1050 | 1.3312 | 0.7119 |
| 0.0657 | 7.0 | 1225 | 1.3033 | 0.7277 |
| 0.0315 | 8.0 | 1400 | 1.3021 | 0.7191 |
| 0.0187 | 9.0 | 1575 | 1.2946 | 0.7198 |
| 0.0146 | 10.0 | 1750 | 1.3018 | 0.7191 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |