| model_id (string, 7–105 chars) | model_card (string, 1–130k chars) | model_labels (list, 2–80k items) |
|---|---|---|
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV67
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-dmae-humeda-DAV67
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2829
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
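Pending a complete card, the sketch below shows one way to query the model. It assumes the checkpoint exposes a standard 🤗 image-classification head with the three class labels attached to this entry ("avanzada", "avanzada humeda", "no dmae"); the image path is a placeholder:
```python
# Hedged usage sketch (not from the model author). Assumes a standard
# image-classification head; the image path is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV67",
)

for pred in classifier("example_image.png"):  # placeholder path
    print(f"{pred['label']}: {pred['score']:.3f}")
```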
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45
- mixed_precision_training: Native AMP
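For reference, the list above maps onto 🤗 `TrainingArguments` roughly as follows (a sketch assuming a standard `Trainer` setup; `output_dir` is a placeholder and data loading is omitted because the card does not document it):
```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-tiny-dmae-humeda-DAV67",  # placeholder, not from the card
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # 16 x 4 = total train batch size of 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=45,
    fp16=True,  # "Native AMP" mixed precision
)
```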
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.103 | 1.0 | 18 | 1.0588 | 0.4 |
| 0.9007 | 2.0 | 36 | 0.5651 | 0.8229 |
| 0.497 | 3.0 | 54 | 0.3559 | 0.8743 |
| 0.4178 | 4.0 | 72 | 0.4171 | 0.7886 |
| 0.4303 | 5.0 | 90 | 0.6884 | 0.7314 |
| 0.485 | 6.0 | 108 | 0.3255 | 0.8629 |
| 0.3345 | 7.0 | 126 | 0.2631 | 0.9029 |
| 0.3058 | 8.0 | 144 | 0.3533 | 0.8343 |
| 0.35 | 9.0 | 162 | 0.2853 | 0.8686 |
| 0.2535 | 10.0 | 180 | 0.2529 | 0.9143 |
| 0.21 | 11.0 | 198 | 0.3806 | 0.84 |
| 0.2414 | 12.0 | 216 | 0.2829 | 0.92 |
| 0.1978 | 13.0 | 234 | 0.3011 | 0.9143 |
| 0.1683 | 14.0 | 252 | 0.2486 | 0.9086 |
| 0.2351 | 15.0 | 270 | 0.3612 | 0.8629 |
| 0.264 | 16.0 | 288 | 0.3643 | 0.88 |
| 0.1714 | 17.0 | 306 | 0.2481 | 0.9086 |
| 0.1714 | 18.0 | 324 | 0.3479 | 0.8914 |
| 0.1886 | 19.0 | 342 | 0.2644 | 0.9029 |
| 0.1522 | 20.0 | 360 | 0.2587 | 0.8971 |
| 0.1468 | 21.0 | 378 | 0.2832 | 0.9029 |
| 0.1364 | 22.0 | 396 | 0.2830 | 0.9029 |
| 0.1294 | 23.0 | 414 | 0.2954 | 0.8914 |
| 0.122 | 24.0 | 432 | 0.3801 | 0.8743 |
| 0.1114 | 25.0 | 450 | 0.3375 | 0.9029 |
| 0.12 | 26.0 | 468 | 0.3696 | 0.8743 |
| 0.121 | 27.0 | 486 | 0.3460 | 0.8857 |
| 0.1056 | 28.0 | 504 | 0.3296 | 0.9029 |
| 0.0941 | 29.0 | 522 | 0.3977 | 0.8971 |
| 0.0882 | 30.0 | 540 | 0.3436 | 0.8914 |
| 0.1131 | 31.0 | 558 | 0.3397 | 0.8914 |
| 0.0959 | 32.0 | 576 | 0.3420 | 0.8971 |
| 0.1058 | 33.0 | 594 | 0.3312 | 0.8971 |
| 0.0707 | 34.0 | 612 | 0.3917 | 0.8857 |
| 0.0885 | 35.0 | 630 | 0.3855 | 0.88 |
| 0.0788 | 36.0 | 648 | 0.3519 | 0.8857 |
| 0.112 | 37.0 | 666 | 0.3473 | 0.8914 |
| 0.0588 | 38.0 | 684 | 0.3718 | 0.9029 |
| 0.0963 | 39.0 | 702 | 0.4022 | 0.88 |
| 0.0681 | 40.0 | 720 | 0.3574 | 0.8971 |
| 0.0841 | 41.0 | 738 | 0.3621 | 0.88 |
| 0.0739 | 42.0 | 756 | 0.3782 | 0.8914 |
| 0.0649 | 42.5217 | 765 | 0.3766 | 0.8914 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"avanzada",
"avanzada humeda",
"no dmae"
] |
RobertoSonic/swinv2-tiny-patch4-window8-256-dmae-humeda-DAV68
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-dmae-humeda-DAV68
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2868
- Accuracy: 0.9314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.0651 | 1.0 | 18 | 1.0769 | 0.4686 |
| 0.9503 | 2.0 | 36 | 0.8111 | 0.7257 |
| 0.5745 | 3.0 | 54 | 0.4972 | 0.7314 |
| 0.4746 | 4.0 | 72 | 0.4788 | 0.7486 |
| 0.4363 | 5.0 | 90 | 0.5427 | 0.7314 |
| 0.4362 | 6.0 | 108 | 0.3581 | 0.8686 |
| 0.3476 | 7.0 | 126 | 0.3572 | 0.8686 |
| 0.3113 | 8.0 | 144 | 0.4335 | 0.7886 |
| 0.3943 | 9.0 | 162 | 0.2782 | 0.8686 |
| 0.2574 | 10.0 | 180 | 0.3320 | 0.8686 |
| 0.2345 | 11.0 | 198 | 0.4383 | 0.8343 |
| 0.3002 | 12.0 | 216 | 0.3053 | 0.8686 |
| 0.2038 | 13.0 | 234 | 0.3189 | 0.8743 |
| 0.2244 | 14.0 | 252 | 0.2766 | 0.8743 |
| 0.2277 | 15.0 | 270 | 0.2637 | 0.8857 |
| 0.2318 | 16.0 | 288 | 0.4612 | 0.8114 |
| 0.1908 | 17.0 | 306 | 0.3167 | 0.8857 |
| 0.1932 | 18.0 | 324 | 0.2949 | 0.9029 |
| 0.1676 | 19.0 | 342 | 0.2627 | 0.9086 |
| 0.1442 | 20.0 | 360 | 0.2584 | 0.9143 |
| 0.1606 | 21.0 | 378 | 0.2626 | 0.9143 |
| 0.1624 | 22.0 | 396 | 0.2351 | 0.9257 |
| 0.1735 | 23.0 | 414 | 0.2746 | 0.9257 |
| 0.1604 | 24.0 | 432 | 0.3237 | 0.8914 |
| 0.122 | 25.0 | 450 | 0.2852 | 0.8914 |
| 0.1447 | 26.0 | 468 | 0.2594 | 0.92 |
| 0.1265 | 27.0 | 486 | 0.2857 | 0.9029 |
| 0.1265 | 28.0 | 504 | 0.3238 | 0.8743 |
| 0.122 | 29.0 | 522 | 0.3029 | 0.8857 |
| 0.0929 | 30.0 | 540 | 0.2936 | 0.9029 |
| 0.1276 | 31.0 | 558 | 0.2777 | 0.9143 |
| 0.1118 | 32.0 | 576 | 0.2812 | 0.9143 |
| 0.1058 | 33.0 | 594 | 0.2925 | 0.92 |
| 0.0824 | 34.0 | 612 | 0.3519 | 0.8914 |
| 0.1084 | 35.0 | 630 | 0.2847 | 0.92 |
| 0.1074 | 36.0 | 648 | 0.2735 | 0.9143 |
| 0.1415 | 37.0 | 666 | 0.2724 | 0.9257 |
| 0.0702 | 38.0 | 684 | 0.2873 | 0.92 |
| 0.0987 | 39.0 | 702 | 0.2924 | 0.92 |
| 0.0637 | 40.0 | 720 | 0.2868 | 0.9314 |
| 0.1183 | 41.0 | 738 | 0.2892 | 0.92 |
| 0.096 | 42.0 | 756 | 0.2910 | 0.9143 |
| 0.0719 | 42.5217 | 765 | 0.2897 | 0.9143 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"avanzada",
"avanzada humeda",
"no dmae"
] |
prithivMLmods/Alphabet-Sign-Language-Detection
|

# **Alphabet-Sign-Language-Detection**
> **Alphabet-Sign-Language-Detection** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies images into **sign language alphabet** categories using the **SiglipForImageClassification** architecture.
```py
Classification Report:
              precision    recall  f1-score   support

           A     0.9995    1.0000    0.9998      4384
           B     1.0000    1.0000    1.0000      4441
           C     1.0000    1.0000    1.0000      3993
           D     1.0000    0.9998    0.9999      4940
           E     1.0000    1.0000    1.0000      4658
           F     1.0000    1.0000    1.0000      5750
           G     0.9992    0.9996    0.9994      4978
           H     1.0000    0.9979    0.9990      4807
           I     0.9992    1.0000    0.9996      4856
           J     1.0000    0.9996    0.9998      5227
           K     0.9972    1.0000    0.9986      5426
           L     1.0000    0.9998    0.9999      5089
           M     1.0000    0.9964    0.9982      3328
           N     0.9955    1.0000    0.9977      2635
           O     0.9998    1.0000    0.9999      4564
           P     1.0000    0.9993    0.9996      4100
           Q     1.0000    1.0000    1.0000      4187
           R     0.9998    0.9984    0.9991      5122
           S     0.9998    0.9998    0.9998      5147
           T     1.0000    1.0000    1.0000      4722
           U     0.9984    0.9998    0.9991      5041
           V     1.0000    0.9984    0.9992      5116
           W     0.9998    1.0000    0.9999      4926
           X     1.0000    0.9995    0.9998      4387
           Y     1.0000    1.0000    1.0000      5185
           Z     0.9996    1.0000    0.9998      4760

    accuracy                         0.9996    121769
   macro avg     0.9995    0.9996    0.9995    121769
weighted avg     0.9996    0.9996    0.9996    121769
```

The model categorizes images into the following 26 classes:
- **Class 0:** "A"
- **Class 1:** "B"
- **Class 2:** "C"
- **Class 3:** "D"
- **Class 4:** "E"
- **Class 5:** "F"
- **Class 6:** "G"
- **Class 7:** "H"
- **Class 8:** "I"
- **Class 9:** "J"
- **Class 10:** "K"
- **Class 11:** "L"
- **Class 12:** "M"
- **Class 13:** "N"
- **Class 14:** "O"
- **Class 15:** "P"
- **Class 16:** "Q"
- **Class 17:** "R"
- **Class 18:** "S"
- **Class 19:** "T"
- **Class 20:** "U"
- **Class 21:** "V"
- **Class 22:** "W"
- **Class 23:** "X"
- **Class 24:** "Y"
- **Class 25:** "Z"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Alphabet-Sign-Language-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def sign_language_classification(image):
    """Predicts sign language alphabet category for an image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {
        "0": "A", "1": "B", "2": "C", "3": "D", "4": "E", "5": "F", "6": "G", "7": "H", "8": "I", "9": "J",
        "10": "K", "11": "L", "12": "M", "13": "N", "14": "O", "15": "P", "16": "Q", "17": "R", "18": "S", "19": "T",
        "20": "U", "21": "V", "22": "W", "23": "X", "24": "Y", "25": "Z"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=sign_language_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Alphabet Sign Language Detection",
    description="Upload an image to classify it into one of the 26 sign language alphabet categories."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Alphabet-Sign-Language-Detection** model is designed for sign language image classification. It helps categorize images of hand signs into predefined alphabet categories. Potential use cases include:
- **Sign Language Education:** Assisting learners in recognizing and practicing sign language alphabets.
- **Accessibility Enhancement:** Supporting applications that improve communication for the hearing impaired.
- **AI Research:** Advancing computer vision models in sign language recognition.
- **Gesture Recognition Systems:** Enabling interactive applications with real-time sign language detection.
|
[
"a",
"b",
"c",
"d",
"e",
"f",
"g",
"h",
"i",
"j",
"k",
"l",
"m",
"n",
"o",
"p",
"q",
"r",
"s",
"t",
"u",
"v",
"w",
"x",
"y",
"z"
] |
prithivMLmods/Fashion-Mnist-SigLIP2
|

# **Fashion-Mnist-SigLIP2**
> **Fashion-Mnist-SigLIP2** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies images into **Fashion-MNIST** categories using the **SiglipForImageClassification** architecture.

[*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features*](https://arxiv.org/pdf/2502.14786)
```py
Classification Report:
                precision    recall  f1-score   support

 T-shirt / top     0.8142    0.9147    0.8615      6000
       Trouser     0.9935    0.9870    0.9902      6000
      Pullover     0.8901    0.8610    0.8753      6000
         Dress     0.9098    0.9300    0.9198      6000
          Coat     0.8636    0.8865    0.8749      6000
        Sandal     0.9857    0.9847    0.9852      6000
         Shirt     0.8076    0.6962    0.7478      6000
       Sneaker     0.9663    0.9695    0.9679      6000
           Bag     0.9779    0.9805    0.9792      6000
    Ankle boot     0.9698    0.9700    0.9699      6000

      accuracy                         0.9180     60000
     macro avg     0.9179    0.9180    0.9172     60000
  weighted avg     0.9179    0.9180    0.9172     60000
```

The model categorizes images into the following 10 classes:
- **Class 0:** "T-shirt / top"
- **Class 1:** "Trouser"
- **Class 2:** "Pullover"
- **Class 3:** "Dress"
- **Class 4:** "Coat"
- **Class 5:** "Sandal"
- **Class 6:** "Shirt"
- **Class 7:** "Sneaker"
- **Class 8:** "Bag"
- **Class 9:** "Ankle boot"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Fashion-Mnist-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def fashion_mnist_classification(image):
    """Predicts fashion category for an image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {
        "0": "T-shirt / top", "1": "Trouser", "2": "Pullover", "3": "Dress", "4": "Coat",
        "5": "Sandal", "6": "Shirt", "7": "Sneaker", "8": "Bag", "9": "Ankle boot"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=fashion_mnist_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Fashion MNIST Classification Labels",
    description="Upload an image to classify it into one of the 10 Fashion-MNIST categories."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Fashion-Mnist-SigLIP2** model is designed for fashion image classification. It helps categorize clothing and footwear items into predefined Fashion-MNIST classes. Potential use cases include:
- **Fashion Recognition:** Classifying fashion images into common categories like shirts, sneakers, and dresses.
- **E-commerce Applications:** Assisting online retailers in organizing and tagging clothing items for better search and recommendations.
- **Automated Fashion Sorting:** Helping automated inventory management systems classify fashion items.
- **Educational Purposes:** Supporting AI and ML research in vision-based fashion classification models.
|
[
"t-shirt / top",
"trouser",
"pullover",
"dress",
"coat",
"sandal",
"shirt",
"sneaker",
"bag",
"ankle boot"
] |
Ivanrs/vit-base-kidney-stone-2-Jonathan_El-Beze_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-2-Jonathan_El-Beze_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5205
- Accuracy: 0.8642
- Precision: 0.8742
- Recall: 0.8642
- F1: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
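The Accuracy, Precision, Recall, and F1 columns below can be produced by a `compute_metrics` callback along the following lines (a sketch: `average="weighted"` is an assumption inferred from Recall matching Accuracy in every row, not something the card states):
```python
# Sketch of a compute_metrics callback yielding the four reported columns.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0  # assumed averaging mode
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```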
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3382 | 0.1667 | 100 | 0.7037 | 0.7592 | 0.8533 | 0.7592 | 0.7413 |
| 0.2441 | 0.3333 | 200 | 0.5509 | 0.8167 | 0.8354 | 0.8167 | 0.8179 |
| 0.1033 | 0.5 | 300 | 0.5433 | 0.8508 | 0.8663 | 0.8508 | 0.8492 |
| 0.0863 | 0.6667 | 400 | 0.5815 | 0.8104 | 0.8328 | 0.8104 | 0.7969 |
| 0.1032 | 0.8333 | 500 | 0.7683 | 0.7908 | 0.8394 | 0.7908 | 0.7771 |
| 0.0681 | 1.0 | 600 | 0.6216 | 0.8392 | 0.8451 | 0.8392 | 0.8393 |
| 0.0098 | 1.1667 | 700 | 0.8241 | 0.8087 | 0.8317 | 0.8087 | 0.8010 |
| 0.1486 | 1.3333 | 800 | 0.5205 | 0.8642 | 0.8742 | 0.8642 | 0.8636 |
| 0.0552 | 1.5 | 900 | 0.8228 | 0.8092 | 0.8290 | 0.8092 | 0.8074 |
| 0.1194 | 1.6667 | 1000 | 0.9466 | 0.7479 | 0.8266 | 0.7479 | 0.7067 |
| 0.1081 | 1.8333 | 1100 | 0.7999 | 0.8379 | 0.8586 | 0.8379 | 0.8334 |
| 0.0024 | 2.0 | 1200 | 0.8330 | 0.8438 | 0.8629 | 0.8438 | 0.8434 |
| 0.0799 | 2.1667 | 1300 | 0.7392 | 0.8588 | 0.8771 | 0.8588 | 0.8560 |
| 0.0018 | 2.3333 | 1400 | 0.9487 | 0.8158 | 0.8222 | 0.8158 | 0.8153 |
| 0.0052 | 2.5 | 1500 | 0.6795 | 0.8712 | 0.8739 | 0.8712 | 0.8678 |
| 0.0012 | 2.6667 | 1600 | 0.7281 | 0.8821 | 0.8859 | 0.8821 | 0.8812 |
| 0.0022 | 2.8333 | 1700 | 1.2392 | 0.795 | 0.7874 | 0.795 | 0.7857 |
| 0.0835 | 3.0 | 1800 | 1.0174 | 0.8163 | 0.8503 | 0.8163 | 0.8178 |
| 0.063 | 3.1667 | 1900 | 0.6986 | 0.8275 | 0.8288 | 0.8275 | 0.8258 |
| 0.0124 | 3.3333 | 2000 | 1.3449 | 0.7354 | 0.7889 | 0.7354 | 0.7215 |
| 0.0751 | 3.5 | 2100 | 0.9783 | 0.8292 | 0.8578 | 0.8292 | 0.8224 |
| 0.0089 | 3.6667 | 2200 | 0.6416 | 0.8871 | 0.8909 | 0.8871 | 0.8851 |
| 0.0833 | 3.8333 | 2300 | 0.9829 | 0.8025 | 0.8282 | 0.8025 | 0.8019 |
| 0.024 | 4.0 | 2400 | 0.7989 | 0.8508 | 0.8659 | 0.8508 | 0.8475 |
| 0.0221 | 4.1667 | 2500 | 0.6812 | 0.8842 | 0.8845 | 0.8842 | 0.8837 |
| 0.0005 | 4.3333 | 2600 | 0.9451 | 0.8429 | 0.8614 | 0.8429 | 0.8360 |
| 0.0005 | 4.5 | 2700 | 0.6669 | 0.8875 | 0.8882 | 0.8875 | 0.8865 |
| 0.0005 | 4.6667 | 2800 | 1.2303 | 0.8017 | 0.8330 | 0.8017 | 0.7984 |
| 0.0071 | 4.8333 | 2900 | 0.7767 | 0.8725 | 0.8790 | 0.8725 | 0.8725 |
| 0.1049 | 5.0 | 3000 | 0.7006 | 0.8646 | 0.8834 | 0.8646 | 0.8665 |
| 0.0761 | 5.1667 | 3100 | 0.7335 | 0.8892 | 0.8912 | 0.8892 | 0.8867 |
| 0.0007 | 5.3333 | 3200 | 0.6957 | 0.8867 | 0.8934 | 0.8867 | 0.8861 |
| 0.0006 | 5.5 | 3300 | 0.7774 | 0.8629 | 0.8739 | 0.8629 | 0.8637 |
| 0.0387 | 5.6667 | 3400 | 1.3677 | 0.7971 | 0.8275 | 0.7971 | 0.7944 |
| 0.0032 | 5.8333 | 3500 | 0.7322 | 0.8729 | 0.8836 | 0.8729 | 0.8710 |
| 0.0008 | 6.0 | 3600 | 0.9531 | 0.8517 | 0.8768 | 0.8517 | 0.8438 |
| 0.0014 | 6.1667 | 3700 | 0.8285 | 0.8654 | 0.8687 | 0.8654 | 0.8632 |
| 0.0004 | 6.3333 | 3800 | 0.7225 | 0.8875 | 0.8897 | 0.8875 | 0.8865 |
| 0.0009 | 6.5 | 3900 | 0.8248 | 0.87 | 0.8797 | 0.87 | 0.8705 |
| 0.0003 | 6.6667 | 4000 | 0.8972 | 0.8658 | 0.8805 | 0.8658 | 0.8665 |
| 0.0002 | 6.8333 | 4100 | 0.8997 | 0.8654 | 0.8800 | 0.8654 | 0.8662 |
| 0.0002 | 7.0 | 4200 | 0.8968 | 0.8667 | 0.8808 | 0.8667 | 0.8674 |
| 0.0002 | 7.1667 | 4300 | 0.8712 | 0.8725 | 0.8839 | 0.8725 | 0.8728 |
| 0.0002 | 7.3333 | 4400 | 0.8688 | 0.8838 | 0.8971 | 0.8838 | 0.8827 |
| 0.0002 | 7.5 | 4500 | 0.8917 | 0.8712 | 0.8818 | 0.8712 | 0.8686 |
| 0.0477 | 7.6667 | 4600 | 0.8017 | 0.8692 | 0.8832 | 0.8692 | 0.8703 |
| 0.0002 | 7.8333 | 4700 | 0.9936 | 0.85 | 0.8654 | 0.85 | 0.8445 |
| 0.0004 | 8.0 | 4800 | 0.9378 | 0.8396 | 0.8719 | 0.8396 | 0.8411 |
| 0.0007 | 8.1667 | 4900 | 1.2102 | 0.8013 | 0.8376 | 0.8013 | 0.7975 |
| 0.0004 | 8.3333 | 5000 | 0.7613 | 0.8883 | 0.9041 | 0.8883 | 0.8885 |
| 0.0005 | 8.5 | 5100 | 0.9156 | 0.8571 | 0.8821 | 0.8571 | 0.8573 |
| 0.0002 | 8.6667 | 5200 | 0.6973 | 0.8996 | 0.9065 | 0.8996 | 0.8969 |
| 0.0002 | 8.8333 | 5300 | 0.9252 | 0.8625 | 0.8938 | 0.8625 | 0.8636 |
| 0.0002 | 9.0 | 5400 | 0.7714 | 0.8854 | 0.9038 | 0.8854 | 0.8857 |
| 0.0001 | 9.1667 | 5500 | 0.7521 | 0.8892 | 0.9048 | 0.8892 | 0.8893 |
| 0.0002 | 9.3333 | 5600 | 0.7296 | 0.8971 | 0.9053 | 0.8971 | 0.8961 |
| 0.0002 | 9.5 | 5700 | 0.8592 | 0.8812 | 0.8882 | 0.8812 | 0.8807 |
| 0.027 | 9.6667 | 5800 | 1.0926 | 0.8346 | 0.8684 | 0.8346 | 0.8350 |
| 0.0002 | 9.8333 | 5900 | 0.8884 | 0.8654 | 0.8749 | 0.8654 | 0.8650 |
| 0.0255 | 10.0 | 6000 | 0.8784 | 0.8708 | 0.8809 | 0.8708 | 0.8704 |
| 0.0002 | 10.1667 | 6100 | 1.2491 | 0.7992 | 0.8409 | 0.7992 | 0.7816 |
| 0.0003 | 10.3333 | 6200 | 0.6981 | 0.8796 | 0.8850 | 0.8796 | 0.8776 |
| 0.0002 | 10.5 | 6300 | 0.8654 | 0.8725 | 0.8861 | 0.8725 | 0.8679 |
| 0.0002 | 10.6667 | 6400 | 0.5566 | 0.9012 | 0.9041 | 0.9012 | 0.8998 |
| 0.0002 | 10.8333 | 6500 | 0.6042 | 0.9025 | 0.9048 | 0.9025 | 0.9010 |
| 0.0002 | 11.0 | 6600 | 0.6078 | 0.9042 | 0.9062 | 0.9042 | 0.9027 |
| 0.0001 | 11.1667 | 6700 | 0.6105 | 0.9046 | 0.9066 | 0.9046 | 0.9030 |
| 0.0001 | 11.3333 | 6800 | 0.6138 | 0.9025 | 0.9047 | 0.9025 | 0.9010 |
| 0.0001 | 11.5 | 6900 | 0.6188 | 0.9025 | 0.9047 | 0.9025 | 0.9010 |
| 0.0001 | 11.6667 | 7000 | 0.6243 | 0.9017 | 0.9038 | 0.9017 | 0.9001 |
| 0.0001 | 11.8333 | 7100 | 0.6208 | 0.8992 | 0.9001 | 0.8992 | 0.8982 |
| 0.0067 | 12.0 | 7200 | 0.7476 | 0.8846 | 0.8948 | 0.8846 | 0.8835 |
| 0.0139 | 12.1667 | 7300 | 0.6116 | 0.9025 | 0.9042 | 0.9025 | 0.9013 |
| 0.0001 | 12.3333 | 7400 | 0.6976 | 0.8971 | 0.9053 | 0.8971 | 0.8962 |
| 0.0001 | 12.5 | 7500 | 0.7213 | 0.8946 | 0.9041 | 0.8946 | 0.8938 |
| 0.0001 | 12.6667 | 7600 | 0.7205 | 0.8954 | 0.9047 | 0.8954 | 0.8946 |
| 0.0001 | 12.8333 | 7700 | 0.6671 | 0.9029 | 0.9075 | 0.9029 | 0.9008 |
| 0.0001 | 13.0 | 7800 | 0.6448 | 0.9071 | 0.9130 | 0.9071 | 0.9059 |
| 0.0001 | 13.1667 | 7900 | 0.6449 | 0.9071 | 0.9130 | 0.9071 | 0.9059 |
| 0.0001 | 13.3333 | 8000 | 0.6453 | 0.9071 | 0.9130 | 0.9071 | 0.9059 |
| 0.0001 | 13.5 | 8100 | 0.6340 | 0.9087 | 0.9136 | 0.9087 | 0.9075 |
| 0.0001 | 13.6667 | 8200 | 0.6347 | 0.9087 | 0.9136 | 0.9087 | 0.9075 |
| 0.0001 | 13.8333 | 8300 | 0.6350 | 0.9092 | 0.9141 | 0.9092 | 0.9079 |
| 0.0001 | 14.0 | 8400 | 0.6355 | 0.9096 | 0.9144 | 0.9096 | 0.9084 |
| 0.0001 | 14.1667 | 8500 | 0.6358 | 0.9092 | 0.9139 | 0.9092 | 0.9080 |
| 0.0001 | 14.3333 | 8600 | 0.6360 | 0.9092 | 0.9139 | 0.9092 | 0.9080 |
| 0.0001 | 14.5 | 8700 | 0.6363 | 0.9092 | 0.9139 | 0.9092 | 0.9080 |
| 0.0001 | 14.6667 | 8800 | 0.6365 | 0.9096 | 0.9143 | 0.9096 | 0.9084 |
| 0.0001 | 14.8333 | 8900 | 0.6367 | 0.9096 | 0.9143 | 0.9096 | 0.9084 |
| 0.0001 | 15.0 | 9000 | 0.6369 | 0.9096 | 0.9143 | 0.9096 | 0.9084 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iiia",
"mix-subtype_iia",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-2-Jonathan_El-Beze_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-2-Jonathan_El-Beze_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1129
- Accuracy: 0.9708
- Precision: 0.9708
- Recall: 0.9708
- F1: 0.9708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2926 | 0.3333 | 100 | 0.6214 | 0.8408 | 0.8814 | 0.8408 | 0.8038 |
| 0.0637 | 0.6667 | 200 | 0.6714 | 0.8083 | 0.8903 | 0.8083 | 0.8003 |
| 0.058 | 1.0 | 300 | 1.0799 | 0.745 | 0.8358 | 0.745 | 0.7350 |
| 0.156 | 1.3333 | 400 | 1.1535 | 0.7142 | 0.8241 | 0.7142 | 0.6937 |
| 0.0075 | 1.6667 | 500 | 1.6682 | 0.6625 | 0.7947 | 0.6625 | 0.6207 |
| 0.0076 | 2.0 | 600 | 0.5363 | 0.8517 | 0.9048 | 0.8517 | 0.8568 |
| 0.0436 | 2.3333 | 700 | 0.1960 | 0.9558 | 0.9615 | 0.9558 | 0.9564 |
| 0.0019 | 2.6667 | 800 | 0.1241 | 0.975 | 0.9763 | 0.975 | 0.9746 |
| 0.0015 | 3.0 | 900 | 0.1129 | 0.9708 | 0.9708 | 0.9708 | 0.9708 |
| 0.0012 | 3.3333 | 1000 | 0.1154 | 0.9708 | 0.9708 | 0.9708 | 0.9708 |
| 0.001 | 3.6667 | 1100 | 0.1176 | 0.9717 | 0.9717 | 0.9717 | 0.9716 |
| 0.0009 | 4.0 | 1200 | 0.1204 | 0.9717 | 0.9717 | 0.9717 | 0.9717 |
| 0.0007 | 4.3333 | 1300 | 0.1223 | 0.9725 | 0.9725 | 0.9725 | 0.9725 |
| 0.0007 | 4.6667 | 1400 | 0.1246 | 0.9742 | 0.9742 | 0.9742 | 0.9742 |
| 0.0006 | 5.0 | 1500 | 0.1260 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0005 | 5.3333 | 1600 | 0.1281 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0005 | 5.6667 | 1700 | 0.1289 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0004 | 6.0 | 1800 | 0.1306 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0004 | 6.3333 | 1900 | 0.1321 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0004 | 6.6667 | 2000 | 0.1330 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0003 | 7.0 | 2100 | 0.1345 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0003 | 7.3333 | 2200 | 0.1357 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0003 | 7.6667 | 2300 | 0.1371 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0003 | 8.0 | 2400 | 0.1380 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0003 | 8.3333 | 2500 | 0.1392 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 8.6667 | 2600 | 0.1400 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 9.0 | 2700 | 0.1408 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 9.3333 | 2800 | 0.1417 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 9.6667 | 2900 | 0.1426 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 10.0 | 3000 | 0.1432 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 10.3333 | 3100 | 0.1441 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 10.6667 | 3200 | 0.1448 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 11.0 | 3300 | 0.1454 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 11.3333 | 3400 | 0.1460 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0002 | 11.6667 | 3500 | 0.1466 | 0.975 | 0.9751 | 0.975 | 0.9750 |
| 0.0001 | 12.0 | 3600 | 0.1471 | 0.9758 | 0.9760 | 0.9758 | 0.9759 |
| 0.0001 | 12.3333 | 3700 | 0.1476 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 12.6667 | 3800 | 0.1480 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 13.0 | 3900 | 0.1484 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 13.3333 | 4000 | 0.1487 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 13.6667 | 4100 | 0.1490 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 14.0 | 4200 | 0.1493 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 14.3333 | 4300 | 0.1494 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 14.6667 | 4400 | 0.1495 | 0.975 | 0.9752 | 0.975 | 0.9751 |
| 0.0001 | 15.0 | 4500 | 0.1496 | 0.975 | 0.9752 | 0.975 | 0.9751 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iiia",
"sec-subtype_iia",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-2-Jonathan_El-Beze_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-2-Jonathan_El-Beze_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5352
- Accuracy: 0.8542
- Precision: 0.8593
- Recall: 0.8542
- F1: 0.8516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3658 | 0.3333 | 100 | 0.7426 | 0.7017 | 0.6844 | 0.7017 | 0.6699 |
| 0.3256 | 0.6667 | 200 | 0.7536 | 0.7608 | 0.8199 | 0.7608 | 0.7638 |
| 0.0727 | 1.0 | 300 | 0.5352 | 0.8542 | 0.8593 | 0.8542 | 0.8516 |
| 0.0553 | 1.3333 | 400 | 0.5903 | 0.8575 | 0.8636 | 0.8575 | 0.8547 |
| 0.116 | 1.6667 | 500 | 0.8102 | 0.8075 | 0.8478 | 0.8075 | 0.8036 |
| 0.1034 | 2.0 | 600 | 0.9591 | 0.79 | 0.8360 | 0.79 | 0.7929 |
| 0.0921 | 2.3333 | 700 | 1.0530 | 0.7917 | 0.8153 | 0.7917 | 0.7890 |
| 0.0845 | 2.6667 | 800 | 0.8513 | 0.81 | 0.8188 | 0.81 | 0.8074 |
| 0.0027 | 3.0 | 900 | 1.1166 | 0.7883 | 0.8020 | 0.7883 | 0.7852 |
| 0.0046 | 3.3333 | 1000 | 1.0594 | 0.8075 | 0.8496 | 0.8075 | 0.7994 |
| 0.1194 | 3.6667 | 1100 | 1.1294 | 0.7992 | 0.8259 | 0.7992 | 0.7985 |
| 0.0865 | 4.0 | 1200 | 1.0208 | 0.7908 | 0.8241 | 0.7908 | 0.7874 |
| 0.0015 | 4.3333 | 1300 | 0.6127 | 0.8783 | 0.8875 | 0.8783 | 0.8778 |
| 0.0086 | 4.6667 | 1400 | 0.9398 | 0.8383 | 0.8601 | 0.8383 | 0.8352 |
| 0.0016 | 5.0 | 1500 | 0.9671 | 0.835 | 0.8414 | 0.835 | 0.8361 |
| 0.0031 | 5.3333 | 1600 | 0.7669 | 0.8425 | 0.8480 | 0.8425 | 0.8379 |
| 0.0015 | 5.6667 | 1700 | 1.6634 | 0.7092 | 0.7774 | 0.7092 | 0.6878 |
| 0.0011 | 6.0 | 1800 | 0.9625 | 0.8517 | 0.8701 | 0.8517 | 0.8464 |
| 0.0015 | 6.3333 | 1900 | 0.9576 | 0.8392 | 0.8558 | 0.8392 | 0.8367 |
| 0.0009 | 6.6667 | 2000 | 0.9355 | 0.84 | 0.8615 | 0.84 | 0.8390 |
| 0.0629 | 7.0 | 2100 | 0.8580 | 0.8508 | 0.8527 | 0.8508 | 0.8490 |
| 0.0446 | 7.3333 | 2200 | 0.7906 | 0.8783 | 0.8798 | 0.8783 | 0.8759 |
| 0.0007 | 7.6667 | 2300 | 0.9514 | 0.8283 | 0.8405 | 0.8283 | 0.8258 |
| 0.0006 | 8.0 | 2400 | 1.0413 | 0.8317 | 0.8407 | 0.8317 | 0.8298 |
| 0.0006 | 8.3333 | 2500 | 1.0492 | 0.8342 | 0.8427 | 0.8342 | 0.8324 |
| 0.0478 | 8.6667 | 2600 | 0.7952 | 0.8667 | 0.8701 | 0.8667 | 0.8664 |
| 0.0006 | 9.0 | 2700 | 0.8355 | 0.8708 | 0.8827 | 0.8708 | 0.8689 |
| 0.0004 | 9.3333 | 2800 | 1.0021 | 0.8508 | 0.8675 | 0.8508 | 0.8501 |
| 0.0004 | 9.6667 | 2900 | 1.0899 | 0.84 | 0.8573 | 0.84 | 0.8378 |
| 0.0004 | 10.0 | 3000 | 0.9897 | 0.8533 | 0.8614 | 0.8533 | 0.8505 |
| 0.0007 | 10.3333 | 3100 | 1.4134 | 0.8008 | 0.8407 | 0.8008 | 0.7956 |
| 0.0004 | 10.6667 | 3200 | 1.2195 | 0.8225 | 0.8459 | 0.8225 | 0.8212 |
| 0.0003 | 11.0 | 3300 | 1.2032 | 0.8242 | 0.8459 | 0.8242 | 0.8230 |
| 0.0003 | 11.3333 | 3400 | 1.1995 | 0.8267 | 0.8479 | 0.8267 | 0.8255 |
| 0.0003 | 11.6667 | 3500 | 1.1979 | 0.825 | 0.8453 | 0.825 | 0.8239 |
| 0.0003 | 12.0 | 3600 | 1.1959 | 0.8258 | 0.8461 | 0.8258 | 0.8248 |
| 0.0003 | 12.3333 | 3700 | 1.1960 | 0.8275 | 0.8473 | 0.8275 | 0.8264 |
| 0.0003 | 12.6667 | 3800 | 1.1960 | 0.8275 | 0.8473 | 0.8275 | 0.8264 |
| 0.0003 | 13.0 | 3900 | 1.1972 | 0.8275 | 0.8473 | 0.8275 | 0.8264 |
| 0.0003 | 13.3333 | 4000 | 1.1986 | 0.8283 | 0.8479 | 0.8283 | 0.8273 |
| 0.0003 | 13.6667 | 4100 | 1.1993 | 0.8292 | 0.8484 | 0.8292 | 0.8280 |
| 0.0003 | 14.0 | 4200 | 1.1999 | 0.8292 | 0.8484 | 0.8292 | 0.8280 |
| 0.0002 | 14.3333 | 4300 | 1.2012 | 0.8292 | 0.8484 | 0.8292 | 0.8280 |
| 0.0002 | 14.6667 | 4400 | 1.2014 | 0.8292 | 0.8484 | 0.8292 | 0.8280 |
| 0.0002 | 15.0 | 4500 | 1.2016 | 0.8292 | 0.8484 | 0.8292 | 0.8280 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iiia",
"sur-subtype_iia",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-2-Michel_Daudon_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-2-Michel_Daudon_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5737
- Accuracy: 0.8158
- Precision: 0.8397
- Recall: 0.8158
- F1: 0.8059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3412 | 0.1667 | 100 | 0.5737 | 0.8158 | 0.8397 | 0.8158 | 0.8059 |
| 0.2476 | 0.3333 | 200 | 0.7298 | 0.7883 | 0.7944 | 0.7883 | 0.7866 |
| 0.3971 | 0.5 | 300 | 0.9254 | 0.7475 | 0.8222 | 0.7475 | 0.7476 |
| 0.2939 | 0.6667 | 400 | 0.7719 | 0.7854 | 0.8224 | 0.7854 | 0.7833 |
| 0.0961 | 0.8333 | 500 | 1.1358 | 0.7429 | 0.7665 | 0.7429 | 0.7448 |
| 0.238 | 1.0 | 600 | 0.8758 | 0.7904 | 0.8178 | 0.7904 | 0.7896 |
| 0.1902 | 1.1667 | 700 | 0.7430 | 0.8271 | 0.8554 | 0.8271 | 0.8101 |
| 0.0787 | 1.3333 | 800 | 0.5883 | 0.8525 | 0.8816 | 0.8525 | 0.8557 |
| 0.0381 | 1.5 | 900 | 0.7656 | 0.8204 | 0.8333 | 0.8204 | 0.8244 |
| 0.1304 | 1.6667 | 1000 | 0.7800 | 0.8275 | 0.8513 | 0.8275 | 0.8225 |
| 0.217 | 1.8333 | 1100 | 0.7208 | 0.83 | 0.8507 | 0.83 | 0.8323 |
| 0.0806 | 2.0 | 1200 | 0.9077 | 0.805 | 0.8299 | 0.805 | 0.8000 |
| 0.0387 | 2.1667 | 1300 | 0.8138 | 0.845 | 0.8725 | 0.845 | 0.8453 |
| 0.1055 | 2.3333 | 1400 | 0.7708 | 0.8283 | 0.8588 | 0.8283 | 0.8280 |
| 0.0429 | 2.5 | 1500 | 0.8968 | 0.8154 | 0.8358 | 0.8154 | 0.8175 |
| 0.198 | 2.6667 | 1600 | 0.9388 | 0.8237 | 0.8290 | 0.8237 | 0.8199 |
| 0.099 | 2.8333 | 1700 | 1.0072 | 0.8217 | 0.8562 | 0.8217 | 0.8151 |
| 0.0665 | 3.0 | 1800 | 0.8864 | 0.8054 | 0.8032 | 0.8054 | 0.7963 |
| 0.0573 | 3.1667 | 1900 | 0.9131 | 0.8196 | 0.8291 | 0.8196 | 0.8162 |
| 0.0028 | 3.3333 | 2000 | 0.7288 | 0.8588 | 0.8648 | 0.8588 | 0.8564 |
| 0.0016 | 3.5 | 2100 | 1.1735 | 0.785 | 0.8147 | 0.785 | 0.7910 |
| 0.004 | 3.6667 | 2200 | 0.9195 | 0.84 | 0.8724 | 0.84 | 0.8414 |
| 0.0013 | 3.8333 | 2300 | 0.8082 | 0.8483 | 0.8759 | 0.8483 | 0.8497 |
| 0.0141 | 4.0 | 2400 | 0.9805 | 0.8342 | 0.8719 | 0.8342 | 0.8321 |
| 0.0015 | 4.1667 | 2500 | 0.7858 | 0.8538 | 0.8766 | 0.8538 | 0.8557 |
| 0.0011 | 4.3333 | 2600 | 1.1658 | 0.8037 | 0.8268 | 0.8037 | 0.7992 |
| 0.0008 | 4.5 | 2700 | 0.9506 | 0.8562 | 0.8762 | 0.8562 | 0.8578 |
| 0.0429 | 4.6667 | 2800 | 0.9533 | 0.8458 | 0.8712 | 0.8458 | 0.8437 |
| 0.0014 | 4.8333 | 2900 | 1.0837 | 0.81 | 0.8275 | 0.81 | 0.8072 |
| 0.1233 | 5.0 | 3000 | 1.0915 | 0.8104 | 0.8363 | 0.8104 | 0.8123 |
| 0.004 | 5.1667 | 3100 | 0.8199 | 0.8421 | 0.8415 | 0.8421 | 0.8401 |
| 0.0012 | 5.3333 | 3200 | 0.9103 | 0.8496 | 0.8690 | 0.8496 | 0.8538 |
| 0.0009 | 5.5 | 3300 | 1.0330 | 0.84 | 0.8761 | 0.84 | 0.8448 |
| 0.001 | 5.6667 | 3400 | 1.0544 | 0.8379 | 0.8699 | 0.8379 | 0.8385 |
| 0.0006 | 5.8333 | 3500 | 0.9087 | 0.8542 | 0.8699 | 0.8542 | 0.8560 |
| 0.0465 | 6.0 | 3600 | 0.9690 | 0.8504 | 0.8530 | 0.8504 | 0.8471 |
| 0.0015 | 6.1667 | 3700 | 0.9574 | 0.8425 | 0.8561 | 0.8425 | 0.8385 |
| 0.0022 | 6.3333 | 3800 | 1.0041 | 0.8325 | 0.8584 | 0.8325 | 0.8324 |
| 0.0774 | 6.5 | 3900 | 1.1730 | 0.8079 | 0.8185 | 0.8079 | 0.8044 |
| 0.0024 | 6.6667 | 4000 | 1.1644 | 0.8179 | 0.8302 | 0.8179 | 0.8154 |
| 0.0005 | 6.8333 | 4100 | 1.0119 | 0.84 | 0.8419 | 0.84 | 0.8347 |
| 0.0004 | 7.0 | 4200 | 1.0782 | 0.8217 | 0.8278 | 0.8217 | 0.8222 |
| 0.0752 | 7.1667 | 4300 | 1.3249 | 0.8 | 0.8340 | 0.8 | 0.7931 |
| 0.0315 | 7.3333 | 4400 | 0.8367 | 0.8446 | 0.8556 | 0.8446 | 0.8455 |
| 0.002 | 7.5 | 4500 | 1.0440 | 0.8417 | 0.8638 | 0.8417 | 0.8408 |
| 0.0006 | 7.6667 | 4600 | 0.9891 | 0.8554 | 0.8557 | 0.8554 | 0.8518 |
| 0.0006 | 7.8333 | 4700 | 1.0665 | 0.8275 | 0.8457 | 0.8275 | 0.8255 |
| 0.0005 | 8.0 | 4800 | 1.0764 | 0.8308 | 0.8458 | 0.8308 | 0.8308 |
| 0.0004 | 8.1667 | 4900 | 1.0959 | 0.8292 | 0.8517 | 0.8292 | 0.8298 |
| 0.0003 | 8.3333 | 5000 | 1.0436 | 0.8442 | 0.8650 | 0.8442 | 0.8445 |
| 0.0355 | 8.5 | 5100 | 1.2265 | 0.8183 | 0.8401 | 0.8183 | 0.8074 |
| 0.0026 | 8.6667 | 5200 | 0.9908 | 0.8492 | 0.8567 | 0.8492 | 0.8431 |
| 0.0006 | 8.8333 | 5300 | 1.0108 | 0.8492 | 0.8758 | 0.8492 | 0.8510 |
| 0.0009 | 9.0 | 5400 | 1.0780 | 0.8258 | 0.8473 | 0.8258 | 0.8275 |
| 0.0003 | 9.1667 | 5500 | 0.8827 | 0.8538 | 0.8674 | 0.8538 | 0.8553 |
| 0.0009 | 9.3333 | 5600 | 0.8098 | 0.8792 | 0.8974 | 0.8792 | 0.8813 |
| 0.0003 | 9.5 | 5700 | 0.7615 | 0.8871 | 0.8989 | 0.8871 | 0.8870 |
| 0.0003 | 9.6667 | 5800 | 0.7723 | 0.8867 | 0.8978 | 0.8867 | 0.8865 |
| 0.0002 | 9.8333 | 5900 | 0.7841 | 0.8838 | 0.8949 | 0.8838 | 0.8837 |
| 0.0002 | 10.0 | 6000 | 0.7924 | 0.8833 | 0.8944 | 0.8833 | 0.8833 |
| 0.0002 | 10.1667 | 6100 | 0.7995 | 0.8838 | 0.8949 | 0.8838 | 0.8837 |
| 0.0002 | 10.3333 | 6200 | 0.8072 | 0.8829 | 0.8944 | 0.8829 | 0.8830 |
| 0.0002 | 10.5 | 6300 | 0.8127 | 0.8825 | 0.8942 | 0.8825 | 0.8826 |
| 0.0002 | 10.6667 | 6400 | 0.8188 | 0.8825 | 0.8940 | 0.8825 | 0.8826 |
| 0.0002 | 10.8333 | 6500 | 0.8247 | 0.8825 | 0.8940 | 0.8825 | 0.8826 |
| 0.0002 | 11.0 | 6600 | 0.8301 | 0.8821 | 0.8934 | 0.8821 | 0.8820 |
| 0.0002 | 11.1667 | 6700 | 0.8340 | 0.8821 | 0.8933 | 0.8821 | 0.8819 |
| 0.0001 | 11.3333 | 6800 | 0.8387 | 0.8821 | 0.8931 | 0.8821 | 0.8819 |
| 0.0001 | 11.5 | 6900 | 0.8439 | 0.8821 | 0.8931 | 0.8821 | 0.8819 |
| 0.0001 | 11.6667 | 7000 | 0.8475 | 0.8821 | 0.8934 | 0.8821 | 0.8820 |
| 0.0001 | 11.8333 | 7100 | 0.8511 | 0.8821 | 0.8935 | 0.8821 | 0.8821 |
| 0.0001 | 12.0 | 7200 | 0.8555 | 0.8817 | 0.8932 | 0.8817 | 0.8817 |
| 0.0001 | 12.1667 | 7300 | 0.8588 | 0.8817 | 0.8932 | 0.8817 | 0.8817 |
| 0.0001 | 12.3333 | 7400 | 0.8621 | 0.8817 | 0.8932 | 0.8817 | 0.8817 |
| 0.0001 | 12.5 | 7500 | 0.8649 | 0.8817 | 0.8935 | 0.8817 | 0.8817 |
| 0.0001 | 12.6667 | 7600 | 0.8681 | 0.8812 | 0.8933 | 0.8812 | 0.8814 |
| 0.0001 | 12.8333 | 7700 | 0.8708 | 0.8812 | 0.8933 | 0.8812 | 0.8814 |
| 0.0001 | 13.0 | 7800 | 0.8738 | 0.8812 | 0.8933 | 0.8812 | 0.8814 |
| 0.0001 | 13.1667 | 7900 | 0.8767 | 0.8812 | 0.8932 | 0.8812 | 0.8813 |
| 0.0001 | 13.3333 | 8000 | 0.8787 | 0.8808 | 0.8929 | 0.8808 | 0.8810 |
| 0.0001 | 13.5 | 8100 | 0.8809 | 0.8808 | 0.8929 | 0.8808 | 0.8810 |
| 0.0001 | 13.6667 | 8200 | 0.8830 | 0.8812 | 0.8934 | 0.8812 | 0.8814 |
| 0.0001 | 13.8333 | 8300 | 0.8847 | 0.8812 | 0.8934 | 0.8812 | 0.8814 |
| 0.0001 | 14.0 | 8400 | 0.8861 | 0.8812 | 0.8934 | 0.8812 | 0.8814 |
| 0.0001 | 14.1667 | 8500 | 0.8877 | 0.8812 | 0.8934 | 0.8812 | 0.8814 |
| 0.0001 | 14.3333 | 8600 | 0.8887 | 0.8812 | 0.8936 | 0.8812 | 0.8814 |
| 0.0001 | 14.5 | 8700 | 0.8896 | 0.8808 | 0.8933 | 0.8808 | 0.8811 |
| 0.0001 | 14.6667 | 8800 | 0.8903 | 0.8812 | 0.8937 | 0.8812 | 0.8816 |
| 0.0001 | 14.8333 | 8900 | 0.8907 | 0.8812 | 0.8937 | 0.8812 | 0.8816 |
| 0.0001 | 15.0 | 9000 | 0.8909 | 0.8812 | 0.8937 | 0.8812 | 0.8816 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iva",
"mix-subtype_iva2",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-2-Michel_Daudon_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-2-Michel_Daudon_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3420
- Accuracy: 0.9192
- Precision: 0.9216
- Recall: 0.9192
- F1: 0.9190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2755 | 0.3333 | 100 | 0.7287 | 0.7708 | 0.7925 | 0.7708 | 0.7574 |
| 0.1543 | 0.6667 | 200 | 0.4145 | 0.8708 | 0.8855 | 0.8708 | 0.8705 |
| 0.0739 | 1.0 | 300 | 0.5222 | 0.8467 | 0.8812 | 0.8467 | 0.8463 |
| 0.0491 | 1.3333 | 400 | 0.5282 | 0.8408 | 0.8582 | 0.8408 | 0.8427 |
| 0.0666 | 1.6667 | 500 | 0.6483 | 0.8592 | 0.8691 | 0.8592 | 0.8596 |
| 0.078 | 2.0 | 600 | 0.6382 | 0.8592 | 0.8602 | 0.8592 | 0.8580 |
| 0.011 | 2.3333 | 700 | 0.8982 | 0.8217 | 0.8582 | 0.8217 | 0.8191 |
| 0.0499 | 2.6667 | 800 | 0.8965 | 0.8475 | 0.8902 | 0.8475 | 0.8470 |
| 0.0035 | 3.0 | 900 | 0.8278 | 0.8392 | 0.8674 | 0.8392 | 0.8398 |
| 0.0707 | 3.3333 | 1000 | 0.3420 | 0.9192 | 0.9216 | 0.9192 | 0.9190 |
| 0.003 | 3.6667 | 1100 | 0.5066 | 0.88 | 0.8971 | 0.88 | 0.8810 |
| 0.0587 | 4.0 | 1200 | 0.6408 | 0.8817 | 0.8882 | 0.8817 | 0.8825 |
| 0.0018 | 4.3333 | 1300 | 0.6582 | 0.8692 | 0.8759 | 0.8692 | 0.8693 |
| 0.1528 | 4.6667 | 1400 | 0.6080 | 0.8758 | 0.9034 | 0.8758 | 0.8728 |
| 0.0266 | 5.0 | 1500 | 0.5895 | 0.8708 | 0.8943 | 0.8708 | 0.8688 |
| 0.0019 | 5.3333 | 1600 | 0.4804 | 0.8967 | 0.9022 | 0.8967 | 0.8966 |
| 0.0011 | 5.6667 | 1700 | 0.6821 | 0.885 | 0.8926 | 0.885 | 0.8813 |
| 0.0009 | 6.0 | 1800 | 0.6932 | 0.8683 | 0.8733 | 0.8683 | 0.8645 |
| 0.0299 | 6.3333 | 1900 | 0.7787 | 0.8667 | 0.8843 | 0.8667 | 0.8663 |
| 0.0007 | 6.6667 | 2000 | 0.5522 | 0.9042 | 0.9057 | 0.9042 | 0.9027 |
| 0.0007 | 7.0 | 2100 | 0.5208 | 0.9067 | 0.9096 | 0.9067 | 0.9072 |
| 0.0006 | 7.3333 | 2200 | 0.5342 | 0.905 | 0.9076 | 0.905 | 0.9053 |
| 0.0006 | 7.6667 | 2300 | 0.7917 | 0.8517 | 0.8734 | 0.8517 | 0.8516 |
| 0.0008 | 8.0 | 2400 | 0.9942 | 0.85 | 0.8666 | 0.85 | 0.8483 |
| 0.0005 | 8.3333 | 2500 | 0.7367 | 0.8842 | 0.8853 | 0.8842 | 0.8815 |
| 0.0075 | 8.6667 | 2600 | 0.6106 | 0.8833 | 0.8934 | 0.8833 | 0.8842 |
| 0.0007 | 9.0 | 2700 | 0.6440 | 0.8817 | 0.8837 | 0.8817 | 0.8781 |
| 0.0005 | 9.3333 | 2800 | 0.5905 | 0.905 | 0.9065 | 0.905 | 0.9047 |
| 0.0004 | 9.6667 | 2900 | 0.5889 | 0.9033 | 0.9046 | 0.9033 | 0.9030 |
| 0.0004 | 10.0 | 3000 | 0.7286 | 0.89 | 0.8981 | 0.89 | 0.8889 |
| 0.0003 | 10.3333 | 3100 | 0.8314 | 0.875 | 0.8883 | 0.875 | 0.8754 |
| 0.0003 | 10.6667 | 3200 | 0.7812 | 0.8808 | 0.8902 | 0.8808 | 0.8802 |
| 0.0003 | 11.0 | 3300 | 0.7806 | 0.8817 | 0.8908 | 0.8817 | 0.8811 |
| 0.0003 | 11.3333 | 3400 | 0.7808 | 0.8825 | 0.8910 | 0.8825 | 0.8821 |
| 0.0003 | 11.6667 | 3500 | 0.5853 | 0.9025 | 0.9026 | 0.9025 | 0.9023 |
| 0.0003 | 12.0 | 3600 | 0.8102 | 0.88 | 0.8876 | 0.88 | 0.8804 |
| 0.0003 | 12.3333 | 3700 | 0.8667 | 0.8742 | 0.8802 | 0.8742 | 0.8744 |
| 0.0003 | 12.6667 | 3800 | 0.8161 | 0.8783 | 0.8838 | 0.8783 | 0.8786 |
| 0.0003 | 13.0 | 3900 | 0.8035 | 0.88 | 0.8854 | 0.88 | 0.8803 |
| 0.0003 | 13.3333 | 4000 | 0.7989 | 0.88 | 0.8854 | 0.88 | 0.8803 |
| 0.0002 | 13.6667 | 4100 | 0.8006 | 0.88 | 0.8850 | 0.88 | 0.8803 |
| 0.0002 | 14.0 | 4200 | 0.8021 | 0.88 | 0.8850 | 0.88 | 0.8803 |
| 0.0002 | 14.3333 | 4300 | 0.8028 | 0.8808 | 0.8858 | 0.8808 | 0.8811 |
| 0.0002 | 14.6667 | 4400 | 0.8035 | 0.8808 | 0.8858 | 0.8808 | 0.8811 |
| 0.0002 | 15.0 | 4500 | 0.8036 | 0.8808 | 0.8858 | 0.8808 | 0.8811 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iva",
"sec-subtype_iva2",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
prithivMLmods/Traffic-Density-Classification
|

# **Traffic-Density-Classification**
> **Traffic-Density-Classification** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies images into **traffic density** categories using the **SiglipForImageClassification** architecture.
```py
Classification Report:
                precision    recall  f1-score   support

  high-traffic     0.8647    0.8410    0.8527       585
   low-traffic     0.8778    0.9485    0.9118      3803
medium-traffic     0.7785    0.6453    0.7057      1187
    no-traffic     0.8730    0.7292    0.7946       528

      accuracy                         0.8602      6103
     macro avg     0.8485    0.7910    0.8162      6103
  weighted avg     0.8568    0.8602    0.8559      6103
```

The model categorizes images into the following 4 classes:
- **Class 0:** "high-traffic"
- **Class 1:** "low-traffic"
- **Class 2:** "medium-traffic"
- **Class 3:** "no-traffic"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Traffic-Density-Classification"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def traffic_density_classification(image):
    """Predicts traffic density category for an image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {
        "0": "high-traffic", "1": "low-traffic", "2": "medium-traffic", "3": "no-traffic"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=traffic_density_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Traffic Density Classification",
    description="Upload an image to classify it into one of the 4 traffic density categories."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Traffic-Density-Classification** model is designed for traffic image classification. It helps categorize traffic density levels into predefined categories. Potential use cases include:
- **Traffic Monitoring:** Classifying images from traffic cameras to assess congestion levels.
- **Smart City Applications:** Assisting in traffic flow management and congestion reduction strategies.
- **Automated Traffic Analysis:** Helping transportation authorities analyze and optimize road usage.
- **AI Research:** Supporting computer vision-based traffic density classification models.
|
[
"high-traffic",
"low-traffic",
"medium-traffic",
"no-traffic"
] |
Ivanrs/vit-base-kidney-stone-2-Michel_Daudon_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-2-Michel_Daudon_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0040
- Accuracy: 0.6917
- Precision: 0.7078
- Recall: 0.6917
- F1: 0.6859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3876 | 0.3333 | 100 | 1.0040 | 0.6917 | 0.7078 | 0.6917 | 0.6859 |
| 0.1233 | 0.6667 | 200 | 1.0383 | 0.7416 | 0.7515 | 0.7416 | 0.7427 |
| 0.0709 | 1.0 | 300 | 1.3706 | 0.7294 | 0.7222 | 0.7294 | 0.7186 |
| 0.0379 | 1.3333 | 400 | 1.3745 | 0.7105 | 0.7178 | 0.7105 | 0.7045 |
| 0.0256 | 1.6667 | 500 | 1.1379 | 0.7939 | 0.8114 | 0.7939 | 0.7879 |
| 0.0722 | 2.0 | 600 | 1.6149 | 0.6966 | 0.7899 | 0.6966 | 0.6896 |
| 0.006 | 2.3333 | 700 | 1.2398 | 0.7351 | 0.7541 | 0.7351 | 0.7410 |
| 0.0055 | 2.6667 | 800 | 1.6718 | 0.6893 | 0.7319 | 0.6893 | 0.6792 |
| 0.0597 | 3.0 | 900 | 1.3485 | 0.7637 | 0.7550 | 0.7637 | 0.7530 |
| 0.0621 | 3.3333 | 1000 | 1.2455 | 0.7907 | 0.7990 | 0.7907 | 0.7801 |
| 0.049 | 3.6667 | 1100 | 1.3096 | 0.7841 | 0.7851 | 0.7841 | 0.7808 |
| 0.0023 | 4.0 | 1200 | 1.3507 | 0.7800 | 0.7836 | 0.7800 | 0.7802 |
| 0.0807 | 4.3333 | 1300 | 1.5510 | 0.7318 | 0.7666 | 0.7318 | 0.7421 |
| 0.0486 | 4.6667 | 1400 | 1.7065 | 0.6860 | 0.7611 | 0.6860 | 0.6799 |
| 0.0861 | 5.0 | 1500 | 1.2896 | 0.7702 | 0.7706 | 0.7702 | 0.7677 |
| 0.0046 | 5.3333 | 1600 | 1.4991 | 0.7473 | 0.7584 | 0.7473 | 0.7467 |
| 0.0015 | 5.6667 | 1700 | 1.5548 | 0.7539 | 0.7529 | 0.7539 | 0.7502 |
| 0.0117 | 6.0 | 1800 | 1.6813 | 0.7261 | 0.7456 | 0.7261 | 0.7325 |
| 0.0481 | 6.3333 | 1900 | 1.8190 | 0.7490 | 0.7836 | 0.7490 | 0.7511 |
| 0.0011 | 6.6667 | 2000 | 1.8774 | 0.6877 | 0.6960 | 0.6877 | 0.6881 |
| 0.0636 | 7.0 | 2100 | 1.8792 | 0.7204 | 0.7292 | 0.7204 | 0.7164 |
| 0.0183 | 7.3333 | 2200 | 1.7606 | 0.7596 | 0.8027 | 0.7596 | 0.7589 |
| 0.0023 | 7.6667 | 2300 | 1.4724 | 0.7449 | 0.7879 | 0.7449 | 0.7466 |
| 0.0007 | 8.0 | 2400 | 1.4367 | 0.7751 | 0.7979 | 0.7751 | 0.7740 |
| 0.0007 | 8.3333 | 2500 | 1.4553 | 0.7760 | 0.7965 | 0.7760 | 0.7749 |
| 0.0006 | 8.6667 | 2600 | 1.4727 | 0.7776 | 0.7982 | 0.7776 | 0.7767 |
| 0.0006 | 9.0 | 2700 | 1.4842 | 0.7768 | 0.7960 | 0.7768 | 0.7758 |
| 0.0005 | 9.3333 | 2800 | 1.4965 | 0.7776 | 0.7963 | 0.7776 | 0.7766 |
| 0.0005 | 9.6667 | 2900 | 1.5049 | 0.7792 | 0.7966 | 0.7792 | 0.7789 |
| 0.0005 | 10.0 | 3000 | 1.5151 | 0.7792 | 0.7966 | 0.7792 | 0.7789 |
| 0.0004 | 10.3333 | 3100 | 1.5238 | 0.7792 | 0.7958 | 0.7792 | 0.7792 |
| 0.0004 | 10.6667 | 3200 | 1.5329 | 0.7776 | 0.7932 | 0.7776 | 0.7775 |
| 0.0004 | 11.0 | 3300 | 1.5415 | 0.7760 | 0.7907 | 0.7760 | 0.7758 |
| 0.0004 | 11.3333 | 3400 | 1.5492 | 0.7743 | 0.7882 | 0.7743 | 0.7742 |
| 0.0003 | 11.6667 | 3500 | 1.5563 | 0.7735 | 0.7870 | 0.7735 | 0.7734 |
| 0.0003 | 12.0 | 3600 | 1.5631 | 0.7735 | 0.7870 | 0.7735 | 0.7734 |
| 0.0003 | 12.3333 | 3700 | 1.5691 | 0.7735 | 0.7870 | 0.7735 | 0.7734 |
| 0.0003 | 12.6667 | 3800 | 1.5742 | 0.7735 | 0.7870 | 0.7735 | 0.7734 |
| 0.0003 | 13.0 | 3900 | 1.5795 | 0.7743 | 0.7878 | 0.7743 | 0.7743 |
| 0.0003 | 13.3333 | 4000 | 1.5838 | 0.7743 | 0.7875 | 0.7743 | 0.7745 |
| 0.0003 | 13.6667 | 4100 | 1.5876 | 0.7727 | 0.7851 | 0.7727 | 0.7728 |
| 0.0003 | 14.0 | 4200 | 1.5903 | 0.7735 | 0.7858 | 0.7735 | 0.7737 |
| 0.0003 | 14.3333 | 4300 | 1.5926 | 0.7735 | 0.7858 | 0.7735 | 0.7737 |
| 0.0003 | 14.6667 | 4400 | 1.5938 | 0.7735 | 0.7858 | 0.7735 | 0.7737 |
| 0.0003 | 15.0 | 4500 | 1.5943 | 0.7735 | 0.7858 | 0.7735 | 0.7737 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iva",
"sur-subtype_iva2",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
Schwa456/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
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
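Until the author completes this section, one heavily hedged possibility, suggested only by the food-category label list attached to this entry, is a standard image-classification pipeline:
```python
# Purely speculative sketch: the card states no model type; the Food-101-style
# label list attached to this entry is the only hint. Verify before relying on it.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Schwa456/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",
)
print(classifier("example_dish.jpg"))  # placeholder image path
```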
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Schwa456/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6731
- Accuracy: 0.872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch, `OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
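A minimal sketch of how the hyperparameters above map onto 🤗 `TrainingArguments` (the `output_dir` is a placeholder, and the model/dataset wiring is omitted):
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="my_awesome_food_model",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```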
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7034 | 1.0 | 63 | 2.5287 | 0.818 |
| 1.8181 | 2.0 | 126 | 1.8146 | 0.852 |
| 1.5928 | 2.96 | 186 | 1.6731 | 0.872 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cpu
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
FatimaK6/vitModelV1
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
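In the absence of author-provided instructions, here is a minimal sketch, assuming `FatimaK6/vitModelV1` is a standard 🤗 image-classification checkpoint (the `example.jpg` path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "FatimaK6/vitModelV1"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Classify a single image (placeholder path).
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "reached" or "not reached"
```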
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"not reached",
"reached"
] |
prithivMLmods/Dog-Breed-120
|

# **Dog-Breed-120**
> **Dog-Breed-120** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies dog images into specific breed categories using the **SiglipForImageClassification** architecture.
> [!Note]
> Accuracy: 86.81%
```py
{'eval_loss': 0.49717578291893005,
'eval_model_preparation_time': 0.0042,
'eval_accuracy': 0.8681275679906085,
'eval_runtime': 146.2493,
'eval_samples_per_second': 69.894,
'eval_steps_per_second': 8.739,
'epoch': 7.0}
```
The model categorizes images into the following 121 classes (0-120):
- **Class 0:** "affenpinscher"
- **Class 1:** "afghan_hound"
- **Class 2:** "african_hunting_dog"
- **Class 3:** "airedale"
- **Class 4:** "american_staffordshire_terrier"
- **Class 5:** "appenzeller"
- **Class 6:** "australian_terrier"
- **Class 7:** "basenji"
- **Class 8:** "basset"
- **Class 9:** "beagle"
- **Class 10:** "bedlington_terrier"
- **Class 11:** "bernese_mountain_dog"
- **Class 12:** "black-and-tan_coonhound"
- **Class 13:** "blenheim_spaniel"
- **Class 14:** "bloodhound"
- **Class 15:** "bluetick"
- **Class 16:** "border_collie"
- **Class 17:** "border_terrier"
- **Class 18:** "borzoi"
- **Class 19:** "boston_bull"
- **Class 20:** "bouvier_des_flandres"
- **Class 21:** "boxer"
- **Class 22:** "brabancon_griffon"
- **Class 23:** "briard"
- **Class 24:** "brittany_spaniel"
- **Class 25:** "bull_mastiff"
- **Class 26:** "cairn"
- **Class 27:** "cardigan"
- **Class 28:** "chesapeake_bay_retriever"
- **Class 29:** "chihuahua"
- **Class 30:** "chow"
- **Class 31:** "clumber"
- **Class 32:** "cocker_spaniel"
- **Class 33:** "collie"
- **Class 34:** "curly-coated_retriever"
- **Class 35:** "dandie_dinmont"
- **Class 36:** "dhole"
- **Class 37:** "dingo"
- **Class 38:** "doberman"
- **Class 39:** "english_foxhound"
- **Class 40:** "english_setter"
- **Class 41:** "english_springer"
- **Class 42:** "entlebucher"
- **Class 43:** "eskimo_dog"
- **Class 44:** "flat-coated_retriever"
- **Class 45:** "french_bulldog"
- **Class 46:** "german_shepherd"
- **Class 47:** "german_short-haired_pointer"
- **Class 48:** "giant_schnauzer"
- **Class 49:** "golden_retriever"
- **Class 50:** "gordon_setter"
- **Class 51:** "great_dane"
- **Class 52:** "great_pyrenees"
- **Class 53:** "greater_swiss_mountain_dog"
- **Class 54:** "groenendael"
- **Class 55:** "ibizan_hound"
- **Class 56:** "irish_setter"
- **Class 57:** "irish_terrier"
- **Class 58:** "irish_water_spaniel"
- **Class 59:** "irish_wolfhound"
- **Class 60:** "italian_greyhound"
- **Class 61:** "japanese_spaniel"
- **Class 62:** "keeshond"
- **Class 63:** "kelpie"
- **Class 64:** "kerry_blue_terrier"
- **Class 65:** "komondor"
- **Class 66:** "kuvasz"
- **Class 67:** "labrador_retriever"
- **Class 68:** "lakeland_terrier"
- **Class 69:** "leonberg"
- **Class 70:** "lhasa"
- **Class 71:** "malamute"
- **Class 72:** "malinois"
- **Class 73:** "maltese_dog"
- **Class 74:** "mexican_hairless"
- **Class 75:** "miniature_pinscher"
- **Class 76:** "miniature_poodle"
- **Class 77:** "miniature_schnauzer"
- **Class 78:** "newfoundland"
- **Class 79:** "norfolk_terrier"
- **Class 80:** "norwegian_elkhound"
- **Class 81:** "norwich_terrier"
- **Class 82:** "old_english_sheepdog"
- **Class 83:** "otterhound"
- **Class 84:** "papillon"
- **Class 85:** "pekinese"
- **Class 86:** "pembroke"
- **Class 87:** "pomeranian"
- **Class 88:** "pug"
- **Class 89:** "redbone"
- **Class 90:** "rhodesian_ridgeback"
- **Class 91:** "rottweiler"
- **Class 92:** "saint_bernard"
- **Class 93:** "saluki"
- **Class 94:** "samoyed"
- **Class 95:** "schipperke"
- **Class 96:** "scotch_terrier"
- **Class 97:** "scottish_deerhound"
- **Class 98:** "sealyham_terrier"
- **Class 99:** "shetland_sheepdog"
- **Class 100:** "shih-tzu"
- **Class 101:** "siberian_husky"
- **Class 102:** "silky_terrier"
- **Class 103:** "soft-coated_wheaten_terrier"
- **Class 104:** "staffordshire_bullterrier"
- **Class 105:** "standard_poodle"
- **Class 106:** "standard_schnauzer"
- **Class 107:** "sussex_spaniel"
- **Class 108:** "test"
- **Class 109:** "tibetan_mastiff"
- **Class 110:** "tibetan_terrier"
- **Class 111:** "toy_poodle"
- **Class 112:** "toy_terrier"
- **Class 113:** "vizsla"
- **Class 114:** "walker_hound"
- **Class 115:** "weimaraner"
- **Class 116:** "welsh_springer_spaniel"
- **Class 117:** "west_highland_white_terrier"
- **Class 118:** "whippet"
- **Class 119:** "wire-haired_fox_terrier"
- **Class 120:** "yorkshire_terrier"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Dog-Breed-120"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def dog_breed_classification(image):
"""Predicts the dog breed for an image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "affenpinscher",
"1": "afghan_hound",
"2": "african_hunting_dog",
"3": "airedale",
"4": "american_staffordshire_terrier",
"5": "appenzeller",
"6": "australian_terrier",
"7": "basenji",
"8": "basset",
"9": "beagle",
"10": "bedlington_terrier",
"11": "bernese_mountain_dog",
"12": "black-and-tan_coonhound",
"13": "blenheim_spaniel",
"14": "bloodhound",
"15": "bluetick",
"16": "border_collie",
"17": "border_terrier",
"18": "borzoi",
"19": "boston_bull",
"20": "bouvier_des_flandres",
"21": "boxer",
"22": "brabancon_griffon",
"23": "briard",
"24": "brittany_spaniel",
"25": "bull_mastiff",
"26": "cairn",
"27": "cardigan",
"28": "chesapeake_bay_retriever",
"29": "chihuahua",
"30": "chow",
"31": "clumber",
"32": "cocker_spaniel",
"33": "collie",
"34": "curly-coated_retriever",
"35": "dandie_dinmont",
"36": "dhole",
"37": "dingo",
"38": "doberman",
"39": "english_foxhound",
"40": "english_setter",
"41": "english_springer",
"42": "entlebucher",
"43": "eskimo_dog",
"44": "flat-coated_retriever",
"45": "french_bulldog",
"46": "german_shepherd",
"47": "german_short-haired_pointer",
"48": "giant_schnauzer",
"49": "golden_retriever",
"50": "gordon_setter",
"51": "great_dane",
"52": "great_pyrenees",
"53": "greater_swiss_mountain_dog",
"54": "groenendael",
"55": "ibizan_hound",
"56": "irish_setter",
"57": "irish_terrier",
"58": "irish_water_spaniel",
"59": "irish_wolfhound",
"60": "italian_greyhound",
"61": "japanese_spaniel",
"62": "keeshond",
"63": "kelpie",
"64": "kerry_blue_terrier",
"65": "komondor",
"66": "kuvasz",
"67": "labrador_retriever",
"68": "lakeland_terrier",
"69": "leonberg",
"70": "lhasa",
"71": "malamute",
"72": "malinois",
"73": "maltese_dog",
"74": "mexican_hairless",
"75": "miniature_pinscher",
"76": "miniature_poodle",
"77": "miniature_schnauzer",
"78": "newfoundland",
"79": "norfolk_terrier",
"80": "norwegian_elkhound",
"81": "norwich_terrier",
"82": "old_english_sheepdog",
"83": "otterhound",
"84": "papillon",
"85": "pekinese",
"86": "pembroke",
"87": "pomeranian",
"88": "pug",
"89": "redbone",
"90": "rhodesian_ridgeback",
"91": "rottweiler",
"92": "saint_bernard",
"93": "saluki",
"94": "samoyed",
"95": "schipperke",
"96": "scotch_terrier",
"97": "scottish_deerhound",
"98": "sealyham_terrier",
"99": "shetland_sheepdog",
"100": "shih-tzu",
"101": "siberian_husky",
"102": "silky_terrier",
"103": "soft-coated_wheaten_terrier",
"104": "staffordshire_bullterrier",
"105": "standard_poodle",
"106": "standard_schnauzer",
"107": "sussex_spaniel",
"108": "test",
"109": "tibetan_mastiff",
"110": "tibetan_terrier",
"111": "toy_poodle",
"112": "toy_terrier",
"113": "vizsla",
"114": "walker_hound",
"115": "weimaraner",
"116": "welsh_springer_spaniel",
"117": "west_highland_white_terrier",
"118": "whippet",
"119": "wire-haired_fox_terrier",
"120": "yorkshire_terrier"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=dog_breed_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Dog Breed Classification",
description="Upload an image to classify it into one of the 121 dog breed categories."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
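For script-only inference without the Gradio app, a minimal sketch using the same checkpoint (the `dog.jpg` path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Dog-Breed-120"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

image = Image.open("dog.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Print the five highest-scoring classes.
top5 = torch.topk(logits.softmax(dim=-1).squeeze(), k=5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```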
# **Intended Use:**
The **Dog-Breed-120** model is designed for dog breed image classification. It categorizes dog images into the 121 classes listed above (120 dog breeds plus a spurious "test" class). Potential use cases include:
- **Pet Identification:** Assisting pet owners and veterinarians in identifying dog breeds.
- **Animal Research:** Supporting research in canine genetics and behavior studies.
- **E-commerce Applications:** Enhancing pet-related product recommendations and searches.
- **Educational Purposes:** Aiding in learning and teaching about various dog breeds.
|
[
"affenpinscher",
"afghan_hound",
"african_hunting_dog",
"airedale",
"american_staffordshire_terrier",
"appenzeller",
"australian_terrier",
"basenji",
"basset",
"beagle",
"bedlington_terrier",
"bernese_mountain_dog",
"black-and-tan_coonhound",
"blenheim_spaniel",
"bloodhound",
"bluetick",
"border_collie",
"border_terrier",
"borzoi",
"boston_bull",
"bouvier_des_flandres",
"boxer",
"brabancon_griffon",
"briard",
"brittany_spaniel",
"bull_mastiff",
"cairn",
"cardigan",
"chesapeake_bay_retriever",
"chihuahua",
"chow",
"clumber",
"cocker_spaniel",
"collie",
"curly-coated_retriever",
"dandie_dinmont",
"dhole",
"dingo",
"doberman",
"english_foxhound",
"english_setter",
"english_springer",
"entlebucher",
"eskimo_dog",
"flat-coated_retriever",
"french_bulldog",
"german_shepherd",
"german_short-haired_pointer",
"giant_schnauzer",
"golden_retriever",
"gordon_setter",
"great_dane",
"great_pyrenees",
"greater_swiss_mountain_dog",
"groenendael",
"ibizan_hound",
"irish_setter",
"irish_terrier",
"irish_water_spaniel",
"irish_wolfhound",
"italian_greyhound",
"japanese_spaniel",
"keeshond",
"kelpie",
"kerry_blue_terrier",
"komondor",
"kuvasz",
"labrador_retriever",
"lakeland_terrier",
"leonberg",
"lhasa",
"malamute",
"malinois",
"maltese_dog",
"mexican_hairless",
"miniature_pinscher",
"miniature_poodle",
"miniature_schnauzer",
"newfoundland",
"norfolk_terrier",
"norwegian_elkhound",
"norwich_terrier",
"old_english_sheepdog",
"otterhound",
"papillon",
"pekinese",
"pembroke",
"pomeranian",
"pug",
"redbone",
"rhodesian_ridgeback",
"rottweiler",
"saint_bernard",
"saluki",
"samoyed",
"schipperke",
"scotch_terrier",
"scottish_deerhound",
"sealyham_terrier",
"shetland_sheepdog",
"shih-tzu",
"siberian_husky",
"silky_terrier",
"soft-coated_wheaten_terrier",
"staffordshire_bullterrier",
"standard_poodle",
"standard_schnauzer",
"sussex_spaniel",
"test",
"tibetan_mastiff",
"tibetan_terrier",
"toy_poodle",
"toy_terrier",
"vizsla",
"walker_hound",
"weimaraner",
"welsh_springer_spaniel",
"west_highland_white_terrier",
"whippet",
"wire-haired_fox_terrier",
"yorkshire_terrier"
] |
prithivMLmods/BrainTumor-Classification-Mini
|

# **BrainTumor-Classification-Mini**
> **BrainTumor-Classification-Mini** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies brain tumor MRI images using the **SiglipForImageClassification** architecture.
```py
Classification Report:
              precision    recall  f1-score   support

    No Tumor     0.9975    0.9962    0.9969      1595
      Glioma     0.9872    0.9947    0.9910      1321
  Meningioma     0.9880    0.9821    0.9850      1339
   Pituitary     0.9931    0.9931    0.9931      1457

    accuracy                         0.9918      5712
   macro avg     0.9915    0.9915    0.9915      5712
weighted avg     0.9918    0.9918    0.9918      5712
```
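A report of this shape is typically produced with scikit-learn; a self-contained sketch (the `y_true`/`y_pred` arrays below are dummy stand-ins, not the model's actual predictions):
```python
from sklearn.metrics import classification_report

labels = ["No Tumor", "Glioma", "Meningioma", "Pituitary"]
y_true = [0, 1, 2, 3, 0, 1, 2, 3]  # dummy ground-truth class ids
y_pred = [0, 1, 2, 3, 0, 1, 2, 2]  # dummy predicted class ids
print(classification_report(y_true, y_pred, target_names=labels, digits=4))
```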

The model categorizes images into the following 4 classes:
- **Class 0:** "No Tumor"
- **Class 1:** "Glioma"
- **Class 2:** "Meningioma"
- **Class 3:** "Pituitary"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/BrainTumor-Classification-Mini"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def brain_tumor_classification(image):
"""Predicts brain tumor category for an image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "No Tumor", "1": "Glioma", "2": "Meningioma", "3": "Pituitary"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=brain_tumor_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Brain Tumor Classification",
description="Upload an image to classify it into one of the 4 brain tumor categories."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
# **Intended Use:**
The **BrainTumor-Classification-Mini** model is designed for brain tumor image classification. It helps categorize MRI images into predefined tumor types. Potential use cases include:
- **Medical Diagnosis Assistance:** Supporting radiologists in preliminary tumor classification.
- **AI-Assisted Healthcare:** Enhancing automated tumor detection in medical imaging.
- **Research & Development:** Facilitating studies in AI-driven medical imaging solutions.
- **Educational Purposes:** Helping students and professionals learn about tumor classification using AI.
|
[
"no tumor",
"glioma",
"meningioma",
"pituitary"
] |
Ivanrs/vit-base-kidney-stone-3-Jonathan_El-Beze_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-3-Jonathan_El-Beze_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4696
- Accuracy: 0.895
- Precision: 0.9027
- Recall: 0.895
- F1: 0.8932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4341 | 0.1667 | 100 | 0.6618 | 0.7542 | 0.8323 | 0.7542 | 0.7028 |
| 0.1842 | 0.3333 | 200 | 0.5375 | 0.8292 | 0.8571 | 0.8292 | 0.8250 |
| 0.1017 | 0.5 | 300 | 0.5146 | 0.8446 | 0.8707 | 0.8446 | 0.8440 |
| 0.1571 | 0.6667 | 400 | 0.6456 | 0.8213 | 0.8446 | 0.8213 | 0.8214 |
| 0.2427 | 0.8333 | 500 | 1.0066 | 0.7275 | 0.7704 | 0.7275 | 0.7065 |
| 0.0171 | 1.0 | 600 | 0.8354 | 0.7738 | 0.8158 | 0.7738 | 0.7607 |
| 0.0093 | 1.1667 | 700 | 0.5837 | 0.8558 | 0.8664 | 0.8558 | 0.8568 |
| 0.0892 | 1.3333 | 800 | 0.9045 | 0.7779 | 0.8225 | 0.7779 | 0.7605 |
| 0.0053 | 1.5 | 900 | 0.5252 | 0.8771 | 0.8890 | 0.8771 | 0.8744 |
| 0.0345 | 1.6667 | 1000 | 0.4696 | 0.895 | 0.9027 | 0.895 | 0.8932 |
| 0.1789 | 1.8333 | 1100 | 1.3185 | 0.7338 | 0.7993 | 0.7338 | 0.7002 |
| 0.0037 | 2.0 | 1200 | 0.9742 | 0.7746 | 0.8050 | 0.7746 | 0.7705 |
| 0.0034 | 2.1667 | 1300 | 0.5805 | 0.8704 | 0.8765 | 0.8704 | 0.8711 |
| 0.0026 | 2.3333 | 1400 | 0.8349 | 0.8346 | 0.8663 | 0.8346 | 0.8260 |
| 0.1052 | 2.5 | 1500 | 0.5899 | 0.8571 | 0.8584 | 0.8571 | 0.8566 |
| 0.1003 | 2.6667 | 1600 | 1.1080 | 0.7846 | 0.7992 | 0.7846 | 0.7588 |
| 0.0012 | 2.8333 | 1700 | 0.5852 | 0.885 | 0.8915 | 0.885 | 0.8845 |
| 0.0013 | 3.0 | 1800 | 1.4393 | 0.7429 | 0.8031 | 0.7429 | 0.7125 |
| 0.0499 | 3.1667 | 1900 | 0.9394 | 0.8067 | 0.8500 | 0.8067 | 0.7941 |
| 0.013 | 3.3333 | 2000 | 0.7218 | 0.8558 | 0.8681 | 0.8558 | 0.8488 |
| 0.0034 | 3.5 | 2100 | 0.8017 | 0.8467 | 0.8627 | 0.8467 | 0.8401 |
| 0.0084 | 3.6667 | 2200 | 0.6204 | 0.85 | 0.8566 | 0.85 | 0.8502 |
| 0.0009 | 3.8333 | 2300 | 0.6290 | 0.8788 | 0.8819 | 0.8788 | 0.8786 |
| 0.0076 | 4.0 | 2400 | 1.3498 | 0.7921 | 0.8431 | 0.7921 | 0.7847 |
| 0.0011 | 4.1667 | 2500 | 0.6609 | 0.8812 | 0.8936 | 0.8812 | 0.8813 |
| 0.0573 | 4.3333 | 2600 | 0.5998 | 0.8983 | 0.9000 | 0.8983 | 0.8974 |
| 0.0007 | 4.5 | 2700 | 0.9958 | 0.8158 | 0.8427 | 0.8158 | 0.8092 |
| 0.0011 | 4.6667 | 2800 | 0.7610 | 0.8775 | 0.8800 | 0.8775 | 0.8759 |
| 0.0014 | 4.8333 | 2900 | 0.9071 | 0.8538 | 0.8722 | 0.8538 | 0.8548 |
| 0.001 | 5.0 | 3000 | 0.9948 | 0.8258 | 0.8567 | 0.8258 | 0.8229 |
| 0.0377 | 5.1667 | 3100 | 0.8527 | 0.8525 | 0.8921 | 0.8525 | 0.8519 |
| 0.0008 | 5.3333 | 3200 | 1.0262 | 0.8225 | 0.8494 | 0.8225 | 0.8189 |
| 0.0006 | 5.5 | 3300 | 0.8837 | 0.8433 | 0.8668 | 0.8433 | 0.8389 |
| 0.0007 | 5.6667 | 3400 | 1.1268 | 0.8113 | 0.8290 | 0.8113 | 0.8061 |
| 0.0005 | 5.8333 | 3500 | 0.6874 | 0.89 | 0.8925 | 0.89 | 0.8898 |
| 0.0009 | 6.0 | 3600 | 0.6892 | 0.8742 | 0.8738 | 0.8742 | 0.8733 |
| 0.0006 | 6.1667 | 3700 | 0.5795 | 0.8812 | 0.8820 | 0.8812 | 0.8810 |
| 0.0009 | 6.3333 | 3800 | 1.6193 | 0.7342 | 0.7824 | 0.7342 | 0.7179 |
| 0.0007 | 6.5 | 3900 | 1.0575 | 0.835 | 0.8548 | 0.835 | 0.8268 |
| 0.0594 | 6.6667 | 4000 | 1.1842 | 0.7858 | 0.8102 | 0.7858 | 0.7794 |
| 0.0003 | 6.8333 | 4100 | 0.9934 | 0.8517 | 0.8720 | 0.8517 | 0.8469 |
| 0.1235 | 7.0 | 4200 | 0.9902 | 0.8183 | 0.8452 | 0.8183 | 0.8132 |
| 0.0007 | 7.1667 | 4300 | 0.8515 | 0.8604 | 0.8711 | 0.8604 | 0.8574 |
| 0.0005 | 7.3333 | 4400 | 0.6680 | 0.8929 | 0.9026 | 0.8929 | 0.8911 |
| 0.0003 | 7.5 | 4500 | 1.5196 | 0.7696 | 0.8260 | 0.7696 | 0.7366 |
| 0.0003 | 7.6667 | 4600 | 1.3149 | 0.7883 | 0.8369 | 0.7883 | 0.7865 |
| 0.0003 | 7.8333 | 4700 | 0.7309 | 0.8717 | 0.8818 | 0.8717 | 0.8710 |
| 0.0002 | 8.0 | 4800 | 0.8831 | 0.8638 | 0.8734 | 0.8638 | 0.8648 |
| 0.0002 | 8.1667 | 4900 | 1.1670 | 0.8133 | 0.8512 | 0.8133 | 0.8105 |
| 0.0003 | 8.3333 | 5000 | 0.6684 | 0.8979 | 0.9055 | 0.8979 | 0.8985 |
| 0.0002 | 8.5 | 5100 | 0.6811 | 0.8971 | 0.9046 | 0.8971 | 0.8977 |
| 0.0002 | 8.6667 | 5200 | 0.6814 | 0.8971 | 0.9044 | 0.8971 | 0.8977 |
| 0.0002 | 8.8333 | 5300 | 0.6898 | 0.8979 | 0.9059 | 0.8979 | 0.8986 |
| 0.0002 | 9.0 | 5400 | 0.6942 | 0.8992 | 0.9073 | 0.8992 | 0.8999 |
| 0.0002 | 9.1667 | 5500 | 0.6987 | 0.8992 | 0.9073 | 0.8992 | 0.8999 |
| 0.0002 | 9.3333 | 5600 | 0.7072 | 0.8992 | 0.9076 | 0.8992 | 0.8999 |
| 0.0001 | 9.5 | 5700 | 0.7091 | 0.8983 | 0.9066 | 0.8983 | 0.8990 |
| 0.0001 | 9.6667 | 5800 | 0.7138 | 0.8983 | 0.9067 | 0.8983 | 0.8990 |
| 0.0001 | 9.8333 | 5900 | 0.7185 | 0.8992 | 0.9074 | 0.8992 | 0.8998 |
| 0.0001 | 10.0 | 6000 | 0.7225 | 0.8992 | 0.9074 | 0.8992 | 0.8998 |
| 0.0001 | 10.1667 | 6100 | 0.7255 | 0.9 | 0.9082 | 0.9 | 0.9006 |
| 0.0001 | 10.3333 | 6200 | 0.7305 | 0.8992 | 0.9076 | 0.8992 | 0.8998 |
| 0.0001 | 10.5 | 6300 | 0.7354 | 0.8992 | 0.9076 | 0.8992 | 0.8998 |
| 0.0001 | 10.6667 | 6400 | 0.7386 | 0.8988 | 0.9072 | 0.8988 | 0.8995 |
| 0.0001 | 10.8333 | 6500 | 0.7436 | 0.8988 | 0.9072 | 0.8988 | 0.8995 |
| 0.0001 | 11.0 | 6600 | 0.7478 | 0.8983 | 0.9069 | 0.8983 | 0.8991 |
| 0.0001 | 11.1667 | 6700 | 0.7506 | 0.8983 | 0.9069 | 0.8983 | 0.8991 |
| 0.0001 | 11.3333 | 6800 | 0.7561 | 0.8979 | 0.9067 | 0.8979 | 0.8987 |
| 0.0001 | 11.5 | 6900 | 0.7599 | 0.8975 | 0.9062 | 0.8975 | 0.8983 |
| 0.0001 | 11.6667 | 7000 | 0.7634 | 0.8979 | 0.9067 | 0.8979 | 0.8987 |
| 0.0001 | 11.8333 | 7100 | 0.7652 | 0.8988 | 0.9074 | 0.8988 | 0.8995 |
| 0.0001 | 12.0 | 7200 | 0.7675 | 0.8988 | 0.9074 | 0.8988 | 0.8995 |
| 0.0001 | 12.1667 | 7300 | 0.7700 | 0.8988 | 0.9074 | 0.8988 | 0.8995 |
| 0.0001 | 12.3333 | 7400 | 0.7727 | 0.8988 | 0.9074 | 0.8988 | 0.8995 |
| 0.0001 | 12.5 | 7500 | 0.7764 | 0.8979 | 0.9069 | 0.8979 | 0.8987 |
| 0.0001 | 12.6667 | 7600 | 0.7793 | 0.8979 | 0.9069 | 0.8979 | 0.8987 |
| 0.0001 | 12.8333 | 7700 | 0.7809 | 0.8979 | 0.9069 | 0.8979 | 0.8987 |
| 0.0001 | 13.0 | 7800 | 0.7831 | 0.8979 | 0.9069 | 0.8979 | 0.8987 |
| 0.0001 | 13.1667 | 7900 | 0.7857 | 0.8979 | 0.9069 | 0.8979 | 0.8987 |
| 0.0001 | 13.3333 | 8000 | 0.7878 | 0.8979 | 0.9069 | 0.8979 | 0.8987 |
| 0.0001 | 13.5 | 8100 | 0.7895 | 0.8979 | 0.9070 | 0.8979 | 0.8986 |
| 0.0001 | 13.6667 | 8200 | 0.7910 | 0.8979 | 0.9070 | 0.8979 | 0.8986 |
| 0.0001 | 13.8333 | 8300 | 0.7926 | 0.8979 | 0.9070 | 0.8979 | 0.8986 |
| 0.0001 | 14.0 | 8400 | 0.7939 | 0.8979 | 0.9070 | 0.8979 | 0.8986 |
| 0.0001 | 14.1667 | 8500 | 0.7955 | 0.8979 | 0.9070 | 0.8979 | 0.8986 |
| 0.0001 | 14.3333 | 8600 | 0.7961 | 0.8979 | 0.9070 | 0.8979 | 0.8986 |
| 0.0001 | 14.5 | 8700 | 0.7970 | 0.8979 | 0.9070 | 0.8979 | 0.8986 |
| 0.0001 | 14.6667 | 8800 | 0.7977 | 0.8983 | 0.9076 | 0.8983 | 0.8991 |
| 0.0001 | 14.8333 | 8900 | 0.7982 | 0.8983 | 0.9076 | 0.8983 | 0.8991 |
| 0.0001 | 15.0 | 9000 | 0.7983 | 0.8983 | 0.9076 | 0.8983 | 0.8991 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iiia",
"mix-subtype_iia",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-3-Jonathan_El-Beze_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-3-Jonathan_El-Beze_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1421
- Accuracy: 0.97
- Precision: 0.9711
- Recall: 0.97
- F1: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1782 | 0.3333 | 100 | 1.5537 | 0.59 | 0.6419 | 0.59 | 0.5106 |
| 0.0982 | 0.6667 | 200 | 1.5012 | 0.6658 | 0.6563 | 0.6658 | 0.6262 |
| 0.1236 | 1.0 | 300 | 0.3710 | 0.895 | 0.9085 | 0.895 | 0.8958 |
| 0.0078 | 1.3333 | 400 | 1.4374 | 0.6992 | 0.7299 | 0.6992 | 0.6613 |
| 0.0049 | 1.6667 | 500 | 0.4037 | 0.9058 | 0.9181 | 0.9058 | 0.9064 |
| 0.0047 | 2.0 | 600 | 1.7908 | 0.675 | 0.7138 | 0.675 | 0.6297 |
| 0.0032 | 2.3333 | 700 | 1.1430 | 0.8233 | 0.8831 | 0.8233 | 0.7906 |
| 0.0027 | 2.6667 | 800 | 1.1627 | 0.735 | 0.8254 | 0.735 | 0.7005 |
| 0.0018 | 3.0 | 900 | 0.8254 | 0.8292 | 0.8864 | 0.8292 | 0.8050 |
| 0.0016 | 3.3333 | 1000 | 1.2364 | 0.7625 | 0.8527 | 0.7625 | 0.7462 |
| 0.0027 | 3.6667 | 1100 | 0.2785 | 0.9267 | 0.9359 | 0.9267 | 0.9271 |
| 0.001 | 4.0 | 1200 | 0.6703 | 0.8775 | 0.9013 | 0.8775 | 0.8784 |
| 0.001 | 4.3333 | 1300 | 0.8848 | 0.8458 | 0.8925 | 0.8458 | 0.8397 |
| 0.0009 | 4.6667 | 1400 | 0.3603 | 0.9183 | 0.9325 | 0.9183 | 0.9199 |
| 0.0007 | 5.0 | 1500 | 0.4274 | 0.9183 | 0.9325 | 0.9183 | 0.9144 |
| 0.0006 | 5.3333 | 1600 | 0.3995 | 0.9233 | 0.9368 | 0.9233 | 0.9200 |
| 0.0005 | 5.6667 | 1700 | 0.3866 | 0.9258 | 0.9384 | 0.9258 | 0.9229 |
| 0.0012 | 6.0 | 1800 | 0.5027 | 0.9083 | 0.9401 | 0.9083 | 0.9110 |
| 0.0004 | 6.3333 | 1900 | 0.1421 | 0.97 | 0.9711 | 0.97 | 0.9700 |
| 0.0004 | 6.6667 | 2000 | 0.1475 | 0.97 | 0.9713 | 0.97 | 0.9700 |
| 0.0004 | 7.0 | 2100 | 0.1484 | 0.9708 | 0.9720 | 0.9708 | 0.9709 |
| 0.0003 | 7.3333 | 2200 | 0.1502 | 0.97 | 0.9712 | 0.97 | 0.9700 |
| 0.0003 | 7.6667 | 2300 | 0.1530 | 0.97 | 0.9712 | 0.97 | 0.9700 |
| 0.0003 | 8.0 | 2400 | 0.1539 | 0.9708 | 0.9720 | 0.9708 | 0.9709 |
| 0.0003 | 8.3333 | 2500 | 0.1565 | 0.9708 | 0.9719 | 0.9708 | 0.9708 |
| 0.0003 | 8.6667 | 2600 | 0.1574 | 0.9708 | 0.9719 | 0.9708 | 0.9708 |
| 0.0002 | 9.0 | 2700 | 0.1592 | 0.9717 | 0.9727 | 0.9717 | 0.9717 |
| 0.0002 | 9.3333 | 2800 | 0.1610 | 0.9717 | 0.9727 | 0.9717 | 0.9717 |
| 0.0002 | 9.6667 | 2900 | 0.1626 | 0.9708 | 0.9719 | 0.9708 | 0.9708 |
| 0.0002 | 10.0 | 3000 | 0.1636 | 0.9708 | 0.9719 | 0.9708 | 0.9708 |
| 0.0002 | 10.3333 | 3100 | 0.1645 | 0.9708 | 0.9719 | 0.9708 | 0.9708 |
| 0.0002 | 10.6667 | 3200 | 0.1657 | 0.9708 | 0.9719 | 0.9708 | 0.9708 |
| 0.0002 | 11.0 | 3300 | 0.1669 | 0.9708 | 0.9719 | 0.9708 | 0.9708 |
| 0.0002 | 11.3333 | 3400 | 0.1682 | 0.97 | 0.9712 | 0.97 | 0.9700 |
| 0.0002 | 11.6667 | 3500 | 0.1691 | 0.97 | 0.9712 | 0.97 | 0.9700 |
| 0.0002 | 12.0 | 3600 | 0.1697 | 0.97 | 0.9712 | 0.97 | 0.9700 |
| 0.0002 | 12.3333 | 3700 | 0.1704 | 0.97 | 0.9712 | 0.97 | 0.9700 |
| 0.0002 | 12.6667 | 3800 | 0.1709 | 0.97 | 0.9712 | 0.97 | 0.9700 |
| 0.0001 | 13.0 | 3900 | 0.1715 | 0.9692 | 0.9704 | 0.9692 | 0.9692 |
| 0.0001 | 13.3333 | 4000 | 0.1721 | 0.9692 | 0.9704 | 0.9692 | 0.9692 |
| 0.0001 | 13.6667 | 4100 | 0.1727 | 0.9692 | 0.9704 | 0.9692 | 0.9692 |
| 0.0001 | 14.0 | 4200 | 0.1730 | 0.9692 | 0.9704 | 0.9692 | 0.9692 |
| 0.0001 | 14.3333 | 4300 | 0.1731 | 0.9692 | 0.9704 | 0.9692 | 0.9692 |
| 0.0001 | 14.6667 | 4400 | 0.1733 | 0.9692 | 0.9704 | 0.9692 | 0.9692 |
| 0.0001 | 15.0 | 4500 | 0.1734 | 0.9692 | 0.9704 | 0.9692 | 0.9692 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iiia",
"sec-subtype_iia",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-3-Jonathan_El-Beze_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-3-Jonathan_El-Beze_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5478
- Accuracy: 0.8875
- Precision: 0.8942
- Recall: 0.8875
- F1: 0.8875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3968 | 0.3333 | 100 | 0.7205 | 0.7083 | 0.7287 | 0.7083 | 0.6701 |
| 0.0922 | 0.6667 | 200 | 0.7700 | 0.7433 | 0.7885 | 0.7433 | 0.7336 |
| 0.216 | 1.0 | 300 | 0.7658 | 0.7875 | 0.8259 | 0.7875 | 0.7863 |
| 0.0292 | 1.3333 | 400 | 0.7448 | 0.7983 | 0.8228 | 0.7983 | 0.7899 |
| 0.0139 | 1.6667 | 500 | 0.7137 | 0.8433 | 0.8527 | 0.8433 | 0.8416 |
| 0.0841 | 2.0 | 600 | 0.6836 | 0.8608 | 0.8715 | 0.8608 | 0.8603 |
| 0.0769 | 2.3333 | 700 | 0.5478 | 0.8875 | 0.8942 | 0.8875 | 0.8875 |
| 0.0046 | 2.6667 | 800 | 0.8076 | 0.8308 | 0.8564 | 0.8308 | 0.8314 |
| 0.019 | 3.0 | 900 | 0.8791 | 0.8408 | 0.8617 | 0.8408 | 0.8297 |
| 0.0451 | 3.3333 | 1000 | 0.7948 | 0.8567 | 0.8578 | 0.8567 | 0.8549 |
| 0.0022 | 3.6667 | 1100 | 0.7782 | 0.8592 | 0.8610 | 0.8592 | 0.8592 |
| 0.1346 | 4.0 | 1200 | 2.1560 | 0.62 | 0.7251 | 0.62 | 0.5922 |
| 0.0825 | 4.3333 | 1300 | 0.8192 | 0.8317 | 0.8600 | 0.8317 | 0.8297 |
| 0.0035 | 4.6667 | 1400 | 0.9398 | 0.8325 | 0.8360 | 0.8325 | 0.8265 |
| 0.0015 | 5.0 | 1500 | 0.8447 | 0.8367 | 0.8504 | 0.8367 | 0.8321 |
| 0.0013 | 5.3333 | 1600 | 1.1910 | 0.765 | 0.7940 | 0.765 | 0.7562 |
| 0.0009 | 5.6667 | 1700 | 0.9889 | 0.8317 | 0.8360 | 0.8317 | 0.8288 |
| 0.009 | 6.0 | 1800 | 0.8982 | 0.8517 | 0.8577 | 0.8517 | 0.8497 |
| 0.0007 | 6.3333 | 1900 | 0.8245 | 0.8683 | 0.8690 | 0.8683 | 0.8659 |
| 0.0006 | 6.6667 | 2000 | 0.8204 | 0.8708 | 0.8718 | 0.8708 | 0.8686 |
| 0.001 | 7.0 | 2100 | 1.3166 | 0.8 | 0.7992 | 0.8 | 0.7964 |
| 0.0006 | 7.3333 | 2200 | 1.0597 | 0.8383 | 0.8440 | 0.8383 | 0.8306 |
| 0.001 | 7.6667 | 2300 | 0.8703 | 0.8617 | 0.8592 | 0.8617 | 0.8586 |
| 0.0005 | 8.0 | 2400 | 1.0801 | 0.835 | 0.8377 | 0.835 | 0.8334 |
| 0.0007 | 8.3333 | 2500 | 1.3133 | 0.7975 | 0.8092 | 0.7975 | 0.7974 |
| 0.0004 | 8.6667 | 2600 | 1.0982 | 0.845 | 0.8581 | 0.845 | 0.8420 |
| 0.0004 | 9.0 | 2700 | 0.9103 | 0.8575 | 0.8742 | 0.8575 | 0.8558 |
| 0.0003 | 9.3333 | 2800 | 0.9156 | 0.8517 | 0.8642 | 0.8517 | 0.8506 |
| 0.0003 | 9.6667 | 2900 | 0.9209 | 0.8517 | 0.8645 | 0.8517 | 0.8506 |
| 0.0003 | 10.0 | 3000 | 0.9283 | 0.8517 | 0.8645 | 0.8517 | 0.8506 |
| 0.0003 | 10.3333 | 3100 | 0.9326 | 0.8533 | 0.8658 | 0.8533 | 0.8524 |
| 0.0003 | 10.6667 | 3200 | 0.9352 | 0.8542 | 0.8664 | 0.8542 | 0.8531 |
| 0.0003 | 11.0 | 3300 | 0.9393 | 0.8533 | 0.8655 | 0.8533 | 0.8522 |
| 0.0003 | 11.3333 | 3400 | 0.9418 | 0.8558 | 0.8672 | 0.8558 | 0.8545 |
| 0.0002 | 11.6667 | 3500 | 0.9446 | 0.855 | 0.8662 | 0.855 | 0.8537 |
| 0.0002 | 12.0 | 3600 | 0.9476 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 12.3333 | 3700 | 0.9502 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 12.6667 | 3800 | 0.9523 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 13.0 | 3900 | 0.9538 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 13.3333 | 4000 | 0.9558 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 13.6667 | 4100 | 0.9572 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 14.0 | 4200 | 0.9584 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 14.3333 | 4300 | 0.9588 | 0.8567 | 0.8681 | 0.8567 | 0.8553 |
| 0.0002 | 14.6667 | 4400 | 0.9595 | 0.8558 | 0.8669 | 0.8558 | 0.8545 |
| 0.0002 | 15.0 | 4500 | 0.9597 | 0.8558 | 0.8669 | 0.8558 | 0.8545 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iiia",
"sur-subtype_iia",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-3-Michel_Daudon_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-3-Michel_Daudon_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6275
- Accuracy: 0.8742
- Precision: 0.8819
- Recall: 0.8742
- F1: 0.8750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3615 | 0.1667 | 100 | 0.7152 | 0.7458 | 0.8206 | 0.7458 | 0.7378 |
| 0.2196 | 0.3333 | 200 | 0.6798 | 0.775 | 0.8188 | 0.775 | 0.7769 |
| 0.2042 | 0.5 | 300 | 0.6700 | 0.7971 | 0.8383 | 0.7971 | 0.8057 |
| 0.177 | 0.6667 | 400 | 0.7327 | 0.8092 | 0.8387 | 0.8092 | 0.8142 |
| 0.2132 | 0.8333 | 500 | 0.7204 | 0.8054 | 0.8224 | 0.8054 | 0.8087 |
| 0.4081 | 1.0 | 600 | 0.8022 | 0.8067 | 0.8246 | 0.8067 | 0.8045 |
| 0.138 | 1.1667 | 700 | 0.7309 | 0.82 | 0.8416 | 0.82 | 0.8224 |
| 0.0145 | 1.3333 | 800 | 0.6764 | 0.8367 | 0.8514 | 0.8367 | 0.8408 |
| 0.0566 | 1.5 | 900 | 0.7420 | 0.8192 | 0.8387 | 0.8192 | 0.8223 |
| 0.0072 | 1.6667 | 1000 | 0.6850 | 0.8313 | 0.8399 | 0.8313 | 0.8328 |
| 0.0273 | 1.8333 | 1100 | 1.0173 | 0.8013 | 0.7947 | 0.8013 | 0.7908 |
| 0.0378 | 2.0 | 1200 | 0.7624 | 0.83 | 0.8341 | 0.83 | 0.8281 |
| 0.01 | 2.1667 | 1300 | 1.0041 | 0.7971 | 0.8459 | 0.7971 | 0.7972 |
| 0.2192 | 2.3333 | 1400 | 0.9177 | 0.81 | 0.8593 | 0.81 | 0.8109 |
| 0.045 | 2.5 | 1500 | 0.9214 | 0.8008 | 0.8468 | 0.8008 | 0.8065 |
| 0.0032 | 2.6667 | 1600 | 0.8712 | 0.8171 | 0.8436 | 0.8171 | 0.8208 |
| 0.134 | 2.8333 | 1700 | 0.9849 | 0.8129 | 0.8288 | 0.8129 | 0.8129 |
| 0.0571 | 3.0 | 1800 | 1.0024 | 0.8175 | 0.8620 | 0.8175 | 0.8214 |
| 0.0015 | 3.1667 | 1900 | 0.6275 | 0.8742 | 0.8819 | 0.8742 | 0.8750 |
| 0.0013 | 3.3333 | 2000 | 0.8558 | 0.84 | 0.8442 | 0.84 | 0.8409 |
| 0.1176 | 3.5 | 2100 | 0.9387 | 0.8379 | 0.8570 | 0.8379 | 0.8375 |
| 0.0081 | 3.6667 | 2200 | 1.3262 | 0.7858 | 0.8560 | 0.7858 | 0.7928 |
| 0.0012 | 3.8333 | 2300 | 1.2201 | 0.8033 | 0.8241 | 0.8033 | 0.8030 |
| 0.0018 | 4.0 | 2400 | 0.9460 | 0.8325 | 0.8694 | 0.8325 | 0.8389 |
| 0.0412 | 4.1667 | 2500 | 0.9619 | 0.8387 | 0.8617 | 0.8387 | 0.8425 |
| 0.0013 | 4.3333 | 2600 | 1.3212 | 0.8037 | 0.8370 | 0.8037 | 0.8037 |
| 0.011 | 4.5 | 2700 | 1.1590 | 0.8113 | 0.8201 | 0.8113 | 0.8085 |
| 0.0835 | 4.6667 | 2800 | 1.0838 | 0.8154 | 0.8495 | 0.8154 | 0.8194 |
| 0.162 | 4.8333 | 2900 | 1.1564 | 0.8071 | 0.8309 | 0.8071 | 0.8045 |
| 0.0013 | 5.0 | 3000 | 1.1460 | 0.785 | 0.8074 | 0.785 | 0.7915 |
| 0.0043 | 5.1667 | 3100 | 0.7268 | 0.8371 | 0.8578 | 0.8371 | 0.8383 |
| 0.0064 | 5.3333 | 3200 | 1.1635 | 0.8163 | 0.8599 | 0.8163 | 0.8171 |
| 0.0669 | 5.5 | 3300 | 1.1532 | 0.8008 | 0.8245 | 0.8008 | 0.8030 |
| 0.0009 | 5.6667 | 3400 | 0.9171 | 0.8342 | 0.8579 | 0.8342 | 0.8309 |
| 0.0307 | 5.8333 | 3500 | 1.0002 | 0.8333 | 0.8535 | 0.8333 | 0.8355 |
| 0.037 | 6.0 | 3600 | 1.1057 | 0.7979 | 0.8193 | 0.7979 | 0.8046 |
| 0.0008 | 6.1667 | 3700 | 0.9506 | 0.8342 | 0.8477 | 0.8342 | 0.8336 |
| 0.0039 | 6.3333 | 3800 | 0.9781 | 0.8317 | 0.8335 | 0.8317 | 0.8293 |
| 0.0006 | 6.5 | 3900 | 0.9525 | 0.8554 | 0.8659 | 0.8554 | 0.8510 |
| 0.0204 | 6.6667 | 4000 | 0.8203 | 0.8558 | 0.8536 | 0.8558 | 0.8535 |
| 0.0007 | 6.8333 | 4100 | 1.0635 | 0.8392 | 0.8640 | 0.8392 | 0.8346 |
| 0.0364 | 7.0 | 4200 | 0.8218 | 0.8508 | 0.8667 | 0.8508 | 0.8495 |
| 0.0011 | 7.1667 | 4300 | 1.1496 | 0.8217 | 0.8489 | 0.8217 | 0.8214 |
| 0.0754 | 7.3333 | 4400 | 0.7383 | 0.8521 | 0.8567 | 0.8521 | 0.8509 |
| 0.0007 | 7.5 | 4500 | 1.0083 | 0.8246 | 0.8397 | 0.8246 | 0.8216 |
| 0.0005 | 7.6667 | 4600 | 0.8850 | 0.8458 | 0.8587 | 0.8458 | 0.8456 |
| 0.0004 | 7.8333 | 4700 | 0.8987 | 0.8488 | 0.8621 | 0.8488 | 0.8483 |
| 0.0067 | 8.0 | 4800 | 0.8969 | 0.8421 | 0.8541 | 0.8421 | 0.8432 |
| 0.0003 | 8.1667 | 4900 | 1.1115 | 0.8171 | 0.8233 | 0.8171 | 0.8175 |
| 0.0002 | 8.3333 | 5000 | 1.1313 | 0.8154 | 0.8225 | 0.8154 | 0.8165 |
| 0.0004 | 8.5 | 5100 | 1.5668 | 0.8017 | 0.8439 | 0.8017 | 0.7970 |
| 0.0003 | 8.6667 | 5200 | 1.2458 | 0.8237 | 0.8579 | 0.8237 | 0.8247 |
| 0.0009 | 8.8333 | 5300 | 1.1443 | 0.815 | 0.8376 | 0.815 | 0.8158 |
| 0.0014 | 9.0 | 5400 | 1.3838 | 0.8092 | 0.8375 | 0.8092 | 0.8114 |
| 0.0554 | 9.1667 | 5500 | 1.2331 | 0.8108 | 0.8576 | 0.8108 | 0.8192 |
| 0.0003 | 9.3333 | 5600 | 0.9874 | 0.8504 | 0.8658 | 0.8504 | 0.8529 |
| 0.0003 | 9.5 | 5700 | 0.9882 | 0.8488 | 0.8602 | 0.8488 | 0.8514 |
| 0.0002 | 9.6667 | 5800 | 1.0519 | 0.8492 | 0.8653 | 0.8492 | 0.8524 |
| 0.0002 | 9.8333 | 5900 | 1.1310 | 0.8371 | 0.8587 | 0.8371 | 0.8414 |
| 0.0002 | 10.0 | 6000 | 1.1190 | 0.8333 | 0.8570 | 0.8333 | 0.8387 |
| 0.0002 | 10.1667 | 6100 | 1.1356 | 0.8333 | 0.8547 | 0.8333 | 0.8388 |
| 0.0002 | 10.3333 | 6200 | 1.2443 | 0.8279 | 0.8492 | 0.8279 | 0.8304 |
| 0.0002 | 10.5 | 6300 | 1.2286 | 0.8246 | 0.8534 | 0.8246 | 0.8304 |
| 0.0002 | 10.6667 | 6400 | 1.2313 | 0.8275 | 0.8508 | 0.8275 | 0.8319 |
| 0.0002 | 10.8333 | 6500 | 1.2065 | 0.8283 | 0.8377 | 0.8283 | 0.8289 |
| 0.0002 | 11.0 | 6600 | 1.3052 | 0.8046 | 0.8181 | 0.8046 | 0.8056 |
| 0.0001 | 11.1667 | 6700 | 1.2192 | 0.8233 | 0.8403 | 0.8233 | 0.8270 |
| 0.0002 | 11.3333 | 6800 | 1.2350 | 0.8233 | 0.8331 | 0.8233 | 0.8261 |
| 0.0013 | 11.5 | 6900 | 1.2510 | 0.8283 | 0.8474 | 0.8283 | 0.8317 |
| 0.004 | 11.6667 | 7000 | 1.4225 | 0.8075 | 0.8197 | 0.8075 | 0.8082 |
| 0.0002 | 11.8333 | 7100 | 1.5583 | 0.7904 | 0.8012 | 0.7904 | 0.7876 |
| 0.0003 | 12.0 | 7200 | 1.7201 | 0.7696 | 0.7996 | 0.7696 | 0.7696 |
| 0.0001 | 12.1667 | 7300 | 1.4283 | 0.8075 | 0.8297 | 0.8075 | 0.8113 |
| 0.0001 | 12.3333 | 7400 | 1.2310 | 0.8246 | 0.8425 | 0.8246 | 0.8280 |
| 0.0001 | 12.5 | 7500 | 1.2366 | 0.8279 | 0.8447 | 0.8279 | 0.8309 |
| 0.0002 | 12.6667 | 7600 | 1.2410 | 0.8279 | 0.8448 | 0.8279 | 0.8309 |
| 0.0001 | 12.8333 | 7700 | 1.2434 | 0.8287 | 0.8457 | 0.8287 | 0.8317 |
| 0.0001 | 13.0 | 7800 | 1.2539 | 0.8263 | 0.8438 | 0.8263 | 0.8293 |
| 0.0001 | 13.1667 | 7900 | 1.2479 | 0.8287 | 0.8444 | 0.8287 | 0.8313 |
| 0.0001 | 13.3333 | 8000 | 1.2510 | 0.8292 | 0.8449 | 0.8292 | 0.8317 |
| 0.0001 | 13.5 | 8100 | 1.2544 | 0.8296 | 0.8451 | 0.8296 | 0.8321 |
| 0.0001 | 13.6667 | 8200 | 1.2575 | 0.8296 | 0.8452 | 0.8296 | 0.8321 |
| 0.0001 | 13.8333 | 8300 | 1.2597 | 0.8296 | 0.8452 | 0.8296 | 0.8321 |
| 0.0001 | 14.0 | 8400 | 1.2618 | 0.8292 | 0.8447 | 0.8292 | 0.8316 |
| 0.0001 | 14.1667 | 8500 | 1.2632 | 0.8292 | 0.8447 | 0.8292 | 0.8316 |
| 0.0001 | 14.3333 | 8600 | 1.2651 | 0.8292 | 0.8447 | 0.8292 | 0.8316 |
| 0.0001 | 14.5 | 8700 | 1.2662 | 0.8292 | 0.8447 | 0.8292 | 0.8316 |
| 0.0001 | 14.6667 | 8800 | 1.2672 | 0.8292 | 0.8447 | 0.8292 | 0.8316 |
| 0.0001 | 14.8333 | 8900 | 1.2678 | 0.8292 | 0.8447 | 0.8292 | 0.8316 |
| 0.0001 | 15.0 | 9000 | 1.2680 | 0.8292 | 0.8447 | 0.8292 | 0.8316 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iva",
"mix-subtype_iva2",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-3-Michel_Daudon_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-3-Michel_Daudon_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4251
- Accuracy: 0.885
- Precision: 0.9079
- Recall: 0.885
- F1: 0.8879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2433 | 0.3333 | 100 | 0.6496 | 0.7967 | 0.8609 | 0.7967 | 0.7672 |
| 0.2097 | 0.6667 | 200 | 0.7346 | 0.7875 | 0.8299 | 0.7875 | 0.7848 |
| 0.1057 | 1.0 | 300 | 0.4491 | 0.8725 | 0.8916 | 0.8725 | 0.8719 |
| 0.0154 | 1.3333 | 400 | 0.6859 | 0.8508 | 0.8583 | 0.8508 | 0.8379 |
| 0.1202 | 1.6667 | 500 | 0.6336 | 0.8525 | 0.8773 | 0.8525 | 0.8478 |
| 0.0187 | 2.0 | 600 | 0.4251 | 0.885 | 0.9079 | 0.885 | 0.8879 |
| 0.0527 | 2.3333 | 700 | 0.6578 | 0.8533 | 0.8676 | 0.8533 | 0.8524 |
| 0.0191 | 2.6667 | 800 | 0.8956 | 0.8308 | 0.8736 | 0.8308 | 0.8306 |
| 0.0616 | 3.0 | 900 | 1.0589 | 0.8042 | 0.8572 | 0.8042 | 0.8088 |
| 0.0187 | 3.3333 | 1000 | 0.8005 | 0.8425 | 0.8624 | 0.8425 | 0.8383 |
| 0.0355 | 3.6667 | 1100 | 0.7664 | 0.865 | 0.8956 | 0.865 | 0.8614 |
| 0.0777 | 4.0 | 1200 | 0.9895 | 0.8158 | 0.8409 | 0.8158 | 0.8131 |
| 0.0017 | 4.3333 | 1300 | 0.5217 | 0.8983 | 0.9122 | 0.8983 | 0.8960 |
| 0.0013 | 4.6667 | 1400 | 0.5152 | 0.9 | 0.9129 | 0.9 | 0.8981 |
| 0.0011 | 5.0 | 1500 | 0.5119 | 0.905 | 0.9168 | 0.905 | 0.9036 |
| 0.0009 | 5.3333 | 1600 | 0.5259 | 0.905 | 0.9170 | 0.905 | 0.9038 |
| 0.0008 | 5.6667 | 1700 | 0.5235 | 0.9033 | 0.9151 | 0.9033 | 0.9020 |
| 0.0007 | 6.0 | 1800 | 0.5293 | 0.9042 | 0.9157 | 0.9042 | 0.9030 |
| 0.0007 | 6.3333 | 1900 | 0.5337 | 0.905 | 0.9163 | 0.905 | 0.9039 |
| 0.0006 | 6.6667 | 2000 | 0.5352 | 0.905 | 0.9165 | 0.905 | 0.9040 |
| 0.0005 | 7.0 | 2100 | 0.5415 | 0.9058 | 0.9170 | 0.9058 | 0.9049 |
| 0.0005 | 7.3333 | 2200 | 0.5467 | 0.9042 | 0.9152 | 0.9042 | 0.9033 |
| 0.0005 | 7.6667 | 2300 | 0.5490 | 0.905 | 0.9159 | 0.905 | 0.9040 |
| 0.0004 | 8.0 | 2400 | 0.5517 | 0.9067 | 0.9172 | 0.9067 | 0.9059 |
| 0.0004 | 8.3333 | 2500 | 0.5559 | 0.9075 | 0.9179 | 0.9075 | 0.9068 |
| 0.0004 | 8.6667 | 2600 | 0.5575 | 0.9075 | 0.9179 | 0.9075 | 0.9068 |
| 0.0003 | 9.0 | 2700 | 0.5613 | 0.9075 | 0.9179 | 0.9075 | 0.9068 |
| 0.0003 | 9.3333 | 2800 | 0.5647 | 0.9075 | 0.9183 | 0.9075 | 0.9069 |
| 0.0003 | 9.6667 | 2900 | 0.5675 | 0.9075 | 0.9183 | 0.9075 | 0.9069 |
| 0.0003 | 10.0 | 3000 | 0.5700 | 0.9075 | 0.9177 | 0.9075 | 0.9069 |
| 0.0003 | 10.3333 | 3100 | 0.5712 | 0.9067 | 0.9165 | 0.9067 | 0.9060 |
| 0.0003 | 10.6667 | 3200 | 0.5738 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0003 | 11.0 | 3300 | 0.5768 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0003 | 11.3333 | 3400 | 0.5792 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 11.6667 | 3500 | 0.5806 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 12.0 | 3600 | 0.5830 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 12.3333 | 3700 | 0.5847 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 12.6667 | 3800 | 0.5860 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 13.0 | 3900 | 0.5875 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 13.3333 | 4000 | 0.5889 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 13.6667 | 4100 | 0.5898 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 14.0 | 4200 | 0.5906 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 14.3333 | 4300 | 0.5914 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 14.6667 | 4400 | 0.5918 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
| 0.0002 | 15.0 | 4500 | 0.5919 | 0.9067 | 0.9159 | 0.9067 | 0.9061 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iva",
"sec-subtype_iva2",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-3-Michel_Daudon_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-3-Michel_Daudon_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0259
- Accuracy: 0.7850
- Precision: 0.7927
- Recall: 0.7850
- F1: 0.7850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3729 | 0.3333 | 100 | 1.0563 | 0.6631 | 0.7502 | 0.6631 | 0.6797 |
| 0.2029 | 0.6667 | 200 | 1.2777 | 0.7056 | 0.7455 | 0.7056 | 0.6872 |
| 0.1969 | 1.0 | 300 | 1.1211 | 0.7653 | 0.7679 | 0.7653 | 0.7600 |
| 0.1467 | 1.3333 | 400 | 1.2951 | 0.7048 | 0.7488 | 0.7048 | 0.7088 |
| 0.1034 | 1.6667 | 500 | 1.1112 | 0.8087 | 0.8384 | 0.8087 | 0.8075 |
| 0.0749 | 2.0 | 600 | 1.3484 | 0.7441 | 0.7662 | 0.7441 | 0.7478 |
| 0.0913 | 2.3333 | 700 | 1.0259 | 0.7850 | 0.7927 | 0.7850 | 0.7850 |
| 0.0138 | 2.6667 | 800 | 1.4442 | 0.7457 | 0.8109 | 0.7457 | 0.7557 |
| 0.0551 | 3.0 | 900 | 1.3089 | 0.7449 | 0.8007 | 0.7449 | 0.7480 |
| 0.0209 | 3.3333 | 1000 | 1.5728 | 0.7441 | 0.8047 | 0.7441 | 0.7321 |
| 0.0243 | 3.6667 | 1100 | 1.2074 | 0.7817 | 0.8299 | 0.7817 | 0.7875 |
| 0.0015 | 4.0 | 1200 | 1.2362 | 0.7817 | 0.8110 | 0.7817 | 0.7755 |
| 0.0491 | 4.3333 | 1300 | 1.6820 | 0.7089 | 0.7648 | 0.7089 | 0.7121 |
| 0.0041 | 4.6667 | 1400 | 1.2421 | 0.7629 | 0.7794 | 0.7629 | 0.7656 |
| 0.0014 | 5.0 | 1500 | 1.5195 | 0.7400 | 0.7439 | 0.7400 | 0.7395 |
| 0.001 | 5.3333 | 1600 | 1.3705 | 0.7596 | 0.7567 | 0.7596 | 0.7551 |
| 0.0008 | 5.6667 | 1700 | 1.3614 | 0.7637 | 0.7652 | 0.7637 | 0.7619 |
| 0.0007 | 6.0 | 1800 | 1.3627 | 0.7694 | 0.7676 | 0.7694 | 0.7662 |
| 0.0006 | 6.3333 | 1900 | 1.3871 | 0.7694 | 0.7682 | 0.7694 | 0.7667 |
| 0.0006 | 6.6667 | 2000 | 1.4079 | 0.7678 | 0.7664 | 0.7678 | 0.7649 |
| 0.0005 | 7.0 | 2100 | 1.4300 | 0.7653 | 0.7636 | 0.7653 | 0.7622 |
| 0.0005 | 7.3333 | 2200 | 1.4476 | 0.7661 | 0.7658 | 0.7661 | 0.7637 |
| 0.0004 | 7.6667 | 2300 | 1.4655 | 0.7678 | 0.7680 | 0.7678 | 0.7655 |
| 0.0004 | 8.0 | 2400 | 1.4802 | 0.7678 | 0.7675 | 0.7678 | 0.7652 |
| 0.0004 | 8.3333 | 2500 | 1.4962 | 0.7678 | 0.7682 | 0.7678 | 0.7655 |
| 0.0004 | 8.6667 | 2600 | 1.5100 | 0.7678 | 0.7690 | 0.7678 | 0.7658 |
| 0.0003 | 9.0 | 2700 | 1.5230 | 0.7678 | 0.7690 | 0.7678 | 0.7658 |
| 0.0003 | 9.3333 | 2800 | 1.5361 | 0.7678 | 0.7699 | 0.7678 | 0.7662 |
| 0.0003 | 9.6667 | 2900 | 1.5466 | 0.7686 | 0.7711 | 0.7686 | 0.7673 |
| 0.0003 | 10.0 | 3000 | 1.5581 | 0.7686 | 0.7711 | 0.7686 | 0.7673 |
| 0.0003 | 10.3333 | 3100 | 1.5686 | 0.7686 | 0.7711 | 0.7686 | 0.7673 |
| 0.0003 | 10.6667 | 3200 | 1.5787 | 0.7686 | 0.7710 | 0.7686 | 0.7672 |
| 0.0002 | 11.0 | 3300 | 1.5877 | 0.7686 | 0.7717 | 0.7686 | 0.7675 |
| 0.0002 | 11.3333 | 3400 | 1.5963 | 0.7686 | 0.7717 | 0.7686 | 0.7675 |
| 0.0002 | 11.6667 | 3500 | 1.6044 | 0.7686 | 0.7722 | 0.7686 | 0.7677 |
| 0.0002 | 12.0 | 3600 | 1.6116 | 0.7686 | 0.7726 | 0.7686 | 0.7679 |
| 0.0002 | 12.3333 | 3700 | 1.6187 | 0.7686 | 0.7726 | 0.7686 | 0.7679 |
| 0.0002 | 12.6667 | 3800 | 1.6238 | 0.7686 | 0.7726 | 0.7686 | 0.7679 |
| 0.0002 | 13.0 | 3900 | 1.6295 | 0.7686 | 0.7722 | 0.7686 | 0.7679 |
| 0.0002 | 13.3333 | 4000 | 1.6344 | 0.7686 | 0.7726 | 0.7686 | 0.7679 |
| 0.0002 | 13.6667 | 4100 | 1.6379 | 0.7686 | 0.7726 | 0.7686 | 0.7679 |
| 0.0002 | 14.0 | 4200 | 1.6415 | 0.7686 | 0.7726 | 0.7686 | 0.7679 |
| 0.0002 | 14.3333 | 4300 | 1.6436 | 0.7678 | 0.7719 | 0.7678 | 0.7671 |
| 0.0002 | 14.6667 | 4400 | 1.6450 | 0.7678 | 0.7719 | 0.7678 | 0.7671 |
| 0.0002 | 15.0 | 4500 | 1.6454 | 0.7678 | 0.7719 | 0.7678 | 0.7671 |
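
The Accuracy, Precision, Recall, and F1 columns are consistent with weighted averaging over the six classes (weighted recall reduces to accuracy, which matches the identical Accuracy and Recall values above). A `compute_metrics` function in that spirit might look like the following sketch; it is an assumption, not the author's exact code:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Accuracy plus weighted precision/recall/F1 from Trainer predictions."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision, "recall": recall, "f1": f1}
```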
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iva",
"sur-subtype_iva2",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
sparshgarg57/swin-tiny-patch4-window7-224-finetuned-birdclef
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-birdclef
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5162
- Accuracy: 0.0702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
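
The total train batch size of 128 is the per-device batch size times the accumulation steps (32 × 4); a minimal sketch of how those settings combine in `TrainingArguments` (`output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Effective batch size = 32 per device x 4 accumulation steps = 128.
args = TrainingArguments(
    output_dir="swin-tiny-birdclef",   # placeholder
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,     # gradients accumulate over 4 steps before each update
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                  # LR ramps up from 0 over the first 10% of steps
    num_train_epochs=3,
)
```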
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 4.7582 | 0.9958 | 178 | 4.7242 | 0.0478 |
| 4.6596 | 1.9972 | 357 | 4.6146 | 0.0618 |
| 4.618 | 2.9874 | 534 | 4.5162 | 0.0702 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
[
"1139490",
"1192948",
"1194042",
"126247",
"1346504",
"134933",
"135045",
"1462711",
"1462737",
"1564122",
"21038",
"21116",
"21211",
"22333",
"22973",
"22976",
"24272",
"24292",
"24322",
"41663",
"41778",
"41970",
"42007",
"42087",
"42113",
"46010",
"47067",
"476537",
"476538",
"48124",
"50186",
"517119",
"523060",
"528041",
"52884",
"548639",
"555086",
"555142",
"566513",
"64862",
"65336",
"65344",
"65349",
"65373",
"65419",
"65448",
"65547",
"65962",
"66016",
"66531",
"66578",
"66893",
"67082",
"67252",
"714022",
"715170",
"787625",
"81930",
"868458",
"963335",
"amakin1",
"amekes",
"ampkin1",
"anhing",
"babwar",
"bafibi1",
"banana",
"baymac",
"bbwduc",
"bicwre1",
"bkcdon",
"bkmtou1",
"blbgra1",
"blbwre1",
"blcant4",
"blchaw1",
"blcjay1",
"blctit1",
"blhpar1",
"blkvul",
"bobfly1",
"bobher1",
"brtpar1",
"bubcur1",
"bubwre1",
"bucmot3",
"bugtan",
"butsal1",
"cargra1",
"cattyr",
"chbant1",
"chfmac1",
"cinbec1",
"cocher1",
"cocwoo1",
"colara1",
"colcha1",
"compau",
"compot1",
"cotfly1",
"crbtan1",
"crcwoo1",
"crebob1",
"cregua1",
"creoro1",
"eardov1",
"fotfly",
"gohman1",
"grasal4",
"grbhaw1",
"greani1",
"greegr",
"greibi1",
"grekis",
"grepot1",
"gretin1",
"grnkin",
"grysee1",
"gybmar",
"gycwor1",
"labter1",
"laufal1",
"leagre",
"linwoo1",
"littin1",
"mastit1",
"neocor",
"norscr1",
"olipic1",
"orcpar",
"palhor2",
"paltan1",
"pavpig2",
"piepuf1",
"pirfly1",
"piwtyr1",
"plbwoo1",
"plctan1",
"plukit1",
"purgal2",
"ragmac1",
"rebbla1",
"recwoo1",
"rinkin1",
"roahaw",
"rosspo1",
"royfly1",
"rtlhum",
"rubsee1",
"rufmot1",
"rugdov",
"rumfly1",
"ruther1",
"rutjac1",
"rutpuf1",
"saffin",
"sahpar1",
"savhaw1",
"secfly1",
"shghum1",
"shtfly1",
"smbani",
"snoegr",
"sobtyr1",
"socfly1",
"solsan",
"soulap1",
"spbwoo1",
"speowl1",
"spepar1",
"srwswa1",
"stbwoo2",
"strcuc1",
"strfly1",
"strher",
"strowl1",
"tbsfin1",
"thbeup1",
"thlsch3",
"trokin",
"tropar",
"trsowl",
"turvul",
"verfly",
"watjac1",
"wbwwre1",
"whbant1",
"whbman1",
"whfant1",
"whmtyr1",
"whtdov",
"whttro1",
"whwswa1",
"woosto",
"y00678",
"yebela1",
"yebfly1",
"yebsee1",
"yecspi2",
"yectyr1",
"yehbla2",
"yehcar1",
"yelori1",
"yeofly1",
"yercac1",
"ywcpar"
] |
Ivanrs/vit-base-kidney-stone-4-Jonathan_El-Beze_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-4-Jonathan_El-Beze_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5049
- Accuracy: 0.9046
- Precision: 0.9119
- Recall: 0.9046
- F1: 0.9032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
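
These cards all train on an `imagefolder` dataset, i.e. one sub-directory per class. A loading and preprocessing sketch, with a hypothetical local `data/` layout and the base checkpoint named above:

```python
from datasets import load_dataset
from transformers import ViTImageProcessor

# Hypothetical layout: data/train/<class-name>/*.png, one folder per subtype.
dataset = load_dataset("imagefolder", data_dir="data")

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

def preprocess(batch):
    # Resize and normalize patches to the 224x224 inputs ViT expects.
    pixel_values = processor(
        [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
    )["pixel_values"]
    batch["pixel_values"] = pixel_values
    return batch

dataset = dataset.with_transform(preprocess)
```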
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3582 | 0.1667 | 100 | 0.6579 | 0.7746 | 0.8010 | 0.7746 | 0.7645 |
| 0.152 | 0.3333 | 200 | 0.8315 | 0.7492 | 0.8132 | 0.7492 | 0.7457 |
| 0.1642 | 0.5 | 300 | 0.6003 | 0.8383 | 0.8506 | 0.8383 | 0.8390 |
| 0.088 | 0.6667 | 400 | 0.6790 | 0.81 | 0.8451 | 0.81 | 0.8064 |
| 0.0268 | 0.8333 | 500 | 0.5720 | 0.8596 | 0.8815 | 0.8596 | 0.8560 |
| 0.0503 | 1.0 | 600 | 0.5348 | 0.8671 | 0.8820 | 0.8671 | 0.8661 |
| 0.1888 | 1.1667 | 700 | 0.7472 | 0.8225 | 0.8405 | 0.8225 | 0.8233 |
| 0.0983 | 1.3333 | 800 | 0.9774 | 0.7875 | 0.8528 | 0.7875 | 0.7892 |
| 0.1343 | 1.5 | 900 | 0.9097 | 0.7983 | 0.8273 | 0.7983 | 0.7919 |
| 0.0681 | 1.6667 | 1000 | 0.6611 | 0.845 | 0.8639 | 0.845 | 0.8432 |
| 0.0768 | 1.8333 | 1100 | 0.8916 | 0.8133 | 0.8677 | 0.8133 | 0.8163 |
| 0.0447 | 2.0 | 1200 | 0.7102 | 0.8462 | 0.8541 | 0.8462 | 0.8450 |
| 0.0417 | 2.1667 | 1300 | 0.7364 | 0.8438 | 0.8549 | 0.8438 | 0.8404 |
| 0.0049 | 2.3333 | 1400 | 1.1942 | 0.7567 | 0.8037 | 0.7567 | 0.7570 |
| 0.1265 | 2.5 | 1500 | 0.5920 | 0.8812 | 0.8828 | 0.8812 | 0.8793 |
| 0.0117 | 2.6667 | 1600 | 0.7807 | 0.8421 | 0.8723 | 0.8421 | 0.8394 |
| 0.0256 | 2.8333 | 1700 | 0.5049 | 0.9046 | 0.9119 | 0.9046 | 0.9032 |
| 0.0776 | 3.0 | 1800 | 0.7417 | 0.8558 | 0.8685 | 0.8558 | 0.8564 |
| 0.0535 | 3.1667 | 1900 | 0.6490 | 0.8717 | 0.8771 | 0.8717 | 0.8711 |
| 0.1292 | 3.3333 | 2000 | 0.7179 | 0.87 | 0.8759 | 0.87 | 0.8681 |
| 0.0013 | 3.5 | 2100 | 0.6103 | 0.8921 | 0.8946 | 0.8921 | 0.8918 |
| 0.0015 | 3.6667 | 2200 | 0.8573 | 0.8558 | 0.8668 | 0.8558 | 0.8523 |
| 0.0006 | 3.8333 | 2300 | 0.6061 | 0.8896 | 0.8993 | 0.8896 | 0.8891 |
| 0.0015 | 4.0 | 2400 | 0.7029 | 0.8658 | 0.8758 | 0.8658 | 0.8638 |
| 0.0005 | 4.1667 | 2500 | 0.7734 | 0.8804 | 0.8928 | 0.8804 | 0.8808 |
| 0.0019 | 4.3333 | 2600 | 0.7360 | 0.8742 | 0.8911 | 0.8742 | 0.8746 |
| 0.001 | 4.5 | 2700 | 0.8893 | 0.8358 | 0.8531 | 0.8358 | 0.8346 |
| 0.0267 | 4.6667 | 2800 | 0.8946 | 0.8612 | 0.8830 | 0.8612 | 0.8545 |
| 0.0004 | 4.8333 | 2900 | 0.6665 | 0.8983 | 0.9081 | 0.8983 | 0.8981 |
| 0.0015 | 5.0 | 3000 | 0.7736 | 0.8788 | 0.8931 | 0.8788 | 0.8774 |
| 0.0005 | 5.1667 | 3100 | 0.7346 | 0.8846 | 0.8936 | 0.8846 | 0.8854 |
| 0.0005 | 5.3333 | 3200 | 1.0391 | 0.8512 | 0.8657 | 0.8512 | 0.8506 |
| 0.1055 | 5.5 | 3300 | 1.8161 | 0.73 | 0.7998 | 0.73 | 0.7148 |
| 0.0007 | 5.6667 | 3400 | 1.1328 | 0.8392 | 0.8677 | 0.8392 | 0.8361 |
| 0.0108 | 5.8333 | 3500 | 0.7424 | 0.8788 | 0.8821 | 0.8788 | 0.8782 |
| 0.0021 | 6.0 | 3600 | 1.0478 | 0.8271 | 0.8424 | 0.8271 | 0.8239 |
| 0.01 | 6.1667 | 3700 | 1.0144 | 0.8475 | 0.8719 | 0.8475 | 0.8478 |
| 0.0014 | 6.3333 | 3800 | 0.7536 | 0.8708 | 0.8837 | 0.8708 | 0.8697 |
| 0.0005 | 6.5 | 3900 | 0.9003 | 0.8567 | 0.8758 | 0.8567 | 0.8544 |
| 0.0003 | 6.6667 | 4000 | 0.8318 | 0.8667 | 0.8816 | 0.8667 | 0.8660 |
| 0.0003 | 6.8333 | 4100 | 0.8213 | 0.8679 | 0.8817 | 0.8679 | 0.8673 |
| 0.0003 | 7.0 | 4200 | 0.8114 | 0.8721 | 0.8849 | 0.8721 | 0.8716 |
| 0.0003 | 7.1667 | 4300 | 0.8461 | 0.8683 | 0.8825 | 0.8683 | 0.8681 |
| 0.0002 | 7.3333 | 4400 | 0.8416 | 0.8692 | 0.8820 | 0.8692 | 0.8690 |
| 0.048 | 7.5 | 4500 | 1.1867 | 0.8163 | 0.8539 | 0.8163 | 0.8168 |
| 0.0373 | 7.6667 | 4600 | 0.8870 | 0.8596 | 0.8829 | 0.8596 | 0.8587 |
| 0.0004 | 7.8333 | 4700 | 1.1816 | 0.7913 | 0.8061 | 0.7913 | 0.7769 |
| 0.0013 | 8.0 | 4800 | 1.2743 | 0.8087 | 0.8456 | 0.8087 | 0.7974 |
| 0.0002 | 8.1667 | 4900 | 0.8387 | 0.8712 | 0.8773 | 0.8712 | 0.8692 |
| 0.0002 | 8.3333 | 5000 | 0.8463 | 0.8688 | 0.8732 | 0.8688 | 0.8673 |
| 0.0002 | 8.5 | 5100 | 0.8732 | 0.8721 | 0.8751 | 0.8721 | 0.8713 |
| 0.0002 | 8.6667 | 5200 | 0.9575 | 0.8546 | 0.8654 | 0.8546 | 0.8539 |
| 0.0002 | 8.8333 | 5300 | 0.9553 | 0.8654 | 0.8651 | 0.8654 | 0.8646 |
| 0.0005 | 9.0 | 5400 | 0.9674 | 0.8583 | 0.8681 | 0.8583 | 0.8586 |
| 0.0002 | 9.1667 | 5500 | 0.7823 | 0.885 | 0.8842 | 0.885 | 0.8842 |
| 0.0002 | 9.3333 | 5600 | 0.9682 | 0.8621 | 0.8837 | 0.8621 | 0.8600 |
| 0.0002 | 9.5 | 5700 | 0.8930 | 0.8629 | 0.8739 | 0.8629 | 0.8616 |
| 0.0002 | 9.6667 | 5800 | 1.1100 | 0.8475 | 0.8764 | 0.8475 | 0.8417 |
| 0.0001 | 9.8333 | 5900 | 0.9290 | 0.8646 | 0.8646 | 0.8646 | 0.8634 |
| 0.0001 | 10.0 | 6000 | 0.9349 | 0.8629 | 0.8633 | 0.8629 | 0.8617 |
| 0.0001 | 10.1667 | 6100 | 0.9423 | 0.8629 | 0.8635 | 0.8629 | 0.8617 |
| 0.0001 | 10.3333 | 6200 | 0.9459 | 0.8633 | 0.8639 | 0.8633 | 0.8622 |
| 0.0001 | 10.5 | 6300 | 0.9522 | 0.8625 | 0.8631 | 0.8625 | 0.8613 |
| 0.0001 | 10.6667 | 6400 | 0.9575 | 0.8629 | 0.8634 | 0.8629 | 0.8617 |
| 0.0001 | 10.8333 | 6500 | 0.9637 | 0.8629 | 0.8638 | 0.8629 | 0.8618 |
| 0.0001 | 11.0 | 6600 | 0.9643 | 0.8642 | 0.8649 | 0.8642 | 0.8631 |
| 0.0001 | 11.1667 | 6700 | 0.9678 | 0.8646 | 0.8653 | 0.8646 | 0.8635 |
| 0.0001 | 11.3333 | 6800 | 0.9722 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 11.5 | 6900 | 0.9772 | 0.8633 | 0.8642 | 0.8633 | 0.8623 |
| 0.0001 | 11.6667 | 7000 | 0.9795 | 0.8646 | 0.8653 | 0.8646 | 0.8635 |
| 0.0001 | 11.8333 | 7100 | 0.9828 | 0.8642 | 0.8650 | 0.8642 | 0.8631 |
| 0.0001 | 12.0 | 7200 | 0.9851 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 12.1667 | 7300 | 0.9879 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 12.3333 | 7400 | 0.9903 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 12.5 | 7500 | 0.9937 | 0.865 | 0.8658 | 0.865 | 0.8639 |
| 0.0001 | 12.6667 | 7600 | 0.9963 | 0.865 | 0.8658 | 0.865 | 0.8639 |
| 0.0001 | 12.8333 | 7700 | 0.9989 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 13.0 | 7800 | 1.0018 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 13.1667 | 7900 | 1.0047 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 13.3333 | 8000 | 1.0069 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 13.5 | 8100 | 1.0088 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 13.6667 | 8200 | 1.0108 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 13.8333 | 8300 | 1.0124 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 14.0 | 8400 | 1.0135 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 14.1667 | 8500 | 1.0150 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 14.3333 | 8600 | 1.0160 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 14.5 | 8700 | 1.0172 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 14.6667 | 8800 | 1.0178 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 14.8333 | 8900 | 1.0183 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
| 0.0001 | 15.0 | 9000 | 1.0184 | 0.8646 | 0.8654 | 0.8646 | 0.8635 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iiia",
"mix-subtype_iia",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-4-Jonathan_El-Beze_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-4-Jonathan_El-Beze_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2940
- Accuracy: 0.9242
- Precision: 0.9321
- Recall: 0.9242
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
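
In the absence of author-provided details, a minimal inference sketch (assuming the checkpoint is published on the Hub under this card's repository name):

```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub.
classifier = pipeline(
    "image-classification",
    model="Ivanrs/vit-base-kidney-stone-4-Jonathan_El-Beze_-w256_1k_v1-_SEC",
)

# The image path is a placeholder; any 256x256 SEC patch would do.
print(classifier("kidney_stone_patch.png"))  # [{"label": ..., "score": ...}, ...]
```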
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1207 | 0.3333 | 100 | 0.5525 | 0.8333 | 0.8760 | 0.8333 | 0.8303 |
| 0.0178 | 0.6667 | 200 | 0.3368 | 0.8883 | 0.9298 | 0.8883 | 0.8927 |
| 0.0396 | 1.0 | 300 | 0.3187 | 0.9108 | 0.9213 | 0.9108 | 0.9104 |
| 0.0074 | 1.3333 | 400 | 1.1846 | 0.7583 | 0.8167 | 0.7583 | 0.7339 |
| 0.0125 | 1.6667 | 500 | 0.2940 | 0.9242 | 0.9321 | 0.9242 | 0.9251 |
| 0.0029 | 2.0 | 600 | 0.5031 | 0.8958 | 0.9051 | 0.8958 | 0.8929 |
| 0.0021 | 2.3333 | 700 | 0.5150 | 0.9008 | 0.9114 | 0.9008 | 0.8977 |
| 0.0016 | 2.6667 | 800 | 0.4894 | 0.9092 | 0.9191 | 0.9092 | 0.9069 |
| 0.0013 | 3.0 | 900 | 0.5048 | 0.9092 | 0.9194 | 0.9092 | 0.9067 |
| 0.0011 | 3.3333 | 1000 | 0.5066 | 0.9092 | 0.9187 | 0.9092 | 0.9070 |
| 0.001 | 3.6667 | 1100 | 0.5179 | 0.9092 | 0.9189 | 0.9092 | 0.9070 |
| 0.0008 | 4.0 | 1200 | 0.5369 | 0.9092 | 0.9198 | 0.9092 | 0.9069 |
| 0.0007 | 4.3333 | 1300 | 0.5459 | 0.9092 | 0.9198 | 0.9092 | 0.9069 |
| 0.0006 | 4.6667 | 1400 | 0.5508 | 0.9092 | 0.9198 | 0.9092 | 0.9069 |
| 0.0006 | 5.0 | 1500 | 0.5557 | 0.91 | 0.9203 | 0.91 | 0.9079 |
| 0.0005 | 5.3333 | 1600 | 0.5605 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0004 | 5.6667 | 1700 | 0.5647 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0004 | 6.0 | 1800 | 0.5735 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0004 | 6.3333 | 1900 | 0.5797 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0003 | 6.6667 | 2000 | 0.5840 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0003 | 7.0 | 2100 | 0.5877 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0003 | 7.3333 | 2200 | 0.5942 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0003 | 7.6667 | 2300 | 0.6003 | 0.9117 | 0.9222 | 0.9117 | 0.9096 |
| 0.0003 | 8.0 | 2400 | 0.5999 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0002 | 8.3333 | 2500 | 0.6042 | 0.91 | 0.9203 | 0.91 | 0.9080 |
| 0.0002 | 8.6667 | 2600 | 0.6076 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0002 | 9.0 | 2700 | 0.6098 | 0.9108 | 0.9210 | 0.9108 | 0.9088 |
| 0.0002 | 9.3333 | 2800 | 0.6135 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0002 | 9.6667 | 2900 | 0.6157 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0002 | 10.0 | 3000 | 0.6191 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0002 | 10.3333 | 3100 | 0.6216 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0002 | 10.6667 | 3200 | 0.6241 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0002 | 11.0 | 3300 | 0.6265 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0002 | 11.3333 | 3400 | 0.6291 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 11.6667 | 3500 | 0.6308 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 12.0 | 3600 | 0.6325 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 12.3333 | 3700 | 0.6339 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 12.6667 | 3800 | 0.6351 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 13.0 | 3900 | 0.6371 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 13.3333 | 4000 | 0.6376 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 13.6667 | 4100 | 0.6393 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 14.0 | 4200 | 0.6403 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 14.3333 | 4300 | 0.6410 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 14.6667 | 4400 | 0.6413 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
| 0.0001 | 15.0 | 4500 | 0.6414 | 0.9108 | 0.9215 | 0.9108 | 0.9088 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iiia",
"sec-subtype_iia",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-4-Jonathan_El-Beze_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-4-Jonathan_El-Beze_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6379
- Accuracy: 0.745
- Precision: 0.7537
- Recall: 0.745
- F1: 0.7067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3911 | 0.3333 | 100 | 0.6379 | 0.745 | 0.7537 | 0.745 | 0.7067 |
| 0.2601 | 0.6667 | 200 | 1.0005 | 0.6842 | 0.7312 | 0.6842 | 0.6523 |
| 0.1349 | 1.0 | 300 | 0.6380 | 0.8533 | 0.8720 | 0.8533 | 0.8518 |
| 0.0601 | 1.3333 | 400 | 1.1014 | 0.7217 | 0.7753 | 0.7217 | 0.7044 |
| 0.2132 | 1.6667 | 500 | 0.7327 | 0.8208 | 0.8438 | 0.8208 | 0.8197 |
| 0.0894 | 2.0 | 600 | 1.4871 | 0.7083 | 0.7449 | 0.7083 | 0.6682 |
| 0.0135 | 2.3333 | 700 | 0.9952 | 0.7883 | 0.8495 | 0.7883 | 0.7799 |
| 0.0042 | 2.6667 | 800 | 0.6547 | 0.8683 | 0.8729 | 0.8683 | 0.8679 |
| 0.0037 | 3.0 | 900 | 0.7970 | 0.8367 | 0.8739 | 0.8367 | 0.8370 |
| 0.0578 | 3.3333 | 1000 | 0.8231 | 0.845 | 0.8641 | 0.845 | 0.8436 |
| 0.0019 | 3.6667 | 1100 | 0.7459 | 0.8667 | 0.8771 | 0.8667 | 0.8655 |
| 0.2931 | 4.0 | 1200 | 0.9539 | 0.8292 | 0.8349 | 0.8292 | 0.8275 |
| 0.0017 | 4.3333 | 1300 | 0.8095 | 0.8408 | 0.8607 | 0.8408 | 0.8413 |
| 0.0018 | 4.6667 | 1400 | 0.7471 | 0.865 | 0.8690 | 0.865 | 0.8629 |
| 0.0014 | 5.0 | 1500 | 1.0642 | 0.7925 | 0.8148 | 0.7925 | 0.7915 |
| 0.0012 | 5.3333 | 1600 | 0.8130 | 0.8333 | 0.8372 | 0.8333 | 0.8334 |
| 0.001 | 5.6667 | 1700 | 1.1121 | 0.8133 | 0.8222 | 0.8133 | 0.8113 |
| 0.001 | 6.0 | 1800 | 0.7986 | 0.8475 | 0.8528 | 0.8475 | 0.8492 |
| 0.0008 | 6.3333 | 1900 | 0.7908 | 0.8708 | 0.8928 | 0.8708 | 0.8718 |
| 0.0007 | 6.6667 | 2000 | 0.7444 | 0.8842 | 0.8981 | 0.8842 | 0.8818 |
| 0.0028 | 7.0 | 2100 | 0.7492 | 0.87 | 0.8749 | 0.87 | 0.8677 |
| 0.0007 | 7.3333 | 2200 | 1.5649 | 0.7433 | 0.8440 | 0.7433 | 0.7117 |
| 0.0007 | 7.6667 | 2300 | 0.8539 | 0.8492 | 0.8679 | 0.8492 | 0.8492 |
| 0.0015 | 8.0 | 2400 | 0.8743 | 0.835 | 0.8553 | 0.835 | 0.8342 |
| 0.0006 | 8.3333 | 2500 | 0.7659 | 0.8583 | 0.8608 | 0.8583 | 0.8569 |
| 0.0005 | 8.6667 | 2600 | 0.7448 | 0.8642 | 0.8681 | 0.8642 | 0.8627 |
| 0.0005 | 9.0 | 2700 | 0.7439 | 0.8683 | 0.8726 | 0.8683 | 0.8666 |
| 0.0004 | 9.3333 | 2800 | 0.7444 | 0.8742 | 0.8807 | 0.8742 | 0.8725 |
| 0.0004 | 9.6667 | 2900 | 0.7484 | 0.8725 | 0.8790 | 0.8725 | 0.8707 |
| 0.0003 | 10.0 | 3000 | 0.7491 | 0.8708 | 0.8781 | 0.8708 | 0.8691 |
| 0.0003 | 10.3333 | 3100 | 0.7509 | 0.8717 | 0.8788 | 0.8717 | 0.8699 |
| 0.0003 | 10.6667 | 3200 | 0.7539 | 0.875 | 0.8827 | 0.875 | 0.8732 |
| 0.0003 | 11.0 | 3300 | 0.7572 | 0.8775 | 0.8853 | 0.8775 | 0.8756 |
| 0.0003 | 11.3333 | 3400 | 0.7598 | 0.8783 | 0.8866 | 0.8783 | 0.8765 |
| 0.0003 | 11.6667 | 3500 | 0.7626 | 0.8792 | 0.8873 | 0.8792 | 0.8772 |
| 0.0003 | 12.0 | 3600 | 0.7655 | 0.8792 | 0.8873 | 0.8792 | 0.8772 |
| 0.0003 | 12.3333 | 3700 | 0.7682 | 0.8792 | 0.8873 | 0.8792 | 0.8772 |
| 0.0003 | 12.6667 | 3800 | 0.7699 | 0.88 | 0.8880 | 0.88 | 0.8780 |
| 0.0002 | 13.0 | 3900 | 0.7723 | 0.8808 | 0.8887 | 0.8808 | 0.8788 |
| 0.0003 | 13.3333 | 4000 | 0.7747 | 0.88 | 0.8881 | 0.88 | 0.8779 |
| 0.0003 | 13.6667 | 4100 | 0.7761 | 0.88 | 0.8881 | 0.88 | 0.8779 |
| 0.0002 | 14.0 | 4200 | 0.7771 | 0.88 | 0.8881 | 0.88 | 0.8779 |
| 0.0002 | 14.3333 | 4300 | 0.7778 | 0.88 | 0.8881 | 0.88 | 0.8779 |
| 0.0002 | 14.6667 | 4400 | 0.7785 | 0.88 | 0.8881 | 0.88 | 0.8779 |
| 0.0002 | 15.0 | 4500 | 0.7787 | 0.88 | 0.8881 | 0.88 | 0.8779 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iiia",
"sur-subtype_iia",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-4-Michel_Daudon_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-4-Michel_Daudon_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5183
- Accuracy: 0.8333
- Precision: 0.8596
- Recall: 0.8333
- F1: 0.8313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4337 | 0.1667 | 100 | 0.6415 | 0.7688 | 0.7866 | 0.7688 | 0.7620 |
| 0.5458 | 0.3333 | 200 | 1.0270 | 0.7204 | 0.8072 | 0.7204 | 0.6929 |
| 0.1893 | 0.5 | 300 | 0.5183 | 0.8333 | 0.8596 | 0.8333 | 0.8313 |
| 0.2041 | 0.6667 | 400 | 0.5611 | 0.8333 | 0.8651 | 0.8333 | 0.8360 |
| 0.2087 | 0.8333 | 500 | 0.8036 | 0.7846 | 0.8253 | 0.7846 | 0.7916 |
| 0.1888 | 1.0 | 600 | 0.7427 | 0.8046 | 0.8312 | 0.8046 | 0.7960 |
| 0.1175 | 1.1667 | 700 | 0.7927 | 0.7837 | 0.7906 | 0.7837 | 0.7770 |
| 0.5783 | 1.3333 | 800 | 0.9454 | 0.7521 | 0.8095 | 0.7521 | 0.7551 |
| 0.1242 | 1.5 | 900 | 1.0772 | 0.7704 | 0.8102 | 0.7704 | 0.7796 |
| 0.1045 | 1.6667 | 1000 | 0.8234 | 0.8296 | 0.8333 | 0.8296 | 0.8223 |
| 0.1007 | 1.8333 | 1100 | 1.1756 | 0.7546 | 0.7483 | 0.7546 | 0.7460 |
| 0.0101 | 2.0 | 1200 | 0.7921 | 0.8446 | 0.8782 | 0.8446 | 0.8486 |
| 0.0079 | 2.1667 | 1300 | 0.9626 | 0.8204 | 0.8644 | 0.8204 | 0.8241 |
| 0.0626 | 2.3333 | 1400 | 1.0140 | 0.8025 | 0.8441 | 0.8025 | 0.8040 |
| 0.0216 | 2.5 | 1500 | 0.9297 | 0.8358 | 0.8540 | 0.8358 | 0.8364 |
| 0.0707 | 2.6667 | 1600 | 0.9193 | 0.8196 | 0.8425 | 0.8196 | 0.8203 |
| 0.0308 | 2.8333 | 1700 | 0.9988 | 0.8246 | 0.8429 | 0.8246 | 0.8209 |
| 0.0863 | 3.0 | 1800 | 0.8083 | 0.83 | 0.8592 | 0.83 | 0.8332 |
| 0.0016 | 3.1667 | 1900 | 1.1933 | 0.8029 | 0.8475 | 0.8029 | 0.8079 |
| 0.0014 | 3.3333 | 2000 | 1.0995 | 0.8142 | 0.8376 | 0.8142 | 0.8132 |
| 0.0745 | 3.5 | 2100 | 1.0348 | 0.8154 | 0.8720 | 0.8154 | 0.8259 |
| 0.0226 | 3.6667 | 2200 | 0.8861 | 0.8275 | 0.8576 | 0.8275 | 0.8303 |
| 0.0159 | 3.8333 | 2300 | 1.1476 | 0.79 | 0.8251 | 0.79 | 0.7981 |
| 0.1398 | 4.0 | 2400 | 1.2559 | 0.7879 | 0.8284 | 0.7879 | 0.7845 |
| 0.0011 | 4.1667 | 2500 | 1.2795 | 0.8008 | 0.8419 | 0.8008 | 0.8061 |
| 0.0016 | 4.3333 | 2600 | 1.1345 | 0.8108 | 0.8472 | 0.8108 | 0.8154 |
| 0.001 | 4.5 | 2700 | 1.0013 | 0.8242 | 0.8419 | 0.8242 | 0.8220 |
| 0.0888 | 4.6667 | 2800 | 1.0708 | 0.8313 | 0.8614 | 0.8313 | 0.8357 |
| 0.0212 | 4.8333 | 2900 | 1.1488 | 0.8113 | 0.8435 | 0.8113 | 0.8123 |
| 0.0857 | 5.0 | 3000 | 1.0805 | 0.8113 | 0.8506 | 0.8113 | 0.8182 |
| 0.0029 | 5.1667 | 3100 | 0.8731 | 0.8588 | 0.8762 | 0.8588 | 0.8619 |
| 0.0226 | 5.3333 | 3200 | 1.2513 | 0.8113 | 0.8410 | 0.8113 | 0.8128 |
| 0.0627 | 5.5 | 3300 | 1.1715 | 0.8063 | 0.8394 | 0.8063 | 0.8066 |
| 0.1471 | 5.6667 | 3400 | 0.8260 | 0.8325 | 0.8434 | 0.8325 | 0.8341 |
| 0.0008 | 5.8333 | 3500 | 0.8541 | 0.8404 | 0.8636 | 0.8404 | 0.8430 |
| 0.0005 | 6.0 | 3600 | 1.1119 | 0.8129 | 0.8340 | 0.8129 | 0.8165 |
| 0.0005 | 6.1667 | 3700 | 1.6586 | 0.7754 | 0.8261 | 0.7754 | 0.7762 |
| 0.0693 | 6.3333 | 3800 | 1.2959 | 0.8067 | 0.8427 | 0.8067 | 0.8107 |
| 0.0007 | 6.5 | 3900 | 1.0675 | 0.8142 | 0.8195 | 0.8142 | 0.8140 |
| 0.0008 | 6.6667 | 4000 | 1.3692 | 0.7904 | 0.8078 | 0.7904 | 0.7903 |
| 0.0063 | 6.8333 | 4100 | 1.2463 | 0.8092 | 0.8326 | 0.8092 | 0.8073 |
| 0.0006 | 7.0 | 4200 | 1.2368 | 0.8171 | 0.8433 | 0.8171 | 0.8187 |
| 0.0014 | 7.1667 | 4300 | 1.2245 | 0.7979 | 0.8126 | 0.7979 | 0.8004 |
| 0.0005 | 7.3333 | 4400 | 1.2486 | 0.7996 | 0.8134 | 0.7996 | 0.7996 |
| 0.0793 | 7.5 | 4500 | 1.3575 | 0.7762 | 0.8005 | 0.7762 | 0.7696 |
| 0.0006 | 7.6667 | 4600 | 1.2693 | 0.8013 | 0.8151 | 0.8013 | 0.7996 |
| 0.0005 | 7.8333 | 4700 | 1.1999 | 0.8192 | 0.8405 | 0.8192 | 0.8199 |
| 0.0007 | 8.0 | 4800 | 1.0169 | 0.8346 | 0.8517 | 0.8346 | 0.8353 |
| 0.067 | 8.1667 | 4900 | 1.0823 | 0.8346 | 0.8602 | 0.8346 | 0.8325 |
| 0.0007 | 8.3333 | 5000 | 1.3014 | 0.7996 | 0.8439 | 0.7996 | 0.7978 |
| 0.0003 | 8.5 | 5100 | 1.3176 | 0.7954 | 0.8398 | 0.7954 | 0.7986 |
| 0.0003 | 8.6667 | 5200 | 1.2994 | 0.8113 | 0.8559 | 0.8113 | 0.8124 |
| 0.0002 | 8.8333 | 5300 | 1.3460 | 0.7937 | 0.8308 | 0.7937 | 0.7908 |
| 0.0003 | 9.0 | 5400 | 1.0408 | 0.8346 | 0.8541 | 0.8346 | 0.8363 |
| 0.0002 | 9.1667 | 5500 | 1.1659 | 0.8246 | 0.8651 | 0.8246 | 0.8258 |
| 0.0002 | 9.3333 | 5600 | 1.1821 | 0.8263 | 0.8657 | 0.8263 | 0.8270 |
| 0.0002 | 9.5 | 5700 | 1.2786 | 0.8233 | 0.8607 | 0.8233 | 0.8227 |
| 0.0002 | 9.6667 | 5800 | 1.2611 | 0.8217 | 0.8577 | 0.8217 | 0.8210 |
| 0.0002 | 9.8333 | 5900 | 1.2556 | 0.8213 | 0.8568 | 0.8213 | 0.8206 |
| 0.0002 | 10.0 | 6000 | 1.3472 | 0.8158 | 0.8491 | 0.8158 | 0.8158 |
| 0.0002 | 10.1667 | 6100 | 1.3345 | 0.8175 | 0.8502 | 0.8175 | 0.8176 |
| 0.0001 | 10.3333 | 6200 | 1.3366 | 0.8187 | 0.8512 | 0.8187 | 0.8188 |
| 0.0001 | 10.5 | 6300 | 1.3363 | 0.8171 | 0.8497 | 0.8171 | 0.8174 |
| 0.0001 | 10.6667 | 6400 | 1.3340 | 0.8196 | 0.8517 | 0.8196 | 0.8198 |
| 0.0001 | 10.8333 | 6500 | 1.3658 | 0.8233 | 0.8593 | 0.8233 | 0.8243 |
| 0.0001 | 11.0 | 6600 | 1.3709 | 0.8237 | 0.8595 | 0.8237 | 0.8247 |
| 0.0001 | 11.1667 | 6700 | 1.3652 | 0.8242 | 0.8585 | 0.8242 | 0.8249 |
| 0.0001 | 11.3333 | 6800 | 1.3703 | 0.825 | 0.8594 | 0.825 | 0.8258 |
| 0.0001 | 11.5 | 6900 | 1.3755 | 0.8237 | 0.8579 | 0.8237 | 0.8247 |
| 0.0001 | 11.6667 | 7000 | 1.3781 | 0.8237 | 0.8579 | 0.8237 | 0.8247 |
| 0.0001 | 11.8333 | 7100 | 1.3811 | 0.8242 | 0.8582 | 0.8242 | 0.8251 |
| 0.0001 | 12.0 | 7200 | 1.3851 | 0.8237 | 0.8578 | 0.8237 | 0.8247 |
| 0.0001 | 12.1667 | 7300 | 1.3881 | 0.8242 | 0.8580 | 0.8242 | 0.8251 |
| 0.0001 | 12.3333 | 7400 | 1.3910 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 12.5 | 7500 | 1.3937 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 12.6667 | 7600 | 1.3977 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 12.8333 | 7700 | 1.3995 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 13.0 | 7800 | 1.4021 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 13.1667 | 7900 | 1.4048 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 13.3333 | 8000 | 1.4074 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 13.5 | 8100 | 1.4099 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 13.6667 | 8200 | 1.4117 | 0.8246 | 0.8580 | 0.8246 | 0.8257 |
| 0.0001 | 13.8333 | 8300 | 1.4134 | 0.825 | 0.8582 | 0.825 | 0.8261 |
| 0.0001 | 14.0 | 8400 | 1.4150 | 0.825 | 0.8582 | 0.825 | 0.8261 |
| 0.0001 | 14.1667 | 8500 | 1.4164 | 0.8246 | 0.8578 | 0.8246 | 0.8258 |
| 0.0001 | 14.3333 | 8600 | 1.4176 | 0.8242 | 0.8574 | 0.8242 | 0.8254 |
| 0.0001 | 14.5 | 8700 | 1.4186 | 0.8242 | 0.8574 | 0.8242 | 0.8254 |
| 0.0001 | 14.6667 | 8800 | 1.4192 | 0.8242 | 0.8574 | 0.8242 | 0.8254 |
| 0.0001 | 14.8333 | 8900 | 1.4197 | 0.8242 | 0.8574 | 0.8242 | 0.8254 |
| 0.0001 | 15.0 | 9000 | 1.4200 | 0.8242 | 0.8574 | 0.8242 | 0.8254 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iva",
"mix-subtype_iva2",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-4-Michel_Daudon_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-4-Michel_Daudon_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2879
- Accuracy: 0.9242
- Precision: 0.9296
- Recall: 0.9242
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
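
Note that the headline metrics above come from the epoch-1.67 checkpoint rather than the final one, so the best checkpoint by validation loss was evidently retained while later evaluations drift upward. With the `Trainer`, that selection is typically expressed as below; whether the author used exactly these flags is an assumption:

```python
from transformers import TrainingArguments

# Keep the checkpoint with the lowest validation loss rather than the last one.
args = TrainingArguments(
    output_dir="vit-base-kidney-stone-sec",  # placeholder
    eval_strategy="steps",
    eval_steps=100,               # matches the 100-step evaluation cadence in the table below
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```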
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2837 | 0.3333 | 100 | 0.5470 | 0.8333 | 0.8693 | 0.8333 | 0.8325 |
| 0.1498 | 0.6667 | 200 | 0.4199 | 0.8658 | 0.8833 | 0.8658 | 0.8647 |
| 0.0979 | 1.0 | 300 | 0.4712 | 0.8783 | 0.9015 | 0.8783 | 0.8799 |
| 0.009 | 1.3333 | 400 | 0.4957 | 0.885 | 0.8933 | 0.885 | 0.8819 |
| 0.0226 | 1.6667 | 500 | 0.2879 | 0.9242 | 0.9296 | 0.9242 | 0.9248 |
| 0.0722 | 2.0 | 600 | 0.4449 | 0.8875 | 0.8906 | 0.8875 | 0.8869 |
| 0.0043 | 2.3333 | 700 | 0.3699 | 0.9125 | 0.9221 | 0.9125 | 0.9104 |
| 0.0678 | 2.6667 | 800 | 0.6081 | 0.8792 | 0.8872 | 0.8792 | 0.8760 |
| 0.1178 | 3.0 | 900 | 0.5728 | 0.8767 | 0.8748 | 0.8767 | 0.8744 |
| 0.0297 | 3.3333 | 1000 | 0.3977 | 0.9258 | 0.9267 | 0.9258 | 0.9257 |
| 0.0813 | 3.6667 | 1100 | 1.1116 | 0.8283 | 0.8462 | 0.8283 | 0.8153 |
| 0.0336 | 4.0 | 1200 | 0.9246 | 0.82 | 0.8215 | 0.82 | 0.8155 |
| 0.0291 | 4.3333 | 1300 | 0.6674 | 0.8808 | 0.8980 | 0.8808 | 0.8819 |
| 0.1018 | 4.6667 | 1400 | 0.7256 | 0.8667 | 0.8760 | 0.8667 | 0.8641 |
| 0.0739 | 5.0 | 1500 | 0.4149 | 0.8908 | 0.9082 | 0.8908 | 0.8913 |
| 0.0017 | 5.3333 | 1600 | 0.3553 | 0.9208 | 0.9291 | 0.9208 | 0.9219 |
| 0.0011 | 5.6667 | 1700 | 0.3934 | 0.915 | 0.9188 | 0.915 | 0.9157 |
| 0.0056 | 6.0 | 1800 | 0.8180 | 0.8725 | 0.9139 | 0.8725 | 0.8733 |
| 0.001 | 6.3333 | 1900 | 0.3790 | 0.9225 | 0.9216 | 0.9225 | 0.9217 |
| 0.0055 | 6.6667 | 2000 | 0.6404 | 0.88 | 0.8910 | 0.88 | 0.8765 |
| 0.0007 | 7.0 | 2100 | 0.5133 | 0.9017 | 0.9073 | 0.9017 | 0.9023 |
| 0.0009 | 7.3333 | 2200 | 0.4628 | 0.92 | 0.9296 | 0.92 | 0.9189 |
| 0.0007 | 7.6667 | 2300 | 0.8405 | 0.8617 | 0.8744 | 0.8617 | 0.8581 |
| 0.1144 | 8.0 | 2400 | 1.0096 | 0.8592 | 0.8954 | 0.8592 | 0.8567 |
| 0.0007 | 8.3333 | 2500 | 0.6318 | 0.8983 | 0.9113 | 0.8983 | 0.8977 |
| 0.0005 | 8.6667 | 2600 | 0.4929 | 0.9075 | 0.9135 | 0.9075 | 0.9076 |
| 0.0013 | 9.0 | 2700 | 0.6148 | 0.8883 | 0.8955 | 0.8883 | 0.8866 |
| 0.001 | 9.3333 | 2800 | 1.0043 | 0.8392 | 0.8538 | 0.8392 | 0.8355 |
| 0.0004 | 9.6667 | 2900 | 0.9713 | 0.8425 | 0.8556 | 0.8425 | 0.8390 |
| 0.0004 | 10.0 | 3000 | 0.9737 | 0.865 | 0.8977 | 0.865 | 0.8634 |
| 0.0004 | 10.3333 | 3100 | 0.8766 | 0.8683 | 0.8835 | 0.8683 | 0.8673 |
| 0.0004 | 10.6667 | 3200 | 0.8620 | 0.8683 | 0.8808 | 0.8683 | 0.8672 |
| 0.0003 | 11.0 | 3300 | 0.8669 | 0.8675 | 0.8803 | 0.8675 | 0.8665 |
| 0.0003 | 11.3333 | 3400 | 0.8712 | 0.8667 | 0.8789 | 0.8667 | 0.8656 |
| 0.0003 | 11.6667 | 3500 | 0.8732 | 0.8675 | 0.8797 | 0.8675 | 0.8665 |
| 0.0003 | 12.0 | 3600 | 0.8754 | 0.8658 | 0.8782 | 0.8658 | 0.8648 |
| 0.0003 | 12.3333 | 3700 | 0.8775 | 0.8658 | 0.8782 | 0.8658 | 0.8648 |
| 0.0003 | 12.6667 | 3800 | 0.8797 | 0.865 | 0.8772 | 0.865 | 0.8640 |
| 0.0003 | 13.0 | 3900 | 0.8816 | 0.865 | 0.8772 | 0.865 | 0.8640 |
| 0.0003 | 13.3333 | 4000 | 0.8835 | 0.865 | 0.8772 | 0.865 | 0.8640 |
| 0.0003 | 13.6667 | 4100 | 0.8844 | 0.865 | 0.8769 | 0.865 | 0.8639 |
| 0.0003 | 14.0 | 4200 | 0.8852 | 0.8658 | 0.8775 | 0.8658 | 0.8648 |
| 0.0002 | 14.3333 | 4300 | 0.8859 | 0.8667 | 0.8780 | 0.8667 | 0.8655 |
| 0.0002 | 14.6667 | 4400 | 0.8865 | 0.8675 | 0.8786 | 0.8675 | 0.8664 |
| 0.0002 | 15.0 | 4500 | 0.8868 | 0.8675 | 0.8786 | 0.8675 | 0.8664 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iva",
"sec-subtype_iva2",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-4-Michel_Daudon_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-4-Michel_Daudon_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Accuracy: 0.8136
- Precision: 0.8643
- Recall: 0.8136
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1898 | 0.3333 | 100 | 0.9163 | 0.7294 | 0.7512 | 0.7294 | 0.7288 |
| 0.2681 | 0.6667 | 200 | 0.6804 | 0.8136 | 0.8643 | 0.8136 | 0.8124 |
| 0.1036 | 1.0 | 300 | 0.9091 | 0.7939 | 0.8124 | 0.7939 | 0.7880 |
| 0.1047 | 1.3333 | 400 | 1.5065 | 0.6566 | 0.6964 | 0.6566 | 0.6685 |
| 0.0449 | 1.6667 | 500 | 0.9248 | 0.7833 | 0.7988 | 0.7833 | 0.7893 |
| 0.1781 | 2.0 | 600 | 1.1234 | 0.7621 | 0.7926 | 0.7621 | 0.7607 |
| 0.1509 | 2.3333 | 700 | 1.1867 | 0.7465 | 0.7468 | 0.7465 | 0.7396 |
| 0.1324 | 2.6667 | 800 | 1.3904 | 0.7433 | 0.7586 | 0.7433 | 0.7329 |
| 0.0037 | 3.0 | 900 | 1.3699 | 0.7408 | 0.7950 | 0.7408 | 0.7441 |
| 0.0025 | 3.3333 | 1000 | 1.2225 | 0.7433 | 0.7667 | 0.7433 | 0.7448 |
| 0.0587 | 3.6667 | 1100 | 1.4635 | 0.7244 | 0.7766 | 0.7244 | 0.7274 |
| 0.0422 | 4.0 | 1200 | 1.4949 | 0.7433 | 0.7599 | 0.7433 | 0.7398 |
| 0.0084 | 4.3333 | 1300 | 1.2363 | 0.7841 | 0.7863 | 0.7841 | 0.7788 |
| 0.0796 | 4.6667 | 1400 | 1.5322 | 0.7392 | 0.7473 | 0.7392 | 0.7419 |
| 0.003 | 5.0 | 1500 | 1.6031 | 0.7294 | 0.7752 | 0.7294 | 0.7319 |
| 0.0012 | 5.3333 | 1600 | 1.0992 | 0.8062 | 0.8066 | 0.8062 | 0.8056 |
| 0.0009 | 5.6667 | 1700 | 2.1569 | 0.6999 | 0.7144 | 0.6999 | 0.6907 |
| 0.0022 | 6.0 | 1800 | 2.2827 | 0.6312 | 0.6385 | 0.6312 | 0.6195 |
| 0.0009 | 6.3333 | 1900 | 1.8713 | 0.7089 | 0.7476 | 0.7089 | 0.6997 |
| 0.0012 | 6.6667 | 2000 | 1.9461 | 0.6983 | 0.6983 | 0.6983 | 0.6788 |
| 0.0006 | 7.0 | 2100 | 1.8889 | 0.7114 | 0.7217 | 0.7114 | 0.6998 |
| 0.0006 | 7.3333 | 2200 | 1.9514 | 0.6991 | 0.7212 | 0.6991 | 0.6794 |
| 0.0005 | 7.6667 | 2300 | 1.9619 | 0.7138 | 0.6644 | 0.7138 | 0.6726 |
| 0.0013 | 8.0 | 2400 | 1.7297 | 0.7490 | 0.7589 | 0.7490 | 0.7493 |
| 0.0005 | 8.3333 | 2500 | 2.2490 | 0.6950 | 0.7015 | 0.6950 | 0.6914 |
| 0.0004 | 8.6667 | 2600 | 2.2431 | 0.6975 | 0.7039 | 0.6975 | 0.6932 |
| 0.0009 | 9.0 | 2700 | 1.8096 | 0.7490 | 0.7593 | 0.7490 | 0.7443 |
| 0.0003 | 9.3333 | 2800 | 1.9490 | 0.7375 | 0.7450 | 0.7375 | 0.7353 |
| 0.0011 | 9.6667 | 2900 | 2.0860 | 0.7294 | 0.7239 | 0.7294 | 0.7153 |
| 0.0003 | 10.0 | 3000 | 1.9343 | 0.7383 | 0.7468 | 0.7383 | 0.7399 |
| 0.0004 | 10.3333 | 3100 | 1.9158 | 0.7457 | 0.7513 | 0.7457 | 0.7464 |
| 0.0003 | 10.6667 | 3200 | 1.9289 | 0.7465 | 0.7526 | 0.7465 | 0.7475 |
| 0.0802 | 11.0 | 3300 | 2.0591 | 0.7375 | 0.7487 | 0.7375 | 0.7404 |
| 0.0565 | 11.3333 | 3400 | 2.2480 | 0.7016 | 0.7854 | 0.7016 | 0.7131 |
| 0.0003 | 11.6667 | 3500 | 1.7115 | 0.7539 | 0.8088 | 0.7539 | 0.7572 |
| 0.0003 | 12.0 | 3600 | 1.9888 | 0.7195 | 0.7679 | 0.7195 | 0.7222 |
| 0.0003 | 12.3333 | 3700 | 2.0141 | 0.7179 | 0.7227 | 0.7179 | 0.7133 |
| 0.0002 | 12.6667 | 3800 | 2.0314 | 0.7089 | 0.7158 | 0.7089 | 0.7081 |
| 0.0002 | 13.0 | 3900 | 1.8735 | 0.7187 | 0.7291 | 0.7187 | 0.7220 |
| 0.0002 | 13.3333 | 4000 | 1.8854 | 0.7179 | 0.7281 | 0.7179 | 0.7210 |
| 0.0002 | 13.6667 | 4100 | 1.8931 | 0.7179 | 0.7281 | 0.7179 | 0.7210 |
| 0.0002 | 14.0 | 4200 | 1.8992 | 0.7179 | 0.7285 | 0.7179 | 0.7212 |
| 0.0002 | 14.3333 | 4300 | 1.9039 | 0.7179 | 0.7285 | 0.7179 | 0.7212 |
| 0.0002 | 14.6667 | 4400 | 1.9063 | 0.7179 | 0.7285 | 0.7179 | 0.7212 |
| 0.0002 | 15.0 | 4500 | 1.9073 | 0.7179 | 0.7285 | 0.7179 | 0.7212 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iva",
"sur-subtype_iva2",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-5-Jonathan_El-Beze_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-5-Jonathan_El-Beze_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4482
- Accuracy: 0.8683
- Precision: 0.8788
- Recall: 0.8683
- F1: 0.8688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
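
Fine-tuning for six stone subtypes means re-initializing the ViT classification head with a six-way output. A sketch, using the label names from this card's label list (whether training used exactly these strings is an assumption):

```python
from transformers import ViTForImageClassification

labels = [
    "mix-subtype_iiia", "mix-subtype_iia", "mix-subtype_ivc",
    "mix-subtype_ivd", "mix-subtype_ia", "mix-subtype_va",
]

# Fresh 6-way head on top of the pretrained ViT backbone.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)
```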
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2457 | 0.1667 | 100 | 0.5382 | 0.8258 | 0.8382 | 0.8258 | 0.8180 |
| 0.0854 | 0.3333 | 200 | 0.7377 | 0.7875 | 0.8422 | 0.7875 | 0.7795 |
| 0.1279 | 0.5 | 300 | 0.6710 | 0.7883 | 0.8568 | 0.7883 | 0.7883 |
| 0.1442 | 0.6667 | 400 | 0.5535 | 0.8192 | 0.8342 | 0.8192 | 0.8192 |
| 0.2868 | 0.8333 | 500 | 1.0679 | 0.7242 | 0.7910 | 0.7242 | 0.7163 |
| 0.1327 | 1.0 | 600 | 0.4482 | 0.8683 | 0.8788 | 0.8683 | 0.8688 |
| 0.1097 | 1.1667 | 700 | 0.8910 | 0.7983 | 0.8425 | 0.7983 | 0.7898 |
| 0.0725 | 1.3333 | 800 | 0.6816 | 0.8037 | 0.8375 | 0.8037 | 0.8015 |
| 0.0152 | 1.5 | 900 | 0.8366 | 0.8175 | 0.8466 | 0.8175 | 0.8169 |
| 0.0057 | 1.6667 | 1000 | 0.5298 | 0.8812 | 0.8924 | 0.8812 | 0.8810 |
| 0.0804 | 1.8333 | 1100 | 1.1549 | 0.7425 | 0.8162 | 0.7425 | 0.7228 |
| 0.0655 | 2.0 | 1200 | 0.9445 | 0.795 | 0.8350 | 0.795 | 0.7907 |
| 0.1261 | 2.1667 | 1300 | 0.8882 | 0.8121 | 0.8449 | 0.8121 | 0.8067 |
| 0.0418 | 2.3333 | 1400 | 0.6411 | 0.8638 | 0.8682 | 0.8638 | 0.8636 |
| 0.0809 | 2.5 | 1500 | 0.5780 | 0.8708 | 0.8811 | 0.8708 | 0.8683 |
| 0.1062 | 2.6667 | 1600 | 1.1595 | 0.7875 | 0.8249 | 0.7875 | 0.7623 |
| 0.0021 | 2.8333 | 1700 | 1.4652 | 0.7525 | 0.8050 | 0.7525 | 0.7379 |
| 0.0031 | 3.0 | 1800 | 1.1441 | 0.7904 | 0.8277 | 0.7904 | 0.7647 |
| 0.0026 | 3.1667 | 1900 | 0.6132 | 0.8479 | 0.8537 | 0.8479 | 0.8471 |
| 0.0011 | 3.3333 | 2000 | 0.5269 | 0.8925 | 0.8948 | 0.8925 | 0.8913 |
| 0.0014 | 3.5 | 2100 | 0.8908 | 0.7808 | 0.8294 | 0.7808 | 0.7723 |
| 0.0013 | 3.6667 | 2200 | 0.8869 | 0.8075 | 0.8466 | 0.8075 | 0.8101 |
| 0.0007 | 3.8333 | 2300 | 0.6948 | 0.8667 | 0.8817 | 0.8667 | 0.8662 |
| 0.0824 | 4.0 | 2400 | 0.4991 | 0.8929 | 0.8962 | 0.8929 | 0.8934 |
| 0.0021 | 4.1667 | 2500 | 0.5147 | 0.9038 | 0.9116 | 0.9038 | 0.9025 |
| 0.0006 | 4.3333 | 2600 | 0.5748 | 0.8967 | 0.9043 | 0.8967 | 0.8970 |
| 0.0005 | 4.5 | 2700 | 0.5797 | 0.8962 | 0.9035 | 0.8962 | 0.8966 |
| 0.0006 | 4.6667 | 2800 | 0.8573 | 0.855 | 0.8741 | 0.855 | 0.8534 |
| 0.0006 | 4.8333 | 2900 | 0.7548 | 0.8446 | 0.8617 | 0.8446 | 0.8415 |
| 0.0019 | 5.0 | 3000 | 0.6473 | 0.8733 | 0.8850 | 0.8733 | 0.8714 |
| 0.0469 | 5.1667 | 3100 | 0.8790 | 0.8258 | 0.8368 | 0.8258 | 0.8274 |
| 0.0271 | 5.3333 | 3200 | 1.6532 | 0.7525 | 0.8328 | 0.7525 | 0.7430 |
| 0.0005 | 5.5 | 3300 | 0.7739 | 0.8654 | 0.8743 | 0.8654 | 0.8660 |
| 0.1697 | 5.6667 | 3400 | 0.7311 | 0.8592 | 0.8816 | 0.8592 | 0.8612 |
| 0.0162 | 5.8333 | 3500 | 0.7819 | 0.8621 | 0.8678 | 0.8621 | 0.8620 |
| 0.0039 | 6.0 | 3600 | 1.1462 | 0.8092 | 0.8282 | 0.8092 | 0.8073 |
| 0.0005 | 6.1667 | 3700 | 0.6625 | 0.8692 | 0.8750 | 0.8692 | 0.8699 |
| 0.0022 | 6.3333 | 3800 | 1.1395 | 0.8079 | 0.8245 | 0.8079 | 0.7988 |
| 0.0039 | 6.5 | 3900 | 0.5258 | 0.9104 | 0.9145 | 0.9104 | 0.9111 |
| 0.0003 | 6.6667 | 4000 | 0.8170 | 0.8438 | 0.8598 | 0.8438 | 0.8445 |
| 0.0005 | 6.8333 | 4100 | 0.6582 | 0.8862 | 0.8906 | 0.8862 | 0.8847 |
| 0.0003 | 7.0 | 4200 | 0.8093 | 0.8571 | 0.8707 | 0.8571 | 0.8585 |
| 0.0002 | 7.1667 | 4300 | 0.7803 | 0.8633 | 0.8744 | 0.8633 | 0.8645 |
| 0.0002 | 7.3333 | 4400 | 0.7809 | 0.865 | 0.8767 | 0.865 | 0.8660 |
| 0.0002 | 7.5 | 4500 | 0.7817 | 0.8671 | 0.8788 | 0.8671 | 0.8680 |
| 0.0002 | 7.6667 | 4600 | 0.7804 | 0.8683 | 0.8792 | 0.8683 | 0.8692 |
| 0.0001 | 7.8333 | 4700 | 0.7560 | 0.8762 | 0.8840 | 0.8762 | 0.8766 |
| 0.0002 | 8.0 | 4800 | 0.7634 | 0.8767 | 0.8848 | 0.8767 | 0.8771 |
| 0.0001 | 8.1667 | 4900 | 0.7603 | 0.8792 | 0.8866 | 0.8792 | 0.8794 |
| 0.0001 | 8.3333 | 5000 | 0.7596 | 0.8792 | 0.8864 | 0.8792 | 0.8794 |
| 0.0001 | 8.5 | 5100 | 0.7636 | 0.8804 | 0.8875 | 0.8804 | 0.8806 |
| 0.0001 | 8.6667 | 5200 | 0.7681 | 0.8792 | 0.8869 | 0.8792 | 0.8794 |
| 0.0001 | 8.8333 | 5300 | 0.7720 | 0.8796 | 0.8877 | 0.8796 | 0.8799 |
| 0.0001 | 9.0 | 5400 | 0.7743 | 0.8796 | 0.8876 | 0.8796 | 0.8798 |
| 0.0001 | 9.1667 | 5500 | 0.7771 | 0.88 | 0.8880 | 0.88 | 0.8802 |
| 0.0001 | 9.3333 | 5600 | 0.7801 | 0.8804 | 0.8883 | 0.8804 | 0.8806 |
| 0.0001 | 9.5 | 5700 | 0.7823 | 0.8804 | 0.8883 | 0.8804 | 0.8806 |
| 0.0001 | 9.6667 | 5800 | 0.7851 | 0.8808 | 0.8885 | 0.8808 | 0.8810 |
| 0.0001 | 9.8333 | 5900 | 0.7873 | 0.8808 | 0.8885 | 0.8808 | 0.8810 |
| 0.0001 | 10.0 | 6000 | 0.7907 | 0.8812 | 0.8890 | 0.8812 | 0.8814 |
| 0.0001 | 10.1667 | 6100 | 0.7934 | 0.8817 | 0.8893 | 0.8817 | 0.8818 |
| 0.0001 | 10.3333 | 6200 | 0.7968 | 0.8817 | 0.8896 | 0.8817 | 0.8818 |
| 0.0001 | 10.5 | 6300 | 0.8003 | 0.8817 | 0.8896 | 0.8817 | 0.8818 |
| 0.0001 | 10.6667 | 6400 | 0.8027 | 0.8817 | 0.8896 | 0.8817 | 0.8818 |
| 0.0001 | 10.8333 | 6500 | 0.8035 | 0.8812 | 0.8894 | 0.8812 | 0.8815 |
| 0.0001 | 11.0 | 6600 | 0.8049 | 0.8812 | 0.8894 | 0.8812 | 0.8815 |
| 0.0001 | 11.1667 | 6700 | 0.8070 | 0.8812 | 0.8894 | 0.8812 | 0.8815 |
| 0.0001 | 11.3333 | 6800 | 0.8091 | 0.8812 | 0.8894 | 0.8812 | 0.8815 |
| 0.0001 | 11.5 | 6900 | 0.8124 | 0.8817 | 0.8897 | 0.8817 | 0.8818 |
| 0.0001 | 11.6667 | 7000 | 0.8147 | 0.8817 | 0.8897 | 0.8817 | 0.8818 |
| 0.0001 | 11.8333 | 7100 | 0.8163 | 0.8821 | 0.8899 | 0.8821 | 0.8822 |
| 0.0001 | 12.0 | 7200 | 0.8181 | 0.8829 | 0.8908 | 0.8829 | 0.8830 |
| 0.0 | 12.1667 | 7300 | 0.8204 | 0.8833 | 0.8911 | 0.8833 | 0.8834 |
| 0.0 | 12.3333 | 7400 | 0.8224 | 0.8833 | 0.8911 | 0.8833 | 0.8834 |
| 0.0 | 12.5 | 7500 | 0.8246 | 0.8825 | 0.8902 | 0.8825 | 0.8826 |
| 0.0 | 12.6667 | 7600 | 0.8267 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 12.8333 | 7700 | 0.8280 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 13.0 | 7800 | 0.8290 | 0.8825 | 0.8902 | 0.8825 | 0.8826 |
| 0.0 | 13.1667 | 7900 | 0.8309 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 13.3333 | 8000 | 0.8328 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 13.5 | 8100 | 0.8340 | 0.8825 | 0.8902 | 0.8825 | 0.8826 |
| 0.0 | 13.6667 | 8200 | 0.8348 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 13.8333 | 8300 | 0.8360 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 14.0 | 8400 | 0.8369 | 0.8825 | 0.8902 | 0.8825 | 0.8826 |
| 0.0 | 14.1667 | 8500 | 0.8379 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 14.3333 | 8600 | 0.8386 | 0.8821 | 0.8898 | 0.8821 | 0.8821 |
| 0.0 | 14.5 | 8700 | 0.8390 | 0.8829 | 0.8905 | 0.8829 | 0.8830 |
| 0.0 | 14.6667 | 8800 | 0.8397 | 0.8825 | 0.8901 | 0.8825 | 0.8825 |
| 0.0 | 14.8333 | 8900 | 0.8401 | 0.8825 | 0.8901 | 0.8825 | 0.8825 |
| 0.0 | 15.0 | 9000 | 0.8401 | 0.8825 | 0.8901 | 0.8825 | 0.8825 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iiia",
"mix-subtype_iia",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-5-Jonathan_El-Beze_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-5-Jonathan_El-Beze_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2317
- Accuracy: 0.9583
- Precision: 0.9611
- Recall: 0.9583
- F1: 0.9575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1048 | 0.3333 | 100 | 0.2766 | 0.9125 | 0.9266 | 0.9125 | 0.9148 |
| 0.1694 | 0.6667 | 200 | 0.5766 | 0.855 | 0.8878 | 0.855 | 0.8515 |
| 0.1116 | 1.0 | 300 | 0.8084 | 0.8233 | 0.8730 | 0.8233 | 0.8067 |
| 0.0071 | 1.3333 | 400 | 0.6568 | 0.8783 | 0.9098 | 0.8783 | 0.8717 |
| 0.0606 | 1.6667 | 500 | 0.6522 | 0.8767 | 0.9201 | 0.8767 | 0.8796 |
| 0.0069 | 2.0 | 600 | 1.3007 | 0.7383 | 0.7651 | 0.7383 | 0.7228 |
| 0.003 | 2.3333 | 700 | 0.3122 | 0.925 | 0.9287 | 0.925 | 0.9253 |
| 0.002 | 2.6667 | 800 | 0.5233 | 0.89 | 0.9141 | 0.89 | 0.8863 |
| 0.0023 | 3.0 | 900 | 0.7763 | 0.8567 | 0.8853 | 0.8567 | 0.8499 |
| 0.1048 | 3.3333 | 1000 | 0.5440 | 0.8983 | 0.9024 | 0.8983 | 0.8971 |
| 0.0023 | 3.6667 | 1100 | 0.3234 | 0.9367 | 0.9471 | 0.9367 | 0.9366 |
| 0.0943 | 4.0 | 1200 | 0.9164 | 0.84 | 0.9062 | 0.84 | 0.8402 |
| 0.0858 | 4.3333 | 1300 | 0.2317 | 0.9583 | 0.9611 | 0.9583 | 0.9575 |
| 0.0011 | 4.6667 | 1400 | 1.0192 | 0.82 | 0.8376 | 0.82 | 0.8045 |
| 0.0009 | 5.0 | 1500 | 0.5853 | 0.8725 | 0.9008 | 0.8725 | 0.8718 |
| 0.0007 | 5.3333 | 1600 | 0.5612 | 0.8842 | 0.9086 | 0.8842 | 0.8841 |
| 0.0006 | 5.6667 | 1700 | 0.5591 | 0.8842 | 0.9085 | 0.8842 | 0.8842 |
| 0.0006 | 6.0 | 1800 | 0.5744 | 0.8833 | 0.9085 | 0.8833 | 0.8832 |
| 0.0005 | 6.3333 | 1900 | 0.5831 | 0.8817 | 0.9065 | 0.8817 | 0.8816 |
| 0.0005 | 6.6667 | 2000 | 0.5819 | 0.8842 | 0.9075 | 0.8842 | 0.8842 |
| 0.0004 | 7.0 | 2100 | 0.5861 | 0.8842 | 0.9076 | 0.8842 | 0.8843 |
| 0.0004 | 7.3333 | 2200 | 0.5866 | 0.8867 | 0.9092 | 0.8867 | 0.8869 |
| 0.0004 | 7.6667 | 2300 | 0.5911 | 0.8867 | 0.9092 | 0.8867 | 0.8869 |
| 0.0004 | 8.0 | 2400 | 0.5931 | 0.8867 | 0.9092 | 0.8867 | 0.8869 |
| 0.0003 | 8.3333 | 2500 | 0.5992 | 0.8867 | 0.9092 | 0.8867 | 0.8869 |
| 0.0003 | 8.6667 | 2600 | 0.5975 | 0.8892 | 0.9108 | 0.8892 | 0.8895 |
| 0.0003 | 9.0 | 2700 | 0.5978 | 0.89 | 0.9112 | 0.89 | 0.8904 |
| 0.0003 | 9.3333 | 2800 | 0.6015 | 0.89 | 0.9115 | 0.89 | 0.8905 |
| 0.0003 | 9.6667 | 2900 | 0.6045 | 0.89 | 0.9115 | 0.89 | 0.8905 |
| 0.0002 | 10.0 | 3000 | 0.6030 | 0.89 | 0.9115 | 0.89 | 0.8905 |
| 0.0002 | 10.3333 | 3100 | 0.6025 | 0.8917 | 0.9124 | 0.8917 | 0.8922 |
| 0.0002 | 10.6667 | 3200 | 0.6038 | 0.8917 | 0.9124 | 0.8917 | 0.8922 |
| 0.0002 | 11.0 | 3300 | 0.6075 | 0.8908 | 0.9112 | 0.8908 | 0.8913 |
| 0.0002 | 11.3333 | 3400 | 0.6090 | 0.8917 | 0.9116 | 0.8917 | 0.8922 |
| 0.0002 | 11.6667 | 3500 | 0.6109 | 0.8917 | 0.9116 | 0.8917 | 0.8923 |
| 0.0002 | 12.0 | 3600 | 0.6111 | 0.8917 | 0.9116 | 0.8917 | 0.8923 |
| 0.0002 | 12.3333 | 3700 | 0.6121 | 0.8917 | 0.9116 | 0.8917 | 0.8923 |
| 0.0002 | 12.6667 | 3800 | 0.6126 | 0.8917 | 0.9116 | 0.8917 | 0.8923 |
| 0.0002 | 13.0 | 3900 | 0.6135 | 0.8917 | 0.9119 | 0.8917 | 0.8923 |
| 0.0002 | 13.3333 | 4000 | 0.6142 | 0.8917 | 0.9119 | 0.8917 | 0.8923 |
| 0.0002 | 13.6667 | 4100 | 0.6154 | 0.8917 | 0.9119 | 0.8917 | 0.8923 |
| 0.0002 | 14.0 | 4200 | 0.6156 | 0.8917 | 0.9119 | 0.8917 | 0.8923 |
| 0.0002 | 14.3333 | 4300 | 0.6159 | 0.8917 | 0.9119 | 0.8917 | 0.8923 |
| 0.0002 | 14.6667 | 4400 | 0.6162 | 0.8917 | 0.9119 | 0.8917 | 0.8923 |
| 0.0002 | 15.0 | 4500 | 0.6163 | 0.8917 | 0.9119 | 0.8917 | 0.8923 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iiia",
"sec-subtype_iia",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-5-Jonathan_El-Beze_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-5-Jonathan_El-Beze_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5091
- Accuracy: 0.8617
- Precision: 0.8757
- Recall: 0.8617
- F1: 0.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
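
Putting the earlier sketches together, the run itself reduces to a standard `Trainer` call; `model`, `training_args`, `dataset`, and `compute_metrics` refer to the hypothetical objects sketched in the preceding cards, and the split names are assumptions:

```python
from transformers import Trainer

trainer = Trainer(
    model=model,                        # 6-way ViT head, sketched earlier
    args=training_args,                 # hyperparameters as listed above
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    compute_metrics=compute_metrics,    # weighted precision/recall/F1
)
trainer.train()
print(trainer.evaluate())
```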
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2613 | 0.3333 | 100 | 0.6234 | 0.7883 | 0.8364 | 0.7883 | 0.7915 |
| 0.1745 | 0.6667 | 200 | 0.7693 | 0.7342 | 0.7739 | 0.7342 | 0.7088 |
| 0.1303 | 1.0 | 300 | 0.5091 | 0.8617 | 0.8757 | 0.8617 | 0.8604 |
| 0.0163 | 1.3333 | 400 | 0.5309 | 0.8708 | 0.8869 | 0.8708 | 0.8706 |
| 0.009 | 1.6667 | 500 | 0.9663 | 0.7725 | 0.8345 | 0.7725 | 0.7706 |
| 0.0221 | 2.0 | 600 | 1.3265 | 0.7225 | 0.8133 | 0.7225 | 0.7219 |
| 0.0053 | 2.3333 | 700 | 0.8728 | 0.8408 | 0.8727 | 0.8408 | 0.8366 |
| 0.0031 | 2.6667 | 800 | 0.9499 | 0.8258 | 0.8596 | 0.8258 | 0.8225 |
| 0.0733 | 3.0 | 900 | 0.8135 | 0.8558 | 0.8840 | 0.8558 | 0.8554 |
| 0.0026 | 3.3333 | 1000 | 0.6858 | 0.885 | 0.8963 | 0.885 | 0.8826 |
| 0.0028 | 3.6667 | 1100 | 0.8497 | 0.8608 | 0.9004 | 0.8608 | 0.8631 |
| 0.0021 | 4.0 | 1200 | 1.0722 | 0.81 | 0.8493 | 0.81 | 0.8114 |
| 0.0023 | 4.3333 | 1300 | 0.7217 | 0.8742 | 0.8742 | 0.8742 | 0.8737 |
| 0.0243 | 4.6667 | 1400 | 0.8721 | 0.8467 | 0.8627 | 0.8467 | 0.8449 |
| 0.004 | 5.0 | 1500 | 0.8314 | 0.8425 | 0.8500 | 0.8425 | 0.8402 |
| 0.0011 | 5.3333 | 1600 | 0.9170 | 0.8367 | 0.8362 | 0.8367 | 0.8347 |
| 0.0008 | 5.6667 | 1700 | 0.9080 | 0.8475 | 0.8536 | 0.8475 | 0.8452 |
| 0.0017 | 6.0 | 1800 | 0.8709 | 0.855 | 0.8642 | 0.855 | 0.8527 |
| 0.0007 | 6.3333 | 1900 | 0.7878 | 0.8808 | 0.8899 | 0.8808 | 0.8777 |
| 0.0006 | 6.6667 | 2000 | 0.7954 | 0.8825 | 0.8926 | 0.8825 | 0.8795 |
| 0.0007 | 7.0 | 2100 | 1.0196 | 0.8475 | 0.8640 | 0.8475 | 0.8438 |
| 0.0005 | 7.3333 | 2200 | 1.0647 | 0.8508 | 0.8665 | 0.8508 | 0.8463 |
| 0.0005 | 7.6667 | 2300 | 1.2970 | 0.8125 | 0.8430 | 0.8125 | 0.8111 |
| 0.0005 | 8.0 | 2400 | 1.2049 | 0.8167 | 0.8214 | 0.8167 | 0.8143 |
| 0.0021 | 8.3333 | 2500 | 0.9407 | 0.8642 | 0.8663 | 0.8642 | 0.8602 |
| 0.0006 | 8.6667 | 2600 | 1.8421 | 0.7258 | 0.8273 | 0.7258 | 0.7256 |
| 0.0005 | 9.0 | 2700 | 1.6230 | 0.76 | 0.7921 | 0.76 | 0.7555 |
| 0.0116 | 9.3333 | 2800 | 1.2096 | 0.8258 | 0.8495 | 0.8258 | 0.8182 |
| 0.0004 | 9.6667 | 2900 | 1.4233 | 0.8158 | 0.8258 | 0.8158 | 0.8111 |
| 0.0006 | 10.0 | 3000 | 1.5142 | 0.7775 | 0.8340 | 0.7775 | 0.7760 |
| 0.0004 | 10.3333 | 3100 | 0.8260 | 0.875 | 0.8833 | 0.875 | 0.8715 |
| 0.0004 | 10.6667 | 3200 | 0.8945 | 0.8642 | 0.8754 | 0.8642 | 0.8631 |
| 0.0003 | 11.0 | 3300 | 0.9189 | 0.865 | 0.8658 | 0.865 | 0.8596 |
| 0.0003 | 11.3333 | 3400 | 0.6929 | 0.8917 | 0.8926 | 0.8917 | 0.8882 |
| 0.0003 | 11.6667 | 3500 | 0.7764 | 0.8908 | 0.9000 | 0.8908 | 0.8879 |
| 0.0003 | 12.0 | 3600 | 0.9250 | 0.8617 | 0.8749 | 0.8617 | 0.8598 |
| 0.0002 | 12.3333 | 3700 | 0.9109 | 0.865 | 0.8772 | 0.865 | 0.8628 |
| 0.0002 | 12.6667 | 3800 | 0.9101 | 0.865 | 0.8772 | 0.865 | 0.8628 |
| 0.0002 | 13.0 | 3900 | 0.9113 | 0.8675 | 0.8792 | 0.8675 | 0.8653 |
| 0.0002 | 13.3333 | 4000 | 0.9124 | 0.8683 | 0.8800 | 0.8683 | 0.8662 |
| 0.0002 | 13.6667 | 4100 | 0.9130 | 0.8683 | 0.8800 | 0.8683 | 0.8662 |
| 0.0002 | 14.0 | 4200 | 0.9124 | 0.8683 | 0.8800 | 0.8683 | 0.8662 |
| 0.0002 | 14.3333 | 4300 | 0.9125 | 0.8683 | 0.8800 | 0.8683 | 0.8662 |
| 0.0002 | 14.6667 | 4400 | 0.9130 | 0.8683 | 0.8800 | 0.8683 | 0.8662 |
| 0.0002 | 15.0 | 4500 | 0.9131 | 0.8683 | 0.8800 | 0.8683 | 0.8662 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iiia",
"sur-subtype_iia",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-5-Michel_Daudon_-w256_1k_v1-_MIX
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-5-Michel_Daudon_-w256_1k_v1-_MIX
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3946
- Accuracy: 0.8888
- Precision: 0.8975
- Recall: 0.8888
- F1: 0.8871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5771 | 0.1667 | 100 | 0.6379 | 0.7929 | 0.8436 | 0.7929 | 0.7925 |
| 0.3294 | 0.3333 | 200 | 0.7346 | 0.7992 | 0.8342 | 0.7992 | 0.7915 |
| 0.5113 | 0.5 | 300 | 0.5429 | 0.8638 | 0.8829 | 0.8638 | 0.8625 |
| 0.1584 | 0.6667 | 400 | 0.6327 | 0.8304 | 0.8612 | 0.8304 | 0.8308 |
| 0.2638 | 0.8333 | 500 | 1.0157 | 0.7575 | 0.7964 | 0.7575 | 0.7623 |
| 0.2057 | 1.0 | 600 | 0.3946 | 0.8888 | 0.8975 | 0.8888 | 0.8871 |
| 0.1699 | 1.1667 | 700 | 0.7519 | 0.7987 | 0.8373 | 0.7987 | 0.8004 |
| 0.1526 | 1.3333 | 800 | 0.7253 | 0.8342 | 0.8727 | 0.8342 | 0.8372 |
| 0.0361 | 1.5 | 900 | 1.0151 | 0.7829 | 0.8064 | 0.7829 | 0.7748 |
| 0.0756 | 1.6667 | 1000 | 0.6614 | 0.8625 | 0.8860 | 0.8625 | 0.8647 |
| 0.0267 | 1.8333 | 1100 | 0.9163 | 0.8154 | 0.8321 | 0.8154 | 0.8195 |
| 0.1447 | 2.0 | 1200 | 0.7084 | 0.8271 | 0.8381 | 0.8271 | 0.8244 |
| 0.0132 | 2.1667 | 1300 | 0.8919 | 0.8354 | 0.8758 | 0.8354 | 0.8378 |
| 0.0254 | 2.3333 | 1400 | 0.7531 | 0.8488 | 0.8772 | 0.8488 | 0.8505 |
| 0.0848 | 2.5 | 1500 | 0.6491 | 0.8733 | 0.8841 | 0.8733 | 0.8765 |
| 0.0605 | 2.6667 | 1600 | 0.7045 | 0.855 | 0.8708 | 0.855 | 0.8515 |
| 0.0085 | 2.8333 | 1700 | 1.1652 | 0.7992 | 0.8305 | 0.7992 | 0.7879 |
| 0.1798 | 3.0 | 1800 | 0.9389 | 0.8075 | 0.8350 | 0.8075 | 0.8075 |
| 0.0555 | 3.1667 | 1900 | 0.7451 | 0.8421 | 0.8593 | 0.8421 | 0.8452 |
| 0.0245 | 3.3333 | 2000 | 0.4729 | 0.8888 | 0.8942 | 0.8888 | 0.8880 |
| 0.0017 | 3.5 | 2100 | 0.7608 | 0.8629 | 0.8859 | 0.8629 | 0.8663 |
| 0.0266 | 3.6667 | 2200 | 0.7795 | 0.8571 | 0.8668 | 0.8571 | 0.8578 |
| 0.0072 | 3.8333 | 2300 | 0.6487 | 0.8596 | 0.8862 | 0.8596 | 0.8600 |
| 0.0019 | 4.0 | 2400 | 0.6297 | 0.8712 | 0.8846 | 0.8712 | 0.8723 |
| 0.001 | 4.1667 | 2500 | 0.8346 | 0.8679 | 0.8849 | 0.8679 | 0.8692 |
| 0.0014 | 4.3333 | 2600 | 0.8441 | 0.8633 | 0.8869 | 0.8633 | 0.8671 |
| 0.0068 | 4.5 | 2700 | 0.7032 | 0.8662 | 0.8769 | 0.8662 | 0.8649 |
| 0.0014 | 4.6667 | 2800 | 0.7379 | 0.86 | 0.8795 | 0.86 | 0.8565 |
| 0.0951 | 4.8333 | 2900 | 0.5960 | 0.8979 | 0.9086 | 0.8979 | 0.8984 |
| 0.0439 | 5.0 | 3000 | 0.6975 | 0.8708 | 0.8902 | 0.8708 | 0.8699 |
| 0.1022 | 5.1667 | 3100 | 1.0231 | 0.8363 | 0.8703 | 0.8363 | 0.8312 |
| 0.0239 | 5.3333 | 3200 | 0.7746 | 0.8683 | 0.8767 | 0.8683 | 0.8690 |
| 0.0087 | 5.5 | 3300 | 0.8246 | 0.8567 | 0.8700 | 0.8567 | 0.8561 |
| 0.001 | 5.6667 | 3400 | 1.0921 | 0.8237 | 0.8484 | 0.8237 | 0.8208 |
| 0.0056 | 5.8333 | 3500 | 0.7431 | 0.8533 | 0.8562 | 0.8533 | 0.8524 |
| 0.0007 | 6.0 | 3600 | 0.8992 | 0.8213 | 0.8463 | 0.8213 | 0.8270 |
| 0.0041 | 6.1667 | 3700 | 0.8531 | 0.8438 | 0.8757 | 0.8438 | 0.8454 |
| 0.0138 | 6.3333 | 3800 | 0.6643 | 0.8821 | 0.8918 | 0.8821 | 0.8809 |
| 0.0005 | 6.5 | 3900 | 0.6779 | 0.8862 | 0.8970 | 0.8862 | 0.8877 |
| 0.0005 | 6.6667 | 4000 | 0.7109 | 0.8892 | 0.9030 | 0.8892 | 0.8903 |
| 0.0005 | 6.8333 | 4100 | 0.7191 | 0.8908 | 0.9013 | 0.8908 | 0.8911 |
| 0.0006 | 7.0 | 4200 | 0.8573 | 0.8675 | 0.8846 | 0.8675 | 0.8635 |
| 0.064 | 7.1667 | 4300 | 0.9180 | 0.8608 | 0.8743 | 0.8608 | 0.8603 |
| 0.0005 | 7.3333 | 4400 | 0.7651 | 0.8767 | 0.8885 | 0.8767 | 0.8763 |
| 0.0007 | 7.5 | 4500 | 0.8158 | 0.8571 | 0.8703 | 0.8571 | 0.8569 |
| 0.0004 | 7.6667 | 4600 | 0.8329 | 0.8504 | 0.8709 | 0.8504 | 0.8517 |
| 0.0003 | 7.8333 | 4700 | 0.9078 | 0.8454 | 0.8605 | 0.8454 | 0.8446 |
| 0.0003 | 8.0 | 4800 | 0.8859 | 0.8529 | 0.8684 | 0.8529 | 0.8538 |
| 0.0003 | 8.1667 | 4900 | 0.9303 | 0.8479 | 0.8669 | 0.8479 | 0.8491 |
| 0.0002 | 8.3333 | 5000 | 0.9324 | 0.8475 | 0.8676 | 0.8475 | 0.8483 |
| 0.0002 | 8.5 | 5100 | 0.9206 | 0.8533 | 0.8733 | 0.8533 | 0.8544 |
| 0.0002 | 8.6667 | 5200 | 0.8745 | 0.8621 | 0.8813 | 0.8621 | 0.8630 |
| 0.0002 | 8.8333 | 5300 | 0.9208 | 0.8567 | 0.8764 | 0.8567 | 0.8575 |
| 0.0002 | 9.0 | 5400 | 0.9221 | 0.8583 | 0.8776 | 0.8583 | 0.8592 |
| 0.0002 | 9.1667 | 5500 | 0.9255 | 0.8588 | 0.8777 | 0.8588 | 0.8596 |
| 0.0002 | 9.3333 | 5600 | 0.9285 | 0.8583 | 0.8772 | 0.8583 | 0.8592 |
| 0.0001 | 9.5 | 5700 | 0.9288 | 0.8592 | 0.8780 | 0.8592 | 0.8601 |
| 0.0001 | 9.6667 | 5800 | 0.9305 | 0.8596 | 0.8782 | 0.8596 | 0.8605 |
| 0.0002 | 9.8333 | 5900 | 0.9323 | 0.8596 | 0.8782 | 0.8596 | 0.8605 |
| 0.0001 | 10.0 | 6000 | 0.9335 | 0.8596 | 0.8782 | 0.8596 | 0.8606 |
| 0.0001 | 10.1667 | 6100 | 0.9336 | 0.8608 | 0.8791 | 0.8608 | 0.8619 |
| 0.0001 | 10.3333 | 6200 | 0.9360 | 0.8612 | 0.8795 | 0.8612 | 0.8623 |
| 0.0001 | 10.5 | 6300 | 0.9374 | 0.8625 | 0.8803 | 0.8625 | 0.8635 |
| 0.0001 | 10.6667 | 6400 | 0.9406 | 0.8629 | 0.8809 | 0.8629 | 0.8640 |
| 0.0001 | 10.8333 | 6500 | 0.9420 | 0.8633 | 0.8810 | 0.8633 | 0.8643 |
| 0.0001 | 11.0 | 6600 | 0.9443 | 0.8633 | 0.8810 | 0.8633 | 0.8643 |
| 0.0001 | 11.1667 | 6700 | 0.9452 | 0.8633 | 0.8810 | 0.8633 | 0.8643 |
| 0.0001 | 11.3333 | 6800 | 0.9476 | 0.8638 | 0.8813 | 0.8638 | 0.8647 |
| 0.0001 | 11.5 | 6900 | 0.9495 | 0.8638 | 0.8813 | 0.8638 | 0.8647 |
| 0.0001 | 11.6667 | 7000 | 0.9501 | 0.8642 | 0.8818 | 0.8642 | 0.8652 |
| 0.0001 | 11.8333 | 7100 | 0.9528 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 12.0 | 7200 | 0.9547 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 12.1667 | 7300 | 0.9574 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 12.3333 | 7400 | 0.9586 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 12.5 | 7500 | 0.9594 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 12.6667 | 7600 | 0.9611 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 12.8333 | 7700 | 0.9627 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 13.0 | 7800 | 0.9639 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 13.1667 | 7900 | 0.9656 | 0.8646 | 0.8820 | 0.8646 | 0.8656 |
| 0.0001 | 13.3333 | 8000 | 0.9662 | 0.8646 | 0.8820 | 0.8646 | 0.8655 |
| 0.0001 | 13.5 | 8100 | 0.9675 | 0.8642 | 0.8815 | 0.8642 | 0.8651 |
| 0.0001 | 13.6667 | 8200 | 0.9684 | 0.8642 | 0.8814 | 0.8642 | 0.8651 |
| 0.0001 | 13.8333 | 8300 | 0.9695 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
| 0.0001 | 14.0 | 8400 | 0.9706 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
| 0.0001 | 14.1667 | 8500 | 0.9714 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
| 0.0001 | 14.3333 | 8600 | 0.9724 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
| 0.0001 | 14.5 | 8700 | 0.9727 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
| 0.0001 | 14.6667 | 8800 | 0.9733 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
| 0.0001 | 14.8333 | 8900 | 0.9734 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
| 0.0001 | 15.0 | 9000 | 0.9736 | 0.8646 | 0.8818 | 0.8646 | 0.8656 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"mix-subtype_iva",
"mix-subtype_iva2",
"mix-subtype_ivc",
"mix-subtype_ivd",
"mix-subtype_ia",
"mix-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-5-Michel_Daudon_-w256_1k_v1-_SEC
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-5-Michel_Daudon_-w256_1k_v1-_SEC
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3821
- Accuracy: 0.9283
- Precision: 0.9298
- Recall: 0.9283
- F1: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3259 | 0.3333 | 100 | 0.6052 | 0.8142 | 0.8678 | 0.8142 | 0.8113 |
| 0.1852 | 0.6667 | 200 | 0.4605 | 0.8525 | 0.8799 | 0.8525 | 0.8505 |
| 0.1342 | 1.0 | 300 | 0.5787 | 0.8583 | 0.8939 | 0.8583 | 0.8592 |
| 0.0984 | 1.3333 | 400 | 0.4582 | 0.8875 | 0.8938 | 0.8875 | 0.8863 |
| 0.0555 | 1.6667 | 500 | 0.3914 | 0.8825 | 0.8955 | 0.8825 | 0.8844 |
| 0.2228 | 2.0 | 600 | 0.5982 | 0.865 | 0.8807 | 0.865 | 0.8668 |
| 0.016 | 2.3333 | 700 | 0.5747 | 0.8708 | 0.8929 | 0.8708 | 0.8729 |
| 0.2215 | 2.6667 | 800 | 0.6513 | 0.8575 | 0.8777 | 0.8575 | 0.8564 |
| 0.0118 | 3.0 | 900 | 0.8234 | 0.8492 | 0.8687 | 0.8492 | 0.8498 |
| 0.0028 | 3.3333 | 1000 | 0.6503 | 0.88 | 0.8949 | 0.88 | 0.8804 |
| 0.0035 | 3.6667 | 1100 | 0.4011 | 0.9133 | 0.9207 | 0.9133 | 0.9145 |
| 0.0742 | 4.0 | 1200 | 0.5671 | 0.8833 | 0.9069 | 0.8833 | 0.8833 |
| 0.0074 | 4.3333 | 1300 | 0.6269 | 0.8742 | 0.8902 | 0.8742 | 0.8711 |
| 0.0043 | 4.6667 | 1400 | 0.6497 | 0.8792 | 0.8998 | 0.8792 | 0.8800 |
| 0.133 | 5.0 | 1500 | 0.7292 | 0.8733 | 0.9075 | 0.8733 | 0.8738 |
| 0.0012 | 5.3333 | 1600 | 0.7823 | 0.8633 | 0.8799 | 0.8633 | 0.8637 |
| 0.0009 | 5.6667 | 1700 | 0.4115 | 0.915 | 0.9186 | 0.915 | 0.9156 |
| 0.0011 | 6.0 | 1800 | 0.8521 | 0.85 | 0.8619 | 0.85 | 0.8493 |
| 0.001 | 6.3333 | 1900 | 0.4895 | 0.9108 | 0.9263 | 0.9108 | 0.9126 |
| 0.0219 | 6.6667 | 2000 | 0.3821 | 0.9283 | 0.9298 | 0.9283 | 0.9282 |
| 0.0008 | 7.0 | 2100 | 0.7710 | 0.8683 | 0.8868 | 0.8683 | 0.8666 |
| 0.0007 | 7.3333 | 2200 | 0.5704 | 0.9108 | 0.9179 | 0.9108 | 0.9073 |
| 0.0014 | 7.6667 | 2300 | 0.6604 | 0.8925 | 0.8981 | 0.8925 | 0.8902 |
| 0.0005 | 8.0 | 2400 | 0.5364 | 0.9075 | 0.9095 | 0.9075 | 0.9061 |
| 0.0005 | 8.3333 | 2500 | 0.5356 | 0.9075 | 0.9093 | 0.9075 | 0.9062 |
| 0.0004 | 8.6667 | 2600 | 0.5364 | 0.9067 | 0.9082 | 0.9067 | 0.9053 |
| 0.0004 | 9.0 | 2700 | 0.7982 | 0.8692 | 0.8722 | 0.8692 | 0.8636 |
| 0.0004 | 9.3333 | 2800 | 0.7586 | 0.875 | 0.8774 | 0.875 | 0.8706 |
| 0.0004 | 9.6667 | 2900 | 0.7252 | 0.8808 | 0.8837 | 0.8808 | 0.8774 |
| 0.0003 | 10.0 | 3000 | 0.6126 | 0.8992 | 0.9037 | 0.8992 | 0.8995 |
| 0.0003 | 10.3333 | 3100 | 0.6417 | 0.8917 | 0.8889 | 0.8917 | 0.8899 |
| 0.0003 | 10.6667 | 3200 | 0.6489 | 0.8925 | 0.8901 | 0.8925 | 0.8909 |
| 0.0003 | 11.0 | 3300 | 0.6508 | 0.8917 | 0.8892 | 0.8917 | 0.8900 |
| 0.0003 | 11.3333 | 3400 | 0.6529 | 0.8917 | 0.8892 | 0.8917 | 0.8900 |
| 0.0003 | 11.6667 | 3500 | 0.6544 | 0.8917 | 0.8892 | 0.8917 | 0.8900 |
| 0.0003 | 12.0 | 3600 | 0.6561 | 0.8917 | 0.8892 | 0.8917 | 0.8900 |
| 0.0003 | 12.3333 | 3700 | 0.6577 | 0.8925 | 0.8899 | 0.8925 | 0.8907 |
| 0.0002 | 12.6667 | 3800 | 0.6592 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
| 0.0002 | 13.0 | 3900 | 0.6601 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
| 0.0002 | 13.3333 | 4000 | 0.6613 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
| 0.0002 | 13.6667 | 4100 | 0.6622 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
| 0.0002 | 14.0 | 4200 | 0.6629 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
| 0.0002 | 14.3333 | 4300 | 0.6635 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
| 0.0002 | 14.6667 | 4400 | 0.6638 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
| 0.0002 | 15.0 | 4500 | 0.6640 | 0.8933 | 0.8906 | 0.8933 | 0.8915 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sec-subtype_iva",
"sec-subtype_iva2",
"sec-subtype_ivc",
"sec-subtype_ivd",
"sec-subtype_ia",
"sec-subtype_va"
] |
Ivanrs/vit-base-kidney-stone-5-Michel_Daudon_-w256_1k_v1-_SUR
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-kidney-stone-5-Michel_Daudon_-w256_1k_v1-_SUR
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0850
- Accuracy: 0.7195
- Precision: 0.7506
- Recall: 0.7195
- F1: 0.7206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2033 | 0.3333 | 100 | 1.2261 | 0.6361 | 0.6932 | 0.6361 | 0.6400 |
| 0.0929 | 0.6667 | 200 | 1.0850 | 0.7195 | 0.7506 | 0.7195 | 0.7206 |
| 0.0625 | 1.0 | 300 | 1.3736 | 0.6909 | 0.7185 | 0.6909 | 0.6945 |
| 0.1293 | 1.3333 | 400 | 1.6858 | 0.6819 | 0.7413 | 0.6819 | 0.6573 |
| 0.0786 | 1.6667 | 500 | 1.6693 | 0.6746 | 0.7054 | 0.6746 | 0.6852 |
| 0.0769 | 2.0 | 600 | 1.2500 | 0.7653 | 0.7741 | 0.7653 | 0.7659 |
| 0.0675 | 2.3333 | 700 | 1.2728 | 0.7277 | 0.7905 | 0.7277 | 0.7006 |
| 0.0577 | 2.6667 | 800 | 1.7467 | 0.6942 | 0.7236 | 0.6942 | 0.7024 |
| 0.1206 | 3.0 | 900 | 1.9383 | 0.7105 | 0.7649 | 0.7105 | 0.6852 |
| 0.0516 | 3.3333 | 1000 | 1.6047 | 0.6999 | 0.6905 | 0.6999 | 0.6914 |
| 0.0235 | 3.6667 | 1100 | 1.2994 | 0.7686 | 0.7826 | 0.7686 | 0.7676 |
| 0.0016 | 4.0 | 1200 | 1.5717 | 0.7424 | 0.7565 | 0.7424 | 0.7443 |
| 0.0015 | 4.3333 | 1300 | 1.4555 | 0.7809 | 0.7935 | 0.7809 | 0.7757 |
| 0.0276 | 4.6667 | 1400 | 1.2971 | 0.7751 | 0.7664 | 0.7751 | 0.7679 |
| 0.0132 | 5.0 | 1500 | 1.6617 | 0.7555 | 0.7683 | 0.7555 | 0.7538 |
| 0.0015 | 5.3333 | 1600 | 1.5638 | 0.7383 | 0.7585 | 0.7383 | 0.7419 |
| 0.0009 | 5.6667 | 1700 | 1.8707 | 0.7383 | 0.7490 | 0.7383 | 0.7428 |
| 0.0008 | 6.0 | 1800 | 1.8055 | 0.7539 | 0.7631 | 0.7539 | 0.7570 |
| 0.0008 | 6.3333 | 1900 | 1.9551 | 0.7294 | 0.7480 | 0.7294 | 0.7338 |
| 0.0006 | 6.6667 | 2000 | 1.9497 | 0.7318 | 0.7496 | 0.7318 | 0.7361 |
| 0.0007 | 7.0 | 2100 | 1.9260 | 0.7343 | 0.7472 | 0.7343 | 0.7380 |
| 0.0006 | 7.3333 | 2200 | 1.9289 | 0.7326 | 0.7452 | 0.7326 | 0.7360 |
| 0.0024 | 7.6667 | 2300 | 1.8358 | 0.7261 | 0.7435 | 0.7261 | 0.7333 |
| 0.0005 | 8.0 | 2400 | 1.9143 | 0.7302 | 0.7482 | 0.7302 | 0.7359 |
| 0.0004 | 8.3333 | 2500 | 1.9815 | 0.7220 | 0.7419 | 0.7220 | 0.7279 |
| 0.0181 | 8.6667 | 2600 | 2.2374 | 0.6926 | 0.7291 | 0.6926 | 0.6944 |
| 0.0004 | 9.0 | 2700 | 1.9174 | 0.7482 | 0.7919 | 0.7482 | 0.7498 |
| 0.0004 | 9.3333 | 2800 | 1.9026 | 0.7473 | 0.7795 | 0.7473 | 0.7529 |
| 0.0003 | 9.6667 | 2900 | 1.9087 | 0.7522 | 0.7823 | 0.7522 | 0.7575 |
| 0.0004 | 10.0 | 3000 | 1.9171 | 0.7514 | 0.7817 | 0.7514 | 0.7567 |
| 0.0003 | 10.3333 | 3100 | 1.9246 | 0.7539 | 0.7839 | 0.7539 | 0.7591 |
| 0.0003 | 10.6667 | 3200 | 1.9318 | 0.7539 | 0.7839 | 0.7539 | 0.7591 |
| 0.0003 | 11.0 | 3300 | 1.9402 | 0.7506 | 0.7795 | 0.7506 | 0.7562 |
| 0.0002 | 11.3333 | 3400 | 1.9475 | 0.7506 | 0.7784 | 0.7506 | 0.7560 |
| 0.0003 | 11.6667 | 3500 | 1.9540 | 0.7522 | 0.7792 | 0.7522 | 0.7574 |
| 0.0003 | 12.0 | 3600 | 1.9608 | 0.7522 | 0.7792 | 0.7522 | 0.7574 |
| 0.0003 | 12.3333 | 3700 | 1.9678 | 0.7506 | 0.7765 | 0.7506 | 0.7559 |
| 0.0002 | 12.6667 | 3800 | 1.9732 | 0.7514 | 0.7771 | 0.7514 | 0.7567 |
| 0.0002 | 13.0 | 3900 | 1.9782 | 0.7522 | 0.7773 | 0.7522 | 0.7574 |
| 0.0002 | 13.3333 | 4000 | 1.9827 | 0.7514 | 0.7763 | 0.7514 | 0.7566 |
| 0.0002 | 13.6667 | 4100 | 1.9861 | 0.7514 | 0.7759 | 0.7514 | 0.7567 |
| 0.0002 | 14.0 | 4200 | 1.9894 | 0.7506 | 0.7749 | 0.7506 | 0.7560 |
| 0.0002 | 14.3333 | 4300 | 1.9920 | 0.7506 | 0.7749 | 0.7506 | 0.7560 |
| 0.0002 | 14.6667 | 4400 | 1.9933 | 0.7498 | 0.7739 | 0.7498 | 0.7552 |
| 0.0002 | 15.0 | 4500 | 1.9939 | 0.7498 | 0.7739 | 0.7498 | 0.7552 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.6.0+cu126
- Datasets 3.1.0
- Tokenizers 0.21.0
|
[
"sur-subtype_iva",
"sur-subtype_iva2",
"sur-subtype_ivc",
"sur-subtype_ivd",
"sur-subtype_ia",
"sur-subtype_va"
] |
rban01/vit-xray-pneumonia-classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"normal",
"pneumonia"
] |
darthraider/vit-4-veggies
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-4-veggies
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the darthraider/fruit-ripeness-detection-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Accuracy: 0.9879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3154 | 0.6494 | 100 | 0.3098 | 0.9435 |
| 0.1446 | 1.2987 | 200 | 0.2217 | 0.9435 |
| 0.0814 | 1.9481 | 300 | 0.1310 | 0.9717 |
| 0.0438 | 2.5974 | 400 | 0.0875 | 0.9830 |
| 0.0212 | 3.2468 | 500 | 0.1199 | 0.9766 |
| 0.0212 | 3.8961 | 600 | 0.0606 | 0.9879 |
| 0.002 | 4.5455 | 700 | 0.0803 | 0.9863 |
| 0.0011 | 5.1948 | 800 | 0.0745 | 0.9871 |
| 0.0008 | 5.8442 | 900 | 0.0809 | 0.9879 |
| 0.0005 | 6.4935 | 1000 | 0.0861 | 0.9887 |
| 0.0005 | 7.1429 | 1100 | 0.0865 | 0.9879 |
| 0.0004 | 7.7922 | 1200 | 0.0788 | 0.9879 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"damaged",
"dried",
"old",
"ripe",
"unripe"
] |
prithivMLmods/SAT-Landforms-Classifier
|

# **SAT-Landforms-Classifier**
> **SAT-Landforms-Classifier** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies satellite images into landform categories using the **SiglipForImageClassification** architecture.
```py
Accuracy: 0.9863
F1 Score: 0.9858
Classification Report:
                       precision    recall  f1-score   support

          Annual Crop     0.9866    0.9810    0.9838      3000
               Forest     0.9927    0.9957    0.9942      3000
Herbaceous Vegetation     0.9697    0.9800    0.9748      3000
              Highway     0.9826    0.9928    0.9877      2500
           Industrial     0.9964    0.9916    0.9940      2500
              Pasture     0.9882    0.9610    0.9744      2000
       Permanent Crop     0.9690    0.9760    0.9725      2500
          Residential     0.9940    0.9970    0.9955      3000
                River     0.9864    0.9872    0.9868      2500
             Sea Lake     0.9963    0.9923    0.9943      3000

             accuracy                         0.9863     27000
            macro avg     0.9862    0.9855    0.9858     27000
         weighted avg     0.9863    0.9863    0.9863     27000
```
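For reference, a report in this format is typically produced with `sklearn.metrics`; the sketch below uses tiny illustrative arrays in place of the real evaluation outputs.
```python
from sklearn.metrics import accuracy_score, classification_report, f1_score

# Hedged sketch: y_true / y_pred stand in for the real evaluation outputs.
y_true = [0, 1, 2, 2]  # ground-truth class indices (illustrative)
y_pred = [0, 1, 2, 1]  # model predictions (illustrative)
names = ["Annual Crop", "Forest", "Herbaceous Vegetation"]  # subset of the 10 classes

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred, average="weighted"))
print(classification_report(y_true, y_pred, target_names=names, digits=4))
```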

The model categorizes images into ten classes (see the config sketch after this list):
- **Class 0:** "Annual Crop"
- **Class 1:** "Forest"
- **Class 2:** "Herbaceous Vegetation"
- **Class 3:** "Highway"
- **Class 4:** "Industrial"
- **Class 5:** "Pasture"
- **Class 6:** "Permanent Crop"
- **Class 7:** "Residential"
- **Class 8:** "River"
- **Class 9:** "Sea Lake"
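Rather than hardcoding this mapping, it can usually be read from the repository's `config.json`; a minimal sketch, assuming the repo carries an `id2label` entry (as the published label list suggests):
```python
from transformers import AutoConfig

# Hedged sketch: classification fine-tunes typically store id2label in config.json.
config = AutoConfig.from_pretrained("prithivMLmods/SAT-Landforms-Classifier")
print(config.id2label)  # expected to map indices 0-9 to the landform names above
```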
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/SAT-Landforms-Classifier"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def landform_classification(image):
    """Predicts landform category for a satellite image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {
        "0": "Annual Crop", "1": "Forest", "2": "Herbaceous Vegetation", "3": "Highway", "4": "Industrial",
        "5": "Pasture", "6": "Permanent Crop", "7": "Residential", "8": "River", "9": "Sea Lake"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=landform_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="SAT Landforms Classification",
    description="Upload a satellite image to classify its landform type."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
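Outside the Gradio UI, the same `landform_classification` function defined above can be called directly; a hedged one-off usage sketch (the filename is a placeholder):
```python
import numpy as np
from PIL import Image

# Usage sketch: score one local image without launching the app.
# Assumes the block above has been run so landform_classification is defined.
img = np.array(Image.open("satellite_patch.png"))  # placeholder path
print(landform_classification(img))
```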
# **Intended Use:**
The **SAT-Landforms-Classifier** model is designed to classify satellite images into various landform types. Potential use cases include:
- **Land Use Monitoring:** Identifying different land use patterns from satellite imagery.
- **Environmental Studies:** Supporting researchers in tracking changes in vegetation and water bodies.
- **Urban Planning:** Assisting planners in analyzing residential, industrial, and infrastructure distributions.
- **Agricultural Analysis:** Helping assess crop distribution and pastureland areas.
- **Disaster Management:** Providing insights into land coverage for emergency response and recovery planning.
|
[
"annual crop",
"forest",
"herbaceous vegetation",
"highway",
"industrial",
"pasture",
"permanent crop",
"residential",
"river",
"sea lake"
] |
prithivMLmods/Gender-Classifier-Mini
|

# **Gender-Classifier-Mini**
> **Gender-Classifier-Mini** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies images into gender categories using the **SiglipForImageClassification** architecture.
```py
Accuracy: 0.9720
F1 Score: 0.9720
Classification Report:
              precision    recall  f1-score   support

    Female ♀     0.9660    0.9796    0.9727      2549
      Male ♂     0.9785    0.9641    0.9712      2451

    accuracy                         0.9720      5000
   macro avg     0.9722    0.9718    0.9720      5000
weighted avg     0.9721    0.9720    0.9720      5000
```

The model categorizes images into two classes:
- **Class 0:** "Female ♀"
- **Class 1:** "Male ♂"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Gender-Classifier-Mini"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def gender_classification(image):
    """Predicts gender category for an image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {"0": "Female ♀", "1": "Male ♂"}
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=gender_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Gender Classification",
    description="Upload an image to classify its gender."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Gender-Classifier-Mini** model is designed to classify images into gender categories. Potential use cases include:
- **Demographic Analysis:** Assisting in understanding gender distribution in datasets.
- **Face Recognition Systems:** Enhancing identity verification processes.
- **Marketing & Advertising:** Personalizing content based on demographic insights.
- **Healthcare & Research:** Supporting gender-based analysis in medical imaging.
|
[
"female ♀",
"male ♂"
] |
ehdgnsllee/vit-cifar100-finetuned
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90",
"label_91",
"label_92",
"label_93",
"label_94",
"label_95",
"label_96",
"label_97",
"label_98",
"label_99"
] |
prithivMLmods/WBC-Type-Classifier
|

# **WBC-Type-Classifier**
> **WBC-Type-Classifier** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It classifies types of white blood cells (WBCs) using the **SiglipForImageClassification** architecture.
```py
Accuracy: 0.9891
F1 Score: 0.9893
Classification Report:
              precision    recall  f1-score   support

    basophil     0.9822    0.9959    0.9890      1218
  eosinophil     0.9994    0.9984    0.9989      3117
erythroblast     0.9835    0.9974    0.9904      1551
          ig     0.9787    0.9693    0.9740      2895
  lymphocyte     0.9893    0.9942    0.9918      1214
    monocyte     0.9852    0.9852    0.9852      1420
  neutrophil     0.9876    0.9838    0.9857      3329
    platelet     1.0000    0.9996    0.9998      2348

    accuracy                         0.9891     17092
   macro avg     0.9882    0.9905    0.9893     17092
weighted avg     0.9891    0.9891    0.9891     17092
```

The model categorizes images into eight classes:
- **Class 0:** "Basophil"
- **Class 1:** "Eosinophil"
- **Class 2:** "Erythroblast"
- **Class 3:** "IG"
- **Class 4:** "Lymphocyte"
- **Class 5:** "Monocyte"
- **Class 6:** "Neutrophil"
- **Class 7:** "Platelet"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/WBC-Type-Classifier"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def wbc_classification(image):
    """Predicts WBC type for a given blood cell image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {
        "0": "Basophil", "1": "Eosinophil", "2": "Erythroblast", "3": "IG",
        "4": "Lymphocyte", "5": "Monocyte", "6": "Neutrophil", "7": "Platelet"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=wbc_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="WBC Type Classification",
    description="Upload a blood cell image to classify its WBC type."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **WBC-Type-Classifier** model is designed to classify different types of white blood cells from blood smear images. Potential use cases include:
- **Medical Diagnostics:** Assisting pathologists in identifying different WBC types for diagnosis.
- **Hematology Research:** Supporting studies related to blood cell morphology and disease detection.
- **Automated Blood Analysis:** Enhancing automated diagnostic tools for rapid blood cell classification.
- **Educational Purposes:** Providing insights and training data for medical students and researchers.
|
[
"basophil",
"eosinophil",
"erythroblast",
"ig",
"lymphocyte",
"monocyte",
"neutrophil",
"platelet"
] |
kakaon1/kakaon1
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
ehdgnsllee/general-vit-cifar100-fine-tune
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
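In the absence of official instructions, here is a minimal, hypothetical sketch (not from the model authors), assuming the checkpoint exposes the standard 🤗 Transformers image-classification interface:
```python
# A hypothetical sketch, assuming the checkpoint follows the standard
# image-classification setup; verify against the actual repository.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ehdgnsllee/general-vit-cifar100-fine-tune",
)
image = Image.new("RGB", (224, 224))  # stand-in for a real input image
print(classifier(image))  # e.g. [{'label': 'label_12', 'score': ...}, ...] (labels are generic)
```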
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41",
"label_42",
"label_43",
"label_44",
"label_45",
"label_46",
"label_47",
"label_48",
"label_49",
"label_50",
"label_51",
"label_52",
"label_53",
"label_54",
"label_55",
"label_56",
"label_57",
"label_58",
"label_59",
"label_60",
"label_61",
"label_62",
"label_63",
"label_64",
"label_65",
"label_66",
"label_67",
"label_68",
"label_69",
"label_70",
"label_71",
"label_72",
"label_73",
"label_74",
"label_75",
"label_76",
"label_77",
"label_78",
"label_79",
"label_80",
"label_81",
"label_82",
"label_83",
"label_84",
"label_85",
"label_86",
"label_87",
"label_88",
"label_89",
"label_90",
"label_91",
"label_92",
"label_93",
"label_94",
"label_95",
"label_96",
"label_97",
"label_98",
"label_99"
] |
prithivMLmods/Painting-126-DomainNet
|

# **Painting-126-DomainNet**
> **Painting-126-DomainNet** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It is designed to classify paintings into 126 DomainNet categories using the **SiglipForImageClassification** architecture.

*Moment Matching for Multi-Source Domain Adaptation* : https://arxiv.org/pdf/1812.01754
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* : https://arxiv.org/pdf/2502.14786
```text
Classification Report:
precision recall f1-score support
aircraft_carrier 0.8065 0.4717 0.5952 106
alarm_clock 1.0000 0.7612 0.8644 67
ant 0.8095 0.7234 0.7640 188
anvil 0.3205 0.2066 0.2513 121
asparagus 0.8242 0.8827 0.8525 324
axe 0.4028 0.4857 0.4404 175
banana 0.8986 0.8986 0.8986 286
basket 0.7251 0.7492 0.7370 331
bathtub 0.0000 0.0000 0.0000 35
bear 0.8704 0.8647 0.8675 303
bee 0.9478 0.9440 0.9459 250
bird 0.8031 0.8807 0.8401 176
blackberry 0.0000 0.0000 0.0000 11
blueberry 0.8258 0.8258 0.8258 132
bottlecap 0.6487 0.8173 0.7233 427
broccoli 0.8961 0.8625 0.8790 80
bus 0.6909 0.8444 0.7600 90
butterfly 0.9313 0.9613 0.9460 310
cactus 0.9583 0.9388 0.9485 98
cake 0.5290 0.5984 0.5615 122
calculator 0.0000 0.0000 0.0000 10
camel 0.8244 0.9351 0.8763 231
camera 0.9725 0.8480 0.9060 125
candle 0.7763 0.8551 0.8138 207
cannon 0.0000 0.0000 0.0000 43
canoe 0.8400 0.9161 0.8764 298
carrot 0.9744 0.9005 0.9360 211
castle 0.9027 0.9278 0.9151 180
cat 0.8824 0.9818 0.9294 275
ceiling_fan 1.0000 0.1333 0.2353 30
cell_phone 0.7117 0.7453 0.7281 106
cello 0.8647 0.9127 0.8880 126
chair 0.8750 0.1667 0.2800 42
chandelier 0.9773 0.9348 0.9556 46
coffee_cup 0.9015 0.8095 0.8530 147
compass 0.9483 0.8871 0.9167 62
computer 0.0000 0.0000 0.0000 14
cow 0.9590 0.9360 0.9474 125
crab 0.9829 0.9426 0.9623 122
crocodile 0.9468 0.9271 0.9368 96
cruise_ship 0.8977 0.8977 0.8977 176
dog 0.9149 0.9739 0.9435 574
dolphin 0.8928 0.9595 0.9249 321
dragon 0.9278 0.9730 0.9499 185
drums 0.8457 0.8405 0.8431 163
duck 0.9335 0.9642 0.9486 335
dumbbell 0.9539 0.9603 0.9571 151
elephant 0.9405 0.9794 0.9595 339
eyeglasses 0.5417 0.1970 0.2889 66
feather 0.9314 0.9416 0.9365 274
fence 0.0000 0.0000 0.0000 39
fish 0.8829 0.9671 0.9231 304
flamingo 0.9778 0.9888 0.9832 178
flower 0.7188 0.7706 0.7438 388
foot 0.5893 0.4853 0.5323 68
fork 0.9500 0.2836 0.4368 67
frog 0.9172 0.9925 0.9534 134
giraffe 0.9762 0.9762 0.9762 84
goatee 0.4565 0.4828 0.4693 87
grapes 0.8761 0.8200 0.8471 250
guitar 0.8827 0.8827 0.8827 162
hammer 0.0000 0.0000 0.0000 36
helicopter 0.9733 0.8835 0.9262 206
helmet 0.0000 0.0000 0.0000 22
horse 0.9514 0.9856 0.9682 417
kangaroo 0.9387 0.9053 0.9217 169
lantern 0.6263 0.7126 0.6667 174
laptop 0.8800 0.8871 0.8835 124
leaf 0.7754 0.8930 0.8301 402
lion 0.9347 0.8883 0.9109 403
lipstick 0.9281 0.9045 0.9161 157
lobster 0.9646 0.9455 0.9550 202
microphone 0.9231 0.8136 0.8649 118
monkey 0.7892 0.8656 0.8256 320
mosquito 0.8696 0.3846 0.5333 52
mouse 0.8610 0.9174 0.8883 351
mug 0.8669 0.9365 0.9003 299
mushroom 0.9070 0.9653 0.9353 202
onion 0.8700 0.9231 0.8958 377
panda 0.9631 0.9952 0.9789 210
peanut 0.5000 0.1212 0.1951 66
pear 0.9278 0.9356 0.9317 357
peas 0.8281 0.7465 0.7852 71
pencil 0.4902 0.5245 0.5068 143
penguin 0.9496 0.9576 0.9536 354
pig 0.9392 0.9500 0.9446 260
pillow 0.7273 0.0727 0.1322 110
pineapple 0.9849 0.9812 0.9831 266
potato 1.0000 0.0652 0.1224 46
power_outlet 0.9600 0.8889 0.9231 81
purse 0.5000 0.0513 0.0930 39
rabbit 0.8961 0.9673 0.9303 214
raccoon 0.9490 0.9394 0.9442 198
rhinoceros 0.9657 0.9657 0.9657 175
rifle 0.8200 0.8542 0.8367 192
saxophone 0.8100 0.8556 0.8322 284
screwdriver 0.7083 0.6296 0.6667 54
sea_turtle 0.9757 0.9969 0.9862 322
see_saw 0.3527 0.6077 0.4463 130
sheep 0.9328 0.9398 0.9363 266
shoe 0.9522 0.9567 0.9544 208
skateboard 0.4464 0.2083 0.2841 120
snake 0.8627 0.8550 0.8588 338
speedboat 0.8710 0.6835 0.7660 79
spider 0.8129 0.6975 0.7508 162
squirrel 0.9325 0.9063 0.9192 427
strawberry 0.9316 0.9470 0.9392 302
streetlight 0.7493 0.7948 0.7714 346
string_bean 0.8636 0.4130 0.5588 46
submarine 0.5845 0.7423 0.6541 326
swan 0.9222 0.8910 0.9063 266
table 0.0000 0.0000 0.0000 81
teapot 0.8619 0.9318 0.8955 308
teddy-bear 0.8517 0.9136 0.8816 220
television 0.0000 0.0000 0.0000 40
the_Eiffel_Tower 0.9366 0.9882 0.9617 254
the_Great_Wall_of_China 0.8244 0.8710 0.8471 124
tiger 0.9504 0.9702 0.9602 336
toe 0.0000 0.0000 0.0000 1
train 0.9367 0.9628 0.9496 323
truck 0.8864 0.7959 0.8387 98
umbrella 0.6309 0.8174 0.7121 230
vase 0.7382 0.8309 0.7818 207
watermelon 0.9479 0.9450 0.9464 327
whale 0.8877 0.8657 0.8766 283
zebra 0.9832 0.9832 0.9832 238
accuracy 0.8533 24032
macro avg 0.7686 0.7273 0.7299 24032
weighted avg 0.8445 0.8533 0.8424 24032
```
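The table above follows scikit-learn's `classification_report` layout. A self-contained sketch of how such a report can be produced (the tiny arrays and three class names below are dummy stand-ins for the real evaluation split):
```python
# Sketch only: dummy stand-ins for real evaluation outputs over 126 classes.
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]
class_names = ["aircraft_carrier", "alarm_clock", "ant"]  # first 3 of 126
print(classification_report(y_true, y_pred, target_names=class_names, digits=4))
```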
The model categorizes images into the following 126 classes:
- **Class 0:** "aircraft_carrier"
- **Class 1:** "alarm_clock"
- **Class 2:** "ant"
- **Class 3:** "anvil"
- **Class 4:** "asparagus"
- **Class 5:** "axe"
- **Class 6:** "banana"
- **Class 7:** "basket"
- **Class 8:** "bathtub"
- **Class 9:** "bear"
- **Class 10:** "bee"
- **Class 11:** "bird"
- **Class 12:** "blackberry"
- **Class 13:** "blueberry"
- **Class 14:** "bottlecap"
- **Class 15:** "broccoli"
- **Class 16:** "bus"
- **Class 17:** "butterfly"
- **Class 18:** "cactus"
- **Class 19:** "cake"
- **Class 20:** "calculator"
- **Class 21:** "camel"
- **Class 22:** "camera"
- **Class 23:** "candle"
- **Class 24:** "cannon"
- **Class 25:** "canoe"
- **Class 26:** "carrot"
- **Class 27:** "castle"
- **Class 28:** "cat"
- **Class 29:** "ceiling_fan"
- **Class 30:** "cell_phone"
- **Class 31:** "cello"
- **Class 32:** "chair"
- **Class 33:** "chandelier"
- **Class 34:** "coffee_cup"
- **Class 35:** "compass"
- **Class 36:** "computer"
- **Class 37:** "cow"
- **Class 38:** "crab"
- **Class 39:** "crocodile"
- **Class 40:** "cruise_ship"
- **Class 41:** "dog"
- **Class 42:** "dolphin"
- **Class 43:** "dragon"
- **Class 44:** "drums"
- **Class 45:** "duck"
- **Class 46:** "dumbbell"
- **Class 47:** "elephant"
- **Class 48:** "eyeglasses"
- **Class 49:** "feather"
- **Class 50:** "fence"
- **Class 51:** "fish"
- **Class 52:** "flamingo"
- **Class 53:** "flower"
- **Class 54:** "foot"
- **Class 55:** "fork"
- **Class 56:** "frog"
- **Class 57:** "giraffe"
- **Class 58:** "goatee"
- **Class 59:** "grapes"
- **Class 60:** "guitar"
- **Class 61:** "hammer"
- **Class 62:** "helicopter"
- **Class 63:** "helmet"
- **Class 64:** "horse"
- **Class 65:** "kangaroo"
- **Class 66:** "lantern"
- **Class 67:** "laptop"
- **Class 68:** "leaf"
- **Class 69:** "lion"
- **Class 70:** "lipstick"
- **Class 71:** "lobster"
- **Class 72:** "microphone"
- **Class 73:** "monkey"
- **Class 74:** "mosquito"
- **Class 75:** "mouse"
- **Class 76:** "mug"
- **Class 77:** "mushroom"
- **Class 78:** "onion"
- **Class 79:** "panda"
- **Class 80:** "peanut"
- **Class 81:** "pear"
- **Class 82:** "peas"
- **Class 83:** "pencil"
- **Class 84:** "penguin"
- **Class 85:** "pig"
- **Class 86:** "pillow"
- **Class 87:** "pineapple"
- **Class 88:** "potato"
- **Class 89:** "power_outlet"
- **Class 90:** "purse"
- **Class 91:** "rabbit"
- **Class 92:** "raccoon"
- **Class 93:** "rhinoceros"
- **Class 94:** "rifle"
- **Class 95:** "saxophone"
- **Class 96:** "screwdriver"
- **Class 97:** "sea_turtle"
- **Class 98:** "see_saw"
- **Class 99:** "sheep"
- **Class 100:** "shoe"
- **Class 101:** "skateboard"
- **Class 102:** "snake"
- **Class 103:** "speedboat"
- **Class 104:** "spider"
- **Class 105:** "squirrel"
- **Class 106:** "strawberry"
- **Class 107:** "streetlight"
- **Class 108:** "string_bean"
- **Class 109:** "submarine"
- **Class 110:** "swan"
- **Class 111:** "table"
- **Class 112:** "teapot"
- **Class 113:** "teddy-bear"
- **Class 114:** "television"
- **Class 115:** "the_Eiffel_Tower"
- **Class 116:** "the_Great_Wall_of_China"
- **Class 117:** "tiger"
- **Class 118:** "toe"
- **Class 119:** "train"
- **Class 120:** "truck"
- **Class 121:** "umbrella"
- **Class 122:** "vase"
- **Class 123:** "watermelon"
- **Class 124:** "whale"
- **Class 125:** "zebra"
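This id-to-label mapping can usually be read straight from the checkpoint configuration instead of being hardcoded; a short sketch, assuming the repository stores an `id2label` mapping:
```python
# Read the id->label mapping from the checkpoint config
# (assumes the repo's config defines id2label).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("prithivMLmods/Painting-126-DomainNet")
print(config.id2label[0])  # expected: "aircraft_carrier"
```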
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Painting-126-DomainNet"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def painting_classification(image):
"""Predicts the painting category for an input image."""
# Convert the input numpy array to a PIL image and ensure it is in RGB format
image = Image.fromarray(image).convert("RGB")
# Process the image for the model
inputs = processor(images=image, return_tensors="pt")
# Get predictions from the model without gradient computation
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
# Convert logits to probabilities using softmax
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
# Define the label mapping for each class index
labels = {
"0": "aircraft_carrier", "1": "alarm_clock", "2": "ant", "3": "anvil", "4": "asparagus",
"5": "axe", "6": "banana", "7": "basket", "8": "bathtub", "9": "bear",
"10": "bee", "11": "bird", "12": "blackberry", "13": "blueberry", "14": "bottlecap",
"15": "broccoli", "16": "bus", "17": "butterfly", "18": "cactus", "19": "cake",
"20": "calculator", "21": "camel", "22": "camera", "23": "candle", "24": "cannon",
"25": "canoe", "26": "carrot", "27": "castle", "28": "cat", "29": "ceiling_fan",
"30": "cell_phone", "31": "cello", "32": "chair", "33": "chandelier", "34": "coffee_cup",
"35": "compass", "36": "computer", "37": "cow", "38": "crab", "39": "crocodile",
"40": "cruise_ship", "41": "dog", "42": "dolphin", "43": "dragon", "44": "drums",
"45": "duck", "46": "dumbbell", "47": "elephant", "48": "eyeglasses", "49": "feather",
"50": "fence", "51": "fish", "52": "flamingo", "53": "flower", "54": "foot",
"55": "fork", "56": "frog", "57": "giraffe", "58": "goatee", "59": "grapes",
"60": "guitar", "61": "hammer", "62": "helicopter", "63": "helmet", "64": "horse",
"65": "kangaroo", "66": "lantern", "67": "laptop", "68": "leaf", "69": "lion",
"70": "lipstick", "71": "lobster", "72": "microphone", "73": "monkey", "74": "mosquito",
"75": "mouse", "76": "mug", "77": "mushroom", "78": "onion", "79": "panda",
"80": "peanut", "81": "pear", "82": "peas", "83": "pencil", "84": "penguin",
"85": "pig", "86": "pillow", "87": "pineapple", "88": "potato", "89": "power_outlet",
"90": "purse", "91": "rabbit", "92": "raccoon", "93": "rhinoceros", "94": "rifle",
"95": "saxophone", "96": "screwdriver", "97": "sea_turtle", "98": "see_saw", "99": "sheep",
"100": "shoe", "101": "skateboard", "102": "snake", "103": "speedboat", "104": "spider",
"105": "squirrel", "106": "strawberry", "107": "streetlight", "108": "string_bean",
"109": "submarine", "110": "swan", "111": "table", "112": "teapot", "113": "teddy-bear",
"114": "television", "115": "the_Eiffel_Tower", "116": "the_Great_Wall_of_China",
"117": "tiger", "118": "toe", "119": "train", "120": "truck", "121": "umbrella",
"122": "vase", "123": "watermelon", "124": "whale", "125": "zebra"
}
# Map each label to its corresponding probability (rounded)
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface for the painting classifier
iface = gr.Interface(
fn=painting_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Painting-126-DomainNet Classification",
description="Upload a painting to classify it into one of 126 categories."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
# **Intended Use:**
The **Painting-126-DomainNet** model is designed for painting image classification. It categorizes paintings into a wide range of domains, from objects such as "aircraft_carrier" or "alarm_clock" to animals, plants, and everyday items. Potential use cases include:
- **Art Curation & Analysis:** Assisting galleries and museums in organizing and categorizing artworks.
- **Creative Search Engines:** Enabling painting-based search for art inspiration and research.
- **Educational Tools:** Supporting art education by categorizing and retrieving visual resources.
- **Computer Vision Research:** Serving as a baseline for studies in painting recognition and domain adaptation.
|
[
"aircraft_carrier",
"alarm_clock",
"ant",
"anvil",
"asparagus",
"axe",
"banana",
"basket",
"bathtub",
"bear",
"bee",
"bird",
"blackberry",
"blueberry",
"bottlecap",
"broccoli",
"bus",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"camera",
"candle",
"cannon",
"canoe",
"carrot",
"castle",
"cat",
"ceiling_fan",
"cell_phone",
"cello",
"chair",
"chandelier",
"coffee_cup",
"compass",
"computer",
"cow",
"crab",
"crocodile",
"cruise_ship",
"dog",
"dolphin",
"dragon",
"drums",
"duck",
"dumbbell",
"elephant",
"eyeglasses",
"feather",
"fence",
"fish",
"flamingo",
"flower",
"foot",
"fork",
"frog",
"giraffe",
"goatee",
"grapes",
"guitar",
"hammer",
"helicopter",
"helmet",
"horse",
"kangaroo",
"lantern",
"laptop",
"leaf",
"lion",
"lipstick",
"lobster",
"microphone",
"monkey",
"mosquito",
"mouse",
"mug",
"mushroom",
"onion",
"panda",
"peanut",
"pear",
"peas",
"pencil",
"penguin",
"pig",
"pillow",
"pineapple",
"potato",
"power_outlet",
"purse",
"rabbit",
"raccoon",
"rhinoceros",
"rifle",
"saxophone",
"screwdriver",
"sea_turtle",
"see_saw",
"sheep",
"shoe",
"skateboard",
"snake",
"speedboat",
"spider",
"squirrel",
"strawberry",
"streetlight",
"string_bean",
"submarine",
"swan",
"table",
"teapot",
"teddy-bear",
"television",
"the_eiffel_tower",
"the_great_wall_of_china",
"tiger",
"toe",
"train",
"truck",
"umbrella",
"vase",
"watermelon",
"whale",
"zebra"
] |
nvidia/MambaVision-B-21K
|
[**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).
## Model Overview
We have developed the first hybrid model for computer vision that leverages the strengths of Mamba and Transformers. Specifically, our core contribution is a redesign of the Mamba formulation that enhances its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks at the final layers greatly improves its capacity to capture long-range spatial dependencies. Based on these findings, we introduce a family of MambaVision models with a hierarchical architecture that meets various design criteria.
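The layout below is only a schematic PyTorch sketch of that finding (a generic stage whose final blocks are self-attention), not the official MambaVision implementation; `SimpleMixer` is a hypothetical stand-in for the redesigned Mamba mixer.
```Python
# Schematic sketch of "self-attention blocks at the final layers" --
# NOT the official MambaVision code.
import torch
import torch.nn as nn

class SimpleMixer(nn.Module):
    """Stand-in token mixer; the real model uses redesigned Mamba (SSM) blocks."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.proj(x)

class HybridStage(nn.Module):
    """A stage whose final blocks are self-attention layers."""
    def __init__(self, dim, depth, attn_blocks=2, heads=8):
        super().__init__()
        blocks = [SimpleMixer(dim) for _ in range(depth - attn_blocks)]
        blocks += [nn.TransformerEncoderLayer(dim, heads, batch_first=True)
                   for _ in range(attn_blocks)]
        self.blocks = nn.Sequential(*blocks)

    def forward(self, x):  # x: (batch, tokens, dim)
        return self.blocks(x)

x = torch.randn(1, 49, 128)  # e.g. a 7x7 feature map flattened into 49 tokens
print(HybridStage(dim=128, depth=4)(x).shape)  # torch.Size([1, 49, 128])
```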
## Model Performance
MambaVision-B-21K is pretrained on the ImageNet-21K dataset and finetuned on ImageNet-1K.
<table>
<tr>
<th>Name</th>
<th>Acc@1(%)</th>
<th>Acc@5(%)</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Resolution</th>
</tr>
<tr>
<td>MambaVision-B-21K</td>
<td>84.9</td>
<td>97.5</td>
<td>97.7</td>
<td>15.0</td>
<td>224x224</td>
</tr>
</table>
In addition, the MambaVision models demonstrate strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70%
class="center">
</p>
## Model Usage
It is highly recommended to install the requirements for MambaVision by running the following (code: https://github.com/NVlabs/MambaVision):
```Bash
pip install mambavision
```
For each model, we offer two variants for image classification and feature extraction that can be imported with one line of code.
### Image Classification
In the following example, we demonstrate how MambaVision can be used for image classification.
Given the following image from [COCO dataset](https://cocodataset.org/#home) val set as an input:
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70%
class="center">
</p>
The following snippet can be used for image classification:
```Python
from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-B-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224) # MambaVision supports any input resolutions
transform = create_transform(input_size=input_resolution,
is_training=False,
mean=model.config.mean,
std=model.config.std,
crop_mode=model.config.crop_mode,
crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
outputs = model(inputs)
logits = outputs['logits']
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
The predicted label is ```brown bear, bruin, Ursus arctos```.
### Feature Extraction
MambaVision can also be used as a generic feature extractor.
Specifically, we can extract the outputs of each stage of the model (4 stages) as well as the final average-pooled features that are flattened.
The following snippet can be used for feature extraction:
```Python
from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModel.from_pretrained("nvidia/MambaVision-B-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224) # MambaVision supports any input resolutions
transform = create_transform(input_size=input_resolution,
is_training=False,
mean=model.config.mean,
std=model.config.std,
                             crop_mode=model.config.crop_mode,
crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 640])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7])
```
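Continuing from the snippet above, the pooled features can feed a lightweight downstream head; the probe below is a hypothetical illustration (the class count is arbitrary), not part of the released model:
```Python
import torch.nn as nn

# Hypothetical linear probe on out_avg_pool from the snippet above.
num_classes = 10  # arbitrary stand-in for a downstream task's label count
probe = nn.Linear(out_avg_pool.size(-1), num_classes).cuda()
logits = probe(out_avg_pool)  # shape: (1, num_classes)
```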
### License:
[NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-B-21K/blob/main/LICENSE)
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
prithivMLmods/Sketch-126-DomainNet
|

# **Sketch-126-DomainNet**
> **Sketch-126-DomainNet** is an image classification model fine-tuned from the vision-language encoder **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify sketch images into 126 object categories using the **SiglipForImageClassification** architecture.

*Moment Matching for Multi-Source Domain Adaptation*: https://arxiv.org/pdf/1812.01754
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features*: https://arxiv.org/pdf/2502.14786
```py
Classification Report:
precision recall f1-score support
aircraft_carrier 1.0000 0.2200 0.3607 50
alarm_clock 0.9873 0.9568 0.9718 162
ant 0.9432 0.9326 0.9379 89
anvil 0.2727 0.0423 0.0732 71
asparagus 0.9673 0.8916 0.9279 166
axe 0.8034 0.8773 0.8387 163
banana 0.9744 0.9383 0.9560 162
basket 0.7160 0.7682 0.7412 151
bathtub 0.8073 0.9281 0.8635 167
bear 0.8636 0.6690 0.7540 142
bee 0.9196 0.8957 0.9075 115
bird 0.9094 0.9429 0.9259 245
blackberry 1.0000 0.1250 0.2222 48
blueberry 0.6744 0.8529 0.7532 102
bottlecap 0.7468 0.5315 0.6211 111
broccoli 0.7727 0.9444 0.8500 144
bus 0.9302 0.8989 0.9143 178
butterfly 0.9594 0.9497 0.9545 199
cactus 1.0000 0.6735 0.8049 49
cake 0.0000 0.0000 0.0000 54
calculator 0.9298 0.9636 0.9464 55
camel 0.9208 0.8942 0.9073 104
camera 0.9200 0.7931 0.8519 87
candle 0.9556 0.6935 0.8037 62
cannon 0.7500 0.2027 0.3191 74
canoe 0.8000 0.5825 0.6742 103
carrot 0.0000 0.0000 0.0000 27
castle 0.9583 0.5111 0.6667 45
cat 0.8961 0.6635 0.7624 104
ceiling_fan 0.0000 0.0000 0.0000 20
cell_phone 0.0000 0.0000 0.0000 18
cello 0.9600 0.4706 0.6316 51
chair 0.8043 0.4805 0.6016 77
chandelier 0.0000 0.0000 0.0000 27
coffee_cup 0.0000 0.0000 0.0000 26
compass 0.0000 0.0000 0.0000 10
computer 0.2500 0.0435 0.0741 23
cow 0.0000 0.0000 0.0000 14
crab 0.9123 0.8525 0.8814 122
crocodile 0.9280 0.8992 0.9134 129
cruise_ship 0.7467 0.9032 0.8175 124
dog 0.8533 0.8911 0.8718 248
dolphin 0.9091 0.8824 0.8955 68
dragon 0.7914 0.8269 0.8088 156
drums 0.9259 0.8772 0.9009 171
duck 0.8409 0.8409 0.8409 220
dumbbell 0.9507 0.9184 0.9343 147
elephant 0.9630 0.9765 0.9697 213
eyeglasses 0.8155 0.7919 0.8035 173
feather 0.9344 0.9344 0.9344 244
fence 0.8796 0.8482 0.8636 112
fish 0.9527 0.9495 0.9511 297
flamingo 0.9818 0.9474 0.9643 114
flower 0.8267 0.9219 0.8717 269
foot 0.7743 0.8578 0.8140 204
fork 0.9366 0.9433 0.9399 141
frog 0.9620 0.9383 0.9500 162
giraffe 0.9655 0.9396 0.9524 149
goatee 0.7914 0.8897 0.8377 145
grapes 0.9132 0.9609 0.9364 230
guitar 0.8462 0.9862 0.9108 145
hammer 0.8333 0.4386 0.5747 57
helicopter 0.9441 0.9620 0.9530 158
helmet 0.8509 0.8204 0.8354 167
horse 0.9091 0.9877 0.9467 81
kangaroo 0.9592 0.9691 0.9641 97
lantern 0.0000 0.0000 0.0000 30
laptop 0.8273 0.9200 0.8712 250
leaf 0.8449 0.8870 0.8655 301
lion 0.9697 0.9734 0.9715 263
lipstick 0.9634 0.8977 0.9294 88
lobster 0.9265 0.9130 0.9197 138
microphone 0.8917 0.8770 0.8843 122
monkey 0.9297 0.8947 0.9119 133
mosquito 0.9052 0.9211 0.9130 114
mouse 0.8632 0.8039 0.8325 102
mug 0.6928 0.7737 0.7310 137
mushroom 0.8174 0.8861 0.8504 202
onion 0.9538 0.9841 0.9688 126
panda 0.9643 0.8710 0.9153 62
peanut 0.8302 0.8462 0.8381 104
pear 0.7966 0.9658 0.8731 146
peas 0.6667 0.8438 0.7448 64
pencil 0.0000 0.0000 0.0000 21
penguin 0.9586 0.9701 0.9643 167
pig 0.8983 0.8785 0.8883 181
pillow 0.9570 0.9674 0.9622 92
pineapple 0.9808 0.9714 0.9761 105
potato 0.9444 0.5231 0.6733 65
power_outlet 0.5556 0.0676 0.1205 74
purse 0.9220 0.7182 0.8075 181
rabbit 0.9697 0.8767 0.9209 73
raccoon 0.7850 0.9097 0.8428 277
rhinoceros 0.9863 0.9863 0.9863 146
rifle 0.9143 0.9796 0.9458 98
saxophone 0.9381 0.8618 0.8983 246
screwdriver 0.7709 0.8706 0.8177 286
sea_turtle 0.9698 0.9507 0.9602 203
see_saw 0.3296 0.5738 0.4187 413
sheep 0.9254 0.9153 0.9203 366
shoe 0.9395 0.9688 0.9539 513
skateboard 0.7365 0.7831 0.7591 332
snake 0.8005 0.8737 0.8355 372
speedboat 0.8388 0.8833 0.8605 377
spider 0.7954 0.8696 0.8309 514
squirrel 0.8511 0.8484 0.8498 310
strawberry 0.8313 0.8471 0.8391 157
streetlight 0.7944 0.8134 0.8038 209
string_bean 0.7143 0.3000 0.4225 50
submarine 0.5916 0.6975 0.6402 162
swan 0.8966 0.8387 0.8667 186
table 0.6705 0.7522 0.7090 230
teapot 0.8464 0.8968 0.8709 252
teddy-bear 0.6818 0.8385 0.7521 161
television 0.8974 0.7071 0.7910 99
the_Eiffel_Tower 0.9860 0.9679 0.9769 218
the_Great_Wall_of_China 0.6389 0.8440 0.7273 109
tiger 0.9417 0.9604 0.9510 303
toe 0.0000 0.0000 0.0000 53
train 0.8650 0.9010 0.8827 192
truck 0.8136 0.9372 0.8710 191
umbrella 0.8650 0.8913 0.8779 230
vase 0.8082 0.8082 0.8082 146
watermelon 0.8947 0.8333 0.8629 102
whale 0.8910 0.8744 0.8826 215
zebra 0.9817 0.9727 0.9772 220
accuracy 0.8440 19317
macro avg 0.7818 0.7419 0.7475 19317
weighted avg 0.8404 0.8440 0.8352 19317
```
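For reference, a report in this format can be regenerated with scikit-learn once per-image predictions are collected. The following is a minimal sketch with hypothetical placeholder data; in practice `y_true`/`y_pred` would come from running the model over the evaluation split, with `target_names` set to the 126 class labels listed below.
```python
from sklearn.metrics import classification_report

# Hypothetical placeholder data; real values come from the evaluation split.
y_true = [0, 1, 1, 2]  # ground-truth class indices
y_pred = [0, 1, 0, 2]  # predicted class indices
class_names = ["aircraft_carrier", "alarm_clock", "ant"]  # subset of the 126 labels

# digits=4 matches the precision shown above; zero_division=0 reports 0.0000
# for classes without predicted samples instead of emitting a warning.
print(classification_report(y_true, y_pred, target_names=class_names,
                            digits=4, zero_division=0))
```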
The model categorizes images into the following 126 classes:
- **Class 0:** "aircraft_carrier"
- **Class 1:** "alarm_clock"
- **Class 2:** "ant"
- **Class 3:** "anvil"
- **Class 4:** "asparagus"
- **Class 5:** "axe"
- **Class 6:** "banana"
- **Class 7:** "basket"
- **Class 8:** "bathtub"
- **Class 9:** "bear"
- **Class 10:** "bee"
- **Class 11:** "bird"
- **Class 12:** "blackberry"
- **Class 13:** "blueberry"
- **Class 14:** "bottlecap"
- **Class 15:** "broccoli"
- **Class 16:** "bus"
- **Class 17:** "butterfly"
- **Class 18:** "cactus"
- **Class 19:** "cake"
- **Class 20:** "calculator"
- **Class 21:** "camel"
- **Class 22:** "camera"
- **Class 23:** "candle"
- **Class 24:** "cannon"
- **Class 25:** "canoe"
- **Class 26:** "carrot"
- **Class 27:** "castle"
- **Class 28:** "cat"
- **Class 29:** "ceiling_fan"
- **Class 30:** "cell_phone"
- **Class 31:** "cello"
- **Class 32:** "chair"
- **Class 33:** "chandelier"
- **Class 34:** "coffee_cup"
- **Class 35:** "compass"
- **Class 36:** "computer"
- **Class 37:** "cow"
- **Class 38:** "crab"
- **Class 39:** "crocodile"
- **Class 40:** "cruise_ship"
- **Class 41:** "dog"
- **Class 42:** "dolphin"
- **Class 43:** "dragon"
- **Class 44:** "drums"
- **Class 45:** "duck"
- **Class 46:** "dumbbell"
- **Class 47:** "elephant"
- **Class 48:** "eyeglasses"
- **Class 49:** "feather"
- **Class 50:** "fence"
- **Class 51:** "fish"
- **Class 52:** "flamingo"
- **Class 53:** "flower"
- **Class 54:** "foot"
- **Class 55:** "fork"
- **Class 56:** "frog"
- **Class 57:** "giraffe"
- **Class 58:** "goatee"
- **Class 59:** "grapes"
- **Class 60:** "guitar"
- **Class 61:** "hammer"
- **Class 62:** "helicopter"
- **Class 63:** "helmet"
- **Class 64:** "horse"
- **Class 65:** "kangaroo"
- **Class 66:** "lantern"
- **Class 67:** "laptop"
- **Class 68:** "leaf"
- **Class 69:** "lion"
- **Class 70:** "lipstick"
- **Class 71:** "lobster"
- **Class 72:** "microphone"
- **Class 73:** "monkey"
- **Class 74:** "mosquito"
- **Class 75:** "mouse"
- **Class 76:** "mug"
- **Class 77:** "mushroom"
- **Class 78:** "onion"
- **Class 79:** "panda"
- **Class 80:** "peanut"
- **Class 81:** "pear"
- **Class 82:** "peas"
- **Class 83:** "pencil"
- **Class 84:** "penguin"
- **Class 85:** "pig"
- **Class 86:** "pillow"
- **Class 87:** "pineapple"
- **Class 88:** "potato"
- **Class 89:** "power_outlet"
- **Class 90:** "purse"
- **Class 91:** "rabbit"
- **Class 92:** "raccoon"
- **Class 93:** "rhinoceros"
- **Class 94:** "rifle"
- **Class 95:** "saxophone"
- **Class 96:** "screwdriver"
- **Class 97:** "sea_turtle"
- **Class 98:** "see_saw"
- **Class 99:** "sheep"
- **Class 100:** "shoe"
- **Class 101:** "skateboard"
- **Class 102:** "snake"
- **Class 103:** "speedboat"
- **Class 104:** "spider"
- **Class 105:** "squirrel"
- **Class 106:** "strawberry"
- **Class 107:** "streetlight"
- **Class 108:** "string_bean"
- **Class 109:** "submarine"
- **Class 110:** "swan"
- **Class 111:** "table"
- **Class 112:** "teapot"
- **Class 113:** "teddy-bear"
- **Class 114:** "television"
- **Class 115:** "the_Eiffel_Tower"
- **Class 116:** "the_Great_Wall_of_China"
- **Class 117:** "tiger"
- **Class 118:** "toe"
- **Class 119:** "train"
- **Class 120:** "truck"
- **Class 121:** "umbrella"
- **Class 122:** "vase"
- **Class 123:** "watermelon"
- **Class 124:** "whale"
- **Class 125:** "zebra"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Sketch-126-DomainNet"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def sketch_classification(image):
"""Predicts the sketch category for an input image."""
# Convert the input numpy array to a PIL Image and ensure it has 3 channels (RGB)
image = Image.fromarray(image).convert("RGB")
# Process the image and prepare it for the model
inputs = processor(images=image, return_tensors="pt")
# Perform inference without gradient calculation
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
# Convert logits to probabilities using softmax
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
# Mapping from indices to corresponding sketch category labels
labels = {
"0": "aircraft_carrier", "1": "alarm_clock", "2": "ant", "3": "anvil", "4": "asparagus",
"5": "axe", "6": "banana", "7": "basket", "8": "bathtub", "9": "bear",
"10": "bee", "11": "bird", "12": "blackberry", "13": "blueberry", "14": "bottlecap",
"15": "broccoli", "16": "bus", "17": "butterfly", "18": "cactus", "19": "cake",
"20": "calculator", "21": "camel", "22": "camera", "23": "candle", "24": "cannon",
"25": "canoe", "26": "carrot", "27": "castle", "28": "cat", "29": "ceiling_fan",
"30": "cell_phone", "31": "cello", "32": "chair", "33": "chandelier", "34": "coffee_cup",
"35": "compass", "36": "computer", "37": "cow", "38": "crab", "39": "crocodile",
"40": "cruise_ship", "41": "dog", "42": "dolphin", "43": "dragon", "44": "drums",
"45": "duck", "46": "dumbbell", "47": "elephant", "48": "eyeglasses", "49": "feather",
"50": "fence", "51": "fish", "52": "flamingo", "53": "flower", "54": "foot",
"55": "fork", "56": "frog", "57": "giraffe", "58": "goatee", "59": "grapes",
"60": "guitar", "61": "hammer", "62": "helicopter", "63": "helmet", "64": "horse",
"65": "kangaroo", "66": "lantern", "67": "laptop", "68": "leaf", "69": "lion",
"70": "lipstick", "71": "lobster", "72": "microphone", "73": "monkey", "74": "mosquito",
"75": "mouse", "76": "mug", "77": "mushroom", "78": "onion", "79": "panda",
"80": "peanut", "81": "pear", "82": "peas", "83": "pencil", "84": "penguin",
"85": "pig", "86": "pillow", "87": "pineapple", "88": "potato", "89": "power_outlet",
"90": "purse", "91": "rabbit", "92": "raccoon", "93": "rhinoceros", "94": "rifle",
"95": "saxophone", "96": "screwdriver", "97": "sea_turtle", "98": "see_saw", "99": "sheep",
"100": "shoe", "101": "skateboard", "102": "snake", "103": "speedboat", "104": "spider",
"105": "squirrel", "106": "strawberry", "107": "streetlight", "108": "string_bean",
"109": "submarine", "110": "swan", "111": "table", "112": "teapot", "113": "teddy-bear",
"114": "television", "115": "the_Eiffel_Tower", "116": "the_Great_Wall_of_China",
"117": "tiger", "118": "toe", "119": "train", "120": "truck", "121": "umbrella",
"122": "vase", "123": "watermelon", "124": "whale", "125": "zebra"
}
# Create a dictionary mapping each label to its predicted probability (rounded)
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=sketch_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Sketch-126-DomainNet Classification",
description="Upload a sketch to classify it into one of 126 categories."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
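For quick checks outside the Gradio UI, the same model can be queried directly. A minimal sketch, reusing `model` and `processor` from above and assuming a hypothetical local file `sketch.png` and an `id2label` mapping in the checkpoint config:
```python
from PIL import Image
import torch

# Direct top-5 prediction without Gradio, reusing `model` and `processor`.
image = Image.open("sketch.png").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
top5 = logits.softmax(dim=-1).squeeze().topk(5)
for prob, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    # assumes the checkpoint config carries an id2label mapping
    print(f"{model.config.id2label[idx]}: {prob:.3f}")
```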
---
# **Intended Use:**
The **Sketch-126-DomainNet** model is designed for sketch image classification. It categorizes sketches into 126 object classes, ranging from man-made objects such as "aircraft_carrier" and "alarm_clock" to animals, plants, and everyday items. Potential use cases include:
- **Art and Design Applications:** Assisting artists and designers in organizing and retrieving sketches based on content.
- **Creative Search Engines:** Enabling sketch-based search for design inspiration.
- **Educational Tools:** Helping students and educators in art and design fields with categorization and retrieval of visual resources.
- **Computer Vision Research:** Serving as a baseline for sketch recognition and domain adaptation tasks.
|
[
"aircraft_carrier",
"alarm_clock",
"ant",
"anvil",
"asparagus",
"axe",
"banana",
"basket",
"bathtub",
"bear",
"bee",
"bird",
"blackberry",
"blueberry",
"bottlecap",
"broccoli",
"bus",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"camera",
"candle",
"cannon",
"canoe",
"carrot",
"castle",
"cat",
"ceiling_fan",
"cell_phone",
"cello",
"chair",
"chandelier",
"coffee_cup",
"compass",
"computer",
"cow",
"crab",
"crocodile",
"cruise_ship",
"dog",
"dolphin",
"dragon",
"drums",
"duck",
"dumbbell",
"elephant",
"eyeglasses",
"feather",
"fence",
"fish",
"flamingo",
"flower",
"foot",
"fork",
"frog",
"giraffe",
"goatee",
"grapes",
"guitar",
"hammer",
"helicopter",
"helmet",
"horse",
"kangaroo",
"lantern",
"laptop",
"leaf",
"lion",
"lipstick",
"lobster",
"microphone",
"monkey",
"mosquito",
"mouse",
"mug",
"mushroom",
"onion",
"panda",
"peanut",
"pear",
"peas",
"pencil",
"penguin",
"pig",
"pillow",
"pineapple",
"potato",
"power_outlet",
"purse",
"rabbit",
"raccoon",
"rhinoceros",
"rifle",
"saxophone",
"screwdriver",
"sea_turtle",
"see_saw",
"sheep",
"shoe",
"skateboard",
"snake",
"speedboat",
"spider",
"squirrel",
"strawberry",
"streetlight",
"string_bean",
"submarine",
"swan",
"table",
"teapot",
"teddy-bear",
"television",
"the_eiffel_tower",
"the_great_wall_of_china",
"tiger",
"toe",
"train",
"truck",
"umbrella",
"vase",
"watermelon",
"whale",
"zebra"
] |
Ratihd/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1652
- Accuracy: 0.0187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
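These settings map onto a standard Hugging Face `Trainer` configuration; a minimal sketch of the equivalent `TrainingArguments` (the output directory is a placeholder) might look like:
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; "results" is a placeholder
# output directory, and AdamW is the Trainer's default optimizer.
training_args = TrainingArguments(
    output_dir="results",
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```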
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 3.8639 | 0.0063 |
| No log | 2.0 | 80 | 4.1054 | 0.0063 |
| No log | 3.0 | 120 | 4.1652 | 0.0187 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
nvidia/MambaVision-L-21K
|
# MambaVision: A Hybrid Mamba-Transformer Vision Backbone
[**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083)
## Model Description
We propose a novel hybrid Mamba-Transformer backbone, denoted as MambaVision, which is specifically tailored for vision applications. Our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features. In addition, we conduct a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks at the final layers greatly improves the modeling capacity to capture long-range spatial dependencies. Based on our findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria. For image classification on the ImageNet-1K dataset, MambaVision model variants achieve new state-of-the-art (SOTA) performance in terms of Top-1 accuracy and image throughput. In downstream tasks such as object detection, instance segmentation, and semantic segmentation on the MS COCO and ADE20K datasets, MambaVision outperforms comparably sized backbones while demonstrating more favorable performance. Code: https://github.com/NVlabs/MambaVision.
## Model Performance
MambaVision-L-21K is pretrained on the ImageNet-21K dataset and finetuned on ImageNet-1K.
<table>
<tr>
<th>Name</th>
<th>Acc@1(%)</th>
<th>Acc@5(%)</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Resolution</th>
</tr>
<tr>
<td>MambaVision-L-21K</td>
<td>86.1</td>
<td>97.9</td>
<td>227.9</td>
<td>34.9</td>
<td>224x224</td>
</tr>
</table>
In addition, the MambaVision models demonstrate strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70%
class="center">
</p>
## Model Usage
It is highly recommended to install the requirements for MambaVision by running the following:
```Bash
pip install mambavision
```
For each model, we offer two variants, one for image classification and one for feature extraction, each importable with one line of code.
### Image Classification
In the following example, we demonstrate how MambaVision can be used for image classification.
Given the following image from [COCO dataset](https://cocodataset.org/#home) val set as an input:
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70%
class="center">
</p>
The following snippet can be used for image classification:
```Python
from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-L-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224) # MambaVision supports any input resolutions
transform = create_transform(input_size=input_resolution,
is_training=False,
mean=model.config.mean,
std=model.config.std,
crop_mode=model.config.crop_mode,
crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
outputs = model(inputs)
logits = outputs['logits']
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
The predicted label is ```brown bear, bruin, Ursus arctos.```
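To inspect confidence rather than just the argmax, the logits from the snippet above can be converted to class probabilities; a minimal sketch, reusing `logits` and `model` from the classification example:
```Python
# Print the top-5 predictions with softmax probabilities (sketch, reusing
# `logits` and `model` from the classification snippet above).
probs = logits.softmax(dim=-1)
top5 = probs.topk(5)
for p, idx in zip(top5.values[0].tolist(), top5.indices[0].tolist()):
    print(f"{model.config.id2label[idx]}: {p:.4f}")
```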
### Feature Extraction
MambaVision can also be used as a generic feature extractor.
Specifically, we can extract the outputs of each of the model's four stages, as well as the final average-pooled features, which are flattened.
The following snippet can be used for feature extraction:
```Python
from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModel.from_pretrained("nvidia/MambaVision-L-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224) # MambaVision supports any input resolutions
transform = create_transform(input_size=input_resolution,
is_training=False,
mean=model.config.mean,
std=model.config.std,
crop_mode=model.config.crop_mode,
crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 640])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7])
```
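These outputs can back a custom task head. Below is a minimal sketch (illustrative only, not part of the MambaVision API) of a linear probe on the 640-dimensional pooled features reported above, with a hypothetical 10-class target:
```Python
import torch
import torch.nn as nn

# Hypothetical linear probe: 640 matches the printed out_avg_pool size,
# and 10 is an assumed number of downstream classes.
probe = nn.Linear(640, 10).cuda()
with torch.no_grad():
    out_avg_pool, _ = model(inputs)
logits = probe(out_avg_pool)
print(logits.shape)  # torch.Size([1, 10])
```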
### License:
[NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-L-21K/blob/main/LICENSE)
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
nvidia/MambaVision-L2-512-21K
|
[**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).
## Model Overview
We have developed the first hybrid model for computer vision that leverages the strengths of Mamba and Transformers. Specifically, our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks at the final layers greatly improves the modeling capacity to capture long-range spatial dependencies. Based on our findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria.
## Model Performance
MambaVision-L2-512-21K is pretrained on the ImageNet-21K dataset and finetuned on ImageNet-1K at 512×512 resolution.
<table>
<tr>
<th>Name</th>
<th>Acc@1(%)</th>
<th>Acc@5(%)</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Resolution</th>
</tr>
<tr>
<td>MambaVision-L2-512-21K</td>
<td>87.3</td>
<td>98.4</td>
<td>241.5</td>
<td>196.3</td>
<td>512x512</td>
</tr>
</table>
In addition, the MambaVision models demonstrate strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70%
class="center">
</p>
## Model Usage
It is highly recommended to install the requirements for MambaVision by running the following:
```Bash
pip install mambavision
```
For each model, we offer two variants, one for image classification and one for feature extraction, each importable with one line of code.
### Image Classification
In the following example, we demonstrate how MambaVision can be used for image classification.
Given the following image from [COCO dataset](https://cocodataset.org/#home) val set as an input:
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70%
class="center">
</p>
The following snippet can be used for image classification:
```Python
from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-L2-512-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 512, 512) # MambaVision supports any input resolutions
transform = create_transform(input_size=input_resolution,
is_training=False,
mean=model.config.mean,
std=model.config.std,
crop_mode=model.config.crop_mode,
crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
outputs = model(inputs)
logits = outputs['logits']
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
The predicted label is ```brown bear, bruin, Ursus arctos.```
### Feature Extraction
MambaVision can also be used as a generic feature extractor.
Specifically, we can extract the outputs of each of the model's four stages, as well as the final average-pooled features, which are flattened.
The following snippet can be used for feature extraction:
```Python
from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModel.from_pretrained("nvidia/MambaVision-L2-512-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 512, 512)  # MambaVision supports any input resolution
transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 1568])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 196, 128, 128])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 1568, 16, 16])
```
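The stage outputs are standard NCHW tensors, so they can be pooled or fed into downstream heads directly. A minimal sketch, continuing from the snippet above (the 10-class linear probe is purely illustrative):
```Python
import torch
import torch.nn.functional as F

# Global-average-pool the last stage into one vector per image, e.g. for
# retrieval or a linear probe (the 10-class head below is illustrative).
stage4 = features[3]                                   # shape: (1, C, H, W)
pooled = F.adaptive_avg_pool2d(stage4, 1).flatten(1)   # shape: (1, C)
linear_probe = torch.nn.Linear(pooled.shape[1], 10).cuda()
print(linear_probe(pooled).shape)                      # torch.Size([1, 10])
```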
### License:
[NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-L2-512-21K/blob/main/LICENSE)
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
nvidia/MambaVision-L3-512-21K
|
[**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).
[Project page](https://github.com/NVlabs/MambaVision)
## Model Overview
We have developed the first hybrid model for computer vision that leverages the strengths of Mamba and Transformers. Specifically, our core contribution is a redesigned Mamba formulation that enhances its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks in the final layers greatly improves its capacity to capture long-range spatial dependencies. Based on these findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria.
## Model Performance
MambaVision-L3-512-21K is pretrained on the ImageNet-21K dataset and finetuned on ImageNet-1K at 512 x 512 resolution.
<table>
<tr>
<th>Name</th>
<th>Acc@1(%)</th>
<th>Acc@5(%)</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Resolution</th>
</tr>
<tr>
<td>MambaVision-L3-512-21K</td>
<td>88.1</td>
<td>98.6</td>
<td>739.6</td>
<td>489.1</td>
<td>512x512</td>
</tr>
</table>
In addition, MambaVision models demonstrate strong performance, achieving a new SOTA Pareto front in
terms of Top-1 accuracy and throughput.
<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70%
class="center">
</p>
## Model Usage
It is highly recommended to install the requirements for MambaVision by running the following:
```Bash
pip install mambavision
```
For each model, we offer two variants, for image classification and feature extraction, each of which can be imported with one line of code.
### Image Classification
In the following example, we demonstrate how MambaVision can be used for image classification.
Given the following image from the [COCO dataset](https://cocodataset.org/#home) val set as input:
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70%
class="center">
</p>
The following snippet can be used for image classification:
```Python
from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-L3-512-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 512, 512)  # MambaVision supports any input resolution
transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
outputs = model(inputs)
logits = outputs['logits']
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
The predicted label is `brown bear, bruin, Ursus arctos`.
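The same pipeline works on batches. A minimal sketch, continuing from the code above (the same image is duplicated here purely for illustration):
```Python
import torch

# Batched inference: stack several preprocessed images along dim 0.
batch = torch.stack([transform(image), transform(image)]).cuda()  # (2, 3, 512, 512)
with torch.no_grad():
    logits = model(batch)['logits']
print([model.config.id2label[i] for i in logits.argmax(-1).tolist()])
```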
### Feature Extraction
MambaVision can also be used as a generic feature extractor.
Specifically, we can extract the outputs of each of the model's 4 stages, as well as the final average-pooled features, which are flattened.
The following snippet can be used for feature extraction:
```Python
from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModel.from_pretrained("nvidia/MambaVision-L3-512-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 512, 512)  # MambaVision supports any input resolution
transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 1568])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 196, 128, 128])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 1568, 16, 16])
```
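The flattened average-pooled output is a convenient global image descriptor. A minimal sketch, continuing from the snippet above (the embedding is compared with itself purely for illustration; in practice you would embed a second image):
```Python
import torch.nn.functional as F

# Cosine similarity between two pooled embeddings gives a simple
# image-similarity score, e.g. for retrieval or de-duplication.
emb_a, emb_b = out_avg_pool, out_avg_pool
print(F.cosine_similarity(emb_a, emb_b))  # tensor([1.], ...) for identical inputs
```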
### License:
[NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-L3-512-21K/blob/main/LICENSE)
## Results + Pretrained Models
### ImageNet-21K
<table>
<tr>
<th>Name</th>
<th>Acc@1(%)</th>
<th>Acc@5(%)</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Resolution</th>
<th>HF</th>
<th>Download</th>
</tr>
<tr>
<td>MambaVision-B-21K</td>
<td>84.9</td>
<td>97.5</td>
<td>97.7</td>
<td>15.0</td>
<td>224x224</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-B-21K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-B-21K/resolve/main/mambavision_base_21k.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-L-21K</td>
<td>86.1</td>
<td>97.9</td>
<td>227.9</td>
<td>34.9</td>
<td>224x224</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L-21K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L-21K/resolve/main/mambavision_large_21k.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-L2-512-21K</td>
<td>87.3</td>
<td>98.4</td>
<td>241.5</td>
<td>196.3</td>
<td>512x512</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L2-512-21K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L2-512-21K/resolve/main/mambavision_L2_21k_240m_512.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-L3-256-21K</td>
<td>87.3</td>
<td>98.3</td>
<td>739.6</td>
<td>122.3</td>
<td>256x256</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L3-256-21K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L3-256-21K/resolve/main/mambavision_L3_21k_740m_256.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-L3-512-21K</td>
<td>88.1</td>
<td>98.6</td>
<td>739.6</td>
<td>489.1</td>
<td>512x512</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L3-512-21K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L3-512-21K/resolve/main/mambavision_L3_21k_740m_512.pth.tar">model</a></td>
</tr>
</table>
### ImageNet-1K
<table>
<tr>
<th>Name</th>
<th>Acc@1(%)</th>
<th>Acc@5(%)</th>
<th>Throughput(Img/Sec)</th>
<th>Resolution</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>HF</th>
<th>Download</th>
</tr>
<tr>
<td>MambaVision-T</td>
<td>82.3</td>
<td>96.2</td>
<td>6298</td>
<td>224x224</td>
<td>31.8</td>
<td>4.4</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-T-1K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-T-1K/resolve/main/mambavision_tiny_1k.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-T2</td>
<td>82.7</td>
<td>96.3</td>
<td>5990</td>
<td>224x224</td>
<td>35.1</td>
<td>5.1</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-T2-1K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-T2-1K/resolve/main/mambavision_tiny2_1k.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-S</td>
<td>83.3</td>
<td>96.5</td>
<td>4700</td>
<td>224x224</td>
<td>50.1</td>
<td>7.5</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-S-1K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-S-1K/resolve/main/mambavision_small_1k.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-B</td>
<td>84.2</td>
<td>96.9</td>
<td>3670</td>
<td>224x224</td>
<td>97.7</td>
<td>15.0</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-B-1K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-B-1K/resolve/main/mambavision_base_1k.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-L</td>
<td>85.0</td>
<td>97.1</td>
<td>2190</td>
<td>224x224</td>
<td>227.9</td>
<td>34.9</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L-1K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L-1K/resolve/main/mambavision_large_1k.pth.tar">model</a></td>
</tr>
<tr>
<td>MambaVision-L2</td>
<td>85.3</td>
<td>97.2</td>
<td>1021</td>
<td>224x224</td>
<td>241.5</td>
<td>37.5</td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L2-1K">link</a></td>
<td><a href="https://huggingface.co/nvidia/MambaVision-L2-1K/resolve/main/mambavision_large2_1k.pth.tar">model</a></td>
</tr>
</table>
## Installation
We provide a [Dockerfile](./Dockerfile). In addition, assuming that a recent [PyTorch](https://pytorch.org/get-started/locally/) package is installed, the dependencies can be installed by running:
```bash
pip install -r requirements.txt
```
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
prithivMLmods/Multisource-121-DomainNet
|

# **Multisource-121-DomainNet**
> **Multisource-121-DomainNet** is an image classification model fine-tuned from the **google/siglip2-base-patch16-224** vision-language encoder for a single-label classification task. It is designed to classify images into 121 DomainNet categories using the **SiglipForImageClassification** architecture.

*Moment Matching for Multi-Source Domain Adaptation*: https://arxiv.org/pdf/1812.01754
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features*: https://arxiv.org/pdf/2502.14786
```py
Classification Report:
precision recall f1-score support
barn 0.7483 0.8370 0.7902 270
baseball_bat 0.9197 0.9333 0.9265 270
basket 0.8302 0.8148 0.8224 270
beach 0.7059 0.7556 0.7299 270
bear 0.7500 0.7444 0.7472 270
beard 0.5496 0.5741 0.5616 270
bee 0.9004 0.9037 0.9020 270
bird 0.7352 0.7815 0.7576 270
blueberry 0.7230 0.7926 0.7562 270
bowtie 0.8726 0.8370 0.8544 270
bracelet 0.7328 0.7111 0.7218 270
brain 0.8925 0.9222 0.9071 270
bread 0.5573 0.6667 0.6071 270
broccoli 0.9200 0.7667 0.8364 270
bus 0.8442 0.8630 0.8535 270
butterfly 0.9321 0.9148 0.9234 270
circle 0.6038 0.8185 0.6950 270
cloud 0.8201 0.8444 0.8321 270
cruise_ship 0.8545 0.8481 0.8513 270
dolphin 0.8286 0.8593 0.8436 270
dumbbell 0.8705 0.8963 0.8832 270
elephant 0.8598 0.8630 0.8614 270
eye 0.8603 0.8667 0.8635 270
eyeglasses 0.8425 0.7926 0.8168 270
feather 0.8413 0.7852 0.8123 270
fish 0.8169 0.8593 0.8375 270
flower 0.7973 0.8741 0.8339 270
foot 0.8152 0.8333 0.8242 270
frog 0.9270 0.8000 0.8588 270
giraffe 0.9026 0.8926 0.8976 270
goatee 0.5171 0.5037 0.5103 270
golf_club 0.6466 0.6778 0.6618 270
grapes 0.8731 0.8407 0.8566 270
grass 0.7359 0.6296 0.6786 270
guitar 0.8386 0.8852 0.8613 270
hamburger 0.8535 0.8630 0.8582 270
hand 0.7824 0.6926 0.7348 270
hat 0.7333 0.7741 0.7532 270
headphones 0.8971 0.9037 0.9004 270
helicopter 0.8992 0.8259 0.8610 270
hexagon 0.9113 0.8370 0.8726 270
hockey_stick 0.8419 0.8481 0.8450 270
horse 0.8081 0.8889 0.8466 270
hourglass 0.9161 0.9296 0.9228 270
house 0.7524 0.8778 0.8103 270
ice_cream 0.8821 0.8593 0.8705 270
jacket 0.8621 0.7407 0.7968 270
ladder 0.7051 0.8148 0.7560 270
leg 0.5916 0.5741 0.5827 270
lipstick 0.8889 0.8000 0.8421 270
megaphone 0.8710 0.9000 0.8852 270
monkey 0.8370 0.8556 0.8462 270
moon 0.8527 0.8148 0.8333 270
mushroom 0.8774 0.8481 0.8625 270
necklace 0.8670 0.7481 0.8032 270
owl 0.9179 0.9111 0.9145 270
panda 0.9490 0.8963 0.9219 270
pear 0.8832 0.8963 0.8897 270
peas 0.7743 0.8259 0.7993 270
penguin 0.8618 0.8778 0.8697 270
pig 0.6767 0.8296 0.7454 270
pillow 0.7359 0.6296 0.6786 270
pineapple 0.9213 0.9111 0.9162 270
pizza 0.9173 0.9444 0.9307 270
pool 0.6717 0.6593 0.6654 270
popsicle 0.7390 0.8074 0.7717 270
rabbit 0.8345 0.8778 0.8556 270
rhinoceros 0.9219 0.9185 0.9202 270
rifle 0.9256 0.8296 0.8750 270
river 0.6067 0.7370 0.6656 270
sailboat 0.8606 0.9148 0.8869 270
sandwich 0.7638 0.7667 0.7652 270
sea_turtle 0.8794 0.9185 0.8986 270
shark 0.8114 0.8444 0.8276 270
shoe 0.8097 0.8667 0.8372 270
skyscraper 0.7727 0.8185 0.7950 270
snorkel 0.8238 0.6926 0.7525 270
snowman 0.8736 0.8444 0.8588 270
soccer_ball 0.9395 0.8630 0.8996 270
speedboat 0.7649 0.7593 0.7621 270
spider 0.9212 0.8222 0.8689 270
spoon 0.8165 0.8074 0.8119 270
square 0.4669 0.6259 0.5348 270
squirrel 0.8394 0.7741 0.8054 270
stethoscope 0.8566 0.8630 0.8598 270
strawberry 0.8629 0.7926 0.8263 270
streetlight 0.5000 0.6852 0.5781 270
submarine 0.6850 0.6926 0.6888 270
suitcase 0.8259 0.7556 0.7892 270
sun 0.8082 0.6556 0.7239 270
sweater 0.5912 0.6963 0.6395 270
sword 0.8258 0.8074 0.8165 270
table 0.5502 0.5481 0.5492 270
teapot 0.9019 0.8852 0.8935 270
teddy-bear 0.7906 0.8111 0.8007 270
telephone 0.7836 0.7778 0.7807 270
tent 0.7579 0.7074 0.7318 270
The_Eiffel_Tower 0.8633 0.8889 0.8759 270
The_Great_Wall_of_China 0.8893 0.8333 0.8604 270
The_Mona_Lisa 0.8152 0.9148 0.8621 270
tiger 0.8577 0.8259 0.8415 270
toaster 0.6788 0.6889 0.6838 270
tooth 0.8807 0.7926 0.8343 270
tornado 0.7530 0.7000 0.7255 270
tractor 0.9372 0.8296 0.8802 270
train 0.7692 0.7407 0.7547 270
tree 0.7639 0.8148 0.7885 270
triangle 0.8852 0.8000 0.8405 270
trombone 0.6653 0.5963 0.6289 270
truck 0.7049 0.7963 0.7478 270
trumpet 0.7463 0.5667 0.6442 270
umbrella 0.9144 0.8704 0.8918 270
vase 0.8148 0.7333 0.7719 270
violin 0.8966 0.7704 0.8287 270
watermelon 0.7970 0.8000 0.7985 270
whale 0.7769 0.6963 0.7344 270
windmill 0.8963 0.8963 0.8963 270
wine_glass 0.8996 0.8630 0.8809 270
yoga 0.7406 0.8037 0.7709 270
zebra 0.9144 0.7519 0.8252 270
zigzag 0.6502 0.6333 0.6417 270
accuracy 0.7995 32670
macro avg 0.8052 0.7995 0.8006 32670
weighted avg 0.8052 0.7995 0.8006 32670
```
The model categorizes images into the following 121 classes:
- **Class 0:** "barn"
- **Class 1:** "baseball_bat"
- **Class 2:** "basket"
- **Class 3:** "beach"
- **Class 4:** "bear"
- **Class 5:** "beard"
- **Class 6:** "bee"
- **Class 7:** "bird"
- **Class 8:** "blueberry"
- **Class 9:** "bowtie"
- **Class 10:** "bracelet"
- **Class 11:** "brain"
- **Class 12:** "bread"
- **Class 13:** "broccoli"
- **Class 14:** "bus"
- **Class 15:** "butterfly"
- **Class 16:** "circle"
- **Class 17:** "cloud"
- **Class 18:** "cruise_ship"
- **Class 19:** "dolphin"
- **Class 20:** "dumbbell"
- **Class 21:** "elephant"
- **Class 22:** "eye"
- **Class 23:** "eyeglasses"
- **Class 24:** "feather"
- **Class 25:** "fish"
- **Class 26:** "flower"
- **Class 27:** "foot"
- **Class 28:** "frog"
- **Class 29:** "giraffe"
- **Class 30:** "goatee"
- **Class 31:** "golf_club"
- **Class 32:** "grapes"
- **Class 33:** "grass"
- **Class 34:** "guitar"
- **Class 35:** "hamburger"
- **Class 36:** "hand"
- **Class 37:** "hat"
- **Class 38:** "headphones"
- **Class 39:** "helicopter"
- **Class 40:** "hexagon"
- **Class 41:** "hockey_stick"
- **Class 42:** "horse"
- **Class 43:** "hourglass"
- **Class 44:** "house"
- **Class 45:** "ice_cream"
- **Class 46:** "jacket"
- **Class 47:** "ladder"
- **Class 48:** "leg"
- **Class 49:** "lipstick"
- **Class 50:** "megaphone"
- **Class 51:** "monkey"
- **Class 52:** "moon"
- **Class 53:** "mushroom"
- **Class 54:** "necklace"
- **Class 55:** "owl"
- **Class 56:** "panda"
- **Class 57:** "pear"
- **Class 58:** "peas"
- **Class 59:** "penguin"
- **Class 60:** "pig"
- **Class 61:** "pillow"
- **Class 62:** "pineapple"
- **Class 63:** "pizza"
- **Class 64:** "pool"
- **Class 65:** "popsicle"
- **Class 66:** "rabbit"
- **Class 67:** "rhinoceros"
- **Class 68:** "rifle"
- **Class 69:** "river"
- **Class 70:** "sailboat"
- **Class 71:** "sandwich"
- **Class 72:** "sea_turtle"
- **Class 73:** "shark"
- **Class 74:** "shoe"
- **Class 75:** "skyscraper"
- **Class 76:** "snorkel"
- **Class 77:** "snowman"
- **Class 78:** "soccer_ball"
- **Class 79:** "speedboat"
- **Class 80:** "spider"
- **Class 81:** "spoon"
- **Class 82:** "square"
- **Class 83:** "squirrel"
- **Class 84:** "stethoscope"
- **Class 85:** "strawberry"
- **Class 86:** "streetlight"
- **Class 87:** "submarine"
- **Class 88:** "suitcase"
- **Class 89:** "sun"
- **Class 90:** "sweater"
- **Class 91:** "sword"
- **Class 92:** "table"
- **Class 93:** "teapot"
- **Class 94:** "teddy-bear"
- **Class 95:** "telephone"
- **Class 96:** "tent"
- **Class 97:** "The_Eiffel_Tower"
- **Class 98:** "The_Great_Wall_of_China"
- **Class 99:** "The_Mona_Lisa"
- **Class 100:** "tiger"
- **Class 101:** "toaster"
- **Class 102:** "tooth"
- **Class 103:** "tornado"
- **Class 104:** "tractor"
- **Class 105:** "train"
- **Class 106:** "tree"
- **Class 107:** "triangle"
- **Class 108:** "trombone"
- **Class 109:** "truck"
- **Class 110:** "trumpet"
- **Class 111:** "umbrella"
- **Class 112:** "vase"
- **Class 113:** "violin"
- **Class 114:** "watermelon"
- **Class 115:** "whale"
- **Class 116:** "windmill"
- **Class 117:** "wine_glass"
- **Class 118:** "yoga"
- **Class 119:** "zebra"
- **Class 120:** "zigzag"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Multisource-121-DomainNet"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def multisource_classification(image):
"""Predicts the domain category for an input image."""
# Convert the input numpy array to a PIL Image and ensure it is in RGB format
image = Image.fromarray(image).convert("RGB")
# Process the image and convert it to model inputs
inputs = processor(images=image, return_tensors="pt")
# Get model predictions without gradient calculations
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
# Convert logits to probabilities using softmax
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
# Mapping from class indices to domain labels
labels = {
"0": "barn", "1": "baseball_bat", "2": "basket", "3": "beach", "4": "bear",
"5": "beard", "6": "bee", "7": "bird", "8": "blueberry", "9": "bowtie",
"10": "bracelet", "11": "brain", "12": "bread", "13": "broccoli", "14": "bus",
"15": "butterfly", "16": "circle", "17": "cloud", "18": "cruise_ship", "19": "dolphin",
"20": "dumbbell", "21": "elephant", "22": "eye", "23": "eyeglasses", "24": "feather",
"25": "fish", "26": "flower", "27": "foot", "28": "frog", "29": "giraffe",
"30": "goatee", "31": "golf_club", "32": "grapes", "33": "grass", "34": "guitar",
"35": "hamburger", "36": "hand", "37": "hat", "38": "headphones", "39": "helicopter",
"40": "hexagon", "41": "hockey_stick", "42": "horse", "43": "hourglass", "44": "house",
"45": "ice_cream", "46": "jacket", "47": "ladder", "48": "leg", "49": "lipstick",
"50": "megaphone", "51": "monkey", "52": "moon", "53": "mushroom", "54": "necklace",
"55": "owl", "56": "panda", "57": "pear", "58": "peas", "59": "penguin",
"60": "pig", "61": "pillow", "62": "pineapple", "63": "pizza", "64": "pool",
"65": "popsicle", "66": "rabbit", "67": "rhinoceros", "68": "rifle", "69": "river",
"70": "sailboat", "71": "sandwich", "72": "sea_turtle", "73": "shark", "74": "shoe",
"75": "skyscraper", "76": "snorkel", "77": "snowman", "78": "soccer_ball", "79": "speedboat",
"80": "spider", "81": "spoon", "82": "square", "83": "squirrel", "84": "stethoscope",
"85": "strawberry", "86": "streetlight", "87": "submarine", "88": "suitcase", "89": "sun",
"90": "sweater", "91": "sword", "92": "table", "93": "teapot", "94": "teddy-bear",
"95": "telephone", "96": "tent", "97": "The_Eiffel_Tower", "98": "The_Great_Wall_of_China",
"99": "The_Mona_Lisa", "100": "tiger", "101": "toaster", "102": "tooth", "103": "tornado",
"104": "tractor", "105": "train", "106": "tree", "107": "triangle", "108": "trombone",
"109": "truck", "110": "trumpet", "111": "umbrella", "112": "vase", "113": "violin",
"114": "watermelon", "115": "whale", "116": "windmill", "117": "wine_glass", "118": "yoga",
"119": "zebra", "120": "zigzag"
}
# Create a dictionary mapping each label to its corresponding probability (rounded)
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=multisource_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Multisource-121-DomainNet Classification",
description="Upload an image to classify it into one of 121 domain categories."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
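If the checkpoint's config ships a label mapping (`model.config.id2label`, an assumption worth verifying against the hard-coded dictionary above), the `labels` dictionary can be dropped entirely. A minimal sketch of the same prediction logic:
```python
import torch
from PIL import Image

# Reuses `model` and `processor` loaded in the snippet above.
def multisource_classification_from_config(image):
    """Same prediction logic as above, reading labels from the model config."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    # id2label maps integer class indices to label strings (assumed present).
    return {model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs)}
```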
---
# **Intended Use:**
The **Multisource-121-DomainNet** model is designed for multi-source image classification. It can categorize images into a diverse set of 121 categories covering various objects, scenes, and landmarks. Potential use cases include:
- **Cross-Domain Image Analysis:** Enabling robust classification across a wide range of visual domains.
- **Multimedia Retrieval:** Assisting in content organization and retrieval in multimedia databases.
- **Computer Vision Research:** Serving as a benchmark for evaluating domain adaptation and transfer learning techniques.
- **Interactive Applications:** Enhancing user interfaces with diverse, real-time image recognition capabilities.
|
[
"barn",
"baseball_bat",
"basket",
"beach",
"bear",
"beard",
"bee",
"bird",
"blueberry",
"bowtie",
"bracelet",
"brain",
"bread",
"broccoli",
"bus",
"butterfly",
"circle",
"cloud",
"cruise_ship",
"dolphin",
"dumbbell",
"elephant",
"eye",
"eyeglasses",
"feather",
"fish",
"flower",
"foot",
"frog",
"giraffe",
"goatee",
"golf_club",
"grapes",
"grass",
"guitar",
"hamburger",
"hand",
"hat",
"headphones",
"helicopter",
"hexagon",
"hockey_stick",
"horse",
"hourglass",
"house",
"ice_cream",
"jacket",
"ladder",
"leg",
"lipstick",
"megaphone",
"monkey",
"moon",
"mushroom",
"necklace",
"owl",
"panda",
"pear",
"peas",
"penguin",
"pig",
"pillow",
"pineapple",
"pizza",
"pool",
"popsicle",
"rabbit",
"rhinoceros",
"rifle",
"river",
"sailboat",
"sandwich",
"sea_turtle",
"shark",
"shoe",
"skyscraper",
"snorkel",
"snowman",
"soccer_ball",
"speedboat",
"spider",
"spoon",
"square",
"squirrel",
"stethoscope",
"strawberry",
"streetlight",
"submarine",
"suitcase",
"sun",
"sweater",
"sword",
"table",
"teapot",
"teddy-bear",
"telephone",
"tent",
"the_eiffel_tower",
"the_great_wall_of_china",
"the_mona_lisa",
"tiger",
"toaster",
"tooth",
"tornado",
"tractor",
"train",
"tree",
"triangle",
"trombone",
"truck",
"trumpet",
"umbrella",
"vase",
"violin",
"watermelon",
"whale",
"windmill",
"wine_glass",
"yoga",
"zebra",
"zigzag"
] |
nvidia/MambaVision-L3-256-21K
|
[**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).
Code: https://github.com/NVlabs/MambaVision
## Model Overview
We have developed the first hybrid model for computer vision that leverages the strengths of Mamba and Transformers. Specifically, our core contribution is a redesigned Mamba formulation that enhances its capability for efficient modeling of visual features. In addition, we conducted a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba. Our results demonstrate that equipping the Mamba architecture with several self-attention blocks in the final layers greatly improves its capacity to capture long-range spatial dependencies. Based on our findings, we introduce a family of MambaVision models with a hierarchical architecture to meet various design criteria.
## Model Performance
MambaVision-L3-256-21K is pretrained on the ImageNet-21K dataset and finetuned on ImageNet-1K. Both pretraining and finetuning are performed at a 256 x 256 resolution.
<table>
<tr>
<th>Name</th>
<th>Acc@1(%)</th>
<th>Acc@5(%)</th>
<th>#Params(M)</th>
<th>FLOPs(G)</th>
<th>Resolution</th>
</tr>
<tr>
<td>MambaVision-L3-256-21K</td>
<td>87.3</td>
<td>98.3</td>
<td>739.6</td>
<td>122.3</td>
<td>256x256</td>
</tr>
</table>
In addition, the MambaVision models demonstrate strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=70% height=70%
class="center">
</p>
## Model Usage
It is highly recommended to install the requirements for MambaVision by running the following:
```Bash
pip install mambavision
```
For each model, we offer two variants, for image classification and feature extraction, each importable with one line of code.
### Image Classification
In the following example, we demonstrate how MambaVision can be used for image classification.
Given the following image from the [COCO dataset](https://cocodataset.org/#home) validation set as input:
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70%
class="center">
</p>
The following snippet can be used for image classification:
```Python
from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-L3-256-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 256, 256) # MambaVision supports any input resolution
transform = create_transform(input_size=input_resolution,
is_training=False,
mean=model.config.mean,
std=model.config.std,
crop_mode=model.config.crop_mode,
crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
outputs = model(inputs)
logits = outputs['logits']
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
The predicted label is `brown bear, bruin, Ursus arctos`.
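For a fuller picture than the argmax alone, the same `logits` can be expanded into top-5 probabilities. A small sketch continuing from the snippet above (it assumes `model.config.id2label` covers all classes, as the example already does):
```Python
import torch

# Continuing from the classification snippet above.
probs = torch.softmax(logits, dim=-1).squeeze()
top5 = probs.topk(5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.4f}")
```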
### Feature Extraction
MambaVision can also be used as a generic feature extractor.
Specifically, we can extract the outputs of each of the model's four stages as well as the final average-pooled features, which are flattened.
The following snippet can be used for feature extraction:
```Python
from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests
model = AutoModel.from_pretrained("nvidia/MambaVision-L3-256-21K", trust_remote_code=True)
# eval mode for inference
model.cuda().eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 256, 256) # MambaVision supports any input resolution
transform = create_transform(input_size=input_resolution,
is_training=False,
mean=model.config.mean,
std=model.config.std,
crop_mode=model.config.crop_mode,
crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size()) # torch.Size([1, 1568])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 196, 128, 128])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 1568, 16, 16])
```
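The stage outputs can also feed downstream heads directly. As an illustrative sketch only (the global average pooling and the 10-class linear head are assumptions made for this example, not part of the released model), a frozen-backbone linear probe over the final stage could look like:
```Python
import torch

# `features` comes from the feature-extraction snippet above; each stage is (B, C, H, W).
stage4 = features[3]                                  # e.g. torch.Size([1, 1568, 16, 16])
pooled = stage4.mean(dim=(2, 3))                      # global average pool -> (1, 1568)
probe = torch.nn.Linear(pooled.shape[1], 10).cuda()   # 10 downstream classes (assumption)
with torch.no_grad():
    logits = probe(pooled)
print(logits.shape)                                   # torch.Size([1, 10])
```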
### License:
[NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-L3-256-21K/blob/main/LICENSE)
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
prithivMLmods/Clipart-126-DomainNet
|

# **Clipart-126-DomainNet**
> **Clipart-126-DomainNet** is an image classification model fine-tuned from the **google/siglip2-base-patch16-224** vision-language encoder for a single-label classification task. It is designed to classify clipart images into 126 DomainNet categories using the **SiglipForImageClassification** architecture.

*Moment Matching for Multi-Source Domain Adaptation*: https://arxiv.org/pdf/1812.01754
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features*: https://arxiv.org/pdf/2502.14786
```py
Classification Report:
precision recall f1-score support
aircraft_carrier 0.8667 0.4643 0.6047 56
alarm_clock 0.9706 0.8919 0.9296 74
ant 0.8889 0.8615 0.8750 65
anvil 0.5984 0.6083 0.6033 120
asparagus 0.8158 0.6078 0.6966 51
axe 0.7544 0.5309 0.6232 81
banana 0.7111 0.5517 0.6214 58
basket 0.8571 0.8182 0.8372 66
bathtub 0.7531 0.7821 0.7673 78
bear 0.9118 0.6458 0.7561 48
bee 0.9636 0.9636 0.9636 165
bird 0.8967 0.9529 0.9240 255
blackberry 0.8082 0.8429 0.8252 70
blueberry 0.8661 0.8981 0.8818 108
bottlecap 0.7821 0.8299 0.8053 147
broccoli 0.8947 0.8947 0.8947 95
bus 0.9663 0.9348 0.9503 92
butterfly 0.9333 0.9545 0.9438 132
cactus 0.9677 0.9091 0.9375 99
cake 0.8750 0.8099 0.8412 121
calculator 0.9583 0.5897 0.7302 39
camel 0.9391 0.9310 0.9351 116
camera 0.8846 0.8679 0.8762 53
candle 0.8298 0.8478 0.8387 92
cannon 0.8551 0.8551 0.8551 69
canoe 0.8462 0.7432 0.7914 74
carrot 0.8800 0.7719 0.8224 57
castle 1.0000 0.8511 0.9195 47
cat 0.8167 0.7903 0.8033 62
ceiling_fan 1.0000 0.2000 0.3333 30
cell_phone 0.7400 0.6491 0.6916 57
cello 0.8372 0.9114 0.8727 79
chair 0.8986 0.8378 0.8671 74
chandelier 0.9617 0.9263 0.9437 190
coffee_cup 0.8811 0.9389 0.9091 229
compass 0.9799 0.9012 0.9389 162
computer 0.7124 0.9045 0.7970 178
cow 0.9517 0.9718 0.9617 142
crab 0.8738 0.9000 0.8867 100
crocodile 0.9778 0.9167 0.9462 144
cruise_ship 0.8544 0.9072 0.8800 194
dog 0.8125 0.7761 0.7939 67
dolphin 0.7680 0.7500 0.7589 128
dragon 0.9512 0.9176 0.9341 85
drums 0.8919 0.9635 0.9263 137
duck 0.8774 0.8447 0.8608 161
dumbbell 0.9048 0.9500 0.9268 280
elephant 0.9038 0.8952 0.8995 105
eyeglasses 0.8636 0.8488 0.8562 291
feather 0.8564 0.9227 0.8883 181
fence 0.9211 0.8400 0.8787 125
fish 0.8963 0.8768 0.8864 138
flamingo 0.9636 0.9381 0.9507 226
flower 0.9146 0.9454 0.9298 238
foot 0.8780 0.8889 0.8834 81
fork 0.9032 0.9091 0.9061 154
frog 0.9420 0.9489 0.9455 137
giraffe 0.9643 0.9153 0.9391 118
goatee 0.8763 0.9422 0.9081 173
grapes 0.9114 0.8571 0.8834 84
guitar 0.9595 0.8554 0.9045 83
hammer 0.6111 0.7719 0.6822 114
helicopter 0.9444 0.9533 0.9488 107
helmet 0.7368 0.8550 0.7915 131
horse 0.9588 0.9819 0.9702 166
kangaroo 0.9125 0.8488 0.8795 86
lantern 0.8254 0.7536 0.7879 69
laptop 0.8108 0.5000 0.6186 60
leaf 0.7143 0.3333 0.4545 30
lion 0.9744 0.8085 0.8837 47
lipstick 0.7875 0.6632 0.7200 95
lobster 0.8963 0.9130 0.9046 161
microphone 0.7925 0.9231 0.8528 91
monkey 0.9623 0.9027 0.9315 113
mosquito 0.8636 0.8444 0.8539 45
mouse 0.9167 0.8333 0.8730 66
mug 0.8989 0.8163 0.8556 98
mushroom 0.9429 0.9429 0.9429 105
onion 0.9365 0.8429 0.8872 140
panda 1.0000 0.9726 0.9861 73
peanut 0.5900 0.7195 0.6484 82
pear 0.7692 0.7246 0.7463 69
peas 0.8000 0.7429 0.7704 70
pencil 0.6667 0.0909 0.1600 44
penguin 0.9717 0.9279 0.9493 111
pig 0.9551 0.8252 0.8854 103
pillow 0.6290 0.5571 0.5909 70
pineapple 0.9846 0.8889 0.9343 72
potato 0.6038 0.6531 0.6275 98
power_outlet 0.8636 0.4043 0.5507 47
purse 0.0000 0.0000 0.0000 27
rabbit 0.9341 0.8586 0.8947 99
raccoon 0.8836 0.9021 0.8927 143
rhinoceros 0.8750 0.9459 0.9091 74
rifle 0.7595 0.7500 0.7547 80
saxophone 0.9454 0.9886 0.9665 175
screwdriver 0.7521 0.6929 0.7213 127
sea_turtle 0.9677 0.9626 0.9651 187
see_saw 0.6679 0.8698 0.7556 215
sheep 0.9355 0.9158 0.9255 95
shoe 0.8969 0.8700 0.8832 100
skateboard 0.8632 0.8673 0.8652 211
snake 0.9302 0.9160 0.9231 131
speedboat 0.8187 0.8976 0.8563 166
spider 0.9043 0.9286 0.9163 112
squirrel 0.7945 0.8855 0.8375 131
strawberry 0.8687 0.9923 0.9264 260
streetlight 0.8178 0.9293 0.8700 198
string_bean 0.8525 0.8000 0.8254 65
submarine 0.8022 0.8902 0.8439 164
swan 0.8397 0.9003 0.8690 291
table 0.8564 0.9200 0.8871 175
teapot 0.8763 0.9189 0.8971 185
teddy-bear 0.9006 0.8953 0.8980 172
television 0.8509 0.8220 0.8362 118
the_Eiffel_Tower 0.9468 0.9082 0.9271 98
the_Great_Wall_of_China 0.9462 0.9462 0.9462 93
tiger 0.9417 0.9826 0.9617 230
toe 0.8250 0.6600 0.7333 50
train 0.9362 0.9778 0.9565 90
truck 0.9367 0.8916 0.9136 83
umbrella 0.9633 0.9545 0.9589 110
vase 0.7642 0.8393 0.8000 112
watermelon 0.9527 0.9527 0.9527 148
whale 0.7453 0.8144 0.7783 194
zebra 0.9275 0.9676 0.9471 185
accuracy 0.8691 14818
macro avg 0.8613 0.8251 0.8351 14818
weighted avg 0.8705 0.8691 0.8661 14818
```
The model categorizes images into the following 126 classes:
- **Class 0:** "aircraft_carrier"
- **Class 1:** "alarm_clock"
- **Class 2:** "ant"
- **Class 3:** "anvil"
- **Class 4:** "asparagus"
- **Class 5:** "axe"
- **Class 6:** "banana"
- **Class 7:** "basket"
- **Class 8:** "bathtub"
- **Class 9:** "bear"
- **Class 10:** "bee"
- **Class 11:** "bird"
- **Class 12:** "blackberry"
- **Class 13:** "blueberry"
- **Class 14:** "bottlecap"
- **Class 15:** "broccoli"
- **Class 16:** "bus"
- **Class 17:** "butterfly"
- **Class 18:** "cactus"
- **Class 19:** "cake"
- **Class 20:** "calculator"
- **Class 21:** "camel"
- **Class 22:** "camera"
- **Class 23:** "candle"
- **Class 24:** "cannon"
- **Class 25:** "canoe"
- **Class 26:** "carrot"
- **Class 27:** "castle"
- **Class 28:** "cat"
- **Class 29:** "ceiling_fan"
- **Class 30:** "cell_phone"
- **Class 31:** "cello"
- **Class 32:** "chair"
- **Class 33:** "chandelier"
- **Class 34:** "coffee_cup"
- **Class 35:** "compass"
- **Class 36:** "computer"
- **Class 37:** "cow"
- **Class 38:** "crab"
- **Class 39:** "crocodile"
- **Class 40:** "cruise_ship"
- **Class 41:** "dog"
- **Class 42:** "dolphin"
- **Class 43:** "dragon"
- **Class 44:** "drums"
- **Class 45:** "duck"
- **Class 46:** "dumbbell"
- **Class 47:** "elephant"
- **Class 48:** "eyeglasses"
- **Class 49:** "feather"
- **Class 50:** "fence"
- **Class 51:** "fish"
- **Class 52:** "flamingo"
- **Class 53:** "flower"
- **Class 54:** "foot"
- **Class 55:** "fork"
- **Class 56:** "frog"
- **Class 57:** "giraffe"
- **Class 58:** "goatee"
- **Class 59:** "grapes"
- **Class 60:** "guitar"
- **Class 61:** "hammer"
- **Class 62:** "helicopter"
- **Class 63:** "helmet"
- **Class 64:** "horse"
- **Class 65:** "kangaroo"
- **Class 66:** "lantern"
- **Class 67:** "laptop"
- **Class 68:** "leaf"
- **Class 69:** "lion"
- **Class 70:** "lipstick"
- **Class 71:** "lobster"
- **Class 72:** "microphone"
- **Class 73:** "monkey"
- **Class 74:** "mosquito"
- **Class 75:** "mouse"
- **Class 76:** "mug"
- **Class 77:** "mushroom"
- **Class 78:** "onion"
- **Class 79:** "panda"
- **Class 80:** "peanut"
- **Class 81:** "pear"
- **Class 82:** "peas"
- **Class 83:** "pencil"
- **Class 84:** "penguin"
- **Class 85:** "pig"
- **Class 86:** "pillow"
- **Class 87:** "pineapple"
- **Class 88:** "potato"
- **Class 89:** "power_outlet"
- **Class 90:** "purse"
- **Class 91:** "rabbit"
- **Class 92:** "raccoon"
- **Class 93:** "rhinoceros"
- **Class 94:** "rifle"
- **Class 95:** "saxophone"
- **Class 96:** "screwdriver"
- **Class 97:** "sea_turtle"
- **Class 98:** "see_saw"
- **Class 99:** "sheep"
- **Class 100:** "shoe"
- **Class 101:** "skateboard"
- **Class 102:** "snake"
- **Class 103:** "speedboat"
- **Class 104:** "spider"
- **Class 105:** "squirrel"
- **Class 106:** "strawberry"
- **Class 107:** "streetlight"
- **Class 108:** "string_bean"
- **Class 109:** "submarine"
- **Class 110:** "swan"
- **Class 111:** "table"
- **Class 112:** "teapot"
- **Class 113:** "teddy-bear"
- **Class 114:** "television"
- **Class 115:** "the_Eiffel_Tower"
- **Class 116:** "the_Great_Wall_of_China"
- **Class 117:** "tiger"
- **Class 118:** "toe"
- **Class 119:** "train"
- **Class 120:** "truck"
- **Class 121:** "umbrella"
- **Class 122:** "vase"
- **Class 123:** "watermelon"
- **Class 124:** "whale"
- **Class 125:** "zebra"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Clipart-126-DomainNet"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def clipart_classification(image):
"""Predicts the clipart category for an input image."""
# Convert the input numpy array to a PIL Image and ensure it's in RGB format
image = Image.fromarray(image).convert("RGB")
# Process the image and prepare it for the model
inputs = processor(images=image, return_tensors="pt")
# Perform inference without gradient computation
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
# Apply softmax to obtain probabilities for each class
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
# Mapping from indices to clipart category labels
labels = {
"0": "aircraft_carrier", "1": "alarm_clock", "2": "ant", "3": "anvil", "4": "asparagus",
"5": "axe", "6": "banana", "7": "basket", "8": "bathtub", "9": "bear",
"10": "bee", "11": "bird", "12": "blackberry", "13": "blueberry", "14": "bottlecap",
"15": "broccoli", "16": "bus", "17": "butterfly", "18": "cactus", "19": "cake",
"20": "calculator", "21": "camel", "22": "camera", "23": "candle", "24": "cannon",
"25": "canoe", "26": "carrot", "27": "castle", "28": "cat", "29": "ceiling_fan",
"30": "cell_phone", "31": "cello", "32": "chair", "33": "chandelier", "34": "coffee_cup",
"35": "compass", "36": "computer", "37": "cow", "38": "crab", "39": "crocodile",
"40": "cruise_ship", "41": "dog", "42": "dolphin", "43": "dragon", "44": "drums",
"45": "duck", "46": "dumbbell", "47": "elephant", "48": "eyeglasses", "49": "feather",
"50": "fence", "51": "fish", "52": "flamingo", "53": "flower", "54": "foot",
"55": "fork", "56": "frog", "57": "giraffe", "58": "goatee", "59": "grapes",
"60": "guitar", "61": "hammer", "62": "helicopter", "63": "helmet", "64": "horse",
"65": "kangaroo", "66": "lantern", "67": "laptop", "68": "leaf", "69": "lion",
"70": "lipstick", "71": "lobster", "72": "microphone", "73": "monkey", "74": "mosquito",
"75": "mouse", "76": "mug", "77": "mushroom", "78": "onion", "79": "panda",
"80": "peanut", "81": "pear", "82": "peas", "83": "pencil", "84": "penguin",
"85": "pig", "86": "pillow", "87": "pineapple", "88": "potato", "89": "power_outlet",
"90": "purse", "91": "rabbit", "92": "raccoon", "93": "rhinoceros", "94": "rifle",
"95": "saxophone", "96": "screwdriver", "97": "sea_turtle", "98": "see_saw", "99": "sheep",
"100": "shoe", "101": "skateboard", "102": "snake", "103": "speedboat", "104": "spider",
"105": "squirrel", "106": "strawberry", "107": "streetlight", "108": "string_bean",
"109": "submarine", "110": "swan", "111": "table", "112": "teapot", "113": "teddy-bear",
"114": "television", "115": "the_Eiffel_Tower", "116": "the_Great_Wall_of_China",
"117": "tiger", "118": "toe", "119": "train", "120": "truck", "121": "umbrella",
"122": "vase", "123": "watermelon", "124": "whale", "125": "zebra"
}
# Create a dictionary mapping each label to its corresponding probability (rounded)
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=clipart_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Clipart-126-DomainNet Classification",
description="Upload a clipart image to classify it into one of 126 domain categories."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
---
# **Intended Use:**
The **Clipart-126-DomainNet** model is designed for clipart image classification. It categorizes clipart images into a wide range of domains—from objects like an "aircraft_carrier" or "alarm_clock" to various everyday items. Potential use cases include:
- **Digital Art and Design:** Assisting designers in organizing and retrieving clipart assets.
- **Content Management:** Enhancing digital asset management systems with robust clipart classification.
- **Creative Search Engines:** Enabling clipart-based search for design inspiration and resource curation.
- **Computer Vision Research:** Serving as a benchmark for studies in clipart recognition and domain adaptation.
|
[
"aircraft_carrier",
"alarm_clock",
"ant",
"anvil",
"asparagus",
"axe",
"banana",
"basket",
"bathtub",
"bear",
"bee",
"bird",
"blackberry",
"blueberry",
"bottlecap",
"broccoli",
"bus",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"camera",
"candle",
"cannon",
"canoe",
"carrot",
"castle",
"cat",
"ceiling_fan",
"cell_phone",
"cello",
"chair",
"chandelier",
"coffee_cup",
"compass",
"computer",
"cow",
"crab",
"crocodile",
"cruise_ship",
"dog",
"dolphin",
"dragon",
"drums",
"duck",
"dumbbell",
"elephant",
"eyeglasses",
"feather",
"fence",
"fish",
"flamingo",
"flower",
"foot",
"fork",
"frog",
"giraffe",
"goatee",
"grapes",
"guitar",
"hammer",
"helicopter",
"helmet",
"horse",
"kangaroo",
"lantern",
"laptop",
"leaf",
"lion",
"lipstick",
"lobster",
"microphone",
"monkey",
"mosquito",
"mouse",
"mug",
"mushroom",
"onion",
"panda",
"peanut",
"pear",
"peas",
"pencil",
"penguin",
"pig",
"pillow",
"pineapple",
"potato",
"power_outlet",
"purse",
"rabbit",
"raccoon",
"rhinoceros",
"rifle",
"saxophone",
"screwdriver",
"sea_turtle",
"see_saw",
"sheep",
"shoe",
"skateboard",
"snake",
"speedboat",
"spider",
"squirrel",
"strawberry",
"streetlight",
"string_bean",
"submarine",
"swan",
"table",
"teapot",
"teddy-bear",
"television",
"the_eiffel_tower",
"the_great_wall_of_china",
"tiger",
"toe",
"train",
"truck",
"umbrella",
"vase",
"watermelon",
"whale",
"zebra"
] |
tschosbert/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2455
- Accuracy: 0.5116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mapped to a `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
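As referenced above, these values map onto Hugging Face `TrainingArguments` roughly as in this minimal sketch; the `output_dir` and the surrounding training script are assumptions, not part of this card:
```python
from transformers import TrainingArguments

# Minimal sketch mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # effective train batch size: 32 * 4 = 128
    seed=42,
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```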
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 3 | 4.2463 | 0.5116 |
| No log | 2.0 | 6 | 3.5920 | 0.5116 |
| No log | 3.0 | 9 | 3.2455 | 0.5116 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cpu
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"accordion_roller_conveyor",
"accumulating_pallet_stopping",
"accumulation_conveyor_chain",
"acrylic_oil_cylinder_glass",
"actima_clamping_device",
"actima_clamping_system",
"active_bridge_connection_kit",
"active_charging_station",
"active_shuttle_charger",
"actuation_indexer",
"actuation_indexing_plunger",
"actuation_status_plunger",
"actuator_connector_clamp",
"adapter_plate",
"adapter_set_for_various",
"adapter_slot",
"adapter_slot_connector",
"adhesive_levelling_wedge",
"adhesive_remover",
"adhesive_steel_magnets",
"adjustable_angle",
"adjustable_angle_joint",
"adjustable_axial_joint",
"adjustable_ball_joint",
"adjustable_ball_joint_clamp",
"adjustable_ball_nut",
"adjustable_ball_pads",
"adjustable_cam_clamp",
"adjustable_cam_lever",
"adjustable_cam_levers",
"adjustable_centre_holder",
"adjustable_chain_tensioner",
"adjustable_clamp_element",
"adjustable_clamp_strap",
"adjustable_clamp_straps",
"adjustable_clamping_joint",
"adjustable_clamping_latch",
"adjustable_clamping_lever",
"adjustable_clamping_nut",
"adjustable_clamping_pin",
"adjustable_clamping_screw",
"adjustable_compression_latch",
"adjustable_control_cover",
"adjustable_control_knob",
"adjustable_conveyor_legs",
"adjustable_detectable_lever",
"adjustable_die-cast_zinc_hinge",
"adjustable_directional_knob",
"adjustable_down_thrust_clamp",
"adjustable_drip_oil_feeder",
"adjustable_elevating_castor",
"adjustable_feet_plates",
"adjustable_form_a_clamp",
"adjustable_form_c_gripper",
"adjustable_friction_hinge",
"adjustable_friction_hinges",
"adjustable_gauge_arm",
"adjustable_grip_clamp",
"adjustable_gripper",
"adjustable_grippers",
"adjustable_hand_lever",
"adjustable_hand_levers",
"adjustable_height_cylinders",
"adjustable_hinge",
"adjustable_hinge_lock",
"adjustable_hinge_mechanism",
"adjustable_hinge_mould",
"adjustable_hook_clamp",
"adjustable_industrial_latch",
"adjustable_industrial_stop",
"adjustable_latch",
"adjustable_lateral_holder",
"adjustable_leveling_feet",
"adjustable_leveling_foot",
"adjustable_lifting_column",
"adjustable_linear_bushing",
"adjustable_locking_system",
"adjustable_logic_cover",
"adjustable_mounting_clamp",
"adjustable_mounting_clamps",
"adjustable_plastic_hinge",
"adjustable_position_switch_actuator",
"adjustable_power_clamp",
"adjustable_profile_connector",
"adjustable_ratchet_connectors",
"adjustable_ratchet_element",
"adjustable_relay_housing",
"adjustable_rest_pads",
"adjustable_revolving_handle",
"adjustable_roller_carriage",
"adjustable_rotation_bearing",
"adjustable_screw_block",
"adjustable_screw_blocks",
"adjustable_screw_stops",
"adjustable_seating_block",
"adjustable_shaft_collar",
"adjustable_shock_absorber",
"adjustable_side_clamp",
"adjustable_side_clamps",
"adjustable_single_nut",
"adjustable_slide_connectors",
"adjustable_slide_unit",
"adjustable_sliding_hub",
"adjustable_slotted_hinge",
"adjustable_snap_clamps",
"adjustable_spirit_level",
"adjustable_spirit_level_mount",
"adjustable_spirit_levels",
"adjustable_spring_plunger",
"adjustable_stainless_hinge",
"adjustable_stainless_steel_hinge",
"adjustable_stainless_steel_hinges",
"adjustable_stainless_steel_lever",
"adjustable_stainless_steel_tension_lever",
"adjustable_standard_knob",
"adjustable_star_knob",
"adjustable_steel_angle",
"adjustable_steel_clamping_pads",
"adjustable_steel_clamps",
"adjustable_stop_assembly",
"adjustable_stop_gate",
"adjustable_suspension_bracket",
"adjustable_swing_clamp",
"adjustable_swing_latch",
"adjustable_swivel_clamp",
"adjustable_swivel_clamp_kit",
"adjustable_swivel_foot",
"adjustable_swivel_joint"
] |
shraddha117/my-awesome-model
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"benign",
"malignant"
] |
soliv11/ocularAllergen
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3"
] |
diegojuse/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1970
- Accuracy: 0.9418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3871 | 1.0 | 370 | 0.3112 | 0.9269 |
| 0.2117 | 2.0 | 740 | 0.2410 | 0.9323 |
| 0.1636 | 3.0 | 1110 | 0.2264 | 0.9296 |
| 0.1428 | 4.0 | 1480 | 0.2164 | 0.9337 |
| 0.1274 | 5.0 | 1850 | 0.2148 | 0.9337 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
FatimaK6/breast_cancer_convnext_large
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"not reached",
"reached"
] |
kaisest1/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1868
- Accuracy: 0.9378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.371 | 1.0 | 370 | 0.3024 | 0.9296 |
| 0.2148 | 2.0 | 740 | 0.2295 | 0.9391 |
| 0.1603 | 3.0 | 1110 | 0.2134 | 0.9432 |
| 0.1395 | 4.0 | 1480 | 0.2047 | 0.9391 |
| 0.129 | 5.0 | 1850 | 0.2039 | 0.9432 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-shot classification results (Accuracy, Precision, and Recall) on the Oxford-Pet dataset
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
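These aggregate metrics can be reproduced with scikit-learn. A minimal sketch follows, assuming weighted averaging for precision/recall; the `y_true`/`y_pred` arrays are placeholders, not outputs of the actual run:
```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Placeholder label ids; in practice these come from running the
# zero-shot model over the Oxford-Pet test images.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="weighted"))
print("Recall:   ", recall_score(y_true, y_pred, average="weighted"))
```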
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
Dhruvt7707/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6330
- Accuracy: 0.836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.6895 | 1.0 | 704 | 0.8515 | 0.7828 |
| 1.3324 | 2.0 | 1408 | 0.6900 | 0.8208 |
| 1.2539 | 2.9968 | 2109 | 0.6330 | 0.836 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"n01443537",
"n01629819",
"n01641577",
"n01644900",
"n01698640",
"n01742172",
"n01768244",
"n01770393",
"n01774384",
"n01774750",
"n01784675",
"n01882714",
"n01910747",
"n01917289",
"n01944390",
"n01950731",
"n01983481",
"n01984695",
"n02002724",
"n02056570",
"n02058221",
"n02074367",
"n02094433",
"n02099601",
"n02099712",
"n02106662",
"n02113799",
"n02123045",
"n02123394",
"n02124075",
"n02125311",
"n02129165",
"n02132136",
"n02165456",
"n02226429",
"n02231487",
"n02233338",
"n02236044",
"n02268443",
"n02279972",
"n02281406",
"n02321529",
"n02364673",
"n02395406",
"n02403003",
"n02410509",
"n02415577",
"n02423022",
"n02437312",
"n02480495",
"n02481823",
"n02486410",
"n02504458",
"n02509815",
"n02666347",
"n02669723",
"n02699494",
"n02769748",
"n02788148",
"n02791270",
"n02793495",
"n02795169",
"n02802426",
"n02808440",
"n02814533",
"n02814860",
"n02815834",
"n02823428",
"n02837789",
"n02841315",
"n02843684",
"n02883205",
"n02892201",
"n02909870",
"n02917067",
"n02927161",
"n02948072",
"n02950826",
"n02963159",
"n02977058",
"n02988304",
"n03014705",
"n03026506",
"n03042490",
"n03085013",
"n03089624",
"n03100240",
"n03126707",
"n03160309",
"n03179701",
"n03201208",
"n03255030",
"n03355925",
"n03373237",
"n03388043",
"n03393912",
"n03400231",
"n03404251",
"n03424325",
"n03444034",
"n03447447",
"n03544143",
"n03584254",
"n03599486",
"n03617480",
"n03637318",
"n03649909",
"n03662601",
"n03670208",
"n03706229",
"n03733131",
"n03763968",
"n03770439",
"n03796401",
"n03814639",
"n03837869",
"n03838899",
"n03854065",
"n03891332",
"n03902125",
"n03930313",
"n03937543",
"n03970156",
"n03977966",
"n03980874",
"n03983396",
"n03992509",
"n04008634",
"n04023962",
"n04070727",
"n04074963",
"n04099969",
"n04118538",
"n04133789",
"n04146614",
"n04149813",
"n04179913",
"n04251144",
"n04254777",
"n04259630",
"n04265275",
"n04275548",
"n04285008",
"n04311004",
"n04328186",
"n04356056",
"n04366367",
"n04371430",
"n04376876",
"n04398044",
"n04399382",
"n04417672",
"n04456115",
"n04465666",
"n04486054",
"n04487081",
"n04501370",
"n04507155",
"n04532106",
"n04532670",
"n04540053",
"n04560804",
"n04562935",
"n04596742",
"n04598010",
"n06596364",
"n07056680",
"n07583066",
"n07614500",
"n07615774",
"n07646821",
"n07647870",
"n07657664",
"n07695742",
"n07711569",
"n07715103",
"n07720875",
"n07749582",
"n07753592",
"n07768694",
"n07871810",
"n07873807",
"n07875152",
"n07920052",
"n07975909",
"n08496334",
"n08620881",
"n08742578",
"n09193705",
"n09246464",
"n09256479",
"n09332890",
"n09428293",
"n12267677",
"n12520864",
"n13001041",
"n13652335",
"n13652994",
"n13719102",
"n14991210"
] |
brothersen/food-classifier
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# food-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6384
- Accuracy: 0.892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5596 | 1.0 | 63 | 2.4049 | 0.837 |
| 1.871 | 2.0 | 126 | 1.7607 | 0.895 |
| 1.6474 | 2.96 | 186 | 1.6384 | 0.892 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cpu
- Datasets 2.16.1
- Tokenizers 0.21.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
mariamoracrossitcr/vit-base-beans-demo-v25marzo
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v25marzo
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0275
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0301 | 1.5385 | 100 | 0.0442 | 0.9850 |
| 0.0084 | 3.0769 | 200 | 0.0275 | 0.9925 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 2.17.0
- Tokenizers 0.21.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
prithivMLmods/Deepfake-vs-Real-8000
|

# **Deepfake-vs-Real-8000**
> **Deepfake-vs-Real-8000** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It is designed to detect whether an image is a deepfake or a real one using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
Deepfake 0.9990 0.9972 0.9981 4000
Real one 0.9973 0.9990 0.9981 4000
accuracy 0.9981 8000
macro avg 0.9981 0.9981 0.9981 8000
weighted avg 0.9981 0.9981 0.9981 8000
```
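A report in this format can be generated with scikit-learn's `classification_report`; the sketch below uses placeholder prediction arrays rather than the card's actual evaluation outputs:
```python
from sklearn.metrics import classification_report

# Placeholder labels; the real report above was computed over 8000 images.
y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]

print(classification_report(
    y_true, y_pred,
    target_names=["Deepfake", "Real one"],
    digits=4,
))
```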

The model categorizes images into two classes:
- **Class 0:** "Deepfake"
- **Class 1:** "Real one"
---
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Deepfake-vs-Real-8000"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def deepfake_classification(image):
"""Predicts whether an image is a Deepfake or Real."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "Deepfake", "1": "Real one"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=deepfake_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Deepfake vs. Real Image Classification",
description="Upload an image to determine if it's a Deepfake or a Real one."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
---
# **Intended Use:**
The **Deepfake-vs-Real-8000** model is designed to detect deepfake images from real ones. Potential use cases include:
- **Deepfake Detection:** Assisting cybersecurity experts and forensic teams in detecting synthetic media.
- **Media Verification:** Helping journalists and fact-checkers verify the authenticity of images.
- **AI Ethics & Research:** Contributing to studies on AI-generated content detection.
- **Social Media Moderation:** Enhancing tools to prevent misinformation and digital deception.
|
[
"deepfake",
"real one"
] |
prithivMLmods/AI-vs-Deepfake-vs-Real-9999
|

# **AI-vs-Deepfake-vs-Real-9999**
> **AI-vs-Deepfake-vs-Real-9999** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It is designed to detect whether an image is AI-generated, a deepfake, or a real one using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
Artificial 0.9994 0.9979 0.9986 3333
Deepfake 0.9979 0.9994 0.9987 3333
Real one 0.9994 0.9994 0.9994 3333
accuracy 0.9989 9999
macro avg 0.9989 0.9989 0.9989 9999
weighted avg 0.9989 0.9989 0.9989 9999
```

The model categorizes images into three classes:
- **Class 0:** "Artificial"
- **Class 1:** "Deepfake"
- **Class 2:** "Real one"
---
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/AI-vs-Deepfake-vs-Real-9999"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def classify_image(image):
"""Predicts whether an image is Artificial, Deepfake, or Real."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "Artificial", "1": "Deepfake", "2": "Real one"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=classify_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="AI vs. Deepfake vs. Real Image Classification",
description="Upload an image to determine if it's AI-generated, a Deepfake, or a Real one."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
---
# **Intended Use:**
The **AI-vs-Deepfake-vs-Real-9999** model is designed to classify images into three categories: AI-generated, deepfake, or real. Potential use cases include:
- **AI Content Detection:** Identifying AI-generated images from real ones.
- **Deepfake Detection:** Assisting cybersecurity experts and forensic teams in detecting synthetic media.
- **Media Verification:** Helping journalists and fact-checkers verify the authenticity of images.
- **AI Ethics & Research:** Contributing to studies on AI-generated content detection.
- **Social Media Moderation:** Enhancing tools to prevent misinformation and digital deception.
|
[
"artificial",
"deepfake",
"real one"
] |
svdb01/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0473
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2151 | 1.0 | 190 | 0.0894 | 0.9726 |
| 0.1873 | 2.0 | 380 | 0.0569 | 0.9822 |
| 0.1161 | 3.0 | 570 | 0.0473 | 0.9867 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
towa-kato/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0757
- Accuracy: 0.9728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4642 | 1.0 | 352 | 0.1344 | 0.9532 |
| 0.367 | 2.0 | 704 | 0.0884 | 0.9688 |
| 0.3387 | 2.9922 | 1053 | 0.0757 | 0.9728 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
towa-kato/convnext-tiny-224-finetuned-eurosat-albumentations
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2066
- Accuracy: 0.9342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4905 | 1.0 | 352 | 0.3714 | 0.8934 |
| 0.3144 | 2.0 | 704 | 0.2355 | 0.9294 |
| 0.2465 | 2.9922 | 1053 | 0.2066 | 0.9342 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
towa-kato/swin-tiny-patch4-window7-224-finetuned-eurosat-kornia
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat-kornia
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0677
- Accuracy: 0.9776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1188 | 1.0 | 352 | 0.1105 | 0.9614 |
| 0.0698 | 2.0 | 704 | 0.0780 | 0.9738 |
| 0.016 | 2.9922 | 1053 | 0.0677 | 0.9776 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
Jared1125/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0868
- Accuracy: 0.9684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4506 | 1.0 | 352 | 0.1273 | 0.958 |
| 0.3144 | 2.0 | 704 | 0.0971 | 0.9658 |
| 0.3258 | 2.9922 | 1053 | 0.0868 | 0.9684 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 1.18.3
- Tokenizers 0.21.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
Schram03/vit-base-oxford-iiit-pets
|
# oxford-pets-zero-shot
This zero-shot baseline achieves the following results:
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
# vit-base-oxford-iiit-pets (Transfer learning)
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1814
- Accuracy: 0.9499
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3645 | 1.0 | 370 | 0.3259 | 0.9296 |
| 0.2118 | 2.0 | 740 | 0.2660 | 0.9350 |
| 0.1643 | 3.0 | 1110 | 0.2436 | 0.9323 |
| 0.1482 | 4.0 | 1480 | 0.2364 | 0.9364 |
| 0.1412 | 5.0 | 1850 | 0.2357 | 0.9350 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
weileluc/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2027
- Accuracy: 0.9432
## Model description
This model is a fine-tuned version of Google's ViT base model (`vit-base-patch16-224`) adapted for image classification on the Oxford-IIIT Pet dataset.
It distinguishes between 37 cat and dog breeds using transfer learning and achieves strong performance with minimal training effort.
The model was trained using the Hugging Face `Trainer` API and can be used for pet image classification tasks or as a base for further fine-tuning.
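For quick inference, a minimal sketch using the `pipeline` API (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="weileluc/vit-base-oxford-iiit-pets")
print(classifier("my_pet.jpg"))  # placeholder path; returns the top breed predictions
```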
## Intended uses & limitations
**Intended uses:**
- Classifying pet images (dogs and cats) by breed
- Educational purposes for transfer learning and computer vision
- Comparisons with zero-shot models such as CLIP
**Limitations:**
- The model is only trained on 37 specific breeds from the Oxford-IIIT Pet dataset
- May perform poorly on images outside this dataset (e.g. unusual angles, bad lighting, non-pets)
## Training and evaluation data
The model was fine-tuned on the [Oxford-IIIT Pet dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/), which contains 7,349 images of 37 different dog and cat breeds.
The dataset was split into:
- 80% for training
- 10% for validation
- 10% for testing
The evaluation results are reported on the validation set.
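One way to produce such a split with 🤗 Datasets is sketched below; the exact splitting code and seed used for this card are assumptions:
```python
from datasets import load_dataset

# Assumed approach: carve 80/10/10 train/validation/test out of the
# dataset's single "train" split (seed chosen arbitrarily here).
ds = load_dataset("pcuenq/oxford-pets", split="train")
tmp = ds.train_test_split(test_size=0.2, seed=42)
holdout = tmp["test"].train_test_split(test_size=0.5, seed=42)
train_ds, val_ds, test_ds = tmp["train"], holdout["train"], holdout["test"]
```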
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3729 | 1.0 | 370 | 0.3053 | 0.9175 |
| 0.2022 | 2.0 | 740 | 0.2266 | 0.9323 |
| 0.1653 | 3.0 | 1110 | 0.2137 | 0.9350 |
| 0.1555 | 4.0 | 1480 | 0.2052 | 0.9391 |
| 0.1224 | 5.0 | 1850 | 0.2024 | 0.9405 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cpu
- Datasets 3.5.0
- Tokenizers 0.21.1
## 🔍 Zero-Shot Classification Comparison
This model (`vit-base-oxford-iiit-pets`) was compared to a zero-shot model using **CLIP**
([openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)) on the **Oxford-IIIT Pet** dataset.
**Zero-Shot CLIP Evaluation Results:**
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
The fine-tuned ViT model achieved:
- Accuracy: 94.32 %
This shows that transfer learning using ViT outperforms CLIP on this dataset.
## 🧪 Live Demo
👉 Try it live: [Gradio App on Hugging Face Spaces](https://huggingface.co/spaces/weileluc/pet-classifier)
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
kitty365/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2038
- Accuracy: 0.9445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.373 | 1.0 | 370 | 0.2732 | 0.9337 |
| 0.2127 | 2.0 | 740 | 0.2148 | 0.9405 |
| 0.1801 | 3.0 | 1110 | 0.1918 | 0.9445 |
| 0.1448 | 4.0 | 1480 | 0.1857 | 0.9472 |
| 0.1308 | 5.0 | 1850 | 0.1814 | 0.9445 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-Shot Evaluation
- Model used: `openai/clip-vit-large-patch14`
- Dataset: Oxford-IIIT-Pets (`pcuenq/oxford-pets`)
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
The zero-shot evaluation was done using Hugging Face Transformers and the CLIP model on the Oxford-Pet dataset.
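For completeness, the fine-tuned checkpoint can be queried through the standard image-classification pipeline. A minimal sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint from the Hub
classifier = pipeline("image-classification", model="kitty365/vit-base-oxford-iiit-pets")

# Returns the top predicted breeds with confidence scores
print(classifier("pet.jpg"))
```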
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
thini77/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2023
- Accuracy: 0.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3878 | 1.0 | 370 | 0.2921 | 0.9215 |
| 0.2188 | 2.0 | 740 | 0.2260 | 0.9269 |
| 0.1832 | 3.0 | 1110 | 0.2136 | 0.9283 |
| 0.14 | 4.0 | 1480 | 0.2050 | 0.9323 |
| 0.1322 | 5.0 | 1850 | 0.2030 | 0.9323 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-Shot Evaluation
- Model used: `openai/clip-vit-large-patch14`
- Dataset: Oxford-IIIT-Pets (`pcuenq/oxford-pets`)
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
The zero-shot evaluation was done using Hugging Face Transformers and the CLIP model on the Oxford-Pets dataset.
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
kleemyan/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1814
- Accuracy: 0.9418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.381 | 1.0 | 370 | 0.3033 | 0.9323 |
| 0.2102 | 2.0 | 740 | 0.2452 | 0.9310 |
| 0.1771 | 3.0 | 1110 | 0.2192 | 0.9364 |
| 0.14 | 4.0 | 1480 | 0.2116 | 0.9364 |
| 0.1369 | 5.0 | 1850 | 0.2113 | 0.9391 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-Shot Evaluation
- Model used: `openai/clip-vit-large-patch14`
- Dataset: Oxford-IIIT-Pets (`pcuenq/oxford-pets`)
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
The zero-shot evaluation was done using Hugging Face Transformers and the CLIP model on the Oxford-Pets dataset.
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
n1kooo/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1782
- Accuracy: 0.9513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3652 | 1.0 | 370 | 0.2811 | 0.9269 |
| 0.2181 | 2.0 | 740 | 0.2083 | 0.9378 |
| 0.1688 | 3.0 | 1110 | 0.1952 | 0.9364 |
| 0.1353 | 4.0 | 1480 | 0.1847 | 0.9405 |
| 0.1506 | 5.0 | 1850 | 0.1849 | 0.9350 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-Shot Evaluation
- Model used: `openai/clip-vit-large-patch14`
- Dataset: Oxford-IIIT-Pets (`pcuenq/oxford-pets`)
- Accuracy: 0.8785
- Precision: 0.8761
- Recall: 0.8785
The zero-shot evaluation was done using Hugging Face Transformers and the CLIP model on the Oxford-Pet dataset.
|
[
"19",
"29",
"0",
"11",
"1",
"86",
"90",
"28",
"23",
"31",
"39",
"96",
"82",
"17",
"71",
"8",
"97",
"80",
"74",
"59",
"70",
"87",
"84",
"64",
"52",
"42",
"47",
"65",
"21",
"22",
"81",
"24",
"78",
"45",
"49",
"56",
"76",
"89",
"73",
"14",
"9",
"6",
"20",
"98",
"36",
"55",
"72",
"43",
"51",
"35",
"83",
"33",
"27",
"53",
"92",
"50",
"15",
"18",
"46",
"75",
"38",
"66",
"77",
"69",
"95",
"99",
"93",
"4",
"61",
"94",
"68",
"34",
"32",
"88",
"67",
"30",
"62",
"63",
"40",
"26",
"48",
"79",
"85",
"54",
"44",
"7",
"12",
"2",
"41",
"37",
"13",
"25",
"10",
"57",
"5",
"60",
"91",
"3",
"58",
"16"
] |
prithivMLmods/Rice-Leaf-Disease
|

# **Rice-Leaf-Disease** 🌾
> **Rice-Leaf-Disease** is an image classification model fine-tuned from **google/siglip2-base-patch16-224** for detecting and categorizing diseases in rice leaves. It is built using the **SiglipForImageClassification** architecture and helps in early identification of plant diseases for better crop management.
>
```py
Classification Report:
precision recall f1-score support
Bacterialblight 0.8853 0.9596 0.9210 1585
Blast 0.9271 0.8472 0.8853 1440
Brownspot 0.9746 0.9369 0.9554 1600
Healthy 1.0000 1.0000 1.0000 1488
Tungro 0.9589 0.9977 0.9779 1308
accuracy 0.9477 7421
macro avg 0.9492 0.9483 0.9479 7421
weighted avg 0.9486 0.9477 0.9474 7421
```

### **Disease Categories:**
- **Class 0:** Bacterial Blight
- **Class 1:** Blast
- **Class 2:** Brown Spot
- **Class 3:** Healthy
- **Class 4:** Tungro
---
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Rice-Leaf-Disease"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def classify_leaf_disease(image):
"""Predicts the disease type in a rice leaf image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "Bacterial Blight",
"1": "Blast",
"2": "Brown Spot",
"3": "Healthy",
"4": "Tungro"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=classify_leaf_disease,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Rice Leaf Disease Classification 🌾",
description="Upload an image of a rice leaf to identify if it is healthy or affected by diseases like Bacterial Blight, Blast, Brown Spot, or Tungro."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
---
# **Intended Use:**
The **Rice-Leaf-Disease** model helps in detecting and classifying rice leaf diseases early, supporting:
✅ **Farmers & Agriculturists:** Quick disease detection for better crop management.
✅ **Agricultural Research:** Monitoring and analyzing plant disease patterns.
✅ **AI & Machine Learning Projects:** Applying AI to real-world agricultural challenges.
|
[
"bacterialblight",
"blast",
"brownspot",
"healthy",
"tungro"
] |
prithivMLmods/Age-Classification-SigLIP2
|

# **Age-Classification-SigLIP2**
> **Age-Classification-SigLIP2** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to predict the age group of a person from an image using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
Child 0-12 0.9744 0.9562 0.9652 2193
Teenager 13-20 0.8675 0.7032 0.7768 1779
Adult 21-44 0.9053 0.9769 0.9397 9999
Middle Age 45-64 0.9059 0.8317 0.8672 3785
Aged 65+ 0.9144 0.8397 0.8755 1260
accuracy 0.9109 19016
macro avg 0.9135 0.8615 0.8849 19016
weighted avg 0.9105 0.9109 0.9087 19016
```

The model categorizes images into five age groups:
- **Class 0:** "Child 0-12"
- **Class 1:** "Teenager 13-20"
- **Class 2:** "Adult 21-44"
- **Class 3:** "Middle Age 45-64"
- **Class 4:** "Aged 65+"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Age-Classification-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def age_classification(image):
"""Predicts the age group of a person from an image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "Child 0-12",
"1": "Teenager 13-20",
"2": "Adult 21-44",
"3": "Middle Age 45-64",
"4": "Aged 65+"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=age_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Age Group Classification",
description="Upload an image to predict the person's age group."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
# **Sample Inference:**


# **Intended Use:**
The **Age-Classification-SigLIP2** model is designed to classify images into five age categories. Potential use cases include:
- **Demographic Analysis:** Helping businesses and researchers analyze age distribution.
- **Health & Fitness Applications:** Assisting in age-based health recommendations.
- **Security & Access Control:** Implementing age verification in digital systems.
- **Retail & Marketing:** Enhancing personalized customer experiences.
- **Forensics & Surveillance:** Aiding in age estimation for security purposes.
|
[
"child 0-12",
"teenager 13-20",
"adult 21-44",
"middle age 45-64",
"aged 65+"
] |
prithivMLmods/Indian-Western-Food-34
|

# **Indian-Western-Food-34**
> **Indian-Western-Food-34** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify food images into various Indian and Western dishes using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
Baked Potato 0.9912 0.9780 0.9846 1500
Crispy Chicken 0.9811 0.9707 0.9759 1500
Donut 0.9893 0.9893 0.9893 1500
Fries 0.9742 0.9827 0.9784 1500
Hot Dog 0.9830 0.9735 0.9783 1548
Sandwich 0.9898 0.9673 0.9784 1500
Taco 0.9327 0.9427 0.9377 1500
Taquito 0.9624 0.9387 0.9504 1500
Apple Pie 0.9666 0.9540 0.9602 1000
Burger 0.9114 0.9940 0.9509 331
Butter Naan 0.9691 0.9186 0.9431 307
Chai 0.9801 1.0000 0.9899 344
Chapati 0.9188 0.9694 0.9435 327
Cheesecake 0.9573 0.9640 0.9606 1000
Chicken Curry 0.9610 0.9850 0.9728 1000
Chole Bhature 0.9841 0.9867 0.9854 376
Dal Makhani 0.9698 0.9797 0.9747 295
Dhokla 0.9959 0.9959 0.9959 245
Fried Rice 0.9485 1.0000 0.9736 350
Ice Cream 0.9569 0.9770 0.9668 1000
Idli 0.9934 1.0000 0.9967 302
Jalebi 0.9931 1.0000 0.9965 288
Kaathi Rolls 0.9640 0.9606 0.9623 279
Kadai Paneer 0.9848 0.9731 0.9789 334
Kulfi 0.9810 0.9673 0.9741 214
Masala Dosa 0.9890 0.9890 0.9890 273
Momos 0.9908 0.9969 0.9938 323
Omelette 0.9829 0.9790 0.9810 1000
Paani Puri 0.9281 0.9861 0.9562 144
Pakode 0.9738 0.9665 0.9701 269
Pav Bhaji 0.9901 0.9803 0.9852 305
Pizza 0.9647 0.9927 0.9785 275
Samosa 0.9878 0.9959 0.9918 244
Sushi 0.9969 0.9800 0.9884 1000
accuracy 0.9729 23873
macro avg 0.9719 0.9775 0.9745 23873
weighted avg 0.9731 0.9729 0.9729 23873
```
---
The model categorizes images into 34 food classes:
### **Western Foods**
- **Class 0:** "Baked Potato"
- **Class 1:** "Crispy Chicken"
- **Class 2:** "Donut"
- **Class 3:** "Fries"
- **Class 4:** "Hot Dog"
- **Class 5:** "Sandwich"
- **Class 6:** "Taco"
- **Class 7:** "Taquito"
- **Class 8:** "Apple Pie"
- **Class 9:** "Burger"
- **Class 13:** "Cheesecake"
- **Class 18:** "Fried Rice"
- **Class 19:** "Ice Cream"
- **Class 27:** "Omelette"
- **Class 31:** "Pizza"
- **Class 33:** "Sushi"
### **Indian Foods**
- **Class 10:** "Butter Naan"
- **Class 11:** "Chai"
- **Class 12:** "Chapati"
- **Class 14:** "Chicken Curry"
- **Class 15:** "Chole Bhature"
- **Class 16:** "Dal Makhani"
- **Class 17:** "Dhokla"
- **Class 20:** "Idli"
- **Class 21:** "Jalebi"
- **Class 22:** "Kaathi Rolls"
- **Class 23:** "Kadai Paneer"
- **Class 24:** "Kulfi"
- **Class 25:** "Masala Dosa"
- **Class 26:** "Momos"
- **Class 28:** "Paani Puri"
- **Class 29:** "Pakode"
- **Class 30:** "Pav Bhaji"
- **Class 32:** "Samosa"
---
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Indian-Western-Food-34"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def food_classification(image):
"""Predicts the type of food in an image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "Baked Potato", "1": "Crispy Chicken", "2": "Donut", "3": "Fries",
"4": "Hot Dog", "5": "Sandwich", "6": "Taco", "7": "Taquito", "8": "Apple Pie",
"9": "Burger", "10": "Butter Naan", "11": "Chai", "12": "Chapati", "13": "Cheesecake",
"14": "Chicken Curry", "15": "Chole Bhature", "16": "Dal Makhani", "17": "Dhokla",
"18": "Fried Rice", "19": "Ice Cream", "20": "Idli", "21": "Jalebi", "22": "Kaathi Rolls",
"23": "Kadai Paneer", "24": "Kulfi", "25": "Masala Dosa", "26": "Momos", "27": "Omelette",
"28": "Paani Puri", "29": "Pakode", "30": "Pav Bhaji", "31": "Pizza", "32": "Samosa",
"33": "Sushi"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=food_classification,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="Indian & Western Food Classification",
description="Upload a food image to classify it into one of the 34 food types."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
---
# **Intended Use:**
The **Indian-Western-Food-34** model is designed to classify food images into Indian and Western dishes. Potential use cases include:
- **Restaurant & Food Delivery Apps:** Enhancing food recognition for better menu recommendations.
- **Health & Nutrition Apps:** Tracking calorie intake and diet preferences.
- **Food Blogging & Social Media:** Auto-tagging food items in posts.
- **Educational Purposes:** Teaching AI-based food classification.
|
[
"baked potato",
"crispy chicken",
"donut",
"fries",
"hot dog",
"sandwich",
"taco",
"taquito",
"apple pie",
"burger",
"butter naan",
"chai",
"chapati",
"cheesecake",
"chicken curry",
"chole bhature",
"dal makhani",
"dhokla",
"fried rice",
"ice cream",
"idli",
"jalebi",
"kaathi rolls",
"kadai paneer",
"kulfi",
"masala dosa",
"momos",
"omelette",
"paani puri",
"pakode",
"pav bhaji",
"pizza",
"samosa",
"sushi"
] |
startanalytics/autotrain-melanoma-vit-v1
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.4700036942958832
- f1_macro: 0.8440663933220411
- f1_micro: 0.9007633587786259
- f1_weighted: 0.8993592040241694
- precision_macro: 0.8985460933094256
- precision_micro: 0.9007633587786259
- precision_weighted: 0.9004997984022535
- recall_macro: 0.8042197414881518
- recall_micro: 0.9007633587786259
- recall_weighted: 0.9007633587786259
- accuracy: 0.9007633587786259
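The macro/micro/weighted variants above follow the standard scikit-learn averaging conventions. A minimal sketch of how such metrics are computed from validation predictions (`y_true` and `y_pred` are placeholders, not the actual evaluation data):
```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder integer labels; in practice these come from the validation split
y_true = [0, 3, 3, 5, 9]
y_pred = [0, 3, 5, 5, 9]

print(accuracy_score(y_true, y_pred))                       # accuracy
print(f1_score(y_true, y_pred, average="macro"))            # f1_macro
print(precision_score(y_true, y_pred, average="weighted"))  # precision_weighted
print(recall_score(y_true, y_pred, average="micro"))        # recall_micro
```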
|
[
"actinic keratosis",
"basal cell carcinoma",
"dermatofibroma",
"melanoma",
"melanoma metastasis",
"nevus",
"seborrheic keratosis",
"solar lentigo",
"squamous cell carcinoma",
"vascular lesion"
] |
prithivMLmods/Mnist-Digits-SigLIP2
|


# **Mnist-Digits-SigLIP2**
> **Mnist-Digits-SigLIP2** is an image classification model fine-tuned from **google/siglip2-base-patch16-224** to classify handwritten digits (0-9) using the **SiglipForImageClassification** architecture. It is trained on the MNIST dataset for accurate digit recognition.
```py
Classification Report:
precision recall f1-score support
0 0.9988 0.9959 0.9974 5923
1 0.9987 0.9918 0.9952 6742
2 0.9918 0.9943 0.9930 5958
3 0.9975 0.9938 0.9957 6131
4 0.9892 0.9882 0.9887 5842
5 0.9859 0.9937 0.9898 5421
6 0.9936 0.9939 0.9937 5918
7 0.9856 0.9943 0.9899 6265
8 0.9932 0.9921 0.9926 5851
9 0.9926 0.9897 0.9912 5949
accuracy 0.9928 60000
macro avg 0.9927 0.9928 0.9927 60000
weighted avg 0.9928 0.9928 0.9928 60000
```

### **Classes:**
- **Class 0:** "0"
- **Class 1:** "1"
- **Class 2:** "2"
- **Class 3:** "3"
- **Class 4:** "4"
- **Class 5:** "5"
- **Class 6:** "6"
- **Class 7:** "7"
- **Class 8:** "8"
- **Class 9:** "9"
---
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Mnist-Digits-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def classify_digit(image):
"""Predicts the digit in the given handwritten digit image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
labels = {
"0": "0", "1": "1", "2": "2", "3": "3", "4": "4",
"5": "5", "6": "6", "7": "7", "8": "8", "9": "9"
}
predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Create Gradio interface
iface = gr.Interface(
fn=classify_digit,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Prediction Scores"),
title="MNIST Digit Classification 🔢",
description="Upload a handwritten digit image (0-9) to recognize it using MNIST-Digits-SigLIP2."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
---
# **Sample Inference**




# **Intended Use:**
The **Mnist-Digits-SigLIP2** model is designed for handwritten digit recognition. Potential applications include:
- **Optical Character Recognition (OCR):** Digit recognition for various documents.
- **Banking & Finance:** Automated check processing.
- **Education & Learning:** AI-powered handwriting assessment.
- **Embedded Systems:** Handwriting input in smart devices.
|
[
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9"
] |
alealejandro1/ABC_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABC_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3009
- Accuracy: 0.845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.356 | 0.992 | 62 | 2.3009 | 0.845 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Snppuzzle/Lanna-model-convnextv2-base-22k-224
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aabnam",
"amnat",
"anong",
"anu",
"aphet",
"athik",
"banman",
"binbon",
"bochai",
"bokin",
"bolao",
"bopen",
"boron",
"buppe",
"chaehom",
"chaeyang",
"chaidi",
"chanan",
"changhan",
"chaofa",
"chaomom",
"chaomueang",
"chata",
"chatu",
"chaya",
"chiangdao",
"chiangmai",
"chingchang",
"chokdi",
"dangsaap",
"deklek",
"deumnai",
"doilo",
"doiluang",
"doitao",
"dokbua",
"eka",
"fanhan",
"hangdong",
"hangsat",
"hungtam",
"huwai",
"inta",
"iti",
"itom",
"jara",
"kadi",
"kamyao",
"kanmo",
"kapmo",
"kephet",
"kepphak",
"khaikai",
"khaipa",
"khamaen",
"khaoma",
"khata",
"kheumnguem",
"khomchai",
"khongbo",
"khongtua",
"khunyuam",
"khwaluat",
"khwamsuk",
"kinkhao",
"kinkhong",
"kinmuea",
"kinru",
"kluaibo",
"laemai",
"laichiao",
"lailong",
"lampang",
"lattho",
"loka",
"luathak",
"luatok",
"maechaem",
"maechai",
"maechan",
"maecharim",
"maelao",
"maelim",
"maemo",
"maephrik",
"maetaeng",
"maeth",
"maetha",
"maewang",
"maha",
"mahachai",
"mam",
"manpen",
"manu",
"mueangphan",
"mueangyong",
"nakrian",
"nambo",
"nanglong",
"nangsue",
"naokhong",
"nara",
"newin",
"nganban",
"nguenchae",
"nguenchat",
"omkoi",
"oprom",
"oram",
"osot",
"padaet",
"phaideuan",
"phaka",
"phakhawa",
"phayao",
"phoenwai",
"phuphiang",
"phusang",
"phuttha",
"phuttho",
"pikat",
"pikot",
"piso",
"puri",
"rakha",
"ratna",
"roisai",
"ruluem",
"saichai",
"saket",
"sana",
"sanam",
"sanya",
"sapha",
"sawa",
"sayong",
"siri",
"sitth",
"soekho",
"soekman",
"somkhuan",
"songkho",
"sukhato",
"sukka",
"taefai",
"taehai",
"tanam",
"taro",
"thairat",
"thamam",
"thawai",
"thewa",
"thuti",
"uru",
"wailang",
"wasa",
"wati",
"wihan",
"witcha",
"witwo",
"yapheng",
"yukloek"
] |
Snppuzzle/Lanna-model-efficientnet-b0
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aabnam",
"amnat",
"anong",
"anu",
"aphet",
"athik",
"banman",
"binbon",
"bochai",
"bokin",
"bolao",
"bopen",
"boron",
"buppe",
"chaehom",
"chaeyang",
"chaidi",
"chanan",
"changhan",
"chaofa",
"chaomom",
"chaomueang",
"chata",
"chatu",
"chaya",
"chiangdao",
"chiangmai",
"chingchang",
"chokdi",
"dangsaap",
"deklek",
"deumnai",
"doilo",
"doiluang",
"doitao",
"dokbua",
"eka",
"fanhan",
"hangdong",
"hangsat",
"hungtam",
"huwai",
"inta",
"iti",
"itom",
"jara",
"kadi",
"kamyao",
"kanmo",
"kapmo",
"kephet",
"kepphak",
"khaikai",
"khaipa",
"khamaen",
"khaoma",
"khata",
"kheumnguem",
"khomchai",
"khongbo",
"khongtua",
"khunyuam",
"khwaluat",
"khwamsuk",
"kinkhao",
"kinkhong",
"kinmuea",
"kinru",
"kluaibo",
"laemai",
"laichiao",
"lailong",
"lampang",
"lattho",
"loka",
"luathak",
"luatok",
"maechaem",
"maechai",
"maechan",
"maecharim",
"maelao",
"maelim",
"maemo",
"maephrik",
"maetaeng",
"maeth",
"maetha",
"maewang",
"maha",
"mahachai",
"mam",
"manpen",
"manu",
"mueangphan",
"mueangyong",
"nakrian",
"nambo",
"nanglong",
"nangsue",
"naokhong",
"nara",
"newin",
"nganban",
"nguenchae",
"nguenchat",
"omkoi",
"oprom",
"oram",
"osot",
"padaet",
"phaideuan",
"phaka",
"phakhawa",
"phayao",
"phoenwai",
"phuphiang",
"phusang",
"phuttha",
"phuttho",
"pikat",
"pikot",
"piso",
"puri",
"rakha",
"ratna",
"roisai",
"ruluem",
"saichai",
"saket",
"sana",
"sanam",
"sanya",
"sapha",
"sawa",
"sayong",
"siri",
"sitth",
"soekho",
"soekman",
"somkhuan",
"songkho",
"sukhato",
"sukka",
"taefai",
"taehai",
"tanam",
"taro",
"thairat",
"thamam",
"thawai",
"thewa",
"thuti",
"uru",
"wailang",
"wasa",
"wati",
"wihan",
"witcha",
"witwo",
"yapheng",
"yukloek"
] |
mizikfischer/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2527
- Accuracy: 0.9445
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 93 | 0.6027 | 0.8958 |
| 1.5772 | 2.0 | 186 | 0.3632 | 0.9161 |
| 0.3807 | 3.0 | 279 | 0.3124 | 0.9202 |
| 0.2645 | 4.0 | 372 | 0.2945 | 0.9242 |
| 0.2288 | 5.0 | 465 | 0.2890 | 0.9242 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
## Zero-Shot Classification Results (CLIP)
**Model used:** `openai/clip-vit-large-patch14`
**Oxford-Pet Dataset**
- **Accuracy:** 87.86 %
- **Precision:** 87.61 %
- **Recall:** 87.86 %
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
alxfgh/dinov2-base-finetuned-dermnet-lr3-5-0.05wd-csr
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base-finetuned-dermnet-lr3-5-0.05wd-csr
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8119
- Accuracy: 0.7958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20
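A minimal sketch of how the hyperparameters above translate into `TrainingArguments` (the output directory and any unlisted defaults are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dinov2-base-finetuned-dermnet",  # hypothetical name
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,               # effective batch size of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.05,
    num_train_epochs=20,
)
```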
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.7488 | 1.0 | 98 | 2.5158 | 0.3722 |
| 1.9402 | 2.0 | 196 | 1.7710 | 0.5170 |
| 1.4938 | 3.0 | 294 | 1.4939 | 0.5996 |
| 1.1226 | 4.0 | 392 | 1.3168 | 0.6256 |
| 0.9329 | 5.0 | 490 | 1.1906 | 0.6705 |
| 0.8039 | 6.0 | 588 | 1.0882 | 0.7067 |
| 0.6426 | 7.0 | 686 | 1.1061 | 0.6930 |
| 0.5777 | 8.0 | 784 | 1.0133 | 0.7227 |
| 0.477 | 9.0 | 882 | 0.9681 | 0.7364 |
| 0.3961 | 10.0 | 980 | 0.9402 | 0.7581 |
| 0.3451 | 11.0 | 1078 | 0.9311 | 0.7509 |
| 0.337 | 12.0 | 1176 | 0.8897 | 0.7661 |
| 0.2348 | 13.0 | 1274 | 0.8616 | 0.7762 |
| 0.1992 | 14.0 | 1372 | 0.8241 | 0.7951 |
| 0.182 | 15.0 | 1470 | 0.8312 | 0.7878 |
| 0.1556 | 16.0 | 1568 | 0.8245 | 0.7857 |
| 0.1516 | 17.0 | 1666 | 0.8170 | 0.7958 |
| 0.1569 | 18.0 | 1764 | 0.8202 | 0.7878 |
| 0.1364 | 19.0 | 1862 | 0.8117 | 0.7951 |
| 0.1427 | 19.8021 | 1940 | 0.8119 | 0.7958 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"aids",
"acanthosis nigricans",
"acne",
"acne keloidalis nuchae",
"actinic cheilitis",
"actinic keratosis",
"alopecia areata",
"amyloidosis",
"angular cheilitis",
"atopic dermatitis",
"basal cell carcinoma",
"bowen's disease",
"bullous pemphigoid",
"candidiasis",
"cellulitis",
"chondrodermatitis nodularis",
"cutaneous t-cell lymphoma",
"dermatitis",
"dermatitis herpetiformis",
"dermatofibroma",
"drug-induced photosensitivity and eruptions",
"dyshidrosis eczema",
"eczema",
"eczema herpetiformis",
"epidermolysis bullosa",
"fifth disease",
"flat warts",
"folliculitis",
"furuncle",
"genital warts",
"grover's disease",
"hailey–hailey disease",
"hand-foot-and-mouth disease",
"herpes simplex 1 or 2",
"herpes zoster",
"hidradenitis suppurativa",
"ichthyosis",
"impetigo",
"kawasaki syndrome",
"keratoacanthom",
"keratosis follicularis",
"larva migrans",
"lentigo",
"lentigo maligna",
"leprosy borderline",
"leprosy lepromatous",
"leprosy tuberculoid",
"lichen planus",
"lichen sclerosus",
"lichen simplex chronicus",
"lupus erythematosus chronicus discoides",
"melanoma",
"molluscum contagiosum",
"necrobiosis lipoidica",
"neurofibromatosis 1",
"nevus",
"nevus sebaceous",
"nummular eczema",
"onychomycosis",
"papilomatosis confluentes and reticulate",
"paronychia",
"pediculosis capitis",
"perioral dermatitis",
"phototoxic reaction",
"pityriasis rosea",
"porokeratosis",
"porokeratosis actinic",
"psoriasis",
"rosacea",
"scarlet fever",
"scleroderma",
"sebaceous hyperplasia",
"seborrheic dermatitis",
"seborrheic keratosis",
"squamous cell carcinoma",
"stasis dermatitis",
"tinea capitis",
"tinea corporis",
"tinea cruris",
"tinea faciei",
"tinea incognita",
"tinea manuum",
"tinea nigra",
"tinea pedis",
"tinea versicolor",
"tuberous sclerosis",
"tungiasis",
"varicella",
"vasculitis",
"vitiligo",
"pigmented benign keratosis"
] |
yuus2733/toyotacars_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# yuus2733/toyotacars_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3304
- Validation Loss: 1.1095
- Train Accuracy: 0.7028
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 43110, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
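The serialized config above describes AdamW with a power-1 (linear) polynomial decay from 3e-5 to 0 over 43,110 steps. A minimal sketch of building an equivalent TensorFlow optimizer with `transformers.create_optimizer` (an assumption about how the original was constructed, not the card's own training code):
```python
from transformers import create_optimizer

# AdamW with weight decay 0.01 and a linear decay schedule, matching the config above
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=43110,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```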
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.9793 | 2.8834 | 0.1556 | 0 |
| 2.8141 | 2.7550 | 0.25 | 1 |
| 2.6444 | 2.5301 | 0.3361 | 2 |
| 2.4691 | 2.3374 | 0.4722 | 3 |
| 2.2959 | 2.1452 | 0.5333 | 4 |
| 2.0945 | 2.0539 | 0.5389 | 5 |
| 1.9633 | 1.8626 | 0.5722 | 6 |
| 1.8124 | 1.8536 | 0.55 | 7 |
| 1.6898 | 1.6423 | 0.6083 | 8 |
| 1.5422 | 1.5292 | 0.6222 | 9 |
| 1.4272 | 1.4585 | 0.6306 | 10 |
| 1.3109 | 1.4785 | 0.6222 | 11 |
| 1.1907 | 1.3007 | 0.6861 | 12 |
| 1.0850 | 1.2980 | 0.6833 | 13 |
| 1.0577 | 1.2130 | 0.7056 | 14 |
| 0.9468 | 1.1251 | 0.7167 | 15 |
| 0.8517 | 1.3172 | 0.6472 | 16 |
| 0.8206 | 1.1645 | 0.7083 | 17 |
| 0.7459 | 1.1768 | 0.6972 | 18 |
| 0.6864 | 1.1457 | 0.6778 | 19 |
| 0.6379 | 1.1162 | 0.6972 | 20 |
| 0.5928 | 1.0945 | 0.7056 | 21 |
| 0.5569 | 1.0542 | 0.7139 | 22 |
| 0.5276 | 1.1110 | 0.7083 | 23 |
| 0.4715 | 1.0347 | 0.7222 | 24 |
| 0.4470 | 0.9403 | 0.7222 | 25 |
| 0.4112 | 0.9729 | 0.7222 | 26 |
| 0.4101 | 1.0422 | 0.6944 | 27 |
| 0.3707 | 1.0415 | 0.6917 | 28 |
| 0.3304 | 1.1095 | 0.7028 | 29 |
### Framework versions
- Transformers 4.50.3
- TensorFlow 2.19.0
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"alphard",
"aqua",
"hiace",
"highlander",
"hilux",
"iq",
"prius",
"rav4",
"rush",
"soarer",
"starlet",
"supra",
"camry",
"vitz",
"yaris",
"celica",
"corolla",
"corona",
"crown",
"estima",
"etios",
"fortuner"
] |
SY750/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2590
- Accuracy: 0.8955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3556 | 1.0 | 101 | 0.3509 | 0.8596 |
| 0.2608 | 2.0 | 202 | 0.3039 | 0.8863 |
| 0.181 | 2.9751 | 300 | 0.2590 | 0.8955 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
[
"adenocarcinoma",
"high-grade in",
"low-grade in",
"normal",
"polyp"
] |
Inhasw/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6247
- Accuracy: 0.8279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.6684 | 1.0 | 43 | 1.5274 | 0.5386 |
| 1.422 | 2.0 | 86 | 1.2167 | 0.5964 |
| 1.2303 | 3.0 | 129 | 0.9840 | 0.6617 |
| 1.0419 | 4.0 | 172 | 0.8583 | 0.7255 |
| 0.886 | 5.0 | 215 | 0.7485 | 0.7389 |
| 0.8443 | 6.0 | 258 | 0.7898 | 0.7463 |
| 0.7685 | 7.0 | 301 | 0.6678 | 0.7745 |
| 0.6999 | 8.0 | 344 | 0.7002 | 0.7730 |
| 0.6128 | 9.0 | 387 | 0.6634 | 0.7982 |
| 0.5588 | 10.0 | 430 | 0.6644 | 0.7789 |
| 0.5829 | 11.0 | 473 | 0.6318 | 0.8116 |
| 0.5234 | 12.0 | 516 | 0.6662 | 0.7864 |
| 0.4712 | 13.0 | 559 | 0.6781 | 0.8042 |
| 0.4042 | 14.0 | 602 | 0.6542 | 0.8131 |
| 0.3966 | 15.0 | 645 | 0.6432 | 0.8086 |
| 0.41 | 16.0 | 688 | 0.6346 | 0.8145 |
| 0.3848 | 17.0 | 731 | 0.6295 | 0.8323 |
| 0.3612 | 18.0 | 774 | 0.6841 | 0.8042 |
| 0.3258 | 19.0 | 817 | 0.6613 | 0.8145 |
| 0.3163 | 20.0 | 860 | 0.6340 | 0.8279 |
| 0.3469 | 21.0 | 903 | 0.6621 | 0.8205 |
| 0.3523 | 22.0 | 946 | 0.6655 | 0.8131 |
| 0.3533 | 23.0 | 989 | 0.6541 | 0.8131 |
| 0.3312 | 24.0 | 1032 | 0.6445 | 0.8116 |
| 0.3095 | 25.0 | 1075 | 0.6519 | 0.8205 |
| 0.2425 | 26.0 | 1118 | 0.6363 | 0.8145 |
| 0.2956 | 27.0 | 1161 | 0.6318 | 0.8294 |
| 0.2629 | 28.0 | 1204 | 0.6217 | 0.8249 |
| 0.2755 | 29.0 | 1247 | 0.6243 | 0.8279 |
| 0.29 | 29.3077 | 1260 | 0.6247 | 0.8279 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"가디건",
"니트웨어",
"드레스",
"래깅스",
"베스트",
"브라탑",
"블라우스",
"셔츠",
"스커트",
"재킷",
"점퍼",
"점프수트",
"조거팬츠",
"짚업",
"청바지",
"코트",
"티셔츠",
"패딩",
"팬츠",
"후드티"
] |
zekicalb/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1984
- Accuracy: 0.9391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.392 | 1.0 | 370 | 0.3038 | 0.9256 |
| 0.2276 | 2.0 | 740 | 0.2278 | 0.9364 |
| 0.1625 | 3.0 | 1110 | 0.2089 | 0.9364 |
| 0.1596 | 4.0 | 1480 | 0.1997 | 0.9405 |
| 0.1454 | 5.0 | 1850 | 0.1969 | 0.9405 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
## Zero-Shot Classification Results
- **Accuracy**: 0.8800
- **Precision (macro)**: 0.8768
- **Recall (macro)**: 0.8800
This was done using [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on the Oxford-Pets dataset.
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
Louloubib/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6492
- Accuracy: 0.882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7174 | 1.0 | 63 | 2.5482 | 0.807 |
| 1.8634 | 2.0 | 126 | 1.8092 | 0.848 |
| 1.6128 | 2.96 | 186 | 1.6492 | 0.882 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
affal01/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1858
- Accuracy: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3958 | 1.0 | 370 | 0.2588 | 0.9405 |
| 0.2184 | 2.0 | 740 | 0.1908 | 0.9486 |
| 0.1666 | 3.0 | 1110 | 0.1737 | 0.9405 |
| 0.1558 | 4.0 | 1480 | 0.1658 | 0.9472 |
| 0.1395 | 5.0 | 1850 | 0.1648 | 0.9459 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-shot results
Accuracy: 0.8800
Precision: 0.8768
Recall: 0.8800
This was done using [openai/clip-vit-base-patch32] on the Oxford-Pets dataset.
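To try the fine-tuned checkpoint itself, a minimal sketch with the standard `image-classification` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Sketch: classify one image with this fine-tuned checkpoint.
classifier = pipeline("image-classification", model="affal01/vit-base-oxford-iiit-pets")
print(classifier("pet.jpg"))  # e.g. [{'label': 'samoyed', 'score': ...}, ...]
```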
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
Inhasw/swin-small-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-small-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-small-patch4-window7-224](https://huggingface.co/microsoft/swin-small-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5541
- Accuracy: 0.8472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.8572 | 1.0 | 48 | 2.6775 | 0.2226 |
| 1.6357 | 2.0 | 96 | 1.3411 | 0.5519 |
| 1.076 | 3.0 | 144 | 0.9200 | 0.6973 |
| 1.0091 | 4.0 | 192 | 0.8266 | 0.7062 |
| 0.777 | 5.0 | 240 | 0.6904 | 0.7685 |
| 0.6453 | 6.0 | 288 | 0.6149 | 0.7864 |
| 0.6124 | 7.0 | 336 | 0.6207 | 0.7923 |
| 0.5204 | 8.0 | 384 | 0.6130 | 0.7908 |
| 0.529 | 9.0 | 432 | 0.6334 | 0.8042 |
| 0.4394 | 10.0 | 480 | 0.5370 | 0.8249 |
| 0.4398 | 11.0 | 528 | 0.5589 | 0.8249 |
| 0.3996 | 12.0 | 576 | 0.5391 | 0.8501 |
| 0.3585 | 13.0 | 624 | 0.5796 | 0.8205 |
| 0.3276 | 14.0 | 672 | 0.5851 | 0.8338 |
| 0.3382 | 15.0 | 720 | 0.5508 | 0.8457 |
| 0.3212 | 16.0 | 768 | 0.5279 | 0.8605 |
| 0.3226 | 17.0 | 816 | 0.5769 | 0.8338 |
| 0.2836 | 18.0 | 864 | 0.5942 | 0.8294 |
| 0.2743 | 19.0 | 912 | 0.5862 | 0.8309 |
| 0.2637 | 20.0 | 960 | 0.5586 | 0.8234 |
| 0.2567 | 21.0 | 1008 | 0.5335 | 0.8427 |
| 0.2932 | 22.0 | 1056 | 0.5653 | 0.8383 |
| 0.2532 | 23.0 | 1104 | 0.5493 | 0.8368 |
| 0.2286 | 24.0 | 1152 | 0.5798 | 0.8383 |
| 0.206 | 25.0 | 1200 | 0.5623 | 0.8487 |
| 0.2288 | 26.0 | 1248 | 0.5566 | 0.8442 |
| 0.2059 | 27.0 | 1296 | 0.5437 | 0.8457 |
| 0.1904 | 28.0 | 1344 | 0.5500 | 0.8338 |
| 0.2416 | 29.0 | 1392 | 0.5563 | 0.8487 |
| 0.1967 | 29.3789 | 1410 | 0.5541 | 0.8472 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"가디건",
"니트웨어",
"드레스",
"래깅스",
"베스트",
"브라탑",
"블라우스",
"셔츠",
"스커트",
"재킷",
"점퍼",
"점프수트",
"조거팬츠",
"짚업",
"청바지",
"코트",
"티셔츠",
"패딩",
"팬츠",
"후드티"
] |
Inhasw/beit-base-patch16-224-pt22k-ft22k-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-finetuned-eurosat
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1650
- Accuracy: 0.7003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 3.0558 | 1.0 | 43 | 3.0721 | 0.0519 |
| 2.9535 | 2.0 | 86 | 2.9192 | 0.1068 |
| 2.7928 | 3.0 | 129 | 2.6951 | 0.2315 |
| 2.5368 | 4.0 | 172 | 2.4673 | 0.3457 |
| 2.3928 | 5.0 | 215 | 2.2777 | 0.4303 |
| 2.1938 | 6.0 | 258 | 2.1163 | 0.4896 |
| 2.0454 | 7.0 | 301 | 1.9760 | 0.5208 |
| 1.9604 | 8.0 | 344 | 1.8527 | 0.5593 |
| 1.8495 | 9.0 | 387 | 1.7479 | 0.5890 |
| 1.6308 | 10.0 | 430 | 1.6564 | 0.6231 |
| 1.6896 | 11.0 | 473 | 1.5779 | 0.6424 |
| 1.6121 | 12.0 | 516 | 1.5088 | 0.6632 |
| 1.5673 | 13.0 | 559 | 1.4500 | 0.6751 |
| 1.4859 | 14.0 | 602 | 1.3988 | 0.6869 |
| 1.4813 | 15.0 | 645 | 1.3561 | 0.6958 |
| 1.4414 | 16.0 | 688 | 1.3177 | 0.7062 |
| 1.4252 | 17.0 | 731 | 1.2833 | 0.7136 |
| 1.3811 | 18.0 | 774 | 1.2549 | 0.7226 |
| 1.3464 | 19.0 | 817 | 1.2304 | 0.7240 |
| 1.2451 | 20.0 | 860 | 1.2093 | 0.7270 |
| 1.2871 | 21.0 | 903 | 1.1904 | 0.7315 |
| 1.2546 | 22.0 | 946 | 1.1746 | 0.7315 |
| 1.2464 | 23.0 | 989 | 1.1611 | 0.7329 |
| 1.3012 | 24.0 | 1032 | 1.1499 | 0.7374 |
| 1.2477 | 25.0 | 1075 | 1.1409 | 0.7404 |
| 1.2761 | 26.0 | 1118 | 1.1337 | 0.7404 |
| 1.2687 | 27.0 | 1161 | 1.1289 | 0.7404 |
| 1.2304 | 28.0 | 1204 | 1.1258 | 0.7433 |
| 1.2628 | 29.0 | 1247 | 1.1244 | 0.7433 |
| 1.2639 | 29.3077 | 1260 | 1.1243 | 0.7433 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"가디건",
"니트웨어",
"드레스",
"래깅스",
"베스트",
"브라탑",
"블라우스",
"셔츠",
"스커트",
"재킷",
"점퍼",
"점프수트",
"조거팬츠",
"짚업",
"청바지",
"코트",
"티셔츠",
"패딩",
"팬츠",
"후드티"
] |
Inhasw/swinv2-base-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4454
- Accuracy: 0.9036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.5802 | 1.0 | 48 | 1.6596 | 0.5445 |
| 0.9821 | 2.0 | 96 | 0.6121 | 0.8012 |
| 0.767 | 3.0 | 144 | 0.5143 | 0.8323 |
| 0.6452 | 4.0 | 192 | 0.4496 | 0.8516 |
| 0.4403 | 5.0 | 240 | 0.5212 | 0.8472 |
| 0.4859 | 6.0 | 288 | 0.4766 | 0.8828 |
| 0.4177 | 7.0 | 336 | 0.4704 | 0.8516 |
| 0.3815 | 8.0 | 384 | 0.4597 | 0.8694 |
| 0.4111 | 9.0 | 432 | 0.4289 | 0.8828 |
| 0.3375 | 10.0 | 480 | 0.4603 | 0.8709 |
| 0.3267 | 11.0 | 528 | 0.5173 | 0.8739 |
| 0.2948 | 12.0 | 576 | 0.4379 | 0.8932 |
| 0.2322 | 13.0 | 624 | 0.4454 | 0.9036 |
| 0.2612 | 14.0 | 672 | 0.5133 | 0.8739 |
| 0.2259 | 15.0 | 720 | 0.4377 | 0.8947 |
| 0.2534 | 16.0 | 768 | 0.5072 | 0.8724 |
| 0.1852 | 17.0 | 816 | 0.4951 | 0.8843 |
| 0.1976 | 18.0 | 864 | 0.5063 | 0.8902 |
| 0.2377 | 19.0 | 912 | 0.4767 | 0.8843 |
| 0.189 | 20.0 | 960 | 0.4763 | 0.8917 |
| 0.1744 | 21.0 | 1008 | 0.5027 | 0.8813 |
| 0.1546 | 22.0 | 1056 | 0.5021 | 0.8961 |
| 0.1451 | 23.0 | 1104 | 0.4772 | 0.9006 |
| 0.1681 | 24.0 | 1152 | 0.4767 | 0.8976 |
| 0.1539 | 25.0 | 1200 | 0.5087 | 0.8902 |
| 0.1054 | 26.0 | 1248 | 0.5186 | 0.8902 |
| 0.1111 | 27.0 | 1296 | 0.5066 | 0.9006 |
| 0.1057 | 28.0 | 1344 | 0.5019 | 0.8947 |
| 0.1498 | 29.0 | 1392 | 0.5147 | 0.9006 |
| 0.1255 | 29.3789 | 1410 | 0.5148 | 0.8991 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"가디건",
"니트웨어",
"드레스",
"래깅스",
"베스트",
"브라탑",
"블라우스",
"셔츠",
"스커트",
"재킷",
"점퍼",
"점프수트",
"조거팬츠",
"짚업",
"청바지",
"코트",
"티셔츠",
"패딩",
"팬츠",
"후드티"
] |
saurabhati/VMamba_ImageNet_82.6
|
# VMamba: Visual State Space Model
VMamba is a bidirectional state-space model fine-tuned on the ImageNet dataset. It was introduced in the paper:
[VMamba: Visual State Space Model](https://arxiv.org/pdf/2401.10166) and was first released in [this repo](https://github.com/MzeroMiko/VMamba/tree/main).
Disclaimer: This is not the official implementation; please refer to the [official repo](https://github.com/MzeroMiko/VMamba/tree/main).
This is work in progress by me, Saurabhchand Bhati, to add a VMamba backbone for image and audio classification tasks.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from PIL import Image
import torchvision.transforms as T
from transformers import AutoConfig, AutoModelForImageClassification
config = AutoConfig.from_pretrained('saurabhati/VMamba_ImageNet_82.6', trust_remote_code=True)
vmamba_model = AutoModelForImageClassification.from_pretrained('saurabhati/VMamba_ImageNet_82.6', trust_remote_code=True)

# Standard ImageNet eval preprocessing: resize, center-crop, normalize.
preprocess = T.Compose([
    T.Resize(224, interpolation=Image.BICUBIC),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(
        mean=[0.4850, 0.4560, 0.4060],
        std=[0.2290, 0.2240, 0.2250],
    ),
])

input_image = Image.open('/data/sls/scratch/sbhati/data/Imagenet/train/n02009912/n02009912_16160.JPEG')
input_image = preprocess(input_image)

with torch.no_grad():
    logits = vmamba_model(input_image.unsqueeze(0)).logits
predicted_label = vmamba_model.config.id2label[logits.argmax().item()]
print(predicted_label)  # 'crane'
```
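To look beyond the top-1 prediction, the same logits can be ranked; a small follow-up sketch reusing `logits` and `vmamba_model` from the block above:
```python
# Top-5 classes by softmax probability.
probs = logits.softmax(dim=-1)
top5 = torch.topk(probs, k=5)
for prob, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{vmamba_model.config.id2label[idx.item()]}: {prob.item():.3f}")
```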
## Citation
```bibtex
@article{liu2024vmamba,
title={VMamba: Visual State Space Model},
author={Liu, Yue and Tian, Yunjie and Zhao, Yuzhong and Yu, Hongtian and Xie, Lingxi and Wang, Yaowei and Ye, Qixiang and Liu, Yunfan},
journal={arXiv preprint arXiv:2401.10166},
year={2024}
}
```
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
saurabhati/VMamba_ImageNet_83.6
|
# VMamba: Visual State Space Model
VMamba is a bidirectional state-space model fine-tuned on the ImageNet dataset. It was introduced in the paper:
[VMamba: Visual State Space Model](https://arxiv.org/pdf/2401.10166) and was first released in [this repo](https://github.com/MzeroMiko/VMamba/tree/main).
Disclaimer: This is not the official implementation; please refer to the [official repo](https://github.com/MzeroMiko/VMamba/tree/main).
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from PIL import Image
import torchvision.transforms as T
from transformers import AutoConfig, AutoModelForImageClassification
config = AutoConfig.from_pretrained('saurabhati/VMamba_ImageNet_83.6', trust_remote_code=True)
vmamba_model = AutoModelForImageClassification.from_pretrained('saurabhati/VMamba_ImageNet_83.6', trust_remote_code=True)

# Standard ImageNet eval preprocessing: resize, center-crop, normalize.
preprocess = T.Compose([
    T.Resize(224, interpolation=Image.BICUBIC),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(
        mean=[0.4850, 0.4560, 0.4060],
        std=[0.2290, 0.2240, 0.2250],
    ),
])

input_image = Image.open('/data/sls/scratch/sbhati/data/Imagenet/train/n02009912/n02009912_16160.JPEG')
input_image = preprocess(input_image)

with torch.no_grad():
    logits = vmamba_model(input_image.unsqueeze(0)).logits
predicted_label = vmamba_model.config.id2label[logits.argmax().item()]
print(predicted_label)  # 'crane'
```
## Citation
```bibtex
@article{liu2024vmamba,
title={VMamba: Visual State Space Model},
author={Liu, Yue and Tian, Yunjie and Zhao, Yuzhong and Yu, Hongtian and Xie, Lingxi and Wang, Yaowei and Ye, Qixiang and Liu, Yunfan},
journal={arXiv preprint arXiv:2401.10166},
year={2024}
}
```
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
halimalm/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2081
- Accuracy: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
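For reference, the list above maps onto roughly the following `TrainingArguments`; this is a minimal sketch of the configuration, not the exact training script (the `output_dir` is illustrative):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="./vit-base-oxford-iiit-pets",  # illustrative path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```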
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.385 | 1.0 | 370 | 0.2946 | 0.9283 |
| 0.2121 | 2.0 | 740 | 0.2315 | 0.9364 |
| 0.1545 | 3.0 | 1110 | 0.2084 | 0.9364 |
| 0.1476 | 4.0 | 1480 | 0.2036 | 0.9391 |
| 0.1198 | 5.0 | 1850 | 0.2018 | 0.9405 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
## Zero-shot classification model
This section compares the performance of a zero-shot model (`openai/clip-vit-large-patch14`) against the fine-tuned model above on the Oxford Pets dataset (`pcuenq/oxford-pets`).
- **Model used**: `openai/clip-vit-large-patch14`
- **Dataset**: `pcuenq/oxford-pets` (train split)
- **Evaluation Task**: Zero-Shot Image Classification
- **Candidate Labels**: 37 pet breeds from the dataset
### Results
Zero-shot evaluation with CLIP:
- **Accuracy**: 0.8800
- **Precision**: 0.8768
- **Recall**: 0.8800
Evaluated using Hugging Face `transformers` pipeline and `sklearn.metrics` on the full training set.
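The evaluation described above can be reproduced along these lines; this is a minimal sketch, not the original evaluation script. It assumes the dataset exposes `image` (PIL) and `label` (breed name) columns, and the weighted averaging for precision and recall is also an assumption:

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_score, recall_score
from transformers import pipeline

# Zero-shot classifier and the train split used for evaluation
clf = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")
ds = load_dataset("pcuenq/oxford-pets", split="train")

# The 37 breed names act as candidate labels
candidate_labels = sorted(set(ds["label"]))

y_true, y_pred = [], []
for example in ds:
    preds = clf(example["image"], candidate_labels=candidate_labels)
    y_true.append(example["label"])
    y_pred.append(preds[0]["label"])  # top-1 prediction

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="weighted", zero_division=0))
print("Recall:   ", recall_score(y_true, y_pred, average="weighted", zero_division=0))
```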
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
drj0731/wikibooks-huggingface
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
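While the card is still a template, the label set attached to this entry is the 1000 ImageNet classes, which suggests an image-classification checkpoint; the snippet below is therefore only a hypothetical sketch, not confirmed usage:

```python
from PIL import Image
from transformers import pipeline

# Hypothetical sketch: assumes the checkpoint loads as an image classifier
clf = pipeline("image-classification", model="drj0731/wikibooks-huggingface")
image = Image.open("example.jpg")  # any local image
for pred in clf(image):
    print(pred["label"], pred["score"])
```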
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
Eraly-ml/centraasia-Swinv2
|
# Central Asian Food Classification
## Model Information
- **Base Model**: [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256)
- **Dataset**: [issai/Central_Asian_Food_Dataset](https://huggingface.co/datasets/issai/Central_Asian_Food_Dataset)
- **Library**: `transformers`, `pytorch`
- **Pipeline**: Image Classification
- **License**: Creative Commons Attribution Non Commercial 4.0
## Model Description
- This model classifies images of Central Asian dishes into 42 different categories.
- The model is fine-tuned on the Central Asian Food Dataset using the Swin Transformer V2 architecture.
- The training was conducted on 2 Tesla T4 GPUs in Oregon, USA.
## Labels (Classes)
```python
class_names = [
"achichuk", "airan-katyk", "asip", "bauyrsak", "beshbarmak-w-kazy",
"beshbarmak-wo-kazy", "chak-chak", "cheburek", "doner-lavash", "doner-nan",
"hvorost", "irimshik", "kattama-nan", "kazy-karta", "kurt", "kuyrdak",
"kymyz-kymyran", "lagman-fried", "lagman-w-soup", "lagman-wo-soup", "manty",
"naryn", "nauryz-kozhe", "orama", "plov", "samsa", "shashlyk-chicken",
"shashlyk-chicken-v", "shashlyk-kuskovoi", "shashlyk-kuskovoi-v",
"shashlyk-minced-meat", "sheep-head", "shelpek", "shorpa", "soup-plain",
"sushki", "suzbe", "taba-nan", "talkan-zhent", "tushpara-fried",
"tushpara-w-soup", "tushpara-wo-soup"
]
```
## Training
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./swinv2_classification",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.01,
    logging_dir="./logs",
    logging_steps=10,
)
```
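These arguments would typically be handed to a `Trainer` together with the model and the preprocessed dataset splits; the sketch below shows the assumed wiring (the `train_dataset`/`eval_dataset` variables are hypothetical placeholders, not taken from the original script):

```python
from transformers import AutoModelForImageClassification, Trainer

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swinv2-base-patch4-window16-256",
    num_labels=len(class_names),
    ignore_mismatched_sizes=True,  # re-initializes the head for 42 classes
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # hypothetical preprocessed train split
    eval_dataset=eval_dataset,    # hypothetical preprocessed validation split
)
trainer.train()
```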
| Epoch | Training Loss | Validation Loss |
|:-----:|:-------------:|:---------------:|
| 1     | 0.815700      | 0.741029        |
| 2     | 0.454500      | 0.641849        |
| 3     | 0.100500      | 0.680114        |
| 4     | 0.030000      | 0.704669        |
| 5     | 0.009000      | 0.661318        |
## Evaluation Metrics
The model achieved **87% accuracy** on the validation set. Below is a summary of the classification report (overall accuracy plus macro and weighted averages); per-class precision, recall, and F1-scores are shown in the metrics plot below:
```
              precision    recall  f1-score   support

    accuracy                          0.87      2735
   macro avg       0.86      0.85      0.85      2735
weighted avg       0.88      0.87      0.87      2735
```

## Environmental Impact
The estimated carbon emissions from training this model:
- **Emissions**: 0.054843 grams CO2
- **Source**: Code Carbon
- **Training Type**: Fine-tuning
- **Location**: Oregon, USA (45.5999, -121.1871)
- **Hardware Used**: 2x Tesla T4 GPUs, Intel Xeon CPU (4 cores), 31.35 GB RAM
## Usage
To use this model for inference:
```python
import requests
from io import BytesIO
from PIL import Image
from transformers import pipeline
# Load the model
pipe = pipeline("image-classification", model="Eraly-ml/centraasia-Swinv2", device=0)
# Image URL
image_url = "https://avatars.mds.yandex.net/get-altay/12813969/2a0000018e10a3da6a2a1d1d2c2745548220/XXXL"
# Download the image from the internet
response = requests.get(image_url)
image = Image.open(BytesIO(response.content))
# Model classes
class_names = [
"achichuk", "airan-katyk", "asip", "bauyrsak", "beshbarmak-w-kazy",
"beshbarmak-wo-kazy", "chak-chak", "cheburek", "doner-lavash", "doner-nan",
"hvorost", "irimshik", "kattama-nan", "kazy-karta", "kurt", "kuyrdak",
"kymyz-kymyran", "lagman-fried", "lagman-w-soup", "lagman-wo-soup", "manty",
"naryn", "nauryz-kozhe", "orama", "plov", "samsa", "shashlyk-chicken",
"shashlyk-chicken-v", "shashlyk-kuskovoi", "shashlyk-kuskovoi-v",
"shashlyk-minced-meat", "sheep-head", "shelpek", "shorpa", "soup-plain",
"sushki", "suzbe", "taba-nan", "talkan-zhent", "tushpara-fried",
"tushpara-w-soup", "tushpara-wo-soup"
]
# Make a prediction
predictions = pipe(image)
# Display results with correct labels
for pred in predictions:
label_id = int(pred["label"].replace("LABEL_", "")) # Extract the number
class_name = class_names[label_id] # Get the class name
score = pred["score"] # Probability
print(f"Class: {class_name}, probability: {score:.4f}")
```
## Citation
If you use this model, please cite:
```
@misc{CentralAsianFood,
author = {Eraly Gainulla},
title = {Central Asian Food Classification Model},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/Eraly-ml/centraasia-Swinv2}
}
```
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37",
"label_38",
"label_39",
"label_40",
"label_41"
] |
svdb01/swin-tiny-patch4-window7-224-finetuned-tig-5083
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-tig-5083
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0028
- Accuracy: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1123 | 1.0 | 188 | 0.0139 | 0.9963 |
| 0.0899 | 2.0 | 376 | 0.0087 | 0.9974 |
| 0.0666 | 3.0 | 564 | 0.0028 | 0.9996 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
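The card does not include a usage example yet; below is a minimal inference sketch, assuming the checkpoint works with the standard image-classification pipeline (the predicted labels are the imagefolder class indices `0`–`5`):

```python
from PIL import Image
from transformers import pipeline

# Minimal sketch: loads the fine-tuned checkpoint for inference
clf = pipeline(
    "image-classification",
    model="svdb01/swin-tiny-patch4-window7-224-finetuned-tig-5083",
)
image = Image.open("example.jpg")  # any local image
print(clf(image))
```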
|
[
"0",
"1",
"2",
"3",
"4",
"5"
] |