model_id | model_card | model_labels |
---|---|---|
mbiarreta/vit-orinoquia
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-orinoquia
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the orinoquia dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1021
- Accuracy: 0.9691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.9968 | 0.0463 | 100 | 1.8490 | 0.4418 |
| 1.3976 | 0.0927 | 200 | 1.5191 | 0.5054 |
| 1.3472 | 0.1390 | 300 | 1.3085 | 0.6078 |
| 0.9815 | 0.1854 | 400 | 1.1603 | 0.6314 |
| 1.2055 | 0.2317 | 500 | 1.0710 | 0.6709 |
| 1.0358 | 0.2780 | 600 | 1.0229 | 0.6820 |
| 0.8788 | 0.3244 | 700 | 0.8523 | 0.7340 |
| 0.9701 | 0.3707 | 800 | 0.8020 | 0.7497 |
| 0.6715 | 0.4171 | 900 | 0.7216 | 0.7830 |
| 0.851 | 0.4634 | 1000 | 0.7933 | 0.7527 |
| 0.6638 | 0.5097 | 1100 | 0.6775 | 0.8034 |
| 0.6149 | 0.5561 | 1200 | 0.6193 | 0.8183 |
| 0.6763 | 0.6024 | 1300 | 0.5211 | 0.8462 |
| 0.6147 | 0.6487 | 1400 | 0.5817 | 0.8229 |
| 0.6746 | 0.6951 | 1500 | 0.4546 | 0.8700 |
| 0.4658 | 0.7414 | 1600 | 0.4779 | 0.8586 |
| 0.4134 | 0.7878 | 1700 | 0.3890 | 0.8854 |
| 0.4485 | 0.8341 | 1800 | 0.4842 | 0.8518 |
| 0.4662 | 0.8804 | 1900 | 0.3461 | 0.8992 |
| 0.475 | 0.9268 | 2000 | 0.3462 | 0.8968 |
| 0.2374 | 0.9731 | 2100 | 0.3530 | 0.8936 |
| 0.2639 | 1.0195 | 2200 | 0.3032 | 0.9128 |
| 0.2466 | 1.0658 | 2300 | 0.3104 | 0.9120 |
| 0.1393 | 1.1121 | 2400 | 0.2706 | 0.9244 |
| 0.1186 | 1.1585 | 2500 | 0.2955 | 0.9193 |
| 0.121 | 1.2048 | 2600 | 0.2699 | 0.9236 |
| 0.4363 | 1.2512 | 2700 | 0.2491 | 0.9323 |
| 0.3046 | 1.2975 | 2800 | 0.2502 | 0.9290 |
| 0.1064 | 1.3438 | 2900 | 0.2466 | 0.9339 |
| 0.1233 | 1.3902 | 3000 | 0.2184 | 0.9391 |
| 0.1971 | 1.4365 | 3100 | 0.2066 | 0.9426 |
| 0.0741 | 1.4829 | 3200 | 0.1730 | 0.9510 |
| 0.1206 | 1.5292 | 3300 | 0.1964 | 0.9477 |
| 0.045 | 1.5755 | 3400 | 0.1719 | 0.9515 |
| 0.0972 | 1.6219 | 3500 | 0.1527 | 0.9588 |
| 0.1798 | 1.6682 | 3600 | 0.1389 | 0.9613 |
| 0.0468 | 1.7146 | 3700 | 0.1267 | 0.9664 |
| 0.0451 | 1.7609 | 3800 | 0.1337 | 0.9645 |
| 0.0362 | 1.8072 | 3900 | 0.1312 | 0.9648 |
| 0.0546 | 1.8536 | 4000 | 0.1172 | 0.9680 |
| 0.163 | 1.8999 | 4100 | 0.1091 | 0.9694 |
| 0.0625 | 1.9462 | 4200 | 0.1055 | 0.9686 |
| 0.0725 | 1.9926 | 4300 | 0.1021 | 0.9691 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
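The card omits a usage section. A minimal inference sketch, assuming the checkpoint loads through the standard image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned camera-trap classifier from the Hub.
classifier = pipeline("image-classification", model="mbiarreta/vit-orinoquia")

# "camera_trap.jpg" is a placeholder path; any RGB image works.
for pred in classifier("camera_trap.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```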
|
[
"agouti",
"bird spec",
"red brocket deer",
"red deer",
"red fox",
"red squirrel",
"roe deer",
"spiny rat",
"white tailed deer",
"white-nosed coati",
"wild boar",
"wood mouse",
"coiban agouti",
"collared peccary",
"common opossum",
"european hare",
"great tinamou",
"mouflon",
"ocelot",
"paca"
] |
MatrixYao/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0492
- Accuracy: 0.9830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2157 | 1.0 | 190 | 0.0875 | 0.9704 |
| 0.1528 | 2.0 | 380 | 0.0713 | 0.9748 |
| 0.0861 | 3.0 | 570 | 0.0492 | 0.9830 |
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+xpu
- Datasets 3.5.0
- Tokenizers 0.21.1
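For reference, the listed hyperparameters map to a `TrainingArguments` configuration along these lines (a hedged reconstruction; `output_dir` is a placeholder, and the effective batch size is 32 × 4 = 128):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```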
|
[
"annual crop",
"forest",
"herbaceous vegetation",
"highway",
"industrial",
"pasture",
"permanent crop",
"residential",
"river",
"sea or lake"
] |
encku/tuborg-04-2025v3
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.0023665453772991896
f1_macro: 0.9997188906181732
f1_micro: 0.9997194058887351
f1_weighted: 0.9997201329822247
precision_macro: 0.999728318490246
precision_micro: 0.9997194058887351
precision_weighted: 0.9997260797417288
recall_macro: 0.9997146266778684
recall_micro: 0.9997194058887351
recall_weighted: 0.9997194058887351
accuracy: 0.9997194058887351
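The macro/micro/weighted variants above follow scikit-learn's averaging conventions. A small sketch of how such numbers are computed (toy labels, not the model's real validation outputs):
```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy integer class labels standing in for the real validation split.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]

print("accuracy:", accuracy_score(y_true, y_pred))
for avg in ("macro", "micro", "weighted"):
    print(f"f1_{avg}:", f1_score(y_true, y_pred, average=avg))
    print(f"precision_{avg}:", precision_score(y_true, y_pred, average=avg))
    print(f"recall_{avg}:", recall_score(y_true, y_pred, average=avg))
```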
|
[
"6974202725334",
"c001",
"c002",
"c002_4lu",
"c003",
"c004",
"c005",
"c006",
"c006_4lu",
"c007",
"c008",
"c009",
"c010",
"c012",
"c014",
"c015",
"c015_4lu",
"c017",
"c018",
"c020",
"c020_4lu",
"c021",
"c022",
"c023",
"c024",
"c025",
"c026",
"c027",
"c028",
"c029",
"c030",
"c031",
"c032",
"c033",
"c034",
"c035",
"c036",
"c037",
"c038",
"c039",
"c040",
"c041",
"c042",
"c043",
"c044",
"tbrg066",
"tbrg067",
"tbrg068",
"tbrg072",
"tbrg073",
"tbrg074",
"tbrg075",
"tbrg085",
"tbrg086",
"tbrg087",
"tbrg090",
"tbrg092",
"tbrg093",
"tbrg096",
"tbrg097",
"tbrg098",
"tbrg100",
"tbrg156",
"tbrg157",
"tbrg158",
"tbrg159",
"tt00277",
"tt00523",
"tt00677",
"tt00677-1",
"tt00685",
"tt00735",
"tt00737",
"tt00765",
"tt00792",
"tt00792-1",
"tt00793",
"tt00793-1",
"tt00810",
"tt00811",
"tt00812",
"tt00852",
"tt00853",
"tt00854",
"tt00857",
"tt00857-1",
"tt00859",
"tt00875",
"tt00875-1",
"tt00876",
"tt00876-1",
"tt00893",
"tt00904",
"tt00944",
"tt00945",
"tt00947",
"tt00964",
"tt00980",
"tt00989",
"tt01001",
"tt01020",
"tt01037",
"tt01069",
"tt01070",
"tt01071",
"tt01072",
"tt01142",
"tt01148",
"tt01149",
"tt01150",
"tt01152",
"tt01155",
"tt01160",
"tt01162",
"tt01169",
"tt01172",
"tt01174",
"tt01176",
"tt01178",
"tt01179",
"tt01231",
"tt01276",
"tt01277",
"tt01296",
"tt01297",
"tt01300",
"tt01307",
"tt01431",
"tt01460",
"tt01481",
"tt01482"
] |
sungkwan2/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6121
- Accuracy: 0.887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6922 | 1.0 | 63 | 2.4892 | 0.818 |
| 1.7764 | 2.0 | 126 | 1.7810 | 0.859 |
| 1.556 | 2.96 | 186 | 1.6121 | 0.887 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
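The card omits a usage example. A minimal sketch, assuming the standard image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sungkwan2/my_awesome_food_model")

# "dish.jpg" is a placeholder; top_k=5 returns the five most likely of the
# 101 food classes listed below.
for pred in classifier("dish.jpg", top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```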
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Ayca11/test_modal
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_modal
This model is a fine-tuned version of [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
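Note that training ran for only 20 steps against an 800-step warmup, so the weights likely stay close to the base checkpoint. A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Ayca11/test_modal")
print(classifier("face.jpg"))  # "face.jpg" is a placeholder for a cropped face image
```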
|
[
"angry",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
blaze-05/finetuned-indian-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6866
- Accuracy: 0.8286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.9933 | 0.1946 | 100 | 3.0302 | 0.4679 |
| 2.4249 | 0.3891 | 200 | 2.2643 | 0.5594 |
| 2.1319 | 0.5837 | 300 | 2.0092 | 0.5691 |
| 1.7793 | 0.7782 | 400 | 1.7550 | 0.6086 |
| 1.8401 | 0.9728 | 500 | 1.6372 | 0.6407 |
| 1.3499 | 1.1673 | 600 | 1.3922 | 0.6908 |
| 1.2906 | 1.3619 | 700 | 1.2705 | 0.7118 |
| 0.9804 | 1.5564 | 800 | 1.1602 | 0.7322 |
| 1.0783 | 1.7510 | 900 | 1.1229 | 0.7308 |
| 0.968 | 1.9455 | 1000 | 1.0598 | 0.7468 |
| 0.6267 | 2.1401 | 1100 | 0.9519 | 0.7663 |
| 0.7739 | 2.3346 | 1200 | 0.9060 | 0.7785 |
| 0.8705 | 2.5292 | 1300 | 0.8600 | 0.7945 |
| 0.7832 | 2.7237 | 1400 | 0.8134 | 0.8023 |
| 0.7061 | 2.9183 | 1500 | 0.8009 | 0.7984 |
| 0.4784 | 3.1128 | 1600 | 0.7495 | 0.8087 |
| 0.514 | 3.3074 | 1700 | 0.7359 | 0.8101 |
| 0.4407 | 3.5019 | 1800 | 0.7049 | 0.8184 |
| 0.4831 | 3.6965 | 1900 | 0.6899 | 0.8208 |
| 0.4283 | 3.8911 | 2000 | 0.6866 | 0.8286 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
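The card says training used the `imagefolder` dataset, whose loader expects one sub-directory per class. A sketch of how such a dataset is typically loaded (the directory layout is an assumption, not published with the card):
```python
from datasets import load_dataset

# Assumed layout: indian_food/<class_name>/<image files>, e.g. biryani/, idli/.
dataset = load_dataset("imagefolder", data_dir="indian_food/")
labels = dataset["train"].features["label"].names
print(len(labels), "classes:", labels[:5], "...")
```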
|
[
"adhirasam",
"aloo_gobi",
"bhatura",
"bhindi_masala",
"biryani",
"boondi",
"burger",
"butter_chicken",
"butter_naan",
"chai",
"chak_hao_kheer",
"cham_cham",
"aloo_matar",
"chana_masala",
"chapati",
"chhena_kheeri",
"chicken_razala",
"chicken_tikka",
"chicken_tikka_masala",
"chikki",
"chole_bhature",
"daal_baati_churma",
"daal_puri",
"aloo_methi",
"dal_makhani",
"dal_tadka",
"dharwad_pedha",
"dhokla",
"doodhpak",
"double_ka_meetha",
"dum_aloo",
"fried_rice",
"gajar_ka_halwa",
"gavvalu",
"aloo_shimla_mirch",
"ghevar",
"gulab_jamun",
"idli",
"imarti",
"jalebi",
"kaathi_rolls",
"kachori",
"kadai_paneer",
"kadhi_pakoda",
"kajjikaya",
"aloo_tikki",
"kakinada_khaja",
"kalakand",
"karela_bharta",
"kofta",
"kulfi",
"kuzhi_paniyaram",
"lassi",
"ledikeni",
"litti_chokha",
"lyangcha",
"anarsa",
"maach_jhol",
"makki_di_roti_sarson_da_saag",
"malapua",
"masala_dosa",
"misi_roti",
"misti_doi",
"modak",
"momos",
"mysore_pak",
"naan",
"ariselu",
"navrattan_korma",
"paani_puri",
"pakode",
"palak_paneer",
"paneer_butter_masala",
"pav_bhaji",
"phirni",
"pithe",
"pizza",
"poha",
"bandar_laddu",
"poornalu",
"pootharekulu",
"qubani_ka_meetha",
"rabri",
"ras_malai",
"rasgulla",
"samosa",
"sandesh",
"shankarpali",
"sheer_korma",
"basundi",
"sheera",
"shrikhand",
"sohan_halwa",
"sohan_papdi",
"sutar_feni",
"unni_appam"
] |
prithivMLmods/BnW-vs-Colored-Detection
|

# **BnW-vs-Colored-Detection**
> **BnW-vs-Colored-Detection** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for single-label image classification. It is designed to distinguish between black & white and colored images using the **SiglipForImageClassification** architecture.
```text
Classification Report:
              precision    recall  f1-score   support

       B & W     0.9982    0.9996    0.9989      5000
     Colored     0.9996    0.9982    0.9989      5000

    accuracy                         0.9989     10000
   macro avg     0.9989    0.9989    0.9989     10000
weighted avg     0.9989    0.9989    0.9989     10000
```

---
The model categorizes images into 2 classes:
```
Class 0: "B & W"
Class 1: "Colored"
```
---
## **Install dependencies**
```python
!pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/BnW-vs-Colored-Detection" # Updated model name
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def classify_bw_colored(image):
    """Predicts if an image is Black & White or Colored."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    labels = {
        "0": "B & W", "1": "Colored"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=classify_bw_colored,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="BnW vs Colored Detection",
    description="Upload an image to detect if it is Black & White or Colored."
)

if __name__ == "__main__":
    iface.launch()
```
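For quick checks without the Gradio UI, the same checkpoint should also load through the high-level `pipeline` API (a minimal sketch, not part of the original card):
```python
from transformers import pipeline

detector = pipeline("image-classification", model="prithivMLmods/BnW-vs-Colored-Detection")
print(detector("photo.jpg"))  # "photo.jpg" is a placeholder path
```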
---
## **Intended Use:**
The **BnW-vs-Colored-Detection** model is designed to classify images by color mode. Potential use cases include:
- **Archive Organization:** Separate historical B&W images from modern colored ones.
- **Data Filtering:** Preprocess image datasets by removing or labeling specific types.
- **Digital Restoration:** Assist in determining candidates for colorization.
- **Search & Categorization:** Enable efficient tagging and filtering in image libraries.
|
[
"b & w",
"colored"
] |
encku/tuborg-multi-single
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.00197649747133255
f1: 0.9987211254096395
precision: 0.9974455176818073
recall: 1.0
auc: 0.9999999999999999
accuracy: 0.9987095733526897
|
[
"multiple",
"single"
] |
SodaXII/vit-base-patch16-224_rice-leaf-disease-augmented-v4_v5_pft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224_rice-leaf-disease-augmented-v4_v5_pft
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4332
- Accuracy: 0.8456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1701 | 0.5 | 64 | 1.9984 | 0.2349 |
| 1.878 | 1.0 | 128 | 1.6113 | 0.4698 |
| 1.4921 | 1.5 | 192 | 1.2635 | 0.6174 |
| 1.2126 | 2.0 | 256 | 1.0178 | 0.6812 |
| 0.9922 | 2.5 | 320 | 0.8928 | 0.7148 |
| 0.8858 | 3.0 | 384 | 0.7883 | 0.7483 |
| 0.7966 | 3.5 | 448 | 0.7408 | 0.7517 |
| 0.7427 | 4.0 | 512 | 0.6912 | 0.7651 |
| 0.7077 | 4.5 | 576 | 0.6795 | 0.7718 |
| 0.6796 | 5.0 | 640 | 0.6647 | 0.7785 |
| 0.6597 | 5.5 | 704 | 0.6684 | 0.7752 |
| 0.6652 | 6.0 | 768 | 0.6535 | 0.7752 |
| 0.6762 | 6.5 | 832 | 0.6533 | 0.7752 |
| 0.6277 | 7.0 | 896 | 0.6356 | 0.7886 |
| 0.6264 | 7.5 | 960 | 0.6008 | 0.7987 |
| 0.5906 | 8.0 | 1024 | 0.5791 | 0.8154 |
| 0.5596 | 8.5 | 1088 | 0.5789 | 0.8054 |
| 0.5619 | 9.0 | 1152 | 0.5785 | 0.7987 |
| 0.5381 | 9.5 | 1216 | 0.5532 | 0.8121 |
| 0.5275 | 10.0 | 1280 | 0.5591 | 0.8087 |
| 0.5183 | 10.5 | 1344 | 0.5555 | 0.8054 |
| 0.5236 | 11.0 | 1408 | 0.5551 | 0.8087 |
| 0.5198 | 11.5 | 1472 | 0.5649 | 0.8020 |
| 0.5128 | 12.0 | 1536 | 0.5312 | 0.8356 |
| 0.4932 | 12.5 | 1600 | 0.5238 | 0.8054 |
| 0.4854 | 13.0 | 1664 | 0.5234 | 0.8121 |
| 0.4674 | 13.5 | 1728 | 0.5142 | 0.8221 |
| 0.4614 | 14.0 | 1792 | 0.5109 | 0.8154 |
| 0.4558 | 14.5 | 1856 | 0.5095 | 0.8289 |
| 0.4419 | 15.0 | 1920 | 0.5043 | 0.8188 |
| 0.4362 | 15.5 | 1984 | 0.5034 | 0.8221 |
| 0.4496 | 16.0 | 2048 | 0.5032 | 0.8221 |
| 0.4484 | 16.5 | 2112 | 0.5017 | 0.8221 |
| 0.4325 | 17.0 | 2176 | 0.5015 | 0.8289 |
| 0.428 | 17.5 | 2240 | 0.4967 | 0.8221 |
| 0.4091 | 18.0 | 2304 | 0.4704 | 0.8356 |
| 0.405 | 18.5 | 2368 | 0.4792 | 0.8289 |
| 0.4012 | 19.0 | 2432 | 0.4750 | 0.8322 |
| 0.3887 | 19.5 | 2496 | 0.4750 | 0.8289 |
| 0.3986 | 20.0 | 2560 | 0.4711 | 0.8255 |
| 0.3983 | 20.5 | 2624 | 0.4713 | 0.8255 |
| 0.3857 | 21.0 | 2688 | 0.4750 | 0.8289 |
| 0.3925 | 21.5 | 2752 | 0.4506 | 0.8456 |
| 0.3787 | 22.0 | 2816 | 0.4622 | 0.8255 |
| 0.368 | 22.5 | 2880 | 0.4583 | 0.8389 |
| 0.3702 | 23.0 | 2944 | 0.4479 | 0.8423 |
| 0.3591 | 23.5 | 3008 | 0.4485 | 0.8389 |
| 0.3588 | 24.0 | 3072 | 0.4534 | 0.8356 |
| 0.3517 | 24.5 | 3136 | 0.4496 | 0.8356 |
| 0.3546 | 25.0 | 3200 | 0.4482 | 0.8389 |
| 0.3636 | 25.5 | 3264 | 0.4518 | 0.8356 |
| 0.3435 | 26.0 | 3328 | 0.4495 | 0.8322 |
| 0.3423 | 26.5 | 3392 | 0.4427 | 0.8322 |
| 0.3477 | 27.0 | 3456 | 0.4365 | 0.8423 |
| 0.3405 | 27.5 | 3520 | 0.4380 | 0.8389 |
| 0.3254 | 28.0 | 3584 | 0.4366 | 0.8389 |
| 0.3245 | 28.5 | 3648 | 0.4316 | 0.8423 |
| 0.3265 | 29.0 | 3712 | 0.4305 | 0.8423 |
| 0.3193 | 29.5 | 3776 | 0.4339 | 0.8456 |
| 0.3244 | 30.0 | 3840 | 0.4332 | 0.8456 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
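For reference, the schedule above maps to a `TrainingArguments` configuration along these lines (a hedged reconstruction; `output_dir` is a placeholder and `fp16=True` stands in for "Native AMP"):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-rice-leaf-disease",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=256,
    num_train_epochs=30,
    fp16=True,  # stands in for "Native AMP" mixed precision
)
```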
|
[
"bacterial leaf blight",
"brown spot",
"healthy rice leaf",
"leaf blast",
"leaf scald",
"narrow brown leaf spot",
"rice hispa",
"sheath blight"
] |
ricardoSLabs/Fer_vit_jaffe_GOOGLE_0
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fer_vit_jaffe_GOOGLE_0
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4000
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.2045 | 0.1 |
| No log | 2.0 | 2 | 2.0846 | 0.1333 |
| No log | 3.0 | 3 | 2.1060 | 0.1 |
| No log | 4.0 | 4 | 1.9913 | 0.1333 |
| No log | 5.0 | 5 | 1.9980 | 0.1 |
| No log | 6.0 | 6 | 1.8330 | 0.3 |
| No log | 7.0 | 7 | 1.9518 | 0.1333 |
| No log | 8.0 | 8 | 1.9296 | 0.1667 |
| No log | 9.0 | 9 | 1.8688 | 0.3333 |
| 1.9932 | 10.0 | 10 | 1.7509 | 0.3667 |
| 1.9932 | 11.0 | 11 | 1.6357 | 0.4 |
| 1.9932 | 12.0 | 12 | 1.5627 | 0.3667 |
| 1.9932 | 13.0 | 13 | 1.6459 | 0.3 |
| 1.9932 | 14.0 | 14 | 1.5215 | 0.4 |
| 1.9932 | 15.0 | 15 | 1.5421 | 0.3667 |
| 1.9932 | 16.0 | 16 | 1.4164 | 0.5 |
| 1.9932 | 17.0 | 17 | 1.4463 | 0.3667 |
| 1.9932 | 18.0 | 18 | 1.2905 | 0.4667 |
| 1.9932 | 19.0 | 19 | 1.2456 | 0.6 |
| 1.2161 | 20.0 | 20 | 1.2170 | 0.5667 |
| 1.2161 | 21.0 | 21 | 1.0307 | 0.6 |
| 1.2161 | 22.0 | 22 | 1.1198 | 0.6 |
| 1.2161 | 23.0 | 23 | 1.1648 | 0.5 |
| 1.2161 | 24.0 | 24 | 1.0260 | 0.6 |
| 1.2161 | 25.0 | 25 | 1.3020 | 0.5 |
| 1.2161 | 26.0 | 26 | 0.9796 | 0.6333 |
| 1.2161 | 27.0 | 27 | 0.9824 | 0.6667 |
| 1.2161 | 28.0 | 28 | 0.8884 | 0.7 |
| 1.2161 | 29.0 | 29 | 0.9246 | 0.6333 |
| 0.5116 | 30.0 | 30 | 0.8455 | 0.7333 |
| 0.5116 | 31.0 | 31 | 0.7960 | 0.7 |
| 0.5116 | 32.0 | 32 | 0.8179 | 0.7333 |
| 0.5116 | 33.0 | 33 | 0.8721 | 0.6667 |
| 0.5116 | 34.0 | 34 | 0.8279 | 0.7667 |
| 0.5116 | 35.0 | 35 | 0.6486 | 0.7667 |
| 0.5116 | 36.0 | 36 | 0.6816 | 0.7333 |
| 0.5116 | 37.0 | 37 | 0.8016 | 0.7333 |
| 0.5116 | 38.0 | 38 | 0.6464 | 0.8 |
| 0.5116 | 39.0 | 39 | 0.6922 | 0.7667 |
| 0.2101 | 40.0 | 40 | 0.6768 | 0.7667 |
| 0.2101 | 41.0 | 41 | 0.6408 | 0.7667 |
| 0.2101 | 42.0 | 42 | 0.5335 | 0.8333 |
| 0.2101 | 43.0 | 43 | 0.4862 | 0.8333 |
| 0.2101 | 44.0 | 44 | 0.3713 | 0.8667 |
| 0.2101 | 45.0 | 45 | 0.4382 | 0.8333 |
| 0.2101 | 46.0 | 46 | 0.6664 | 0.7667 |
| 0.2101 | 47.0 | 47 | 0.4865 | 0.8333 |
| 0.2101 | 48.0 | 48 | 0.4411 | 0.8 |
| 0.2101 | 49.0 | 49 | 0.4707 | 0.8667 |
| 0.0921 | 50.0 | 50 | 0.6355 | 0.7667 |
| 0.0921 | 51.0 | 51 | 0.3975 | 0.9 |
| 0.0921 | 52.0 | 52 | 0.4261 | 0.8333 |
| 0.0921 | 53.0 | 53 | 0.3944 | 0.8 |
| 0.0921 | 54.0 | 54 | 0.2987 | 0.9333 |
| 0.0921 | 55.0 | 55 | 0.4845 | 0.8667 |
| 0.0921 | 56.0 | 56 | 0.5880 | 0.7667 |
| 0.0921 | 57.0 | 57 | 0.6478 | 0.8333 |
| 0.0921 | 58.0 | 58 | 0.4498 | 0.8 |
| 0.0921 | 59.0 | 59 | 0.3165 | 0.8667 |
| 0.0488 | 60.0 | 60 | 0.5294 | 0.8333 |
| 0.0488 | 61.0 | 61 | 0.6030 | 0.8333 |
| 0.0488 | 62.0 | 62 | 0.4018 | 0.8333 |
| 0.0488 | 63.0 | 63 | 0.5076 | 0.8333 |
| 0.0488 | 64.0 | 64 | 0.5128 | 0.8667 |
| 0.0488 | 65.0 | 65 | 0.5164 | 0.8667 |
| 0.0488 | 66.0 | 66 | 0.4238 | 0.8333 |
| 0.0488 | 67.0 | 67 | 0.5057 | 0.8333 |
| 0.0488 | 68.0 | 68 | 0.6507 | 0.7667 |
| 0.0488 | 69.0 | 69 | 0.4623 | 0.8667 |
| 0.0336 | 70.0 | 70 | 0.4230 | 0.8333 |
| 0.0336 | 71.0 | 71 | 0.4669 | 0.8333 |
| 0.0336 | 72.0 | 72 | 0.4836 | 0.8333 |
| 0.0336 | 73.0 | 73 | 0.3458 | 0.9333 |
| 0.0336 | 74.0 | 74 | 0.4629 | 0.8667 |
| 0.0336 | 75.0 | 75 | 0.4426 | 0.7667 |
| 0.0336 | 76.0 | 76 | 0.4735 | 0.8 |
| 0.0336 | 77.0 | 77 | 0.5138 | 0.7667 |
| 0.0336 | 78.0 | 78 | 0.4728 | 0.8333 |
| 0.0336 | 79.0 | 79 | 0.3224 | 0.8667 |
| 0.0204 | 80.0 | 80 | 0.2733 | 0.8667 |
| 0.0204 | 81.0 | 81 | 0.4948 | 0.8333 |
| 0.0204 | 82.0 | 82 | 0.3923 | 0.9 |
| 0.0204 | 83.0 | 83 | 0.2380 | 0.9 |
| 0.0204 | 84.0 | 84 | 0.4343 | 0.8667 |
| 0.0204 | 85.0 | 85 | 0.4008 | 0.8 |
| 0.0204 | 86.0 | 86 | 0.3960 | 0.9 |
| 0.0204 | 87.0 | 87 | 0.4185 | 0.8667 |
| 0.0204 | 88.0 | 88 | 0.4394 | 0.8 |
| 0.0204 | 89.0 | 89 | 0.3055 | 0.9 |
| 0.0113 | 90.0 | 90 | 0.4782 | 0.7333 |
| 0.0113 | 91.0 | 91 | 0.4763 | 0.8667 |
| 0.0113 | 92.0 | 92 | 0.4404 | 0.9 |
| 0.0113 | 93.0 | 93 | 0.2787 | 0.9 |
| 0.0113 | 94.0 | 94 | 0.3599 | 0.9 |
| 0.0113 | 95.0 | 95 | 0.5665 | 0.8333 |
| 0.0113 | 96.0 | 96 | 0.3193 | 0.9333 |
| 0.0113 | 97.0 | 97 | 0.3259 | 0.8667 |
| 0.0113 | 98.0 | 98 | 0.3528 | 0.9333 |
| 0.0113 | 99.0 | 99 | 0.3905 | 0.8667 |
| 0.009 | 100.0 | 100 | 0.4000 | 0.8667 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
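A minimal inference sketch. The two-letter labels below appear to follow the JAFFE naming convention; the expansion used here is an assumption, not stated in the card:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ricardoSLabs/Fer_vit_jaffe_GOOGLE_0")

# Assumed JAFFE-style abbreviations; not confirmed by the card.
expand = {"an": "angry", "di": "disgust", "fe": "fear", "ha": "happy",
          "ne": "neutral", "sa": "sad", "su": "surprise"}
for pred in classifier("face.jpg"):  # placeholder path
    print(expand.get(pred["label"], pred["label"]), round(pred["score"], 3))
```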
|
[
"an",
"di",
"fe",
"ha",
"ne",
"sa",
"su"
] |
ricardoSLabs/Fer_vit_jaffe_GOOGLE_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fer_vit_jaffe_GOOGLE_1
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4137
- Accuracy: 0.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.4981 | 0.1333 |
| No log | 2.0 | 2 | 2.4022 | 0.1333 |
| No log | 3.0 | 3 | 2.2167 | 0.1 |
| No log | 4.0 | 4 | 2.0743 | 0.1333 |
| No log | 5.0 | 5 | 1.9393 | 0.1 |
| No log | 6.0 | 6 | 2.0201 | 0.1667 |
| No log | 7.0 | 7 | 1.9793 | 0.1667 |
| No log | 8.0 | 8 | 1.9287 | 0.2 |
| No log | 9.0 | 9 | 1.8316 | 0.1667 |
| 2.1031 | 10.0 | 10 | 1.6923 | 0.4667 |
| 2.1031 | 11.0 | 11 | 1.7380 | 0.2667 |
| 2.1031 | 12.0 | 12 | 1.7164 | 0.3333 |
| 2.1031 | 13.0 | 13 | 1.6525 | 0.3333 |
| 2.1031 | 14.0 | 14 | 1.5759 | 0.3333 |
| 2.1031 | 15.0 | 15 | 1.5251 | 0.3333 |
| 2.1031 | 16.0 | 16 | 1.4557 | 0.4333 |
| 2.1031 | 17.0 | 17 | 1.3619 | 0.4333 |
| 2.1031 | 18.0 | 18 | 1.2880 | 0.4333 |
| 2.1031 | 19.0 | 19 | 1.2356 | 0.5667 |
| 1.2981 | 20.0 | 20 | 1.1369 | 0.6 |
| 1.2981 | 21.0 | 21 | 1.1489 | 0.5 |
| 1.2981 | 22.0 | 22 | 1.0756 | 0.7 |
| 1.2981 | 23.0 | 23 | 1.0136 | 0.5333 |
| 1.2981 | 24.0 | 24 | 1.0509 | 0.5333 |
| 1.2981 | 25.0 | 25 | 0.9975 | 0.6 |
| 1.2981 | 26.0 | 26 | 0.9895 | 0.6 |
| 1.2981 | 27.0 | 27 | 0.9735 | 0.6 |
| 1.2981 | 28.0 | 28 | 0.9328 | 0.6 |
| 1.2981 | 29.0 | 29 | 0.9559 | 0.6667 |
| 0.5735 | 30.0 | 30 | 0.8359 | 0.7667 |
| 0.5735 | 31.0 | 31 | 0.8023 | 0.7667 |
| 0.5735 | 32.0 | 32 | 0.8285 | 0.6333 |
| 0.5735 | 33.0 | 33 | 0.7287 | 0.7 |
| 0.5735 | 34.0 | 34 | 0.7043 | 0.7667 |
| 0.5735 | 35.0 | 35 | 0.8992 | 0.7333 |
| 0.5735 | 36.0 | 36 | 0.8664 | 0.7667 |
| 0.5735 | 37.0 | 37 | 0.8023 | 0.7333 |
| 0.5735 | 38.0 | 38 | 0.6910 | 0.7667 |
| 0.5735 | 39.0 | 39 | 0.8197 | 0.6667 |
| 0.2477 | 40.0 | 40 | 0.5915 | 0.7667 |
| 0.2477 | 41.0 | 41 | 0.9184 | 0.6333 |
| 0.2477 | 42.0 | 42 | 0.6734 | 0.7 |
| 0.2477 | 43.0 | 43 | 0.9225 | 0.7 |
| 0.2477 | 44.0 | 44 | 0.5961 | 0.8 |
| 0.2477 | 45.0 | 45 | 0.7012 | 0.7 |
| 0.2477 | 46.0 | 46 | 0.9223 | 0.6 |
| 0.2477 | 47.0 | 47 | 0.5819 | 0.7 |
| 0.2477 | 48.0 | 48 | 0.7171 | 0.7333 |
| 0.2477 | 49.0 | 49 | 0.6416 | 0.7667 |
| 0.1117 | 50.0 | 50 | 0.8718 | 0.7 |
| 0.1117 | 51.0 | 51 | 0.4941 | 0.8 |
| 0.1117 | 52.0 | 52 | 0.7385 | 0.8 |
| 0.1117 | 53.0 | 53 | 0.6660 | 0.8333 |
| 0.1117 | 54.0 | 54 | 0.6988 | 0.8667 |
| 0.1117 | 55.0 | 55 | 0.7074 | 0.7667 |
| 0.1117 | 56.0 | 56 | 0.5847 | 0.8 |
| 0.1117 | 57.0 | 57 | 0.6636 | 0.8 |
| 0.1117 | 58.0 | 58 | 0.5520 | 0.8333 |
| 0.1117 | 59.0 | 59 | 0.6299 | 0.7667 |
| 0.0591 | 60.0 | 60 | 0.6717 | 0.7667 |
| 0.0591 | 61.0 | 61 | 0.4874 | 0.8333 |
| 0.0591 | 62.0 | 62 | 0.4603 | 0.8 |
| 0.0591 | 63.0 | 63 | 0.5516 | 0.7333 |
| 0.0591 | 64.0 | 64 | 0.4729 | 0.8 |
| 0.0591 | 65.0 | 65 | 0.5710 | 0.7667 |
| 0.0591 | 66.0 | 66 | 0.8985 | 0.7 |
| 0.0591 | 67.0 | 67 | 0.8074 | 0.7667 |
| 0.0591 | 68.0 | 68 | 0.5652 | 0.8 |
| 0.0591 | 69.0 | 69 | 0.5538 | 0.8333 |
| 0.0296 | 70.0 | 70 | 0.5727 | 0.7333 |
| 0.0296 | 71.0 | 71 | 0.6359 | 0.8 |
| 0.0296 | 72.0 | 72 | 0.6932 | 0.7333 |
| 0.0296 | 73.0 | 73 | 0.9025 | 0.6667 |
| 0.0296 | 74.0 | 74 | 0.6639 | 0.7333 |
| 0.0296 | 75.0 | 75 | 0.8385 | 0.7333 |
| 0.0296 | 76.0 | 76 | 0.5827 | 0.8 |
| 0.0296 | 77.0 | 77 | 0.5443 | 0.8667 |
| 0.0296 | 78.0 | 78 | 0.6330 | 0.8333 |
| 0.0296 | 79.0 | 79 | 0.6706 | 0.7333 |
| 0.0175 | 80.0 | 80 | 0.7803 | 0.8 |
| 0.0175 | 81.0 | 81 | 0.5401 | 0.8 |
| 0.0175 | 82.0 | 82 | 0.6806 | 0.8333 |
| 0.0175 | 83.0 | 83 | 0.3827 | 0.8 |
| 0.0175 | 84.0 | 84 | 0.7853 | 0.8 |
| 0.0175 | 85.0 | 85 | 0.4391 | 0.8333 |
| 0.0175 | 86.0 | 86 | 0.6061 | 0.8 |
| 0.0175 | 87.0 | 87 | 0.4797 | 0.8 |
| 0.0175 | 88.0 | 88 | 0.4386 | 0.8333 |
| 0.0175 | 89.0 | 89 | 0.6556 | 0.8 |
| 0.0121 | 90.0 | 90 | 0.7927 | 0.8 |
| 0.0121 | 91.0 | 91 | 0.4925 | 0.8333 |
| 0.0121 | 92.0 | 92 | 0.6280 | 0.7667 |
| 0.0121 | 93.0 | 93 | 0.3561 | 0.9 |
| 0.0121 | 94.0 | 94 | 0.6058 | 0.8333 |
| 0.0121 | 95.0 | 95 | 0.5086 | 0.8 |
| 0.0121 | 96.0 | 96 | 0.3854 | 0.8667 |
| 0.0121 | 97.0 | 97 | 0.8370 | 0.7667 |
| 0.0121 | 98.0 | 98 | 0.6506 | 0.7333 |
| 0.0121 | 99.0 | 99 | 0.6100 | 0.7667 |
| 0.0088 | 100.0 | 100 | 0.4137 | 0.8333 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"an",
"di",
"fe",
"ha",
"ne",
"sa",
"su"
] |
ricardoSLabs/Fer_vit_jaffe_crop_GOOGLE_0
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fer_vit_jaffe_crop_GOOGLE_0
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4370
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.4680 | 0.1333 |
| No log | 2.0 | 2 | 2.2791 | 0.2333 |
| No log | 3.0 | 3 | 2.2505 | 0.1667 |
| No log | 4.0 | 4 | 2.0650 | 0.1 |
| No log | 5.0 | 5 | 2.1205 | 0.0333 |
| No log | 6.0 | 6 | 2.0198 | 0.1 |
| No log | 7.0 | 7 | 2.0317 | 0.1333 |
| No log | 8.0 | 8 | 1.9863 | 0.2333 |
| No log | 9.0 | 9 | 1.9390 | 0.3 |
| 2.1093 | 10.0 | 10 | 1.8465 | 0.2667 |
| 2.1093 | 11.0 | 11 | 1.6948 | 0.4333 |
| 2.1093 | 12.0 | 12 | 1.6453 | 0.4333 |
| 2.1093 | 13.0 | 13 | 1.6213 | 0.3333 |
| 2.1093 | 14.0 | 14 | 1.6045 | 0.3667 |
| 2.1093 | 15.0 | 15 | 1.5593 | 0.4333 |
| 2.1093 | 16.0 | 16 | 1.5160 | 0.5 |
| 2.1093 | 17.0 | 17 | 1.5322 | 0.4667 |
| 2.1093 | 18.0 | 18 | 1.4750 | 0.5 |
| 2.1093 | 19.0 | 19 | 1.3553 | 0.5333 |
| 1.3827 | 20.0 | 20 | 1.2704 | 0.4667 |
| 1.3827 | 21.0 | 21 | 1.2823 | 0.4667 |
| 1.3827 | 22.0 | 22 | 1.3789 | 0.5333 |
| 1.3827 | 23.0 | 23 | 1.2368 | 0.5667 |
| 1.3827 | 24.0 | 24 | 1.0561 | 0.6 |
| 1.3827 | 25.0 | 25 | 1.2039 | 0.5333 |
| 1.3827 | 26.0 | 26 | 1.2061 | 0.5333 |
| 1.3827 | 27.0 | 27 | 0.9144 | 0.6333 |
| 1.3827 | 28.0 | 28 | 1.0374 | 0.6 |
| 1.3827 | 29.0 | 29 | 1.0670 | 0.6333 |
| 0.6174 | 30.0 | 30 | 1.0691 | 0.6667 |
| 0.6174 | 31.0 | 31 | 0.9445 | 0.6667 |
| 0.6174 | 32.0 | 32 | 0.8885 | 0.5667 |
| 0.6174 | 33.0 | 33 | 0.9647 | 0.6 |
| 0.6174 | 34.0 | 34 | 1.0187 | 0.5667 |
| 0.6174 | 35.0 | 35 | 0.9037 | 0.6333 |
| 0.6174 | 36.0 | 36 | 0.9069 | 0.6 |
| 0.6174 | 37.0 | 37 | 0.8999 | 0.6333 |
| 0.6174 | 38.0 | 38 | 0.6198 | 0.7667 |
| 0.6174 | 39.0 | 39 | 0.8034 | 0.6667 |
| 0.2248 | 40.0 | 40 | 0.9049 | 0.6667 |
| 0.2248 | 41.0 | 41 | 0.7231 | 0.6667 |
| 0.2248 | 42.0 | 42 | 0.6554 | 0.7 |
| 0.2248 | 43.0 | 43 | 0.6591 | 0.8 |
| 0.2248 | 44.0 | 44 | 0.7196 | 0.8 |
| 0.2248 | 45.0 | 45 | 0.7233 | 0.7 |
| 0.2248 | 46.0 | 46 | 0.6112 | 0.8 |
| 0.2248 | 47.0 | 47 | 0.4299 | 0.8667 |
| 0.2248 | 48.0 | 48 | 0.5479 | 0.8 |
| 0.2248 | 49.0 | 49 | 0.5996 | 0.8333 |
| 0.0773 | 50.0 | 50 | 0.6714 | 0.7333 |
| 0.0773 | 51.0 | 51 | 0.4989 | 0.8333 |
| 0.0773 | 52.0 | 52 | 0.4956 | 0.8667 |
| 0.0773 | 53.0 | 53 | 0.4367 | 0.8333 |
| 0.0773 | 54.0 | 54 | 0.4542 | 0.8333 |
| 0.0773 | 55.0 | 55 | 0.5991 | 0.8 |
| 0.0773 | 56.0 | 56 | 0.6906 | 0.7667 |
| 0.0773 | 57.0 | 57 | 0.6667 | 0.7333 |
| 0.0773 | 58.0 | 58 | 0.5142 | 0.8 |
| 0.0773 | 59.0 | 59 | 0.5593 | 0.8 |
| 0.035 | 60.0 | 60 | 0.7527 | 0.7 |
| 0.035 | 61.0 | 61 | 0.4706 | 0.8667 |
| 0.035 | 62.0 | 62 | 0.5345 | 0.8333 |
| 0.035 | 63.0 | 63 | 0.5804 | 0.7667 |
| 0.035 | 64.0 | 64 | 0.5549 | 0.7667 |
| 0.035 | 65.0 | 65 | 0.5665 | 0.8 |
| 0.035 | 66.0 | 66 | 0.3258 | 0.9333 |
| 0.035 | 67.0 | 67 | 0.4890 | 0.8333 |
| 0.035 | 68.0 | 68 | 0.4657 | 0.8333 |
| 0.035 | 69.0 | 69 | 0.6546 | 0.8 |
| 0.0192 | 70.0 | 70 | 0.4962 | 0.8667 |
| 0.0192 | 71.0 | 71 | 0.5801 | 0.8 |
| 0.0192 | 72.0 | 72 | 0.5365 | 0.8667 |
| 0.0192 | 73.0 | 73 | 0.3524 | 0.8667 |
| 0.0192 | 74.0 | 74 | 0.5291 | 0.8667 |
| 0.0192 | 75.0 | 75 | 0.4613 | 0.9333 |
| 0.0192 | 76.0 | 76 | 0.5031 | 0.8 |
| 0.0192 | 77.0 | 77 | 0.4986 | 0.8333 |
| 0.0192 | 78.0 | 78 | 0.6103 | 0.8 |
| 0.0192 | 79.0 | 79 | 0.5855 | 0.8333 |
| 0.0126 | 80.0 | 80 | 0.6136 | 0.7667 |
| 0.0126 | 81.0 | 81 | 0.5112 | 0.8667 |
| 0.0126 | 82.0 | 82 | 0.4770 | 0.8333 |
| 0.0126 | 83.0 | 83 | 0.4016 | 0.8667 |
| 0.0126 | 84.0 | 84 | 0.4946 | 0.8667 |
| 0.0126 | 85.0 | 85 | 0.5542 | 0.7667 |
| 0.0126 | 86.0 | 86 | 0.4037 | 0.8667 |
| 0.0126 | 87.0 | 87 | 0.4775 | 0.8 |
| 0.0126 | 88.0 | 88 | 0.5146 | 0.8333 |
| 0.0126 | 89.0 | 89 | 0.5603 | 0.7667 |
| 0.0072 | 90.0 | 90 | 0.5734 | 0.8 |
| 0.0072 | 91.0 | 91 | 0.5937 | 0.8 |
| 0.0072 | 92.0 | 92 | 0.5328 | 0.8 |
| 0.0072 | 93.0 | 93 | 0.4362 | 0.8667 |
| 0.0072 | 94.0 | 94 | 0.6317 | 0.7667 |
| 0.0072 | 95.0 | 95 | 0.4078 | 0.8667 |
| 0.0072 | 96.0 | 96 | 0.5680 | 0.8 |
| 0.0072 | 97.0 | 97 | 0.6209 | 0.8 |
| 0.0072 | 98.0 | 98 | 0.5360 | 0.8 |
| 0.0072 | 99.0 | 99 | 0.4784 | 0.8667 |
| 0.0093 | 100.0 | 100 | 0.4370 | 0.9 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"an",
"di",
"fe",
"ha",
"ne",
"sa",
"su"
] |
ricardoSLabs/Fer_vit_jaffe_crop_GOOGLE_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fer_vit_jaffe_crop_GOOGLE_1
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.8373 | 0.1667 |
| No log | 2.0 | 2 | 2.6726 | 0.1667 |
| No log | 3.0 | 3 | 2.4131 | 0.1 |
| No log | 4.0 | 4 | 2.1618 | 0.1 |
| No log | 5.0 | 5 | 1.9925 | 0.2333 |
| No log | 6.0 | 6 | 2.0082 | 0.1 |
| No log | 7.0 | 7 | 2.0631 | 0.1667 |
| No log | 8.0 | 8 | 1.9582 | 0.1667 |
| No log | 9.0 | 9 | 1.9078 | 0.1 |
| 2.2546 | 10.0 | 10 | 1.8412 | 0.2333 |
| 2.2546 | 11.0 | 11 | 1.7763 | 0.3 |
| 2.2546 | 12.0 | 12 | 1.7447 | 0.4 |
| 2.2546 | 13.0 | 13 | 1.6744 | 0.2667 |
| 2.2546 | 14.0 | 14 | 1.6643 | 0.3333 |
| 2.2546 | 15.0 | 15 | 1.5688 | 0.4333 |
| 2.2546 | 16.0 | 16 | 1.5458 | 0.4333 |
| 2.2546 | 17.0 | 17 | 1.5068 | 0.5 |
| 2.2546 | 18.0 | 18 | 1.4505 | 0.4333 |
| 2.2546 | 19.0 | 19 | 1.3640 | 0.4 |
| 1.352 | 20.0 | 20 | 1.3531 | 0.3667 |
| 1.352 | 21.0 | 21 | 1.3028 | 0.4667 |
| 1.352 | 22.0 | 22 | 1.2587 | 0.5 |
| 1.352 | 23.0 | 23 | 1.3161 | 0.5 |
| 1.352 | 24.0 | 24 | 1.2402 | 0.4333 |
| 1.352 | 25.0 | 25 | 1.1801 | 0.4333 |
| 1.352 | 26.0 | 26 | 1.1508 | 0.5333 |
| 1.352 | 27.0 | 27 | 1.0463 | 0.6667 |
| 1.352 | 28.0 | 28 | 1.0176 | 0.5 |
| 1.352 | 29.0 | 29 | 1.0326 | 0.5333 |
| 0.6369 | 30.0 | 30 | 0.9021 | 0.6 |
| 0.6369 | 31.0 | 31 | 0.9485 | 0.5 |
| 0.6369 | 32.0 | 32 | 0.8393 | 0.6 |
| 0.6369 | 33.0 | 33 | 0.9536 | 0.5667 |
| 0.6369 | 34.0 | 34 | 0.8815 | 0.6333 |
| 0.6369 | 35.0 | 35 | 0.8329 | 0.6333 |
| 0.6369 | 36.0 | 36 | 0.7946 | 0.7333 |
| 0.6369 | 37.0 | 37 | 0.8582 | 0.6333 |
| 0.6369 | 38.0 | 38 | 0.7418 | 0.7667 |
| 0.6369 | 39.0 | 39 | 0.7232 | 0.7667 |
| 0.2532 | 40.0 | 40 | 0.7750 | 0.7333 |
| 0.2532 | 41.0 | 41 | 0.7209 | 0.7333 |
| 0.2532 | 42.0 | 42 | 0.6851 | 0.7 |
| 0.2532 | 43.0 | 43 | 0.6823 | 0.7667 |
| 0.2532 | 44.0 | 44 | 0.5122 | 0.7667 |
| 0.2532 | 45.0 | 45 | 0.5930 | 0.7667 |
| 0.2532 | 46.0 | 46 | 0.6531 | 0.7333 |
| 0.2532 | 47.0 | 47 | 0.5651 | 0.8 |
| 0.2532 | 48.0 | 48 | 0.5014 | 0.8667 |
| 0.2532 | 49.0 | 49 | 0.4853 | 0.8333 |
| 0.098 | 50.0 | 50 | 0.4904 | 0.8667 |
| 0.098 | 51.0 | 51 | 0.6781 | 0.7 |
| 0.098 | 52.0 | 52 | 0.6540 | 0.8 |
| 0.098 | 53.0 | 53 | 0.7150 | 0.7 |
| 0.098 | 54.0 | 54 | 0.5828 | 0.8 |
| 0.098 | 55.0 | 55 | 0.5115 | 0.8 |
| 0.098 | 56.0 | 56 | 0.4744 | 0.8 |
| 0.098 | 57.0 | 57 | 0.4548 | 0.8667 |
| 0.098 | 58.0 | 58 | 0.4936 | 0.8667 |
| 0.098 | 59.0 | 59 | 0.3534 | 0.8667 |
| 0.0473 | 60.0 | 60 | 0.6354 | 0.7333 |
| 0.0473 | 61.0 | 61 | 0.4243 | 0.9 |
| 0.0473 | 62.0 | 62 | 0.2744 | 0.9333 |
| 0.0473 | 63.0 | 63 | 0.4937 | 0.8333 |
| 0.0473 | 64.0 | 64 | 0.3869 | 0.9 |
| 0.0473 | 65.0 | 65 | 0.5379 | 0.8667 |
| 0.0473 | 66.0 | 66 | 0.4878 | 0.8 |
| 0.0473 | 67.0 | 67 | 0.6310 | 0.7667 |
| 0.0473 | 68.0 | 68 | 0.5021 | 0.8 |
| 0.0473 | 69.0 | 69 | 0.5109 | 0.8667 |
| 0.0218 | 70.0 | 70 | 0.4052 | 0.8667 |
| 0.0218 | 71.0 | 71 | 0.3340 | 0.9 |
| 0.0218 | 72.0 | 72 | 0.4823 | 0.8333 |
| 0.0218 | 73.0 | 73 | 0.2980 | 0.9 |
| 0.0218 | 74.0 | 74 | 0.3515 | 0.8667 |
| 0.0218 | 75.0 | 75 | 0.4199 | 0.8 |
| 0.0218 | 76.0 | 76 | 0.4145 | 0.9 |
| 0.0218 | 77.0 | 77 | 0.4639 | 0.7667 |
| 0.0218 | 78.0 | 78 | 0.3376 | 0.8667 |
| 0.0218 | 79.0 | 79 | 0.3546 | 0.8667 |
| 0.0121 | 80.0 | 80 | 0.3863 | 0.8667 |
| 0.0121 | 81.0 | 81 | 0.3637 | 0.8667 |
| 0.0121 | 82.0 | 82 | 0.3622 | 0.8667 |
| 0.0121 | 83.0 | 83 | 0.4142 | 0.8667 |
| 0.0121 | 84.0 | 84 | 0.4829 | 0.7667 |
| 0.0121 | 85.0 | 85 | 0.4039 | 0.8667 |
| 0.0121 | 86.0 | 86 | 0.3893 | 0.9 |
| 0.0121 | 87.0 | 87 | 0.5483 | 0.8333 |
| 0.0121 | 88.0 | 88 | 0.3928 | 0.8333 |
| 0.0121 | 89.0 | 89 | 0.3336 | 0.8667 |
| 0.0077 | 90.0 | 90 | 0.2689 | 0.9333 |
| 0.0077 | 91.0 | 91 | 0.3586 | 0.9333 |
| 0.0077 | 92.0 | 92 | 0.4284 | 0.9 |
| 0.0077 | 93.0 | 93 | 0.4150 | 0.8333 |
| 0.0077 | 94.0 | 94 | 0.2941 | 0.9 |
| 0.0077 | 95.0 | 95 | 0.2634 | 0.8667 |
| 0.0077 | 96.0 | 96 | 0.2631 | 0.9333 |
| 0.0077 | 97.0 | 97 | 0.3490 | 0.9333 |
| 0.0077 | 98.0 | 98 | 0.3602 | 0.9 |
| 0.0077 | 99.0 | 99 | 0.2326 | 0.9333 |
| 0.0065 | 100.0 | 100 | 0.2564 | 0.9 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
[
"an",
"di",
"fe",
"ha",
"ne",
"sa",
"su"
] |
SodaXII/swin-base-patch4-window7-224_rice-leaf-disease-augmented-v4_v5_pft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224_rice-leaf-disease-augmented-v4_v5_pft
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4024
- Accuracy: 0.8490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0846 | 0.5 | 64 | 1.9602 | 0.2483 |
| 1.7504 | 1.0 | 128 | 1.5308 | 0.5034 |
| 1.3704 | 1.5 | 192 | 1.1825 | 0.6107 |
| 1.113 | 2.0 | 256 | 0.9313 | 0.7148 |
| 0.9305 | 2.5 | 320 | 0.8132 | 0.7617 |
| 0.8171 | 3.0 | 384 | 0.7214 | 0.7651 |
| 0.7497 | 3.5 | 448 | 0.6650 | 0.7785 |
| 0.7039 | 4.0 | 512 | 0.6244 | 0.8188 |
| 0.6696 | 4.5 | 576 | 0.6003 | 0.8188 |
| 0.649 | 5.0 | 640 | 0.5976 | 0.8121 |
| 0.6334 | 5.5 | 704 | 0.6032 | 0.8020 |
| 0.6256 | 6.0 | 768 | 0.5859 | 0.8188 |
| 0.6417 | 6.5 | 832 | 0.5851 | 0.8188 |
| 0.5991 | 7.0 | 896 | 0.5835 | 0.8154 |
| 0.6014 | 7.5 | 960 | 0.5394 | 0.8322 |
| 0.5614 | 8.0 | 1024 | 0.5211 | 0.8356 |
| 0.536 | 8.5 | 1088 | 0.5184 | 0.8121 |
| 0.5443 | 9.0 | 1152 | 0.5256 | 0.8154 |
| 0.5129 | 9.5 | 1216 | 0.5026 | 0.8221 |
| 0.5084 | 10.0 | 1280 | 0.5028 | 0.8188 |
| 0.5081 | 10.5 | 1344 | 0.4996 | 0.8188 |
| 0.4936 | 11.0 | 1408 | 0.5004 | 0.8188 |
| 0.5 | 11.5 | 1472 | 0.5091 | 0.8121 |
| 0.4934 | 12.0 | 1536 | 0.4892 | 0.8356 |
| 0.4831 | 12.5 | 1600 | 0.4736 | 0.8322 |
| 0.4638 | 13.0 | 1664 | 0.4727 | 0.8255 |
| 0.4549 | 13.5 | 1728 | 0.4552 | 0.8456 |
| 0.4454 | 14.0 | 1792 | 0.4646 | 0.8322 |
| 0.44 | 14.5 | 1856 | 0.4610 | 0.8322 |
| 0.4304 | 15.0 | 1920 | 0.4574 | 0.8356 |
| 0.4255 | 15.5 | 1984 | 0.4550 | 0.8356 |
| 0.4353 | 16.0 | 2048 | 0.4548 | 0.8356 |
| 0.4456 | 16.5 | 2112 | 0.4465 | 0.8322 |
| 0.4047 | 17.0 | 2176 | 0.4619 | 0.8255 |
| 0.4119 | 17.5 | 2240 | 0.4497 | 0.8389 |
| 0.4009 | 18.0 | 2304 | 0.4329 | 0.8423 |
| 0.3901 | 18.5 | 2368 | 0.4286 | 0.8456 |
| 0.3936 | 19.0 | 2432 | 0.4318 | 0.8456 |
| 0.3761 | 19.5 | 2496 | 0.4297 | 0.8456 |
| 0.3885 | 20.0 | 2560 | 0.4279 | 0.8456 |
| 0.3806 | 20.5 | 2624 | 0.4271 | 0.8456 |
| 0.3779 | 21.0 | 2688 | 0.4352 | 0.8523 |
| 0.3746 | 21.5 | 2752 | 0.4256 | 0.8490 |
| 0.3708 | 22.0 | 2816 | 0.4253 | 0.8557 |
| 0.362 | 22.5 | 2880 | 0.4205 | 0.8490 |
| 0.3558 | 23.0 | 2944 | 0.4122 | 0.8490 |
| 0.3507 | 23.5 | 3008 | 0.4147 | 0.8423 |
| 0.3481 | 24.0 | 3072 | 0.4134 | 0.8456 |
| 0.3452 | 24.5 | 3136 | 0.4120 | 0.8456 |
| 0.3437 | 25.0 | 3200 | 0.4117 | 0.8490 |
| 0.3499 | 25.5 | 3264 | 0.4164 | 0.8523 |
| 0.3373 | 26.0 | 3328 | 0.4109 | 0.8490 |
| 0.3468 | 26.5 | 3392 | 0.3999 | 0.8523 |
| 0.3297 | 27.0 | 3456 | 0.4079 | 0.8523 |
| 0.329 | 27.5 | 3520 | 0.3997 | 0.8423 |
| 0.3293 | 28.0 | 3584 | 0.4051 | 0.8423 |
| 0.3147 | 28.5 | 3648 | 0.3987 | 0.8523 |
| 0.3239 | 29.0 | 3712 | 0.4013 | 0.8523 |
| 0.3147 | 29.5 | 3776 | 0.4031 | 0.8490 |
| 0.3167 | 30.0 | 3840 | 0.4024 | 0.8490 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
[
"bacterial leaf blight",
"brown spot",
"healthy rice leaf",
"leaf blast",
"leaf scald",
"narrow brown leaf spot",
"rice hispa",
"sheath blight"
] |
SodaXII/dinov2-base_rice-leaf-disease-augmented-v4_v5_pft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base_rice-leaf-disease-augmented-v4_v5_pft
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2563
- Accuracy: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.283 | 0.5 | 64 | 1.9783 | 0.2013 |
| 1.4862 | 1.0 | 128 | 1.1465 | 0.6745 |
| 0.9264 | 1.5 | 192 | 0.7663 | 0.7550 |
| 0.6628 | 2.0 | 256 | 0.6171 | 0.8020 |
| 0.5289 | 2.5 | 320 | 0.5400 | 0.8322 |
| 0.4616 | 3.0 | 384 | 0.5054 | 0.8322 |
| 0.4022 | 3.5 | 448 | 0.4499 | 0.8490 |
| 0.3653 | 4.0 | 512 | 0.4136 | 0.8691 |
| 0.3432 | 4.5 | 576 | 0.4172 | 0.8691 |
| 0.3168 | 5.0 | 640 | 0.3760 | 0.8826 |
| 0.3124 | 5.5 | 704 | 0.3938 | 0.8658 |
| 0.2992 | 6.0 | 768 | 0.3811 | 0.8859 |
| 0.3062 | 6.5 | 832 | 0.3866 | 0.8826 |
| 0.2951 | 7.0 | 896 | 0.3926 | 0.8725 |
| 0.2967 | 7.5 | 960 | 0.3657 | 0.8758 |
| 0.2635 | 8.0 | 1024 | 0.3501 | 0.8859 |
| 0.2474 | 8.5 | 1088 | 0.3510 | 0.8993 |
| 0.2452 | 9.0 | 1152 | 0.3402 | 0.8960 |
| 0.2185 | 9.5 | 1216 | 0.3353 | 0.8926 |
| 0.2283 | 10.0 | 1280 | 0.3332 | 0.9060 |
| 0.2138 | 10.5 | 1344 | 0.3235 | 0.9027 |
| 0.2143 | 11.0 | 1408 | 0.3267 | 0.8993 |
| 0.2156 | 11.5 | 1472 | 0.3379 | 0.8926 |
| 0.2125 | 12.0 | 1536 | 0.3256 | 0.8826 |
| 0.1998 | 12.5 | 1600 | 0.3135 | 0.8926 |
| 0.1961 | 13.0 | 1664 | 0.3031 | 0.8993 |
| 0.1818 | 13.5 | 1728 | 0.2855 | 0.9228 |
| 0.1788 | 14.0 | 1792 | 0.2977 | 0.9094 |
| 0.1709 | 14.5 | 1856 | 0.3101 | 0.9060 |
| 0.1652 | 15.0 | 1920 | 0.2969 | 0.9128 |
| 0.1578 | 15.5 | 1984 | 0.2916 | 0.9161 |
| 0.1661 | 16.0 | 2048 | 0.2904 | 0.9195 |
| 0.1727 | 16.5 | 2112 | 0.2854 | 0.9128 |
| 0.1617 | 17.0 | 2176 | 0.2802 | 0.9128 |
| 0.1519 | 17.5 | 2240 | 0.2954 | 0.8993 |
| 0.1491 | 18.0 | 2304 | 0.2812 | 0.9094 |
| 0.1366 | 18.5 | 2368 | 0.2877 | 0.9060 |
| 0.1424 | 19.0 | 2432 | 0.2791 | 0.9195 |
| 0.1297 | 19.5 | 2496 | 0.2810 | 0.9161 |
| 0.1325 | 20.0 | 2560 | 0.2787 | 0.9161 |
| 0.1309 | 20.5 | 2624 | 0.2764 | 0.9195 |
| 0.1298 | 21.0 | 2688 | 0.2715 | 0.9161 |
| 0.1334 | 21.5 | 2752 | 0.2768 | 0.9094 |
| 0.1268 | 22.0 | 2816 | 0.2712 | 0.9128 |
| 0.1169 | 22.5 | 2880 | 0.2739 | 0.9195 |
| 0.1203 | 23.0 | 2944 | 0.2607 | 0.9262 |
| 0.1091 | 23.5 | 3008 | 0.2703 | 0.9161 |
| 0.1123 | 24.0 | 3072 | 0.2611 | 0.9262 |
| 0.1056 | 24.5 | 3136 | 0.2606 | 0.9262 |
| 0.1061 | 25.0 | 3200 | 0.2630 | 0.9262 |
| 0.112 | 25.5 | 3264 | 0.2899 | 0.9094 |
| 0.1076 | 26.0 | 3328 | 0.2749 | 0.9195 |
| 0.105 | 26.5 | 3392 | 0.2570 | 0.9329 |
| 0.1048 | 27.0 | 3456 | 0.2654 | 0.9295 |
| 0.098 | 27.5 | 3520 | 0.2593 | 0.9262 |
| 0.0951 | 28.0 | 3584 | 0.2591 | 0.9295 |
| 0.0915 | 28.5 | 3648 | 0.2553 | 0.9262 |
| 0.0907 | 29.0 | 3712 | 0.2568 | 0.9295 |
| 0.0885 | 29.5 | 3776 | 0.2582 | 0.9295 |
| 0.087 | 30.0 | 3840 | 0.2563 | 0.9295 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
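For disease triage, per-class behaviour often matters more than the single accuracy number. A sketch of a per-class breakdown over the eight labels listed below (toy predictions, not the model's real outputs):
```python
from sklearn.metrics import classification_report

labels = ["bacterial leaf blight", "brown spot", "healthy rice leaf",
          "leaf blast", "leaf scald", "narrow brown leaf spot",
          "rice hispa", "sheath blight"]

# Toy labels standing in for the real evaluation split.
y_true = [0, 1, 2, 3, 4, 5, 6, 7]
y_pred = [0, 1, 2, 3, 4, 5, 6, 6]
print(classification_report(y_true, y_pred, labels=list(range(8)),
                            target_names=labels, zero_division=0))
```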
|
[
"bacterial leaf blight",
"brown spot",
"healthy rice leaf",
"leaf blast",
"leaf scald",
"narrow brown leaf spot",
"rice hispa",
"sheath blight"
] |
SodaXII/deit-base-patch16-224_rice-leaf-disease-augmented-v4_v5_pft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224_rice-leaf-disease-augmented-v4_v5_pft
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5294
- Accuracy: 0.8356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0319 | 0.5 | 64 | 1.9391 | 0.3020 |
| 1.8086 | 1.0 | 128 | 1.6836 | 0.4530 |
| 1.5238 | 1.5 | 192 | 1.4166 | 0.5738 |
| 1.2924 | 2.0 | 256 | 1.2020 | 0.6510 |
| 1.1049 | 2.5 | 320 | 1.0590 | 0.6913 |
| 0.9923 | 3.0 | 384 | 0.9608 | 0.7181 |
| 0.9115 | 3.5 | 448 | 0.9018 | 0.7450 |
| 0.8588 | 4.0 | 512 | 0.8649 | 0.7450 |
| 0.8239 | 4.5 | 576 | 0.8442 | 0.7584 |
| 0.7945 | 5.0 | 640 | 0.8289 | 0.7584 |
| 0.7874 | 5.5 | 704 | 0.8279 | 0.7651 |
| 0.7697 | 6.0 | 768 | 0.8159 | 0.7651 |
| 0.7888 | 6.5 | 832 | 0.8148 | 0.7685 |
| 0.7464 | 7.0 | 896 | 0.7960 | 0.7651 |
| 0.7434 | 7.5 | 960 | 0.7555 | 0.7819 |
| 0.6914 | 8.0 | 1024 | 0.7372 | 0.7785 |
| 0.6704 | 8.5 | 1088 | 0.7182 | 0.7886 |
| 0.6627 | 9.0 | 1152 | 0.7151 | 0.7919 |
| 0.6469 | 9.5 | 1216 | 0.6991 | 0.7886 |
| 0.6291 | 10.0 | 1280 | 0.6958 | 0.7886 |
| 0.6276 | 10.5 | 1344 | 0.6928 | 0.7919 |
| 0.6253 | 11.0 | 1408 | 0.6923 | 0.7886 |
| 0.6301 | 11.5 | 1472 | 0.6938 | 0.7852 |
| 0.608 | 12.0 | 1536 | 0.6624 | 0.7987 |
| 0.5946 | 12.5 | 1600 | 0.6627 | 0.7852 |
| 0.5803 | 13.0 | 1664 | 0.6471 | 0.7953 |
| 0.5637 | 13.5 | 1728 | 0.6368 | 0.8020 |
| 0.5595 | 14.0 | 1792 | 0.6357 | 0.8020 |
| 0.5515 | 14.5 | 1856 | 0.6330 | 0.8054 |
| 0.5417 | 15.0 | 1920 | 0.6282 | 0.8020 |
| 0.5331 | 15.5 | 1984 | 0.6267 | 0.8020 |
| 0.5493 | 16.0 | 2048 | 0.6267 | 0.8020 |
| 0.5486 | 16.5 | 2112 | 0.6191 | 0.8054 |
| 0.5173 | 17.0 | 2176 | 0.6194 | 0.8020 |
| 0.517 | 17.5 | 2240 | 0.6128 | 0.8020 |
| 0.5039 | 18.0 | 2304 | 0.5875 | 0.8188 |
| 0.4928 | 18.5 | 2368 | 0.5911 | 0.8121 |
| 0.4952 | 19.0 | 2432 | 0.5872 | 0.8087 |
| 0.4891 | 19.5 | 2496 | 0.5870 | 0.8054 |
| 0.4814 | 20.0 | 2560 | 0.5830 | 0.8087 |
| 0.4829 | 20.5 | 2624 | 0.5826 | 0.8087 |
| 0.4833 | 21.0 | 2688 | 0.5855 | 0.8121 |
| 0.4711 | 21.5 | 2752 | 0.5668 | 0.8188 |
| 0.474 | 22.0 | 2816 | 0.5675 | 0.8020 |
| 0.4573 | 22.5 | 2880 | 0.5683 | 0.8054 |
| 0.4545 | 23.0 | 2944 | 0.5546 | 0.8322 |
| 0.4462 | 23.5 | 3008 | 0.5555 | 0.8188 |
| 0.4452 | 24.0 | 3072 | 0.5534 | 0.8221 |
| 0.4483 | 24.5 | 3136 | 0.5519 | 0.8289 |
| 0.4333 | 25.0 | 3200 | 0.5515 | 0.8289 |
| 0.4469 | 25.5 | 3264 | 0.5537 | 0.8188 |
| 0.4321 | 26.0 | 3328 | 0.5420 | 0.8322 |
| 0.4261 | 26.5 | 3392 | 0.5381 | 0.8255 |
| 0.4312 | 27.0 | 3456 | 0.5419 | 0.8255 |
| 0.4239 | 27.5 | 3520 | 0.5322 | 0.8356 |
| 0.4083 | 28.0 | 3584 | 0.5315 | 0.8389 |
| 0.4084 | 28.5 | 3648 | 0.5284 | 0.8322 |
| 0.4102 | 29.0 | 3712 | 0.5284 | 0.8356 |
| 0.4177 | 29.5 | 3776 | 0.5300 | 0.8356 |
| 0.3947 | 30.0 | 3840 | 0.5294 | 0.8356 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
[
"bacterial leaf blight",
"brown spot",
"healthy rice leaf",
"leaf blast",
"leaf scald",
"narrow brown leaf spot",
"rice hispa",
"sheath blight"
] |
SodaXII/vit-hybrid-base-bit-384_rice-leaf-disease-augmented-v4_v5_pft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-hybrid-base-bit-384_rice-leaf-disease-augmented-v4_v5_pft
This model is a fine-tuned version of [google/vit-hybrid-base-bit-384](https://huggingface.co/google/vit-hybrid-base-bit-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2706
- Accuracy: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2177 | 0.5 | 64 | 2.0234 | 0.2248 |
| 1.7719 | 1.0 | 128 | 1.4579 | 0.5872 |
| 1.258 | 1.5 | 192 | 1.0302 | 0.7081 |
| 0.9323 | 2.0 | 256 | 0.7450 | 0.8154 |
| 0.7252 | 2.5 | 320 | 0.6527 | 0.7953 |
| 0.6134 | 3.0 | 384 | 0.5488 | 0.8154 |
| 0.5375 | 3.5 | 448 | 0.5004 | 0.8221 |
| 0.5028 | 4.0 | 512 | 0.4624 | 0.8557 |
| 0.471 | 4.5 | 576 | 0.4532 | 0.8557 |
| 0.44 | 5.0 | 640 | 0.4378 | 0.8557 |
| 0.4302 | 5.5 | 704 | 0.4446 | 0.8423 |
| 0.4267 | 6.0 | 768 | 0.4300 | 0.8591 |
| 0.4341 | 6.5 | 832 | 0.4302 | 0.8591 |
| 0.406 | 7.0 | 896 | 0.4174 | 0.8658 |
| 0.4077 | 7.5 | 960 | 0.3973 | 0.8523 |
| 0.3639 | 8.0 | 1024 | 0.3747 | 0.8792 |
| 0.3463 | 8.5 | 1088 | 0.3701 | 0.8859 |
| 0.343 | 9.0 | 1152 | 0.3682 | 0.8859 |
| 0.322 | 9.5 | 1216 | 0.3567 | 0.8792 |
| 0.3224 | 10.0 | 1280 | 0.3555 | 0.8859 |
| 0.3103 | 10.5 | 1344 | 0.3529 | 0.8859 |
| 0.314 | 11.0 | 1408 | 0.3531 | 0.8859 |
| 0.3153 | 11.5 | 1472 | 0.3546 | 0.8859 |
| 0.3033 | 12.0 | 1536 | 0.3434 | 0.8792 |
| 0.2905 | 12.5 | 1600 | 0.3326 | 0.8859 |
| 0.2857 | 13.0 | 1664 | 0.3323 | 0.8893 |
| 0.2693 | 13.5 | 1728 | 0.3238 | 0.8893 |
| 0.2683 | 14.0 | 1792 | 0.3273 | 0.9027 |
| 0.2582 | 14.5 | 1856 | 0.3243 | 0.9060 |
| 0.2544 | 15.0 | 1920 | 0.3181 | 0.8993 |
| 0.2478 | 15.5 | 1984 | 0.3167 | 0.8993 |
| 0.255 | 16.0 | 2048 | 0.3166 | 0.8993 |
| 0.2586 | 16.5 | 2112 | 0.3087 | 0.8993 |
| 0.24 | 17.0 | 2176 | 0.3126 | 0.9060 |
| 0.2351 | 17.5 | 2240 | 0.3032 | 0.9027 |
| 0.2302 | 18.0 | 2304 | 0.3005 | 0.9094 |
| 0.2229 | 18.5 | 2368 | 0.2993 | 0.9128 |
| 0.2185 | 19.0 | 2432 | 0.2982 | 0.9027 |
| 0.2138 | 19.5 | 2496 | 0.2968 | 0.9027 |
| 0.2128 | 20.0 | 2560 | 0.2952 | 0.9027 |
| 0.2134 | 20.5 | 2624 | 0.2946 | 0.9027 |
| 0.2107 | 21.0 | 2688 | 0.3014 | 0.8993 |
| 0.2077 | 21.5 | 2752 | 0.2885 | 0.9060 |
| 0.2073 | 22.0 | 2816 | 0.2911 | 0.9094 |
| 0.1943 | 22.5 | 2880 | 0.2853 | 0.9128 |
| 0.1979 | 23.0 | 2944 | 0.2806 | 0.9094 |
| 0.1907 | 23.5 | 3008 | 0.2793 | 0.9161 |
| 0.1848 | 24.0 | 3072 | 0.2794 | 0.9094 |
| 0.1884 | 24.5 | 3136 | 0.2780 | 0.9094 |
| 0.179 | 25.0 | 3200 | 0.2778 | 0.9094 |
| 0.1872 | 25.5 | 3264 | 0.2828 | 0.9128 |
| 0.181 | 26.0 | 3328 | 0.2749 | 0.9128 |
| 0.1779 | 26.5 | 3392 | 0.2752 | 0.9094 |
| 0.1783 | 27.0 | 3456 | 0.2730 | 0.9128 |
| 0.1777 | 27.5 | 3520 | 0.2720 | 0.9195 |
| 0.162 | 28.0 | 3584 | 0.2717 | 0.9128 |
| 0.1682 | 28.5 | 3648 | 0.2679 | 0.9128 |
| 0.1599 | 29.0 | 3712 | 0.2709 | 0.9128 |
| 0.1587 | 29.5 | 3776 | 0.2711 | 0.9161 |
| 0.164 | 30.0 | 3840 | 0.2706 | 0.9195 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
[
"bacterial leaf blight",
"brown spot",
"healthy rice leaf",
"leaf blast",
"leaf scald",
"narrow brown leaf spot",
"rice hispa",
"sheath blight"
] |
SodaXII/convnextv2-base-1k-224_rice-leaf-disease-augmented-v4_v5_pft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-base-1k-224_rice-leaf-disease-augmented-v4_v5_pft
This model is a fine-tuned version of [facebook/convnextv2-base-1k-224](https://huggingface.co/facebook/convnextv2-base-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6774
- Accuracy: 0.7819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
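The Accuracy column in the table below is produced by the Trainer's `compute_metrics` hook; a minimal sketch of such a hook, assuming the standard 🤗 `evaluate` accuracy metric (the card does not state which implementation was used):
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) pair at each evaluation step.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```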
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0691 | 0.5 | 64 | 2.0083 | 0.3523 |
| 1.9566 | 1.0 | 128 | 1.8732 | 0.5201 |
| 1.7715 | 1.5 | 192 | 1.6903 | 0.5638 |
| 1.5752 | 2.0 | 256 | 1.5040 | 0.6074 |
| 1.4088 | 2.5 | 320 | 1.3569 | 0.6208 |
| 1.2927 | 3.0 | 384 | 1.2600 | 0.6309 |
| 1.2173 | 3.5 | 448 | 1.1948 | 0.6577 |
| 1.1515 | 4.0 | 512 | 1.1464 | 0.6644 |
| 1.1183 | 4.5 | 576 | 1.1160 | 0.6711 |
| 1.0893 | 5.0 | 640 | 1.1001 | 0.6879 |
| 1.0792 | 5.5 | 704 | 1.0898 | 0.6913 |
| 1.0627 | 6.0 | 768 | 1.0831 | 0.6846 |
| 1.0714 | 6.5 | 832 | 1.0817 | 0.6846 |
| 1.0459 | 7.0 | 896 | 1.0483 | 0.6913 |
| 1.0282 | 7.5 | 960 | 1.0047 | 0.6980 |
| 0.9605 | 8.0 | 1024 | 0.9774 | 0.7081 |
| 0.9405 | 8.5 | 1088 | 0.9489 | 0.7114 |
| 0.9316 | 9.0 | 1152 | 0.9353 | 0.7148 |
| 0.9174 | 9.5 | 1216 | 0.9208 | 0.7181 |
| 0.8924 | 10.0 | 1280 | 0.9137 | 0.7215 |
| 0.9009 | 10.5 | 1344 | 0.9101 | 0.7282 |
| 0.8844 | 11.0 | 1408 | 0.9092 | 0.7248 |
| 0.8873 | 11.5 | 1472 | 0.9076 | 0.7215 |
| 0.8751 | 12.0 | 1536 | 0.8721 | 0.7383 |
| 0.8553 | 12.5 | 1600 | 0.8617 | 0.7248 |
| 0.8265 | 13.0 | 1664 | 0.8428 | 0.7416 |
| 0.8133 | 13.5 | 1728 | 0.8302 | 0.7416 |
| 0.808 | 14.0 | 1792 | 0.8232 | 0.7483 |
| 0.7915 | 14.5 | 1856 | 0.8187 | 0.7450 |
| 0.7975 | 15.0 | 1920 | 0.8157 | 0.7450 |
| 0.7765 | 15.5 | 1984 | 0.8143 | 0.7450 |
| 0.8017 | 16.0 | 2048 | 0.8142 | 0.7450 |
| 0.793 | 16.5 | 2112 | 0.7970 | 0.7584 |
| 0.7567 | 17.0 | 2176 | 0.7901 | 0.7550 |
| 0.7576 | 17.5 | 2240 | 0.7785 | 0.7483 |
| 0.7377 | 18.0 | 2304 | 0.7651 | 0.7651 |
| 0.7311 | 18.5 | 2368 | 0.7588 | 0.7651 |
| 0.7276 | 19.0 | 2432 | 0.7566 | 0.7651 |
| 0.7237 | 19.5 | 2496 | 0.7567 | 0.7651 |
| 0.7171 | 20.0 | 2560 | 0.7534 | 0.7685 |
| 0.7158 | 20.5 | 2624 | 0.7529 | 0.7685 |
| 0.7188 | 21.0 | 2688 | 0.7486 | 0.7651 |
| 0.7112 | 21.5 | 2752 | 0.7340 | 0.7752 |
| 0.6912 | 22.0 | 2816 | 0.7297 | 0.7752 |
| 0.6784 | 22.5 | 2880 | 0.7229 | 0.7785 |
| 0.6868 | 23.0 | 2944 | 0.7152 | 0.7752 |
| 0.6701 | 23.5 | 3008 | 0.7132 | 0.7819 |
| 0.6718 | 24.0 | 3072 | 0.7111 | 0.7752 |
| 0.671 | 24.5 | 3136 | 0.7105 | 0.7785 |
| 0.6609 | 25.0 | 3200 | 0.7097 | 0.7785 |
| 0.6722 | 25.5 | 3264 | 0.7066 | 0.7785 |
| 0.6526 | 26.0 | 3328 | 0.6959 | 0.7785 |
| 0.6448 | 26.5 | 3392 | 0.6920 | 0.7919 |
| 0.6493 | 27.0 | 3456 | 0.6903 | 0.7785 |
| 0.6394 | 27.5 | 3520 | 0.6816 | 0.7785 |
| 0.6274 | 28.0 | 3584 | 0.6819 | 0.7819 |
| 0.6198 | 28.5 | 3648 | 0.6784 | 0.7819 |
| 0.632 | 29.0 | 3712 | 0.6778 | 0.7819 |
| 0.634 | 29.5 | 3776 | 0.6776 | 0.7819 |
| 0.612 | 30.0 | 3840 | 0.6774 | 0.7819 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
[
"bacterial leaf blight",
"brown spot",
"healthy rice leaf",
"leaf blast",
"leaf scald",
"narrow brown leaf spot",
"rice hispa",
"sheath blight"
] |
desarrolloasesoreslocales/efficientnet-b0-accidents
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# efficientnet-b0-accidents
This model is a fine-tuned version of [google/efficientnet-b0](https://huggingface.co/google/efficientnet-b0) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3330
- Accuracy: 0.8367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
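The effective batch size follows directly from the values above; the results table below advances one optimizer step per epoch, so each epoch fits in a single accumulated batch. A quick sanity check using only the numbers listed here:
```python
per_device_train_batch_size = 256
gradient_accumulation_steps = 4

# Each optimizer step sees the accumulated batch: 256 * 4 = 1024,
# matching total_train_batch_size above.
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 1024
```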
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.4780 | 0.7959 |
| No log | 2.0 | 2 | 0.4610 | 0.7959 |
| No log | 3.0 | 3 | 0.4518 | 0.7755 |
| No log | 4.0 | 4 | 0.4450 | 0.8163 |
| No log | 5.0 | 5 | 0.4424 | 0.8265 |
| No log | 6.0 | 6 | 0.4483 | 0.7959 |
| No log | 7.0 | 7 | 0.4533 | 0.7959 |
| No log | 8.0 | 8 | 0.4557 | 0.7959 |
| No log | 9.0 | 9 | 0.4556 | 0.8265 |
| 0.4528 | 10.0 | 10 | 0.4453 | 0.8163 |
| 0.4528 | 11.0 | 11 | 0.4558 | 0.7755 |
| 0.4528 | 12.0 | 12 | 0.4390 | 0.8265 |
| 0.4528 | 13.0 | 13 | 0.4322 | 0.7959 |
| 0.4528 | 14.0 | 14 | 0.4323 | 0.8163 |
| 0.4528 | 15.0 | 15 | 0.4127 | 0.8061 |
| 0.4528 | 16.0 | 16 | 0.4341 | 0.8061 |
| 0.4528 | 17.0 | 17 | 0.4144 | 0.8265 |
| 0.4528 | 18.0 | 18 | 0.4275 | 0.8265 |
| 0.4528 | 19.0 | 19 | 0.3988 | 0.8673 |
| 0.4233 | 20.0 | 20 | 0.4210 | 0.7959 |
| 0.4233 | 21.0 | 21 | 0.4223 | 0.7755 |
| 0.4233 | 22.0 | 22 | 0.4288 | 0.8265 |
| 0.4233 | 23.0 | 23 | 0.3851 | 0.8571 |
| 0.4233 | 24.0 | 24 | 0.3956 | 0.8061 |
| 0.4233 | 25.0 | 25 | 0.4159 | 0.8367 |
| 0.4233 | 26.0 | 26 | 0.4055 | 0.8163 |
| 0.4233 | 27.0 | 27 | 0.3861 | 0.8163 |
| 0.4233 | 28.0 | 28 | 0.3751 | 0.8469 |
| 0.4233 | 29.0 | 29 | 0.3915 | 0.8367 |
| 0.3846 | 30.0 | 30 | 0.3705 | 0.8571 |
| 0.3846 | 31.0 | 31 | 0.3868 | 0.8367 |
| 0.3846 | 32.0 | 32 | 0.3710 | 0.8469 |
| 0.3846 | 33.0 | 33 | 0.3770 | 0.8469 |
| 0.3846 | 34.0 | 34 | 0.3903 | 0.8265 |
| 0.3846 | 35.0 | 35 | 0.3864 | 0.8469 |
| 0.3846 | 36.0 | 36 | 0.3728 | 0.8265 |
| 0.3846 | 37.0 | 37 | 0.3772 | 0.8367 |
| 0.3846 | 38.0 | 38 | 0.3633 | 0.8163 |
| 0.3846 | 39.0 | 39 | 0.3824 | 0.8469 |
| 0.3714 | 40.0 | 40 | 0.3520 | 0.8571 |
| 0.3714 | 41.0 | 41 | 0.3844 | 0.8469 |
| 0.3714 | 42.0 | 42 | 0.3564 | 0.8469 |
| 0.3714 | 43.0 | 43 | 0.3747 | 0.8673 |
| 0.3714 | 44.0 | 44 | 0.3395 | 0.8571 |
| 0.3714 | 45.0 | 45 | 0.3871 | 0.8163 |
| 0.3714 | 46.0 | 46 | 0.3487 | 0.8367 |
| 0.3714 | 47.0 | 47 | 0.3798 | 0.8163 |
| 0.3714 | 48.0 | 48 | 0.3848 | 0.8367 |
| 0.3714 | 49.0 | 49 | 0.3978 | 0.8265 |
| 0.3618 | 50.0 | 50 | 0.3384 | 0.8571 |
| 0.3618 | 51.0 | 51 | 0.3647 | 0.8265 |
| 0.3618 | 52.0 | 52 | 0.3544 | 0.8571 |
| 0.3618 | 53.0 | 53 | 0.4289 | 0.8163 |
| 0.3618 | 54.0 | 54 | 0.3568 | 0.8673 |
| 0.3618 | 55.0 | 55 | 0.3727 | 0.8673 |
| 0.3618 | 56.0 | 56 | 0.3796 | 0.8265 |
| 0.3618 | 57.0 | 57 | 0.3678 | 0.8571 |
| 0.3618 | 58.0 | 58 | 0.3719 | 0.8469 |
| 0.3618 | 59.0 | 59 | 0.3808 | 0.8878 |
| 0.327 | 60.0 | 60 | 0.3783 | 0.8163 |
| 0.327 | 61.0 | 61 | 0.3637 | 0.8367 |
| 0.327 | 62.0 | 62 | 0.3743 | 0.8367 |
| 0.327 | 63.0 | 63 | 0.3554 | 0.8571 |
| 0.327 | 64.0 | 64 | 0.3544 | 0.8265 |
| 0.327 | 65.0 | 65 | 0.3615 | 0.8469 |
| 0.327 | 66.0 | 66 | 0.3503 | 0.8673 |
| 0.327 | 67.0 | 67 | 0.3914 | 0.7959 |
| 0.327 | 68.0 | 68 | 0.3687 | 0.8367 |
| 0.327 | 69.0 | 69 | 0.3296 | 0.8878 |
| 0.3136 | 70.0 | 70 | 0.3548 | 0.8571 |
| 0.3136 | 71.0 | 71 | 0.3810 | 0.8265 |
| 0.3136 | 72.0 | 72 | 0.3522 | 0.8469 |
| 0.3136 | 73.0 | 73 | 0.3852 | 0.8367 |
| 0.3136 | 74.0 | 74 | 0.3434 | 0.8571 |
| 0.3136 | 75.0 | 75 | 0.3596 | 0.8571 |
| 0.3136 | 76.0 | 76 | 0.3551 | 0.8367 |
| 0.3136 | 77.0 | 77 | 0.4257 | 0.8163 |
| 0.3136 | 78.0 | 78 | 0.3554 | 0.8367 |
| 0.3136 | 79.0 | 79 | 0.3352 | 0.8265 |
| 0.316 | 80.0 | 80 | 0.3773 | 0.8367 |
| 0.316 | 81.0 | 81 | 0.3305 | 0.8469 |
| 0.316 | 82.0 | 82 | 0.3614 | 0.8571 |
| 0.316 | 83.0 | 83 | 0.3491 | 0.8265 |
| 0.316 | 84.0 | 84 | 0.3479 | 0.8571 |
| 0.316 | 85.0 | 85 | 0.3684 | 0.8367 |
| 0.316 | 86.0 | 86 | 0.3511 | 0.8571 |
| 0.316 | 87.0 | 87 | 0.3658 | 0.8265 |
| 0.316 | 88.0 | 88 | 0.3333 | 0.8367 |
| 0.316 | 89.0 | 89 | 0.3584 | 0.8776 |
| 0.3089 | 90.0 | 90 | 0.3277 | 0.8571 |
| 0.3089 | 91.0 | 91 | 0.3875 | 0.8367 |
| 0.3089 | 92.0 | 92 | 0.3757 | 0.8367 |
| 0.3089 | 93.0 | 93 | 0.3488 | 0.8367 |
| 0.3089 | 94.0 | 94 | 0.3282 | 0.8571 |
| 0.3089 | 95.0 | 95 | 0.3613 | 0.8571 |
| 0.3089 | 96.0 | 96 | 0.3753 | 0.8469 |
| 0.3089 | 97.0 | 97 | 0.3625 | 0.8469 |
| 0.3089 | 98.0 | 98 | 0.3930 | 0.8265 |
| 0.3089 | 99.0 | 99 | 0.3338 | 0.8469 |
| 0.3131 | 100.0 | 100 | 0.3330 | 0.8367 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"accident",
"non accident"
] |
Hokin/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0543
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
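Because this run uses a warmup *ratio* rather than a fixed step count, the warmup length is resolved against the total number of optimizer steps; a sketch of the arithmetic, using the 570 total steps from the table below and assuming the Trainer's ceiling rounding:
```python
import math

total_steps = 570      # 3 epochs x 190 steps per epoch, per the table below
warmup_ratio = 0.1
warmup_steps = math.ceil(total_steps * warmup_ratio)  # 57 warmup steps
```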
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.188 | 1.0 | 190 | 0.0918 | 0.9681 |
| 0.1412 | 2.0 | 380 | 0.0585 | 0.98 |
| 0.1154 | 3.0 | 570 | 0.0543 | 0.9833 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"annual crop",
"forest",
"herbaceous vegetation",
"highway",
"industrial",
"pasture",
"permanent crop",
"residential",
"river",
"sea or lake"
] |
avanishd/vit-base-patch16-224-in21k-finetuned-cifar100
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar100
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar-100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7079
- Accuracy: 0.9054
## Model description
More information needed
## Intended uses & limitations
More information needed
## How to Get Started with the Model
```python
from PIL import Image
from transformers import pipeline

pipe = pipeline("image-classification", "avanishd/vit-base-patch16-224-in21k-finetuned-cifar100")
image = Image.open("path/to/image.png")  # replace with your own image
pipe(image)
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.9669 | 1 | 313 | 2.7011 | 0.8221 |
| 1.9046 | 2.992 | 626 | 1.6451 | 0.8779 |
| 1.2161 | 4.987 | 939 | 0.8919 | 0.9023 |
| 1.0013 | 5.986 | 1252 | 0.7079 | 0.9054 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"apple",
"aquarium_fish",
"baby",
"bear",
"beaver",
"bed",
"bee",
"beetle",
"bicycle",
"bottle",
"bowl",
"boy",
"bridge",
"bus",
"butterfly",
"camel",
"can",
"castle",
"caterpillar",
"cattle",
"chair",
"chimpanzee",
"clock",
"cloud",
"cockroach",
"couch",
"cra",
"crocodile",
"cup",
"dinosaur",
"dolphin",
"elephant",
"flatfish",
"forest",
"fox",
"girl",
"hamster",
"house",
"kangaroo",
"keyboard",
"lamp",
"lawn_mower",
"leopard",
"lion",
"lizard",
"lobster",
"man",
"maple_tree",
"motorcycle",
"mountain",
"mouse",
"mushroom",
"oak_tree",
"orange",
"orchid",
"otter",
"palm_tree",
"pear",
"pickup_truck",
"pine_tree",
"plain",
"plate",
"poppy",
"porcupine",
"possum",
"rabbit",
"raccoon",
"ray",
"road",
"rocket",
"rose",
"sea",
"seal",
"shark",
"shrew",
"skunk",
"skyscraper",
"snail",
"snake",
"spider",
"squirrel",
"streetcar",
"sunflower",
"sweet_pepper",
"table",
"tank",
"telephone",
"television",
"tiger",
"tractor",
"train",
"trout",
"tulip",
"turtle",
"wardrobe",
"whale",
"willow_tree",
"wolf",
"woman",
"worm"
] |
saehkim11/renet-18_token
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
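A minimal sketch in the meantime, assuming this checkpoint is an image-classification model (the card itself does not say; the image path is a placeholder):
```python
from PIL import Image
from transformers import pipeline

# Sketch only: assumes an image-classification head on this checkpoint.
pipe = pipeline("image-classification", "saehkim11/renet-18_token")
print(pipe(Image.open("path/to/image.png")))  # replace with your own image
```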
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
OSRSEnthusiast/trainer_output
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3427
- Accuracy: 0.8776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 39 | 0.4124 | 0.7938 |
| No log | 2.0 | 78 | 0.3294 | 0.8454 |
| 0.4497 | 3.0 | 117 | 0.2932 | 0.8454 |
| 0.4497 | 4.0 | 156 | 0.2799 | 0.8557 |
| 0.4497 | 5.0 | 195 | 0.2692 | 0.8969 |
| 0.2764 | 6.0 | 234 | 0.2604 | 0.8969 |
| 0.2764 | 7.0 | 273 | 0.2583 | 0.9175 |
| 0.2192 | 8.0 | 312 | 0.2546 | 0.9072 |
| 0.2192 | 9.0 | 351 | 0.2506 | 0.9072 |
| 0.2192 | 10.0 | 390 | 0.2536 | 0.9072 |
| 0.1936 | 11.0 | 429 | 0.2530 | 0.8866 |
| 0.1936 | 12.0 | 468 | 0.2503 | 0.9072 |
| 0.1731 | 13.0 | 507 | 0.2480 | 0.9072 |
| 0.1731 | 14.0 | 546 | 0.2496 | 0.9072 |
| 0.1731 | 15.0 | 585 | 0.2498 | 0.9072 |
| 0.155 | 16.0 | 624 | 0.2498 | 0.9072 |
| 0.155 | 17.0 | 663 | 0.2495 | 0.9072 |
| 0.1442 | 18.0 | 702 | 0.2488 | 0.9072 |
| 0.1442 | 19.0 | 741 | 0.2493 | 0.9072 |
| 0.1442 | 20.0 | 780 | 0.2490 | 0.9072 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1"
] |
crocutacrocuto/dinov2-large-MEG7-10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-large-MEG7-10
This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3278
- Accuracy: 0.9324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.354 | 0.9999 | 5312 | 0.4495 | 0.8662 |
| 0.2705 | 2.0 | 10625 | 0.3835 | 0.8900 |
| 0.2003 | 2.9999 | 15937 | 0.3546 | 0.9021 |
| 0.2712 | 4.0 | 21250 | 0.3297 | 0.9060 |
| 0.2028 | 4.9999 | 26562 | 0.3259 | 0.9099 |
| 0.1638 | 6.0 | 31875 | 0.3087 | 0.9183 |
| 0.1289 | 6.9999 | 37187 | 0.3260 | 0.9195 |
| 0.0752 | 8.0 | 42500 | 0.3197 | 0.9256 |
| 0.0643 | 8.9999 | 47812 | 0.3229 | 0.9276 |
| 0.0509 | 9.9991 | 53120 | 0.3278 | 0.9324 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
[
"aardvark",
"baboon",
"badger",
"bird",
"black-and-white colobus",
"blue duiker",
"blue monkey",
"buffalo",
"bushbuck",
"bushpig",
"chimpanzee",
"civet_genet",
"elephant",
"galago_potto",
"golden cat",
"gorilla",
"guineafowl",
"hyrax",
"jackal",
"leopard",
"lhoests monkey",
"mandrill",
"mongoose",
"monkey",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirrel",
"water chevrotain",
"yellow-backed duiker"
] |
Dugerij/dummy_classification_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dummy_classification_model
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the taresco/newspaper_ocr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0136
- Accuracy: 0.9969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
[
"no_segment",
"segment"
] |
mjpsm/confidence-image-classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
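As a stopgap, a minimal sketch, assuming this checkpoint is an image-classification model with the generic `label_0`–`label_2` classes listed for this card (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "mjpsm/confidence-image-classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("path/to/image.png"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# id2label maps the predicted index to label_0 / label_1 / label_2
print(model.config.id2label[logits.argmax(-1).item()])
```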
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2"
] |
Jjinuk/food_model
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
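Pending an official example, here is a minimal sketch, assuming this checkpoint loads with the standard 🤗 image-classification pipeline (the input file name is hypothetical):
```python
# Minimal sketch, assuming Jjinuk/food_model is a standard 🤗 image-classification
# checkpoint; verify the task and labels against the model's actual config.
from transformers import pipeline

classifier = pipeline("image-classification", model="Jjinuk/food_model")
predictions = classifier("example_dish.jpg")  # hypothetical input image
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```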
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
alimoh02/vit-base-food101
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-food101
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the ethz/food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7395
- Accuracy: 0.8017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
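A minimal sketch of the `TrainingArguments` implied by the list above; the output directory is an assumption, and the evaluation/saving cadence is not stated in the card:
```python
# Minimal sketch mapping the hyperparameter list to TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-food101",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```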
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.5327 | 0.1320 | 500 | 2.3914 | 0.5946 |
| 1.5713 | 0.2640 | 1000 | 1.5558 | 0.6978 |
| 1.2869 | 0.3960 | 1500 | 1.2575 | 0.7271 |
| 1.1479 | 0.5280 | 2000 | 1.1093 | 0.7476 |
| 1.0838 | 0.6600 | 2500 | 1.0286 | 0.7571 |
| 0.9623 | 0.7920 | 3000 | 0.9798 | 0.7641 |
| 0.9855 | 0.9240 | 3500 | 0.9395 | 0.7670 |
| 0.9263 | 1.0560 | 4000 | 0.9113 | 0.7723 |
| 0.8691 | 1.1880 | 4500 | 0.8844 | 0.7782 |
| 0.8025 | 1.3200 | 5000 | 0.8694 | 0.7768 |
| 0.7783 | 1.4520 | 5500 | 0.8574 | 0.7820 |
| 0.7774 | 1.5839 | 6000 | 0.8457 | 0.7799 |
| 0.7716 | 1.7159 | 6500 | 0.8309 | 0.7871 |
| 0.8445 | 1.8479 | 7000 | 0.8230 | 0.7868 |
| 0.8214 | 1.9799 | 7500 | 0.8107 | 0.7902 |
| 0.7226 | 2.1119 | 8000 | 0.8077 | 0.7897 |
| 0.7712 | 2.2439 | 8500 | 0.8015 | 0.7914 |
| 0.7306 | 2.3759 | 9000 | 0.7970 | 0.7889 |
| 0.6829 | 2.5079 | 9500 | 0.7919 | 0.7912 |
| 0.7593 | 2.6399 | 10000 | 0.7883 | 0.7901 |
| 0.6856 | 2.7719 | 10500 | 0.7802 | 0.7943 |
| 0.7156 | 2.9039 | 11000 | 0.7765 | 0.7976 |
| 0.6688 | 3.0359 | 11500 | 0.7735 | 0.7978 |
| 0.6245 | 3.1679 | 12000 | 0.7711 | 0.7972 |
| 0.668 | 3.2999 | 12500 | 0.7679 | 0.7989 |
| 0.6732 | 3.4319 | 13000 | 0.7657 | 0.7985 |
| 0.686 | 3.5639 | 13500 | 0.7645 | 0.7982 |
| 0.7121 | 3.6959 | 14000 | 0.7612 | 0.7984 |
| 0.6513 | 3.8279 | 14500 | 0.7599 | 0.7993 |
| 0.6963 | 3.9599 | 15000 | 0.7585 | 0.7993 |
| 0.7219 | 4.0919 | 15500 | 0.7554 | 0.7999 |
| 0.6253 | 4.2239 | 16000 | 0.7526 | 0.8016 |
| 0.6278 | 4.3559 | 16500 | 0.7504 | 0.8026 |
| 0.6605 | 4.4879 | 17000 | 0.7502 | 0.8028 |
| 0.6447 | 4.6199 | 17500 | 0.7493 | 0.8028 |
| 0.6469 | 4.7518 | 18000 | 0.7463 | 0.8040 |
| 0.6745 | 4.8838 | 18500 | 0.7462 | 0.8028 |
| 0.5882 | 5.0158 | 19000 | 0.7463 | 0.7995 |
| 0.6241 | 5.1478 | 19500 | 0.7428 | 0.8046 |
| 0.62 | 5.2798 | 20000 | 0.7439 | 0.8013 |
| 0.6435 | 5.4118 | 20500 | 0.7422 | 0.8018 |
| 0.6273 | 5.5438 | 21000 | 0.7418 | 0.8030 |
| 0.623 | 5.6758 | 21500 | 0.7415 | 0.8050 |
| 0.6181 | 5.8078 | 22000 | 0.7385 | 0.8055 |
| 0.6382 | 5.9398 | 22500 | 0.7388 | 0.8071 |
| 0.587 | 6.0718 | 23000 | 0.7379 | 0.8058 |
| 0.603 | 6.2038 | 23500 | 0.7374 | 0.8038 |
| 0.6334 | 6.3358 | 24000 | 0.7366 | 0.8054 |
| 0.613 | 6.4678 | 24500 | 0.7364 | 0.8048 |
| 0.5917 | 6.5998 | 25000 | 0.7355 | 0.8051 |
| 0.6167 | 6.7318 | 25500 | 0.7352 | 0.8059 |
| 0.6121 | 6.8638 | 26000 | 0.7347 | 0.8066 |
| 0.6133 | 6.9958 | 26500 | 0.7342 | 0.8059 |
| 0.6304 | 7.1278 | 27000 | 0.7338 | 0.8057 |
| 0.6041 | 7.2598 | 27500 | 0.7342 | 0.8063 |
| 0.6333 | 7.3918 | 28000 | 0.7334 | 0.8059 |
| 0.6234 | 7.5238 | 28500 | 0.7335 | 0.8061 |
| 0.5961 | 7.6558 | 29000 | 0.7334 | 0.8073 |
| 0.61 | 7.7878 | 29500 | 0.7333 | 0.8070 |
| 0.6586 | 7.9197 | 30000 | 0.7331 | 0.8070 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"6",
"79",
"81",
"53",
"10",
"20",
"77",
"48",
"86",
"84",
"76",
"34",
"51",
"21",
"64",
"0",
"43",
"44",
"73",
"57",
"14",
"5",
"46",
"55",
"93",
"98",
"38",
"11",
"99",
"72",
"22",
"59",
"70",
"16",
"2",
"58",
"83",
"96",
"39",
"49",
"45",
"88",
"9",
"26",
"94",
"4",
"65",
"32",
"27",
"36",
"87",
"69",
"85",
"25",
"40",
"19",
"35",
"56",
"42",
"60",
"68",
"100",
"41",
"92",
"24",
"3",
"89",
"75",
"17",
"97",
"61",
"33",
"80",
"30",
"8",
"74",
"66",
"31",
"18",
"67",
"37",
"13",
"63",
"28",
"47",
"52",
"54",
"1",
"82",
"91",
"95",
"7",
"29",
"78",
"15",
"23",
"12",
"62",
"50",
"71",
"90"
] |
myttt/vit-base-beans
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9774
- Loss: 0.0798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.2799 | 1.0 | 130 | 0.9624 | 0.2172 |
| 0.1304 | 2.0 | 260 | 0.9699 | 0.1272 |
| 0.1387 | 3.0 | 390 | 0.9774 | 0.0970 |
| 0.0855 | 4.0 | 520 | 0.9925 | 0.0652 |
| 0.1134 | 5.0 | 650 | 0.9774 | 0.0798 |
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"angular_leaf_spot",
"bean_rust",
"healthy"
] |
mjpsm/participation-image-classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
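In the meantime, a minimal sketch, assuming this checkpoint works with the standard 🤗 image-classification pipeline (the input frame path is hypothetical):
```python
# Minimal sketch, assuming mjpsm/participation-image-classifier is a standard
# 🤗 image-classification checkpoint (labels: "paying attention" /
# "not paying attention"); verify against the model's actual config.
from transformers import pipeline

classifier = pipeline("image-classification", model="mjpsm/participation-image-classifier")
result = classifier("classroom_frame.jpg")  # hypothetical input image
print(result)  # e.g. [{"label": "paying attention", "score": ...}, ...]
```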
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"not paying attention",
"paying attention"
] |
NocturneVi/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Accuracy: 0.9796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding arguments, including gradient accumulation, follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
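Note the effective batch size here: gradients are accumulated over 4 steps of per-device batch 32, giving total_train_batch_size = 32 x 4 = 128. A minimal sketch of the corresponding arguments (the output directory is an assumption):
```python
# Minimal sketch; effective train batch = per-device batch * accumulation steps.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
assert training_args.per_device_train_batch_size * training_args.gradient_accumulation_steps == 128
```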
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1886 | 1.0 | 190 | 0.1029 | 0.9637 |
| 0.1368 | 2.0 | 380 | 0.0765 | 0.9752 |
| 0.129 | 3.0 | 570 | 0.0608 | 0.9796 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"annual crop",
"forest",
"herbaceous vegetation",
"highway",
"industrial",
"pasture",
"permanent crop",
"residential",
"river",
"sea or lake"
] |
encku/tuborg-multi-04-2025
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.0013590439921244979
- f1_macro: 0.9994976257694946
- f1_micro: 0.9994995495946352
- f1_weighted: 0.9994995733569132
- precision_macro: 0.9994955449696931
- precision_micro: 0.9994995495946352
- precision_weighted: 0.9995021133466129
- recall_macro: 0.9995022330261746
- recall_micro: 0.9994995495946352
- recall_weighted: 0.9994995495946352
- accuracy: 0.9994995495946352
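For reference, the macro, micro, and weighted aggregates above follow the standard scikit-learn definitions; a minimal sketch with hypothetical labels showing how they differ (AutoTrain's exact evaluation code is not shown in this card):
```python
# Hypothetical sketch of macro / micro / weighted averaging with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1]  # hypothetical ground-truth labels
y_pred = [0, 1, 2, 1, 1]  # hypothetical predictions
print(f1_score(y_true, y_pred, average="macro"))     # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average="micro"))     # computed from global TP/FP/FN
print(f1_score(y_true, y_pred, average="weighted"))  # per-class F1 weighted by support
print(accuracy_score(y_true, y_pred))
```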
|
[
"c002_4lu",
"c006_4lu",
"c015_4lu",
"c020_4lu",
"tbrg066",
"tt00677-1",
"tt00735",
"tt00737",
"tt00792-1",
"tt00793-1",
"tt00852",
"tt00857-1",
"tt00875-1",
"tt00876-1",
"tt00893",
"tt00904",
"tt00945",
"tt00989",
"tt01020",
"tt01150",
"tt01152",
"tt01155",
"tt01160",
"tt01176",
"tt01179",
"tt01296",
"tt01297",
"tt01481"
] |
prithivMLmods/RESISC45-SigLIP2
|

# **RESISC45-SigLIP2**
> **RESISC45-SigLIP2** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **multi-label** image classification. It is specifically trained to recognize and tag multiple land use and land cover scene categories from the **RESISC45** dataset using the **SiglipForImageClassification** architecture.
> [!note]
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* https://arxiv.org/pdf/2502.14786
```py
Classification Report:
precision recall f1-score support
airplane 0.9830 0.9900 0.9865 700
airport 0.9461 0.9529 0.9495 700
baseball diamond 0.9802 0.9886 0.9844 700
basketball court 0.9516 0.9271 0.9392 700
beach 0.9914 0.9900 0.9907 700
bridge 0.9730 0.9771 0.9751 700
chaparral 0.9957 0.9986 0.9971 700
church 0.7949 0.8971 0.8430 700
circular farmland 0.9914 0.9914 0.9914 700
cloud 0.9957 0.9871 0.9914 700
commercial area 0.9231 0.8229 0.8701 700
dense residential 0.9355 0.8914 0.9129 700
desert 0.9821 0.9414 0.9613 700
forest 0.9652 0.9514 0.9583 700
freeway 0.9344 0.9571 0.9457 700
golf course 0.9759 0.9843 0.9801 700
ground track field 0.9623 0.9857 0.9739 700
harbor 0.9885 0.9843 0.9864 700
industrial area 0.9505 0.9043 0.9268 700
intersection 0.9855 0.9686 0.9769 700
island 0.9871 0.9829 0.9850 700
lake 0.9440 0.9629 0.9533 700
meadow 0.9564 0.9400 0.9481 700
medium residential 0.8602 0.9314 0.8944 700
mobile home park 0.9610 0.9500 0.9555 700
mountain 0.9388 0.9429 0.9408 700
overpass 0.9614 0.9614 0.9614 700
palace 0.8455 0.8286 0.8369 700
parking lot 0.9899 0.9757 0.9827 700
railway 0.9407 0.9071 0.9236 700
railway station 0.9104 0.9143 0.9123 700
rectangular farmland 0.9572 0.9271 0.9419 700
river 0.9281 0.9586 0.9431 700
roundabout 0.9914 0.9871 0.9893 700
runway 0.9669 0.9586 0.9627 700
sea ice 0.9957 0.9943 0.9950 700
ship 0.9558 0.9886 0.9719 700
snowberg 0.9886 0.9900 0.9893 700
sparse residential 0.9238 0.9700 0.9463 700
stadium 0.9716 0.9757 0.9736 700
storage tank 0.9787 0.9829 0.9808 700
tennis court 0.9326 0.9486 0.9405 700
terrace 0.9372 0.9586 0.9477 700
thermal power station 0.9482 0.9671 0.9576 700
wetland 0.9444 0.8986 0.9209 700
accuracy 0.9532 31500
macro avg 0.9538 0.9532 0.9532 31500
weighted avg 0.9538 0.9532 0.9532 31500
```
---
## **Label Space: 45 Scene Categories**
The model predicts the presence of one or more of the following **45 scene categories**:
```
Class 0: "airplane"
Class 1: "airport"
Class 2: "baseball diamond"
Class 3: "basketball court"
Class 4: "beach"
Class 5: "bridge"
Class 6: "chaparral"
Class 7: "church"
Class 8: "circular farmland"
Class 9: "cloud"
Class 10: "commercial area"
Class 11: "dense residential"
Class 12: "desert"
Class 13: "forest"
Class 14: "freeway"
Class 15: "golf course"
Class 16: "ground track field"
Class 17: "harbor"
Class 18: "industrial area"
Class 19: "intersection"
Class 20: "island"
Class 21: "lake"
Class 22: "meadow"
Class 23: "medium residential"
Class 24: "mobile home park"
Class 25: "mountain"
Class 26: "overpass"
Class 27: "palace"
Class 28: "parking lot"
Class 29: "railway"
Class 30: "railway station"
Class 31: "rectangular farmland"
Class 32: "river"
Class 33: "roundabout"
Class 34: "runway"
Class 35: "sea ice"
Class 36: "ship"
Class 37: "snowberg"
Class 38: "sparse residential"
Class 39: "stadium"
Class 40: "storage tank"
Class 41: "tennis court"
Class 42: "terrace"
Class 43: "thermal power station"
Class 44: "wetland"
```
---
## **Install dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/RESISC45-SigLIP2" # Update to your actual Hugging Face model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label map
id2label = {
"0": "airplane", "1": "airport", "2": "baseball diamond", "3": "basketball court", "4": "beach",
"5": "bridge", "6": "chaparral", "7": "church", "8": "circular farmland", "9": "cloud",
"10": "commercial area", "11": "dense residential", "12": "desert", "13": "forest", "14": "freeway",
"15": "golf course", "16": "ground track field", "17": "harbor", "18": "industrial area", "19": "intersection",
"20": "island", "21": "lake", "22": "meadow", "23": "medium residential", "24": "mobile home park",
"25": "mountain", "26": "overpass", "27": "palace", "28": "parking lot", "29": "railway",
"30": "railway station", "31": "rectangular farmland", "32": "river", "33": "roundabout", "34": "runway",
"35": "sea ice", "36": "ship", "37": "snowberg", "38": "sparse residential", "39": "stadium",
"40": "storage tank", "41": "tennis court", "42": "terrace", "43": "thermal power station", "44": "wetland"
}
def classify_resisc_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.sigmoid(logits).squeeze().tolist()
threshold = 0.5
predictions = {
id2label[str(i)]: round(probs[i], 3)
for i in range(len(probs)) if probs[i] >= threshold
}
return predictions or {"None Detected": 0.0}
# Gradio Interface
iface = gr.Interface(
fn=classify_resisc_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Predicted Scene Categories"),
title="RESISC45-SigLIP2",
description="Upload a satellite image to detect multiple land use and land cover categories (e.g., airport, forest, mountain)."
)
if __name__ == "__main__":
iface.launch()
```
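Note the design choice in the snippet above: it applies `torch.sigmoid` with a 0.5 threshold rather than softmax, so each of the 45 classes is scored independently and several scene categories can be returned for the same image.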
---
## **Intended Use**
The **RESISC45-SigLIP2** model is ideal for multi-label classification tasks involving remote sensing imagery. Use cases include:
- **Remote Sensing Analysis** – Label elements in aerial/satellite images.
- **Urban Planning** – Identify urban structures and landscape features.
- **Geospatial Intelligence** – Aid in automated image interpretation pipelines.
- **Environmental Monitoring** – Track natural landforms and changes.
|
[
"airplane",
"airport",
"baseball diamond",
"basketball court",
"beach",
"bridge",
"chaparral",
"church",
"circular farmland",
"cloud",
"commercial area",
"dense residential",
"desert",
"forest",
"freeway",
"golf course",
"ground track field",
"harbor",
"industrial area",
"intersection",
"island",
"lake",
"meadow",
"medium residential",
"mobile home park",
"mountain",
"overpass",
"palace",
"parking lot",
"railway",
"railway station",
"rectangular farmland",
"river",
"roundabout",
"runway",
"sea ice",
"ship",
"snowberg",
"sparse residential",
"stadium",
"storage tank",
"tennis court",
"terrace",
"thermal power station",
"wetland"
] |
prithivMLmods/3D-Printed-Or-Not-SigLIP2
|

# **3D-Printed-Or-Not-SigLIP2**
> **3D-Printed-Or-Not-SigLIP2** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **binary image classification**. It is trained to distinguish between images of **3D printed** and **non-3D printed** objects using the **SiglipForImageClassification** architecture.
> [!note]
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* https://arxiv.org/pdf/2502.14786
```py
Classification Report:
precision recall f1-score support
3D Printed 0.9108 0.9388 0.9246 25760
Not 3D Printed 0.9368 0.9081 0.9222 25760
accuracy 0.9234 51520
macro avg 0.9238 0.9234 0.9234 51520
weighted avg 0.9238 0.9234 0.9234 51520
```

---
## **Label Space: 2 Classes**
The model classifies each image into one of the following categories:
```
Class 0: "3D Printed"
Class 1: "Not 3D Printed"
```
---
## **Install Dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/3D-Printed-Or-Not-SigLIP2" # Replace with your model path if different
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
id2label = {
"0": "3D Printed",
"1": "Not 3D Printed"
}
def classify_3d_printed(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_3d_printed,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=2, label="3D Printing Classification"),
title="3D-Printed-Or-Not-SigLIP2",
description="Upload an image to detect if the object is 3D printed or not."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Intended Use**
**3D-Printed-Or-Not-SigLIP2** can be used for:
- **Manufacturing Verification** – Classify objects to ensure they meet production standards.
- **Educational Tools** – Help learners distinguish between manufacturing methods.
- **Retail Filtering** – Categorize product images by manufacturing technique.
- **Quality Control** – Spot-check datasets or content for 3D-printed items.
|
[
"3d printed",
"not 3d printed"
] |
prithivMLmods/Watermark-Detection-SigLIP2
|

# **Watermark-Detection-SigLIP2**
> **Watermark-Detection-SigLIP2** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **binary image classification**. It is trained to detect whether an image **contains a watermark or not**, using the **SiglipForImageClassification** architecture.
> [!note]
> Watermark detection works best on crisp, high-quality images; noisy images are not recommended for evaluation.
> [!note]
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* https://arxiv.org/pdf/2502.14786
```py
Classification Report:
precision recall f1-score support
No Watermark 0.9290 0.9722 0.9501 12779
Watermark 0.9622 0.9048 0.9326 9983
accuracy 0.9427 22762
macro avg 0.9456 0.9385 0.9414 22762
weighted avg 0.9435 0.9427 0.9424 22762
```

---
## **Label Space: 2 Classes**
The model classifies an image as either:
```
Class 0: "No Watermark"
Class 1: "Watermark"
```
---
## **Install dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Watermark-Detection-SigLIP2" # Update this if using a different path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
id2label = {
"0": "No Watermark",
"1": "Watermark"
}
def classify_watermark(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_watermark,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=2, label="Watermark Detection"),
title="Watermark-Detection-SigLIP2",
description="Upload an image to detect whether it contains a watermark."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Demo Inference**
> [!Warning]
> Watermark
<table>
<tr>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/sm062kFE7QJiLisTTjNwv.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/UFymm_tzVRmov6vn_cElE.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/bPzPAK-Mib8nFhHCkjD2B.png" width="300"/></td>
</tr>
<tr>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/4fP8SBIYofKEeDBU0klQ2.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/wD5M4YgyQGk9-QLFjMcn9.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/yg0q88-0S4k4FUS4-qGNw.png" width="300"/></td>
</tr>
<tr>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/WhRkeYw8-wIgldpaz0E4m.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/Uhb1zBxQV_5CWLoyTAMmD.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/7hnLD2b0f7B7edwgx_eOR.png" width="300"/></td>
</tr>
</table>
> [!Warning]
> No Watermark
<table>
<tr>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/edyFBIETs3Dosn1edpGZ8.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/3bRMcr2r0k00mMkthbYDW.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/eeMLQEg4r89f9owe8jSij.png" width="300"/></td>
</tr>
<tr>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/45jk4dvZk1wT3L7cprqql.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/mrkm0JXXgSQVXi0_d7EKH.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/f_5R7Inb8I-32hWJchkgj.png" width="300"/></td>
</tr>
<tr>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/qIUTSy8SuJEsRkYGd0L5d.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/DnlNo9lM4mBNUjlexKLVa.png" width="300"/></td>
<td><img src="https://hf.fast360.xyz/production/uploads/65bb837dbfb878f46c77de4c/bs4oyaapW8mi0lizOqWSf.png" width="300"/></td>
</tr>
</table>
---
## **Intended Use**
**Watermark-Detection-SigLIP2** is useful in scenarios such as:
- **Content Moderation** – Automatically detect watermarked content on image sharing platforms.
- **Dataset Cleaning** – Filter out watermarked images from training datasets (see the sketch after this list).
- **Copyright Enforcement** – Monitor and flag usage of watermarked media.
- **Digital Forensics** – Support analysis of tampered or protected media assets.
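For the dataset-cleaning scenario, a minimal batch-filtering sketch could look like the following; the `dataset` directory, the `*.jpg` glob, and the 0.5 threshold are illustrative assumptions, not part of the model:
```python
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Watermark-Detection-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
model.eval()

watermarked = []
for path in Path("dataset").glob("*.jpg"):  # assumed flat image folder
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Index 1 is the "Watermark" class in this model's label mapping.
    if logits.softmax(dim=1)[0, 1].item() > 0.5:
        watermarked.append(path)

print(f"Flagged {len(watermarked)} images as watermarked.")
```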
|
[
"no watermark",
"watermark"
] |
prithivMLmods/PACS-DG-SigLIP2
|

# **PACS-DG-SigLIP2**
> **PACS-DG-SigLIP2** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **multi-class domain generalization** classification. It is trained to distinguish visual domains such as **art paintings**, **cartoons**, **photos**, and **sketches** using the **SiglipForImageClassification** architecture.
> [!note]
> *SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* https://arxiv.org/pdf/2502.14786
```py
Classification Report:
precision recall f1-score support
art_painting 0.8538 0.9380 0.8939 2048
cartoon 0.9891 0.9330 0.9603 2344
photo 0.9029 0.8635 0.8828 1670
sketch 0.9990 1.0000 0.9995 3929
accuracy 0.9488 9991
macro avg 0.9362 0.9336 0.9341 9991
weighted avg 0.9509 0.9488 0.9491 9991
```

---
# **ID2Label Mapping**
```py
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("flwrlabs/pacs")
# Extract the unique domain values (assuming `domain` is a string field)
labels = sorted(set(example["domain"] for example in dataset["train"]))
# Create id2label mapping
id2label = {str(i): label for i, label in enumerate(labels)}
# Print the mapping
print(id2label)
```
---
## **Label Space: 4 Domain Categories**
The model predicts the most probable visual domain from the following:
```
Class 0: "art_painting"
Class 1: "cartoon"
Class 2: "photo"
Class 3: "sketch"
```
---
## **Install dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/PACS-DG-SigLIP2" # Update to your actual model path on Hugging Face
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label map
id2label = {
"0": "art_painting",
"1": "cartoon",
"2": "photo",
"3": "sketch"
}
def classify_pacs_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_pacs_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=4, label="Predicted Domain Probabilities"),
title="PACS-DG-SigLIP2",
description="Upload an image to classify its visual domain: Art Painting, Cartoon, Photo, or Sketch."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Intended Use**
The **PACS-DG-SigLIP2** model is designed to support tasks in **domain generalization**, particularly:
- **Cross-domain Visual Recognition** – Identify the domain style of an image.
- **Robust Representation Learning** – Aid in training or evaluating models on domain-shifted inputs.
- **Dataset Characterization** – Use as a tool to explore domain imbalance or drift (see the sketch after this list).
- **Educational Tools** – Help understand how models distinguish between stylistic image variations.
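For dataset characterization, a rough sketch that tallies predicted domains over an unlabeled folder might look like this; the `unlabeled_images` directory and `*.jpg` glob are assumptions, and the counts only approximate the true domain mix:
```python
from collections import Counter
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/PACS-DG-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
model.eval()

labels = ["art_painting", "cartoon", "photo", "sketch"]
domain_counts = Counter()
for path in Path("unlabeled_images").glob("*.jpg"):  # assumed image folder
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=1).item()
    domain_counts[labels[pred]] += 1

print(domain_counts)  # e.g. Counter({'photo': 120, 'sketch': 30, ...})
```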
|
[
"art_painting",
"cartoon",
"photo",
"sketch"
] |
prithivMLmods/Formula-Text-Detection
|

# **Formula-Text-Detection**
> **Formula-Text-Detection** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **binary image classification**. It is built using the **SiglipForImageClassification** architecture to distinguish between **mathematical formulas** and **natural text** in document or image regions.
> [!Note]
> This model works best with plain text or formulas that share a consistent font style.
```py
Classification Report:
precision recall f1-score support
formula 0.9983 1.0000 0.9991 6375
text 1.0000 0.9980 0.9990 5457
accuracy 0.9991 11832
macro avg 0.9991 0.9990 0.9991 11832
weighted avg 0.9991 0.9991 0.9991 11832
```

---
> [!note]
> *SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* https://arxiv.org/pdf/2502.14786
---
## **Label Space: 2 Classes**
The model classifies each input image into one of the following categories:
```
Class 0: "formula"
Class 1: "text"
```
---
## **Install Dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Formula-Text-Detection" # Replace with your model path if different
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
id2label = {
"0": "formula",
"1": "text"
}
def classify_formula_or_text(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_formula_or_text,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=2, label="Formula or Text"),
title="Formula-Text-Detection",
description="Upload an image region to classify whether it contains a mathematical formula or natural text."
)
if __name__ == "__main__":
iface.launch()
```
## **Demo Inference**
> [!Important]
> Text



> [!Important]
> Formula



---
## **Intended Use**
**Formula-Text-Detection** can be used in:
- **OCR Preprocessing** – Improve document OCR accuracy by separating formulas from text (see the routing sketch after this list).
- **Scientific Document Analysis** – Automatically detect mathematical content.
- **Educational Platforms** – Classify and annotate scanned materials.
- **Layout Understanding** – Help AI systems interpret mixed-content documents.
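For the OCR-preprocessing use case, a minimal routing sketch could look like this; the two OCR backend names are placeholders (assumptions), not part of this model:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Formula-Text-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def route_region(image_path):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=1).item()
    if pred == 0:  # class 0 is "formula"
        return "formula_ocr"  # placeholder: a LaTeX-aware OCR engine
    return "text_ocr"         # placeholder: a standard text OCR engine

print(route_region("region.png"))  # assumed example region image
```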
|
[
"formula",
"text"
] |
taresco/newspaper_classifier_segformer
|
# newspaper_classifier_segformer
This model is a fine-tuned version of `nvidia/mit-b0` on a document OCR dataset. It classifies text document images into two categories: those requiring special segmentation processing (`segment`) and those that don't (`no_segment`). This classification is a critical preprocessing step in our OCR pipeline, enabling optimized document processing paths.
## Model Details
- **Base Architecture**: SegFormer (`nvidia/mit-b0`) - a transformer-based architecture that balances efficiency and performance for vision tasks
- **Training Dataset**: `taresco/document_ocr` - specialized collection of text document images with segmentation annotations
- **Input Format**: RGB images resized to 512×512 pixels
- **Output Classes**:
- `segment`: Images containing two or more distinct, unrelated text segments that require special OCR processing
  - `no_segment`: Images containing single, cohesive content that can follow the standard OCR processing path
## Intended Uses & Applications
- **OCR Pipeline Integration**: Primary use is as a preprocessing classifier in OCR workflows for document digitization
- **Document Routing**: Automatically route documents to specialized segmentation processing when needed
- **Batch Processing**: Efficiently handle large collections of document archives by applying appropriate processing techniques
- **Digital Library Processing**: Support for historical text document digitization projects
## Training and evaluation data
The model was fine-tuned on the `taresco/newspaper_ocr` dataset, which contains newspaper images labeled as either `segment` or `no_segment`.
Dataset splits:
- Training set: 19,111 examples, with 15% of this split set aside for cross-validation during training.
- Test set: 4,787 examples
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- num_epochs: 3
### Training results
The model achieved the following results on the evaluation set:
- Loss: 0.0198
- Accuracy: 99.62%
```text
precision recall f1-score support
no_segment 1.00 0.99 1.00 4471
segment 0.91 0.98 0.95 316
accuracy 0.99 4787
macro avg 0.95 0.99 0.97 4787
weighted avg 0.99 0.99 0.99 4787
```
## How to Use
You can use this model with the Hugging Face transformers library:
```python
from transformers import pipeline
# Load the pipeline
pipe = pipeline("image-classification", model="taresco/newspaper_classifier_segformer")
# Classify an image
image_path = "path_to_your_image.jpg"
result = pipe(image_path)
print(result)
```
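Building on the pipeline above, a minimal document-routing sketch for the OCR workflow might look like this; the returned path names are placeholders (assumptions) for your own processing steps:
```python
from transformers import pipeline

pipe = pipeline("image-classification", model="taresco/newspaper_classifier_segformer")

def route_document(image_path):
    top = pipe(image_path)[0]  # highest-scoring class comes first
    if top["label"] == "segment":
        return "segmentation_path"  # send to special segmentation processing
    return "standard_path"          # send to the standard OCR path

print(route_document("path_to_your_image.jpg"))
```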
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
[
"no_segment",
"segment"
] |
Sarthak003/finetuned-indian-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2746
- Accuracy: 0.9341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
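The listed configuration can be sketched with `TrainingArguments` roughly as follows; the `output_dir` name is an assumption, not from this card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-indian-food",  # assumed name
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed-precision training
)
```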
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.9137 | 0.3003 | 100 | 1.0509 | 0.8045 |
| 0.7366 | 0.6006 | 200 | 0.6186 | 0.8693 |
| 0.6302 | 0.9009 | 300 | 0.5310 | 0.8767 |
| 0.4666 | 1.2012 | 400 | 0.6071 | 0.8332 |
| 0.5461 | 1.5015 | 500 | 0.5630 | 0.8587 |
| 0.3645 | 1.8018 | 600 | 0.4435 | 0.8789 |
| 0.2982 | 2.1021 | 700 | 0.3622 | 0.9075 |
| 0.3269 | 2.4024 | 800 | 0.3381 | 0.9086 |
| 0.2817 | 2.7027 | 900 | 0.3447 | 0.9160 |
| 0.1864 | 3.0030 | 1000 | 0.3378 | 0.9171 |
| 0.1448 | 3.3033 | 1100 | 0.2802 | 0.9330 |
| 0.1908 | 3.6036 | 1200 | 0.2880 | 0.9309 |
| 0.1987 | 3.9039 | 1300 | 0.2746 | 0.9341 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"burger",
"butter_naan",
"kaathi_rolls",
"kadai_paneer",
"kulfi",
"masala_dosa",
"momos",
"paani_puri",
"pakode",
"pav_bhaji",
"pizza",
"samosa",
"chai",
"chapati",
"chole_bhature",
"dal_makhani",
"dhokla",
"fried_rice",
"idli",
"jalebi"
] |
Snppuzzle/Lanna-model-efficientnet-b0V2
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aabnam",
"amnat",
"anong",
"anu",
"aphet",
"athik",
"banman",
"binbon",
"bochai",
"bokin",
"bolao",
"bopen",
"boron",
"buppe",
"chaehom",
"chaeyang",
"chaidi",
"chanan",
"changhan",
"chaofa",
"chaomom",
"chaomueang",
"chata",
"chatu",
"chaya",
"chiangdao",
"chiangmai",
"chingchang",
"chokdi",
"dangsaap",
"deklek",
"deumnai",
"doilo",
"doiluang",
"doitao",
"dokbua",
"eka",
"fanhan",
"hangdong",
"hangsat",
"hungtam",
"huwai",
"inta",
"iti",
"itom",
"jara",
"kadi",
"kamyao",
"kanmo",
"kapmo",
"kephet",
"kepphak",
"khaikai",
"khaipa",
"khamaen",
"khaoma",
"khata",
"kheumnguem",
"khomchai",
"khongbo",
"khongtua",
"khunyuam",
"khwaluat",
"khwamsuk",
"kinkhao",
"kinkhong",
"kinmuea",
"kinru",
"kluaibo",
"laemai",
"laichiao",
"lailong",
"lampang",
"lattho",
"loka",
"luathak",
"luatok",
"maechaem",
"maechai",
"maechan",
"maecharim",
"maelao",
"maelim",
"maemo",
"maephrik",
"maetaeng",
"maeth",
"maetha",
"maewang",
"maha",
"mahachai",
"mam",
"manpen",
"manu",
"mueangphan",
"mueangyong",
"nakrian",
"nambo",
"nanglong",
"nangsue",
"naokhong",
"nara",
"newin",
"nganban",
"nguenchae",
"nguenchat",
"omkoi",
"oprom",
"oram",
"osot",
"padaet",
"phaideuan",
"phaka",
"phakhawa",
"phayao",
"phoenwai",
"phuphiang",
"phusang",
"phuttha",
"phuttho",
"pikat",
"pikot",
"piso",
"puri",
"rakha",
"ratna",
"roisai",
"ruluem",
"saichai",
"saket",
"sana",
"sanam",
"sanya",
"sapha",
"sawa",
"sayong",
"siri",
"sitth",
"soekho",
"soekman",
"somkhuan",
"songkho",
"sukhato",
"sukka",
"taefai",
"taehai",
"tanam",
"taro",
"thairat",
"thamam",
"thawai",
"thewa",
"thuti",
"uru",
"wailang",
"wasa",
"wati",
"wihan",
"witcha",
"witwo",
"yapheng",
"yukloek"
] |
prithivMLmods/siglip2-x256p32-explicit-content
|

# **siglip2-x256p32-explicit-content**
> **siglip2-x256p32-explicit-content** is a vision-language encoder model fine-tuned from **siglip2-base-patch32-256** for **multi-class image classification**. Based on the **SiglipForImageClassification** architecture, this model is designed to detect and categorize various forms of visual content, from safe to explicit, making it ideal for content moderation and media filtering.
> [!note]
> *SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* https://arxiv.org/pdf/2502.14786
---
```py
Classification Report:
precision recall f1-score support
Anime Picture 0.9314 0.9139 0.9226 5600
Hentai Picture 0.9349 0.9213 0.9281 4180
Normal or Safe 0.9340 0.9328 0.9334 5503
Pornography 0.9769 0.9650 0.9709 5600
Enticing or Sensual 0.9264 0.9663 0.9459 5600
accuracy 0.9409 26483
macro avg 0.9407 0.9398 0.9402 26483
weighted avg 0.9410 0.9409 0.9408 26483
```

---
## **Label Space: 5 Classes**
This model classifies each image into one of the following content types:
```
Class 0: "Anime Picture"
Class 1: "Hentai Picture"
Class 2: "Normal or Safe"
Class 3: "Pornography"
Class 4: "Enticing or Sensual"
```
---
## **Install Dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/siglip2-x256p32-explicit-content" # Replace with your HF model path if needed
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to Label mapping
id2label = {
"0": "Anime Picture",
"1": "Hentai Picture",
"2": "Normal or Safe",
"3": "Pornography",
"4": "Enticing or Sensual"
}
def classify_explicit_content(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_explicit_content,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Predicted Content Type"),
    title="siglip2-x256p32-explicit-content",
description="Classifies images as Anime, Hentai, Pornography, Enticing, or Safe for use in moderation systems."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Intended Use**
This model is ideal for:
- **AI-Powered Content Moderation**
- **NSFW and Explicit Media Detection** (see the gating sketch after this list)
- **Content Filtering in Social Media Platforms**
- **Image Dataset Cleaning & Annotation**
- **Parental Control Solutions**
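A minimal moderation-gate sketch follows; which classes count as unsafe and the 0.6 threshold are policy assumptions to calibrate, not properties of the model:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/siglip2-x256p32-explicit-content"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

labels = ["Anime Picture", "Hentai Picture", "Normal or Safe",
          "Pornography", "Enticing or Sensual"]
UNSAFE = {"Hentai Picture", "Pornography", "Enticing or Sensual"}  # policy assumption

def is_allowed(image_path, threshold=0.6):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=1).squeeze()
    # Sum probability mass over the classes this policy treats as unsafe.
    unsafe_score = sum(probs[i].item() for i, l in enumerate(labels) if l in UNSAFE)
    return unsafe_score < threshold

print(is_allowed("upload.jpg"))  # assumed example file
```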
|
[
"anime picture",
"hentai picture",
"normal or safe",
"pornography",
"enticing or sensual"
] |
encku/tuborg-single-04-2025
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.0007826363435015082
f1_macro: 0.9998667225525736
f1_micro: 0.9998677353649181
f1_weighted: 0.9998677517242137
precision_macro: 0.9998635579506288
precision_micro: 0.9998677353649181
precision_weighted: 0.999868129449558
recall_macro: 0.9998702523325717
recall_micro: 0.9998677353649181
recall_weighted: 0.9998677353649181
accuracy: 0.9998677353649181
|
[
"6974202725334",
"c001",
"c002",
"c003",
"c004",
"c005",
"c006",
"c007",
"c008",
"c009",
"c010",
"c012",
"c014",
"c015",
"c017",
"c018",
"c020",
"c021",
"c022",
"c023",
"c024",
"c025",
"c026",
"c027",
"c028",
"c029",
"c030",
"c031",
"c032",
"c033",
"c034",
"c035",
"c036",
"c037",
"c038",
"c039",
"c040",
"c041",
"c042",
"c043",
"c044",
"tbrg067",
"tbrg068",
"tbrg072",
"tbrg073",
"tbrg074",
"tbrg075",
"tbrg085",
"tbrg086",
"tbrg087",
"tbrg090",
"tbrg092",
"tbrg093",
"tbrg096",
"tbrg097",
"tbrg098",
"tbrg100",
"tbrg156",
"tbrg157",
"tbrg158",
"tbrg159",
"tt00277",
"tt00523",
"tt00677",
"tt00685",
"tt00765",
"tt00792",
"tt00793",
"tt00810",
"tt00811",
"tt00812",
"tt00853",
"tt00854",
"tt00857",
"tt00859",
"tt00875",
"tt00876",
"tt00944",
"tt00947",
"tt00964",
"tt00980",
"tt01001",
"tt01037",
"tt01069",
"tt01070",
"tt01071",
"tt01072",
"tt01142",
"tt01148",
"tt01149",
"tt01162",
"tt01169",
"tt01172",
"tt01174",
"tt01178",
"tt01231",
"tt01276",
"tt01277",
"tt01300",
"tt01307",
"tt01431",
"tt01460",
"tt01482"
] |
fdrmic/vit-plants
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-plants
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the dpdl-benchmark/oxford_flowers102 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7072
- Accuracy: 0.8922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 51 | 3.5425 | 0.4314 |
| 3.6417 | 2.0 | 102 | 2.6298 | 0.8039 |
| 3.6417 | 3.0 | 153 | 2.0558 | 0.8431 |
| 2.0246 | 4.0 | 204 | 1.7406 | 0.8725 |
| 2.0246 | 5.0 | 255 | 1.6419 | 0.8922 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
[
"72",
"84",
"70",
"51",
"48",
"83",
"42",
"58",
"40",
"35",
"60",
"59",
"95",
"87",
"23",
"91",
"75",
"79",
"24",
"20",
"64",
"89",
"100",
"62",
"16",
"2",
"41",
"26",
"45",
"67",
"1",
"61",
"54",
"39",
"7",
"12",
"29",
"11",
"43",
"98",
"63",
"15",
"55",
"38",
"36",
"78",
"3",
"30",
"57",
"73",
"25",
"5",
"53",
"90",
"0",
"92",
"9",
"68",
"8",
"28",
"50",
"22",
"96",
"31",
"47",
"69",
"34",
"52",
"21",
"81",
"49",
"46",
"65",
"94",
"32",
"56",
"77",
"6",
"86",
"88",
"33",
"71",
"27",
"93",
"99",
"17",
"80",
"18",
"66",
"14",
"101",
"44",
"74",
"4",
"85",
"82",
"10",
"13",
"37",
"76",
"19",
"97"
] |
danielhorvath94/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6387
- Accuracy: 0.892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
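A hedged `TrainingArguments` sketch matching the listed values, including the gradient accumulation that yields the total batch size of 64 (the `output_dir` name is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_food_model",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 x 4 = total train batch size 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```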
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.767 | 1.0 | 63 | 2.5637 | 0.838 |
| 1.8711 | 2.0 | 126 | 1.8012 | 0.871 |
| 1.602 | 2.96 | 186 | 1.6387 | 0.892 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Granitagushi/vit-base-fruits-360
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-fruits-360
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the PedroSampaio/fruits-360 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 218 | 0.0045 | 1.0 |
| No log | 2.0 | 436 | 0.0020 | 1.0 |
| 0.0566 | 3.0 | 654 | 0.0013 | 1.0 |
| 0.0566 | 4.0 | 872 | 0.0010 | 1.0 |
| 0.0013 | 5.0 | 1090 | 0.0010 | 1.0 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"5",
"11",
"20",
"66",
"102"
] |
Genereux-akotenou/Face-Mask-Detection
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Face-Mask-Detection
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0051
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0344 | 1.0 | 83 | 0.0051 | 0.9992 |
| 0.0112 | 2.0 | 166 | 0.0052 | 0.9983 |
| 0.0146 | 3.0 | 249 | 0.0045 | 0.9992 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
[
"withmask",
"withoutmask"
] |
WhoCares258/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6174
- Accuracy: 0.889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6674 | 1.0 | 63 | 2.4971 | 0.801 |
| 1.8502 | 2.0 | 126 | 1.7797 | 0.859 |
| 1.5701 | 2.96 | 186 | 1.6174 | 0.889 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
mjpsm/mazzy-specified-participation-image-classifier-updated
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"not paying attention",
"paying attention"
] |
prithivMLmods/GeoGuessr-55
|

# **GeoGuessr-55**
> **GeoGuessr-55** is a visual geolocation classification model that predicts the **country** from a single image. Based on the `SigLIP2` architecture, this model can classify images into one of **55 countries** using visual features such as landscapes, signs, vegetation, and architecture. It is useful for location-based games, geographic AI research, and image-based country inference.
```py
Classification Report:
precision recall f1-score support
Argentina 0.5292 0.5083 0.5185 482
Australia 0.7850 0.8146 0.7995 1192
Austria 0.6199 0.4380 0.5133 242
Bangladesh 0.4583 0.1486 0.2245 74
Belgium 0.2500 0.0065 0.0127 153
Bolivia 0.0000 0.0000 0.0000 81
Botswana 0.5263 0.2000 0.2899 100
Brazil 0.6562 0.8356 0.7351 1624
Bulgaria 0.5091 0.3709 0.4291 151
Cambodia 0.0000 0.0000 0.0000 82
Canada 0.7464 0.7973 0.7710 967
Chile 0.5000 0.1360 0.2138 228
Colombia 0.3191 0.0857 0.1351 175
Croatia 0.6667 0.0222 0.0430 90
Czechia 0.5000 0.0335 0.0628 179
Denmark 0.0000 0.0000 0.0000 138
Finland 0.6609 0.8338 0.7373 734
France 0.6129 0.7913 0.6908 2501
Germany 0.7943 0.8627 0.8271 488
Ghana 0.4706 0.1081 0.1758 74
Greece 0.3684 0.0809 0.1327 173
Hungary 0.5000 0.0342 0.0640 117
India 0.8261 0.5089 0.6298 112
Indonesia 0.6211 0.2935 0.3986 201
Ireland 0.6316 0.0591 0.1081 203
Israel 0.5427 0.5570 0.5498 228
Italy 0.4092 0.2736 0.3279 552
Japan 0.7996 0.9632 0.8738 2688
Kenya 0.4359 0.1868 0.2615 91
Latvia 0.0000 0.0000 0.0000 81
Lithuania 0.0000 0.0000 0.0000 98
Malaysia 0.5413 0.3986 0.4591 296
Mexico 0.4721 0.4571 0.4645 630
Netherlands 0.5101 0.3753 0.4324 405
New Zealand 0.6910 0.5116 0.5879 389
Nigeria 0.4000 0.3488 0.3727 86
Norway 0.7384 0.7055 0.7216 472
Peru 0.5000 0.3016 0.3762 189
Philippines 0.5217 0.1569 0.2412 153
Poland 0.5122 0.6275 0.5640 604
Portugal 0.2000 0.0059 0.0115 169
Romania 0.4167 0.3512 0.3812 242
Russia 0.6232 0.7946 0.6985 1232
Singapore 0.7339 0.9211 0.8169 494
Slovakia 0.0000 0.0000 0.0000 75
South Africa 0.7535 0.7717 0.7625 828
South Korea 0.5478 0.5059 0.5260 170
Spain 0.4589 0.5492 0.5000 752
Sweden 0.5311 0.3701 0.4362 508
Switzerland 1.0000 0.0165 0.0325 121
Taiwan 0.6029 0.4293 0.5015 382
Thailand 0.5309 0.7939 0.6363 660
Turkey 0.4872 0.2032 0.2868 187
Ukraine 0.0000 0.0000 0.0000 79
United Kingdom 0.6792 0.8746 0.7646 1738
accuracy 0.6485 25160
macro avg 0.4944 0.3713 0.3836 25160
weighted avg 0.6147 0.6485 0.6106 25160
```
---
## **Label Classes**
The model classifies an image into one of the following 55 countries:
```
0: Argentina 1: Australia 2: Austria 3: Bangladesh
4: Belgium 5: Bolivia 6: Botswana 7: Brazil
8: Bulgaria 9: Cambodia 10: Canada 11: Chile
12: Colombia 13: Croatia 14: Czechia 15: Denmark
16: Finland 17: France 18: Germany 19: Ghana
20: Greece 21: Hungary 22: India 23: Indonesia
24: Ireland 25: Israel 26: Italy 27: Japan
28: Kenya 29: Latvia 30: Lithuania 31: Malaysia
32: Mexico 33: Netherlands 34: New Zealand 35: Nigeria
36: Norway 37: Peru 38: Philippines 39: Poland
40: Portugal 41: Romania 42: Russia 43: Singapore
44: Slovakia 45: South Africa 46: South Korea 47: Spain
48: Sweden 49: Switzerland 50: Taiwan 51: Thailand
52: Turkey 53: Ukraine 54: United Kingdom
```
---
## **Installation**
```bash
pip install transformers torch pillow gradio
```
---
## **Example Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/GeoGuessr-55"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to label mapping
id2label = {
"0": "Argentina", "1": "Australia", "2": "Austria", "3": "Bangladesh", "4": "Belgium",
"5": "Bolivia", "6": "Botswana", "7": "Brazil", "8": "Bulgaria", "9": "Cambodia",
"10": "Canada", "11": "Chile", "12": "Colombia", "13": "Croatia", "14": "Czechia",
"15": "Denmark", "16": "Finland", "17": "France", "18": "Germany", "19": "Ghana",
"20": "Greece", "21": "Hungary", "22": "India", "23": "Indonesia", "24": "Ireland",
"25": "Israel", "26": "Italy", "27": "Japan", "28": "Kenya", "29": "Latvia",
"30": "Lithuania", "31": "Malaysia", "32": "Mexico", "33": "Netherlands",
"34": "New Zealand", "35": "Nigeria", "36": "Norway", "37": "Peru", "38": "Philippines",
"39": "Poland", "40": "Portugal", "41": "Romania", "42": "Russia", "43": "Singapore",
"44": "Slovakia", "45": "South Africa", "46": "South Korea", "47": "Spain", "48": "Sweden",
"49": "Switzerland", "50": "Taiwan", "51": "Thailand", "52": "Turkey", "53": "Ukraine",
"54": "United Kingdom"
}
def classify_country(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
return {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
# Launch Gradio demo
iface = gr.Interface(
fn=classify_country,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Top Predicted Countries"),
title="GeoGuessr-55",
description="Upload an image to predict which country it's from. The model uses SigLIP2 to classify among 55 countries."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Applications**
* **GeoGuessr-style games and challenges**
* **Geographical tagging of unlabeled datasets** (see the sketch after this list)
* **Tourism photo origin prediction**
* **Education and training for human geographers or ML enthusiasts**
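For geographical tagging, a top-k sketch might look like this; the `top_k=5` value and the example file name are arbitrary assumptions, and `model.config.id2label` is expected to carry the country names listed above:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/GeoGuessr-55"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def top_countries(image_path, top_k=5):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=1).squeeze()
    values, indices = probs.topk(top_k)  # k most probable countries
    return [(model.config.id2label[i.item()], round(v.item(), 3))
            for v, i in zip(values, indices)]

print(top_countries("street_view.jpg"))  # assumed example image
```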
|
[
"argentina",
"australia",
"austria",
"bangladesh",
"belgium",
"bolivia",
"botswana",
"brazil",
"bulgaria",
"cambodia",
"canada",
"chile",
"colombia",
"croatia",
"czechia",
"denmark",
"finland",
"france",
"germany",
"ghana",
"greece",
"hungary",
"india",
"indonesia",
"ireland",
"israel",
"italy",
"japan",
"kenya",
"latvia",
"lithuania",
"malaysia",
"mexico",
"netherlands",
"new zealand",
"nigeria",
"norway",
"peru",
"philippines",
"poland",
"portugal",
"romania",
"russia",
"singapore",
"slovakia",
"south africa",
"south korea",
"spain",
"sweden",
"switzerland",
"taiwan",
"thailand",
"turkey",
"ukraine",
"united kingdom"
] |
prithivMLmods/x-bot-profile-detection
|

# **x-bot-profile-detection**
> **x-bot-profile-detection** is a SigLIP2-based classification model designed to detect **profile authenticity types on social media platforms** (such as X/Twitter). It categorizes a profile image into four classes: **bot**, **cyborg**, **real**, or **verified**. Built on `google/siglip2-base-patch16-224`, the model leverages advanced vision-language pretraining for robust image classification.
```py
Classification Report:
precision recall f1-score support
bot 0.9912 0.9960 0.9936 2500
cyborg 0.9940 0.9880 0.9910 2500
real 0.8634 0.9936 0.9239 2500
verified 0.9948 0.8460 0.9144 2500
accuracy 0.9559 10000
macro avg 0.9609 0.9559 0.9557 10000
weighted avg 0.9609 0.9559 0.9557 10000
```

---
## **Label Classes**
The model predicts one of the following profile types:
```
0: bot → Automated accounts
1: cyborg → Partially automated or suspiciously mixed behavior
2: real → Genuine human users
3: verified → Verified accounts or official profiles
```
---
## **Installation**
```bash
pip install transformers torch pillow gradio
```
---
## **Example Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/x-bot-profile-detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Define class mapping
id2label = {
"0": "bot",
"1": "cyborg",
"2": "real",
"3": "verified"
}
def detect_profile_type(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Create Gradio UI
iface = gr.Interface(
fn=detect_profile_type,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=4, label="Predicted Profile Type"),
title="x-bot-profile-detection",
description="Upload a social media profile picture to classify it as Bot, Cyborg, Real, or Verified using a SigLIP2 model."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Use Cases**
* Social media moderation and automation detection (see the sketch after this list)
* Anomaly detection in public discourse
* Botnet analysis and influence operation research
* Platform integrity and trust verification
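One possible sketch folds the `bot` and `cyborg` probabilities into a single automation score; the 0.5 flagging threshold is an assumption to calibrate against your own data:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/x-bot-profile-detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def automation_score(image_path):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=1).squeeze()
    # Classes 0 and 1 are "bot" and "cyborg" in this model's label map.
    return probs[0].item() + probs[1].item()

score = automation_score("profile.png")  # assumed example image
print("flag for review" if score > 0.5 else "looks organic", round(score, 3))
```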
|
[
"bot",
"cyborg",
"real",
"verified"
] |
prithivMLmods/NailbitingNet
|

# **NailbitingNet**
> **NailbitingNet** is a binary image classification model based on `google/siglip2-base-patch16-224`, designed to detect **nail-biting behavior** in images. Leveraging the **SiglipForImageClassification** architecture, this model is ideal for behavior monitoring, wellness applications, and human activity recognition.
```py
Classification Report:
precision recall f1-score support
biting 0.8412 0.9076 0.8731 2824
no biting 0.9271 0.8728 0.8991 3805
accuracy 0.8876 6629
macro avg 0.8841 0.8902 0.8861 6629
weighted avg 0.8905 0.8876 0.8881 6629
```

---
## **Label Classes**
The model distinguishes between:
```
Class 0: "biting" → The person appears to be biting their nails
Class 1: "no biting" → No nail-biting behavior detected
```
---
## **Installation**
```bash
pip install transformers torch pillow gradio
```
---
## **Example Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/NailbitingNet"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to label mapping
id2label = {
"0": "biting",
"1": "no biting"
}
def detect_nailbiting(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=detect_nailbiting,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=2, label="Nail-Biting Detection"),
title="NailbitingNet",
description="Upload an image to classify whether the person is biting their nails or not."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Use Cases**
* **Wellness & Habit Monitoring**
* **Behavioral AI Applications**
* **Mental Health Tools**
* **Dataset Filtering for Behavior Recognition**
|
[
"biting",
"no biting"
] |
prithivMLmods/RSI-CB256-07
|

# **RSI-CB256-07**
> **RSI-CB256-07** is a SigLIP2-based model fine-tuned for **coarse-grained remote sensing land-cover classification**. It distinguishes among 7 essential categories commonly used in environmental, urban planning, and geospatial analysis applications. The model is built on `google/siglip2-base-patch16-224` using the `SiglipForImageClassification` architecture.
```py
Classification Report:
precision recall f1-score support
transportation 0.9810 0.9858 0.9834 3300
other objects 0.9854 0.9932 0.9893 884
woodland 0.9973 0.9958 0.9966 6258
water area 0.9870 0.9837 0.9854 4104
other land 0.9925 0.9919 0.9922 3593
cultivated land 0.9918 0.9901 0.9909 2817
construction land 0.9945 0.9963 0.9954 3791
accuracy 0.9912 24747
macro avg 0.9899 0.9910 0.9904 24747
weighted avg 0.9912 0.9912 0.9912 24747
```

---
## **Label Space: 7 Remote Sensing Classes**
This model predicts one of the following categories for a given satellite or aerial image:
```
Class 0: "transportation"
Class 1: "other objects"
Class 2: "woodland"
Class 3: "water area"
Class 4: "other land"
Class 5: "cultivated land"
Class 6: "construction land"
```
---
## **Install Dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/RSI-CB256-07"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to label mapping
id2label = {
"0": "transportation",
"1": "other objects",
"2": "woodland",
"3": "water area",
"4": "other land",
"5": "cultivated land",
"6": "construction land"
}
def classify_rsi_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_rsi_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=7, label="Predicted Land-Cover Category"),
title="RSI-CB256-07",
description="Upload a satellite or aerial image to classify it into one of seven coarse land-cover classes using SigLIP2."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Applications**
* **Urban vs Rural Segmentation**
* **Land-Use Classification**
* **National/Regional Land Cover Monitoring** (see the tiling sketch after this list)
* **Environmental Impact Assessment**
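For monitoring larger scenes, one rough approach tiles an image into 224×224 patches and classifies each tile; the tile size, non-overlapping stride, and `scene.png` path are assumptions, and production pipelines typically work with georeferenced rasters instead of PIL:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/RSI-CB256-07"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

scene = Image.open("scene.png").convert("RGB")  # assumed large aerial scene
tile = 224
grid = {}
for top in range(0, scene.height - tile + 1, tile):
    for left in range(0, scene.width - tile + 1, tile):
        patch = scene.crop((left, top, left + tile, top + tile))
        inputs = processor(images=patch, return_tensors="pt")
        with torch.no_grad():
            pred = model(**inputs).logits.argmax(dim=1).item()
        grid[(top // tile, left // tile)] = model.config.id2label[pred]

print(grid)  # coarse land-cover map, one label per tile
```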
|
[
"transportation",
"other objects",
"woodland",
"water area",
"other land",
"cultivated land",
"construction land"
] |
prithivMLmods/SportsNet-7
|

# **SportsNet-7**
> **SportsNet-7** is a SigLIP2-based image classification model fine-tuned to identify seven popular sports categories. Built upon the powerful `google/siglip2-base-patch16-224` backbone, this model enables fast and accurate sport-type recognition from images or video frames.
```py
Classification Report:
precision recall f1-score support
badminton 0.9385 0.9760 0.9569 1125
cricket 0.9583 0.9739 0.9660 1226
football 0.9821 0.9144 0.9470 958
karate 0.9513 0.9611 0.9562 488
swimming 0.9960 0.9650 0.9802 514
tennis 0.9425 0.9530 0.9477 1169
wrestling 0.9761 0.9753 0.9757 1175
accuracy 0.9606 6655
macro avg 0.9635 0.9598 0.9614 6655
weighted avg 0.9611 0.9606 0.9606 6655
```

---
## **Label Classes**
The model classifies an input image into one of the following 7 sports:
```
0: badminton
1: cricket
2: football
3: karate
4: swimming
5: tennis
6: wrestling
```
---
## **Installation**
```bash
pip install transformers torch pillow gradio
```
---
## **Example Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/SportsNet-7"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
id2label = {
"0": "badminton",
"1": "cricket",
"2": "football",
"3": "karate",
"4": "swimming",
"5": "tennis",
"6": "wrestling"
}
def predict_sport(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return prediction
# Gradio interface
iface = gr.Interface(
fn=predict_sport,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=3, label="Predicted Sport"),
title="SportsNet-7",
description="Upload a sports image to classify it as Badminton, Cricket, Football, Karate, Swimming, Tennis, or Wrestling."
)
if __name__ == "__main__":
iface.launch()
```
---
## **Use Cases**
* Sports video tagging (see the frame-sampling sketch after this list)
* Real-time sport event classification
* Dataset enrichment for sports analytics
* Educational or training datasets for sports AI
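For video tagging, a frame-sampling sketch might look like this; it assumes OpenCV is installed (`pip install opencv-python`), and the one-frame-per-second sampling with majority voting is an arbitrary choice:
```python
from collections import Counter

import cv2
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/SportsNet-7"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

cap = cv2.VideoCapture("match.mp4")  # assumed example clip
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30
votes, frame_idx = Counter(), 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % fps == 0:  # roughly one frame per second
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            pred = model(**inputs).logits.argmax(dim=1).item()
        votes[model.config.id2label[pred]] += 1
    frame_idx += 1
cap.release()

print(votes.most_common(1))  # majority sport label for the clip
```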
|
[
"badminton",
"cricket",
"football",
"karate",
"swimming",
"tennis",
"wrestling"
] |
Sarthak003/finetuned-indian-food-80cls
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food-80cls
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9267
- Accuracy: 0.7417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
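As a reference, these settings map roughly onto the following `TrainingArguments`. This is a minimal sketch using the public `transformers` API, not the author's exact training script; `output_dir` is a hypothetical path:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./finetuned-indian-food-80cls",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # AdamW with betas=(0.9, 0.999) and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                   # mixed_precision_training: Native AMP
)
```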
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.6039 | 0.4695 | 100 | 1.9661 | 0.6067 |
| 1.0838 | 0.9390 | 200 | 1.5575 | 0.665 |
| 1.2459 | 1.4085 | 300 | 1.3433 | 0.6633 |
| 0.9601 | 1.8779 | 400 | 1.1770 | 0.7117 |
| 0.8474 | 2.3474 | 500 | 1.1096 | 0.705 |
| 0.5886 | 2.8169 | 600 | 1.0358 | 0.7267 |
| 0.3708 | 3.2864 | 700 | 0.9705 | 0.7383 |
| 0.4988 | 3.7559 | 800 | 0.9267 | 0.7417 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"adhirasam",
"aloo_gobi",
"bhatura",
"bhindi_masala",
"biryani",
"boondi",
"butter_chicken",
"chak_hao_kheer",
"cham_cham",
"chana_masala",
"chapati",
"chhena_kheeri",
"aloo_matar",
"chicken_razala",
"chicken_tikka",
"chicken_tikka_masala",
"chikki",
"daal_baati_churma",
"daal_puri",
"dal_makhani",
"dal_tadka",
"dharwad_pedha",
"doodhpak",
"aloo_methi",
"double_ka_meetha",
"dum_aloo",
"gajar_ka_halwa",
"gavvalu",
"ghevar",
"gulab_jamun",
"imarti",
"jalebi",
"kachori",
"kadai_paneer",
"aloo_shimla_mirch",
"kadhi_pakoda",
"kajjikaya",
"kakinada_khaja",
"kalakand",
"karela_bharta",
"kofta",
"kuzhi_paniyaram",
"lassi",
"ledikeni",
"litti_chokha",
"aloo_tikki",
"lyangcha",
"maach_jhol",
"makki_di_roti_sarson_da_saag",
"malapua",
"misi_roti",
"misti_doi",
"modak",
"mysore_pak",
"naan",
"navrattan_korma",
"anarsa",
"palak_paneer",
"paneer_butter_masala",
"phirni",
"pithe",
"poha",
"poornalu",
"pootharekulu",
"qubani_ka_meetha",
"rabri",
"ras_malai",
"ariselu",
"rasgulla",
"sandesh",
"shankarpali",
"sheer_korma",
"sheera",
"shrikhand",
"sohan_halwa",
"sohan_papdi",
"sutar_feni",
"unni_appam",
"bandar_laddu",
"basundi"
] |
10Devanshi/finetuned-indian-food
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6964
- Accuracy: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.1649 | 0.1832 | 100 | 2.9846 | 0.5133 |
| 2.5186 | 0.3663 | 200 | 2.2326 | 0.5620 |
| 2.0329 | 0.5495 | 300 | 1.9683 | 0.5737 |
| 1.9823 | 0.7326 | 400 | 1.7542 | 0.6126 |
| 1.7406 | 0.9158 | 500 | 1.5623 | 0.6379 |
| 1.3739 | 1.0989 | 600 | 1.3842 | 0.6989 |
| 1.3276 | 1.2821 | 700 | 1.3007 | 0.6937 |
| 1.0045 | 1.4652 | 800 | 1.1943 | 0.7132 |
| 1.1167 | 1.6484 | 900 | 1.1326 | 0.7502 |
| 1.1052 | 1.8315 | 1000 | 1.0366 | 0.7638 |
| 0.8164 | 2.0147 | 1100 | 1.0313 | 0.7508 |
| 0.7205 | 2.1978 | 1200 | 0.9928 | 0.7547 |
| 0.6983 | 2.3810 | 1300 | 0.9180 | 0.7761 |
| 0.568 | 2.5641 | 1400 | 0.8566 | 0.7930 |
| 0.7619 | 2.7473 | 1500 | 0.8324 | 0.7962 |
| 0.7556 | 2.9304 | 1600 | 0.8008 | 0.8014 |
| 0.5914 | 3.1136 | 1700 | 0.7661 | 0.8138 |
| 0.5826 | 3.2967 | 1800 | 0.7614 | 0.8079 |
| 0.5295 | 3.4799 | 1900 | 0.7281 | 0.8235 |
| 0.398 | 3.6630 | 2000 | 0.7051 | 0.8235 |
| 0.43 | 3.8462 | 2100 | 0.6964 | 0.8228 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
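The card does not include usage code. A minimal inference sketch with the standard image-classification classes; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "10Devanshi/finetuned-indian-food"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("dish.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```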
|
[
"adhirasam",
"aloo_gobi",
"bhatura",
"bhindi_masala",
"biryani",
"boondi",
"burger",
"butter_chicken",
"butter_naan",
"chai",
"chak_hao_kheer",
"cham_cham",
"aloo_matar",
"chana_masala",
"chapati",
"chhena_kheeri",
"chicken_razala",
"chicken_tikka",
"chicken_tikka_masala",
"chikki",
"chole_bhature",
"daal_baati_churma",
"daal_puri",
"aloo_methi",
"dal_makhani",
"dal_tadka",
"dharwad_pedha",
"dhokla",
"doodhpak",
"double_ka_meetha",
"dum_aloo",
"fried_rice",
"gajar_ka_halwa",
"gavvalu",
"aloo_shimla_mirch",
"ghevar",
"gulab_jamun",
"idli",
"imarti",
"jalebi",
"kaathi_rolls",
"kachori",
"kadai_paneer",
"kadhi_pakoda",
"kajjikaya",
"aloo_tikki",
"kakinada_khaja",
"kalakand",
"karela_bharta",
"kofta",
"kulfi",
"kuzhi_paniyaram",
"lassi",
"ledikeni",
"litti_chokha",
"lyangcha",
"anarsa",
"maach_jhol",
"makki_di_roti_sarson_da_saag",
"malapua",
"masala_dosa",
"misi_roti",
"misti_doi",
"modak",
"momos",
"mysore_pak",
"naan",
"ariselu",
"navrattan_korma",
"paani_puri",
"pakode",
"palak_paneer",
"paneer_butter_masala",
"pav_bhaji",
"phirni",
"pithe",
"pizza",
"poha",
"bandar_laddu",
"poornalu",
"pootharekulu",
"qubani_ka_meetha",
"rabri",
"ras_malai",
"rasgulla",
"samosa",
"sandesh",
"shankarpali",
"sheer_korma",
"basundi",
"sheera",
"shrikhand",
"sohan_halwa",
"sohan_papdi",
"sutar_feni",
"unni_appam"
] |
strangerguardhf/nsfw-image-detection
|

# **nsfw-image-detection**
nsfw-image-detection is a vision-language encoder model fine-tuned from siglip2-base-patch16-256 for multi-class image classification. Built on the SiglipForImageClassification architecture, the model is trained to identify and categorize content types in images, in particular for filtering explicit, suggestive, and safe media.
Original model: https://huggingface.co/prithivMLmods/siglip2-x256-explicit-content
---
## **Evals**
```py
Classification Report:
                    precision    recall  f1-score   support

      Anime Picture    0.8940    0.8718    0.8827      5600
             Hentai    0.8961    0.8935    0.8948      4180
             Normal    0.9100    0.8895    0.8997      5503
        Pornography    0.9496    0.9654    0.9574      5600
Enticing or Sensual    0.9132    0.9429    0.9278      5600

           accuracy                        0.9137     26483
          macro avg    0.9126    0.9126    0.9125     26483
       weighted avg    0.9135    0.9137    0.9135     26483
```

---
# **Quick Start with Transformers🤗**
## **Install Dependencies**
```bash
!pip install transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "strangerguardhf/nsfw-image-detection" # Replace with your model path if needed
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to Label mapping
id2label = {
"0": "Anime Picture",
"1": "Hentai",
"2": "Normal",
"3": "Pornography",
"4": "Enticing or Sensual"
}
def classify_explicit_content(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_explicit_content,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Predicted Content Type"),
title="nsfw-image-detection",
description="Classifies images into explicit, suggestive, or safe categories (e.g., Hentai, Pornography, Normal)."
)
if __name__ == "__main__":
iface.launch()
```
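For automated filtering pipelines, the per-class probabilities returned by `classify_explicit_content` can be collapsed into a binary safe/unsafe decision. A minimal sketch; the label grouping and the 0.5 threshold are illustrative choices, not part of the released model:
```python
UNSAFE_LABELS = {"Hentai", "Pornography"}  # illustrative grouping, adjust per policy

def is_unsafe(prediction: dict, threshold: float = 0.5) -> bool:
    # prediction is the label -> probability dict from classify_explicit_content
    return sum(prediction.get(label, 0.0) for label in UNSAFE_LABELS) >= threshold
```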
---
## **Demo Inference**






---
The model classifies each image into one of the following content categories:
```
Class 0: "Anime Picture"
Class 1: "Hentai"
Class 2: "Normal"
Class 3: "Pornography"
Class 4: "Enticing or Sensual"
```
---
|
[
"anime picture",
"hentai",
"normal",
"pornography",
"enticing or sensual"
] |
prithivMLmods/Fire-Risk-Detection
|

# **Fire-Risk-Detection**
> **Fire-Risk-Detection** is a multi-class image classification model based on `google/siglip2-base-patch16-224`, trained to detect **fire risk levels** in geographical or environmental imagery. This model can be used for **wildfire monitoring**, **forest management**, and **environmental safety**.
---
```py
Classification Report:
             precision    recall  f1-score   support

        high    0.4430    0.3382    0.3835      6296
         low    0.3666    0.2296    0.2824     10705
    moderate    0.3807    0.3757    0.3782      8617
non-burnable    0.8429    0.8385    0.8407     17959
   very_high    0.3920    0.3400    0.3641      3268
    very_low    0.6068    0.7856    0.6847     21757
       water    0.9241    0.7744    0.8427      1729

    accuracy                        0.6032     70331
   macro avg    0.5652    0.5260    0.5395     70331
weighted avg    0.5860    0.6032    0.5878     70331
```

## **Label Classes**
The model distinguishes between the following fire risk levels:
```
0: high
1: low
2: moderate
3: non-burnable
4: very_high
5: very_low
6: water
```
---
## **Installation**
```bash
pip install transformers torch pillow gradio
```
---
## **Example Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Fire-Risk-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to label mapping
id2label = {
"0": "high",
"1": "low",
"2": "moderate",
"3": "non-burnable",
"4": "very_high",
"5": "very_low",
"6": "water"
}
def detect_fire_risk(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=detect_fire_risk,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=7, label="Fire Risk Level"),
title="Fire-Risk-Detection",
description="Upload an image to classify the fire risk level: very_low, low, moderate, high, very_high, non-burnable, or water."
)
if __name__ == "__main__":
iface.launch()
```
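For monitoring workflows that score many tiles at once, the `detect_fire_risk` function above can be reused for batch inference. A minimal sketch; the folder layout and the `.jpg` extension are assumptions:
```python
from pathlib import Path

import numpy as np
from PIL import Image

def classify_folder(folder: str) -> dict:
    """Map each image in a folder to its most probable fire-risk label."""
    results = {}
    for path in sorted(Path(folder).glob("*.jpg")):
        image = np.array(Image.open(path).convert("RGB"))
        prediction = detect_fire_risk(image)  # defined in the snippet above
        results[path.name] = max(prediction, key=prediction.get)
    return results
```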
---
## **Applications**
* **Wildfire Early Warning Systems**
* **Environmental Monitoring**
* **Land Use Assessment**
* **Disaster Preparedness and Mitigation**
|
[
"high",
"low",
"moderate",
"non-burnable",
"very_high",
"very_low",
"water"
] |
prithivMLmods/RSI-CB256-35
|

# **RSI-CB256-35**
> **RSI-CB256-35** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **multi-class remote sensing image classification**. Built using the **SiglipForImageClassification** architecture, it is designed to accurately categorize overhead imagery into 35 distinct land-use and land-cover categories.
```py
Classification Report:
                        precision    recall  f1-score   support

            parking lot    0.9978    0.9872    0.9925       467
                 avenue    0.9927    1.0000    0.9963       544
                highway    0.9283    0.9865    0.9565       223
                 bridge    0.9283    0.9659    0.9467       469
                 marina    0.9946    1.0000    0.9973       366
             crossroads    0.9909    0.9801    0.9855       553
         airport runway    0.9956    0.9926    0.9941       678
               pipeline    0.9900    1.0000    0.9950       198
                   town    0.9970    1.0000    0.9985       335
               airplane    0.9915    0.9915    0.9915       351
                 forest    0.9972    0.9945    0.9958      1082
               mangrove    1.0000    1.0000    1.0000      1049
   artificial grassland    0.9821    0.9717    0.9769       283
river protection forest    1.0000    1.0000    1.0000       524
              shrubwood    1.0000    1.0000    1.0000      1331
                sapling    0.9955    1.0000    0.9977       879
          sparse forest    1.0000    1.0000    1.0000      1110
              lakeshore    1.0000    1.0000    1.0000       438
                  river    0.9680    0.9555    0.9617       539
                 stream    1.0000    0.9971    0.9985       688
              coastline    0.9913    0.9978    0.9946       459
                  hirst    0.9890    1.0000    0.9945       628
                    dam    0.9868    0.9259    0.9554       324
                    sea    0.9971    0.9864    0.9917      1028
          snow mountain    1.0000    1.0000    1.0000      1153
              sandbeach    0.9944    0.9907    0.9925       536
               mountain    0.9926    0.9938    0.9932       812
                 desert    0.9757    0.9927    0.9841      1092
               dry farm    1.0000    0.9992    0.9996      1309
         green farmland    0.9984    0.9969    0.9977       644
              bare land    0.9870    0.9630    0.9748       864
          city building    0.9785    0.9892    0.9838      1014
              residents    0.9926    0.9877    0.9901       810
              container    0.9970    0.9955    0.9962       660
           storage room    0.9985    1.0000    0.9992      1307

               accuracy                        0.9919     24747
              macro avg    0.9894    0.9897    0.9895     24747
           weighted avg    0.9920    0.9919    0.9919     24747
```
---
## **Label Space: 35 Remote Sensing Classes**
This model supports the classification of satellite or aerial images into the following classes:
```
Class 0: "parking lot"
Class 1: "avenue"
Class 2: "highway"
Class 3: "bridge"
Class 4: "marina"
Class 5: "crossroads"
Class 6: "airport runway"
Class 7: "pipeline"
Class 8: "town"
Class 9: "airplane"
Class 10: "forest"
Class 11: "mangrove"
Class 12: "artificial grassland"
Class 13: "river protection forest"
Class 14: "shrubwood"
Class 15: "sapling"
Class 16: "sparse forest"
Class 17: "lakeshore"
Class 18: "river"
Class 19: "stream"
Class 20: "coastline"
Class 21: "hirst"
Class 22: "dam"
Class 23: "sea"
Class 24: "snow mountain"
Class 25: "sandbeach"
Class 26: "mountain"
Class 27: "desert"
Class 28: "dry farm"
Class 29: "green farmland"
Class 30: "bare land"
Class 31: "city building"
Class 32: "residents"
Class 33: "container"
Class 34: "storage room"
```
---
## **Install Dependencies**
```bash
pip install -q transformers torch pillow gradio
```
---
## **Inference Code**
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/RSI-CB256-35"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to label mapping
id2label = {
"0": "parking lot",
"1": "avenue",
"2": "highway",
"3": "bridge",
"4": "marina",
"5": "crossroads",
"6": "airport runway",
"7": "pipeline",
"8": "town",
"9": "airplane",
"10": "forest",
"11": "mangrove",
"12": "artificial grassland",
"13": "river protection forest",
"14": "shrubwood",
"15": "sapling",
"16": "sparse forest",
"17": "lakeshore",
"18": "river",
"19": "stream",
"20": "coastline",
"21": "hirst",
"22": "dam",
"23": "sea",
"24": "snow mountain",
"25": "sandbeach",
"26": "mountain",
"27": "desert",
"28": "dry farm",
"29": "green farmland",
"30": "bare land",
"31": "city building",
"32": "residents",
"33": "container",
"34": "storage room"
}
def classify_rsi_image(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_rsi_image,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Top-5 Predicted Categories"),
title="RSI-CB256-35",
description="Remote sensing image classification using SigLIP2. Upload an aerial or satellite image to classify its land-use category."
)
if __name__ == "__main__":
iface.launch()
```
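When only the strongest predictions matter, for example when indexing large image archives, the probability dictionary returned by `classify_rsi_image` can be trimmed to the top-k classes. A small helper sketch:
```python
def top_k(prediction: dict, k: int = 5) -> list:
    """Return the k most probable (label, probability) pairs."""
    return sorted(prediction.items(), key=lambda kv: kv[1], reverse=True)[:k]
```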
---
## **Intended Use**
* **Land-Use Mapping and Planning**
* **Environmental Monitoring**
* **Infrastructure Identification**
* **Remote Sensing Analytics**
* **Agricultural and Forest Area Classification**
|
[
"parking lot",
"avenue",
"highway",
"bridge",
"marina",
"crossroads",
"airport runway",
"pipeline",
"town",
"airplane",
"forest",
"mangrove",
"artificial grassland",
"river protection forest",
"shrubwood",
"sapling",
"sparse forest",
"lakeshore",
"river",
"stream",
"coastline",
"hirst",
"dam",
"sea",
"snow mountain",
"sandbeach",
"mountain",
"desert",
"dry farm",
"green farmland",
"bare land",
"city building",
"residents",
"container",
"storage room"
] |
encku/pepsi-05-25
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.6352611055481248e-05
f1_macro: 1.0
f1_micro: 1.0
f1_weighted: 1.0
precision_macro: 1.0
precision_micro: 1.0
precision_weighted: 1.0
recall_macro: 1.0
recall_micro: 1.0
recall_weighted: 1.0
accuracy: 1.0
|
[
"8690574101191",
"8690574101207",
"8690574103706",
"8690574105892",
"8690574108107",
"8690574108596",
"8690574108602",
"8690574108626",
"8690574114191",
"8690574114214",
"8690574114221",
"8690574114245",
"8690574114337",
"8690574114368",
"8690574114399",
"8690574114412",
"8690574114429",
"8690574114443",
"8690574114450",
"8690574114467",
"8690574114481",
"8690574114511",
"8690574114528",
"8690574114535",
"8690574114566",
"8690574114573",
"8690574114641",
"8690574114658",
"8690574114665",
"8690574114672",
"8690574114696",
"8690574114702",
"8690574114726",
"8690574114733",
"8690574114740",
"8690574114757",
"8690574114788",
"8690574114795",
"8690574114801",
"8690574114856",
"8690574114863",
"8690574114887",
"8690574114894",
"8690574114900",
"8690574114917",
"8690574114955",
"8690574114979",
"8690574114986",
"8690574115020",
"8690574115150",
"8690574115969",
"8690574115983",
"8690574116270",
"8690574116287",
"8690574116768",
"8690574801039",
"8690574802036",
"8690574802838",
"8690574803835",
"8690574820344",
"8690574820542",
"8690574821549",
"86946247"
] |
Sarthak003/finetuned-indian-96cls
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-96cls
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7096
- Accuracy: 0.8254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 3.1577 | 0.1832 | 100 | 2.9843 | 0.5042 |
| 2.5555 | 0.3663 | 200 | 2.2414 | 0.5633 |
| 2.0015 | 0.5495 | 300 | 1.9346 | 0.5840 |
| 2.0439 | 0.7326 | 400 | 1.7007 | 0.6275 |
| 1.7375 | 0.9158 | 500 | 1.5473 | 0.6502 |
| 1.3248 | 1.0989 | 600 | 1.4405 | 0.6781 |
| 1.3164 | 1.2821 | 700 | 1.2743 | 0.7034 |
| 1.1091 | 1.4652 | 800 | 1.2184 | 0.7099 |
| 1.0913 | 1.6484 | 900 | 1.1246 | 0.7281 |
| 1.037 | 1.8315 | 1000 | 1.0869 | 0.7365 |
| 0.8672 | 2.0147 | 1100 | 1.0038 | 0.7625 |
| 0.7188 | 2.1978 | 1200 | 0.9834 | 0.7515 |
| 0.673 | 2.3810 | 1300 | 0.9177 | 0.7703 |
| 0.5936 | 2.5641 | 1400 | 0.8707 | 0.7800 |
| 0.7091 | 2.7473 | 1500 | 0.8100 | 0.7995 |
| 0.787 | 2.9304 | 1600 | 0.8120 | 0.7995 |
| 0.5979 | 3.1136 | 1700 | 0.7536 | 0.8131 |
| 0.5319 | 3.2967 | 1800 | 0.7447 | 0.8125 |
| 0.5717 | 3.4799 | 1900 | 0.7275 | 0.8235 |
| 0.385 | 3.6630 | 2000 | 0.7247 | 0.8248 |
| 0.4009 | 3.8462 | 2100 | 0.7096 | 0.8254 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"adhirasam",
"aloo_gobi",
"bhatura",
"bhindi_masala",
"biryani",
"boondi",
"burger",
"butter_chicken",
"butter_naan",
"chai",
"chak_hao_kheer",
"cham_cham",
"aloo_matar",
"chana_masala",
"chapati",
"chhena_kheeri",
"chicken_razala",
"chicken_tikka",
"chicken_tikka_masala",
"chikki",
"chole_bhature",
"daal_baati_churma",
"daal_puri",
"aloo_methi",
"dal_makhani",
"dal_tadka",
"dharwad_pedha",
"dhokla",
"doodhpak",
"double_ka_meetha",
"dum_aloo",
"fried_rice",
"gajar_ka_halwa",
"gavvalu",
"aloo_shimla_mirch",
"ghevar",
"gulab_jamun",
"idli",
"imarti",
"jalebi",
"kaathi_rolls",
"kachori",
"kadai_paneer",
"kadhi_pakoda",
"kajjikaya",
"aloo_tikki",
"kakinada_khaja",
"kalakand",
"karela_bharta",
"kofta",
"kulfi",
"kuzhi_paniyaram",
"lassi",
"ledikeni",
"litti_chokha",
"lyangcha",
"anarsa",
"maach_jhol",
"makki_di_roti_sarson_da_saag",
"malapua",
"masala_dosa",
"misi_roti",
"misti_doi",
"modak",
"momos",
"mysore_pak",
"naan",
"ariselu",
"navrattan_korma",
"paani_puri",
"pakode",
"palak_paneer",
"paneer_butter_masala",
"pav_bhaji",
"phirni",
"pithe",
"pizza",
"poha",
"bandar_laddu",
"poornalu",
"pootharekulu",
"qubani_ka_meetha",
"rabri",
"ras_malai",
"rasgulla",
"samosa",
"sandesh",
"shankarpali",
"sheer_korma",
"basundi",
"sheera",
"shrikhand",
"sohan_halwa",
"sohan_papdi",
"sutar_feni",
"unni_appam"
] |
xavierbarbier/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1854
- Accuracy: 0.9445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3876 | 1.0 | 370 | 0.3243 | 0.9134 |
| 0.1983 | 2.0 | 740 | 0.2513 | 0.9269 |
| 0.1636 | 3.0 | 1110 | 0.2385 | 0.9283 |
| 0.1342 | 4.0 | 1480 | 0.2293 | 0.9296 |
| 0.1436 | 5.0 | 1850 | 0.2295 | 0.9269 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
chrisis2/vit-food-classification-chrisis2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-food-classification-chrisis2
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the "food-classification" dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2449
- Accuracy: 0.9263
## Model description
The model is based on the Vision Transformer (ViT) architecture and was trained on roughly 24,000 images using the Hugging Face Trainer API.
The goal was to develop a robust classification model for dish recognition.
## Intended uses & limitations
Intended use:
- Classifying food images for educational, analytical, or demonstration purposes
- Can be used in Gradio applications to classify dishes from Western and, above all, Indian cuisine
Limitations:
- Not suitable for unusual or mixed dishes that are not covered by the training set.
- Cannot identify ingredients, calorie counts, or portion sizes; it only returns the best-matching label.
## Training and evaluation data
The model was trained on a public food dataset from Kaggle.com with roughly **24,000 images across 34 classes**. Each class contains about 700-800 images.
Dataset split:
- 80% training
- 10% validation
- 10% test
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0983 | 1.0 | 1194 | 0.2300 | 0.9330 |
| 0.0981 | 2.0 | 2388 | 0.2302 | 0.9359 |
| 0.0873 | 3.0 | 3582 | 0.2302 | 0.9346 |
| 0.1053 | 4.0 | 4776 | 0.2296 | 0.9372 |
| 0.0879 | 5.0 | 5970 | 0.2294 | 0.9351 |
| 0.0982 | 6.0 | 7164 | 0.2293 | 0.9355 |
## Comparison with a zero-shot model (CLIP)
To put its performance in context, the model was compared against the zero-shot classification model [`openai/clip-vit-large-patch14`](https://huggingface.co/openai/clip-vit-large-patch14). Both models were evaluated on the identical test set (2,388 images); a minimal sketch of the setup follows the list below.
**Zero-shot model:**
- Model: CLIP (ViT-Large, Patch-14)
- Task: `zero-shot-image-classification`
- No fine-tuning; relies solely on its text-image understanding
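A minimal sketch of the zero-shot setup via the `pipeline` API; the candidate labels shown are a small illustrative subset of the 34 classes, and the image path is a placeholder:
```python
from transformers import pipeline

clip = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")
candidate_labels = ["pizza", "samosa", "sushi"]  # illustrative subset of the 34 classes
result = clip("dish.jpg", candidate_labels=candidate_labels)  # hypothetical image path
print(result[0]["label"], result[0]["score"])
```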
### Observations
- The ViT model consistently achieves higher accuracy and clearer top-1 predictions, especially on well-lit, centered images.
- The CLIP model showed good generalization, especially on ambiguous or visually complex dishes.
- While the ViT model was trained specifically to recognize a fixed set of classes, CLIP builds on general language-image understanding and matches images to the semantically closest captions.
### Zero-shot results:
- Accuracy: 0.8622
- Precision: 0.9058
- Recall: 0.8622
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"baked potato",
"crispy chicken",
"donut",
"fries",
"hot dog",
"sandwich",
"taco",
"taquito",
"apple_pie",
"burger",
"butter_naan",
"chai",
"chapati",
"cheesecake",
"chicken_curry",
"chole_bhature",
"dal_makhani",
"dhokla",
"fried_rice",
"ice_cream",
"idli",
"jalebi",
"kaathi_rolls",
"kadai_paneer",
"kulfi",
"masala_dosa",
"momos",
"omelette",
"paani_puri",
"pakode",
"pav_bhaji",
"pizza",
"samosa",
"sushi"
] |
arosidi/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1865
- Accuracy: 0.9432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.38 | 1.0 | 370 | 0.3080 | 0.9242 |
| 0.2037 | 2.0 | 740 | 0.2364 | 0.9350 |
| 0.1495 | 3.0 | 1110 | 0.2132 | 0.9459 |
| 0.1517 | 4.0 | 1480 | 0.2060 | 0.9432 |
| 0.1501 | 5.0 | 1850 | 0.2052 | 0.9432 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
BeckerAnas/convnext-tiny-224-finetuned-cifar10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-cifar10
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1457
- Accuracy: 0.9566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
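The effective batch size of 128 comes from 32 images per device times 4 gradient-accumulation steps (assuming a single device). A minimal `TrainingArguments` sketch of the reported settings, not the author's exact script; `output_dir` is a hypothetical path:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./convnext-tiny-224-finetuned-cifar10",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch size
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```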
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6039 | 1.0 | 352 | 0.2819 | 0.9326 |
| 0.3994 | 2.0 | 704 | 0.1639 | 0.9514 |
| 0.4089 | 2.9922 | 1053 | 0.1455 | 0.9574 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
SodaXII/dinov2-small_rice-leaf-disease-augmented-v4_v5_fft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-small_rice-leaf-disease-augmented-v4_v5_fft
This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3174
- Accuracy: 0.9463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5071 | 0.5 | 64 | 0.6205 | 0.7852 |
| 0.4009 | 1.0 | 128 | 0.3635 | 0.8792 |
| 0.209 | 1.5 | 192 | 0.3144 | 0.8859 |
| 0.2231 | 2.0 | 256 | 0.2716 | 0.9128 |
| 0.1661 | 2.5 | 320 | 0.3476 | 0.8691 |
| 0.1308 | 3.0 | 384 | 0.2279 | 0.9195 |
| 0.067 | 3.5 | 448 | 0.3845 | 0.9195 |
| 0.063 | 4.0 | 512 | 0.3661 | 0.9027 |
| 0.0215 | 4.5 | 576 | 0.3287 | 0.9228 |
| 0.0148 | 5.0 | 640 | 0.2952 | 0.9329 |
| 0.0007 | 5.5 | 704 | 0.3063 | 0.9463 |
| 0.0002 | 6.0 | 768 | 0.2855 | 0.9396 |
| 0.0 | 6.5 | 832 | 0.2888 | 0.9396 |
| 0.0 | 7.0 | 896 | 0.2766 | 0.9463 |
| 0.0 | 7.5 | 960 | 0.2879 | 0.9497 |
| 0.0 | 8.0 | 1024 | 0.2960 | 0.9463 |
| 0.0 | 8.5 | 1088 | 0.2906 | 0.9463 |
| 0.0 | 9.0 | 1152 | 0.2920 | 0.9463 |
| 0.0 | 9.5 | 1216 | 0.2932 | 0.9463 |
| 0.0 | 10.0 | 1280 | 0.2921 | 0.9463 |
| 0.0 | 10.5 | 1344 | 0.2922 | 0.9463 |
| 0.0 | 11.0 | 1408 | 0.2924 | 0.9463 |
| 0.0 | 11.5 | 1472 | 0.2919 | 0.9497 |
| 0.0 | 12.0 | 1536 | 0.2925 | 0.9463 |
| 0.0 | 12.5 | 1600 | 0.2943 | 0.9463 |
| 0.0 | 13.0 | 1664 | 0.2969 | 0.9463 |
| 0.0 | 13.5 | 1728 | 0.2982 | 0.9430 |
| 0.0 | 14.0 | 1792 | 0.2977 | 0.9463 |
| 0.0 | 14.5 | 1856 | 0.2981 | 0.9463 |
| 0.0 | 15.0 | 1920 | 0.2980 | 0.9463 |
| 0.0 | 15.5 | 1984 | 0.2980 | 0.9463 |
| 0.0 | 16.0 | 2048 | 0.2982 | 0.9463 |
| 0.0 | 16.5 | 2112 | 0.2998 | 0.9463 |
| 0.0 | 17.0 | 2176 | 0.3035 | 0.9430 |
| 0.0 | 17.5 | 2240 | 0.3039 | 0.9463 |
| 0.0 | 18.0 | 2304 | 0.3029 | 0.9463 |
| 0.0 | 18.5 | 2368 | 0.3044 | 0.9430 |
| 0.0 | 19.0 | 2432 | 0.3046 | 0.9430 |
| 0.0 | 19.5 | 2496 | 0.3046 | 0.9430 |
| 0.0 | 20.0 | 2560 | 0.3047 | 0.9430 |
| 0.0 | 20.5 | 2624 | 0.3047 | 0.9430 |
| 0.0 | 21.0 | 2688 | 0.3074 | 0.9430 |
| 0.0 | 21.5 | 2752 | 0.3086 | 0.9430 |
| 0.0 | 22.0 | 2816 | 0.3083 | 0.9430 |
| 0.0 | 22.5 | 2880 | 0.3088 | 0.9430 |
| 0.0 | 23.0 | 2944 | 0.3103 | 0.9463 |
| 0.0 | 23.5 | 3008 | 0.3109 | 0.9463 |
| 0.0 | 24.0 | 3072 | 0.3107 | 0.9463 |
| 0.0 | 24.5 | 3136 | 0.3108 | 0.9463 |
| 0.0 | 25.0 | 3200 | 0.3109 | 0.9463 |
| 0.0 | 25.5 | 3264 | 0.3101 | 0.9463 |
| 0.0 | 26.0 | 3328 | 0.3133 | 0.9463 |
| 0.0 | 26.5 | 3392 | 0.3125 | 0.9497 |
| 0.0 | 27.0 | 3456 | 0.3163 | 0.9463 |
| 0.0 | 27.5 | 3520 | 0.3172 | 0.9463 |
| 0.0 | 28.0 | 3584 | 0.3166 | 0.9463 |
| 0.0 | 28.5 | 3648 | 0.3176 | 0.9463 |
| 0.0 | 29.0 | 3712 | 0.3175 | 0.9463 |
| 0.0 | 29.5 | 3776 | 0.3174 | 0.9463 |
| 0.0 | 30.0 | 3840 | 0.3174 | 0.9463 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
[
"bacterial leaf blight",
"brown spot",
"healthy rice leaf",
"leaf blast",
"leaf scald",
"narrow brown leaf spot",
"rice hispa",
"sheath blight"
] |
Dolssay/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8720
- Accuracy: 0.6020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.9995 | 0.5510 |
| 1.0085 | 2.0 | 14 | 0.9109 | 0.5714 |
| 0.7808 | 3.0 | 21 | 0.8720 | 0.6020 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cpu
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"ai",
"real",
"test"
] |
arosidi/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0484
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2249 | 1.0 | 190 | 0.0807 | 0.9726 |
| 0.1439 | 2.0 | 380 | 0.0535 | 0.9833 |
| 0.1036 | 3.0 | 570 | 0.0484 | 0.9837 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"annual crop",
"forest",
"herbaceous vegetation",
"highway",
"industrial",
"pasture",
"permanent crop",
"residential",
"river",
"sea or lake"
] |
hshankar113/region-id-pred-noCrops-convnext
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# region-id-pred-noCrops-convnext
This model is a fine-tuned version of [facebook/convnext-large-224-22k-1k](https://huggingface.co/facebook/convnext-large-224-22k-1k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1967
- Accuracy: 0.9485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
[
"1",
"10",
"5",
"6",
"7",
"8",
"9",
"11",
"12",
"13",
"14",
"15",
"2",
"3",
"4"
] |
asmae-khald/VITforBreastCancer
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"benign",
"malignant",
"normal"
] |
suzzit/skin-cancer-classification
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# suzzit/eyetracking_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6665
- Validation Loss: 0.6454
- Train Accuracy: 0.7570
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the learning-rate schedule is reconstructed in the sketch after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2774, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
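The serialized optimizer config above is hard to read at a glance. A minimal Keras sketch reconstructing it (linear decay from 3e-05 to 0.0 over 2774 steps, with the `AdamWeightDecay` optimizer shipped in `transformers`); this is a reconstruction of the reported settings, not the author's original script:
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# PolynomialDecay with power=1.0 is a linear decay from 3e-05 to 0.0 over 2774 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=2774,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(learning_rate=lr_schedule, weight_decay_rate=0.01)
```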
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6665 | 0.6454 | 0.7570 | 0 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.19.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1",
"2"
] |
MKnaepen/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1826
- Accuracy: 0.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3486 | 1.0 | 370 | 0.3232 | 0.9229 |
| 0.2016 | 2.0 | 740 | 0.2496 | 0.9229 |
| 0.152 | 3.0 | 1110 | 0.2324 | 0.9256 |
| 0.151 | 4.0 | 1480 | 0.2241 | 0.9269 |
| 0.129 | 5.0 | 1850 | 0.2226 | 0.9269 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
iqranaz230243/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- Accuracy: 0.4685
- F1: 0.2481
- Precision: 0.4916
- Recall: 0.1894
- Auc Roc: 0.8295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Auc Roc |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 0.2027 | 1.0 | 1963 | 0.1984 | 0.3955 | 0.0496 | 0.1271 | 0.0500 | 0.7076 |
| 0.1966 | 2.0 | 3926 | 0.1890 | 0.4000 | 0.0776 | 0.1877 | 0.0655 | 0.7667 |
| 0.1899 | 3.0 | 5889 | 0.1839 | 0.4160 | 0.1013 | 0.3830 | 0.0821 | 0.7890 |
| 0.186 | 4.0 | 7852 | 0.1829 | 0.4555 | 0.1097 | 0.3752 | 0.0904 | 0.7985 |
| 0.1833 | 5.0 | 9815 | 0.1797 | 0.4205 | 0.1424 | 0.3515 | 0.1073 | 0.8092 |
| 0.1845 | 6.0 | 11778 | 0.1771 | 0.4400 | 0.1255 | 0.4225 | 0.1026 | 0.8153 |
| 0.1788 | 7.0 | 13741 | 0.1769 | 0.4430 | 0.1472 | 0.3464 | 0.1114 | 0.8186 |
| 0.1769 | 8.0 | 15704 | 0.1756 | 0.4426 | 0.1448 | 0.3627 | 0.1132 | 0.8227 |
| 0.1792 | 9.0 | 17667 | 0.1745 | 0.4323 | 0.1653 | 0.3995 | 0.1252 | 0.8250 |
| 0.1755 | 10.0 | 19630 | 0.1745 | 0.4319 | 0.1842 | 0.4664 | 0.1400 | 0.8261 |
| 0.171 | 11.0 | 21593 | 0.1735 | 0.4400 | 0.1868 | 0.5672 | 0.1441 | 0.8285 |
| 0.17 | 12.0 | 23556 | 0.1744 | 0.4540 | 0.1907 | 0.5407 | 0.1462 | 0.8293 |
| 0.1726 | 13.0 | 25519 | 0.1745 | 0.4630 | 0.1750 | 0.4862 | 0.1339 | 0.8304 |
| 0.1715 | 14.0 | 27482 | 0.1732 | 0.4362 | 0.2047 | 0.5735 | 0.1524 | 0.8308 |
| 0.17 | 15.0 | 29445 | 0.1734 | 0.4504 | 0.2080 | 0.4848 | 0.1581 | 0.8316 |
| 0.168 | 16.0 | 31408 | 0.1736 | 0.4515 | 0.2320 | 0.5226 | 0.1791 | 0.8299 |
| 0.1688 | 17.0 | 33371 | 0.1729 | 0.4524 | 0.2315 | 0.5568 | 0.1780 | 0.8317 |
| 0.1663 | 18.0 | 35334 | 0.1748 | 0.4683 | 0.2146 | 0.5260 | 0.1617 | 0.8295 |
| 0.1631 | 19.0 | 37297 | 0.1747 | 0.4529 | 0.2444 | 0.5011 | 0.1908 | 0.8285 |
| 0.1649 | 20.0 | 39260 | 0.1748 | 0.4522 | 0.2404 | 0.4746 | 0.1850 | 0.8298 |
| 0.1634 | 21.0 | 41223 | 0.1747 | 0.4486 | 0.2213 | 0.5125 | 0.1655 | 0.8295 |
| 0.1621 | 22.0 | 43186 | 0.1745 | 0.4499 | 0.2420 | 0.4825 | 0.1865 | 0.8297 |
| 0.1605 | 23.0 | 45149 | 0.1753 | 0.4676 | 0.2206 | 0.4963 | 0.1669 | 0.8298 |
| 0.1573 | 24.0 | 47112 | 0.1762 | 0.4605 | 0.2409 | 0.4814 | 0.1854 | 0.8286 |
| 0.161 | 25.0 | 49075 | 0.1758 | 0.4685 | 0.2481 | 0.4916 | 0.1894 | 0.8295 |
| 0.158 | 26.0 | 51038 | 0.1761 | 0.4604 | 0.2445 | 0.4856 | 0.1883 | 0.8274 |
| 0.1576 | 27.0 | 53001 | 0.1765 | 0.4608 | 0.2539 | 0.4742 | 0.1981 | 0.8290 |
| 0.1567 | 28.0 | 54964 | 0.1767 | 0.4558 | 0.2550 | 0.4835 | 0.1959 | 0.8285 |
| 0.1585 | 29.0 | 56927 | 0.1767 | 0.4602 | 0.2485 | 0.4882 | 0.1898 | 0.8276 |
| 0.1551 | 29.9851 | 58860 | 0.1768 | 0.4616 | 0.2527 | 0.4797 | 0.1941 | 0.8281 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
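Despite the `eurosat` name, the labels are chest X-ray findings, and the per-class F1/precision/recall/AUC metrics suggest a multi-label setup with one sigmoid score per finding. A minimal inference sketch under that assumption; the 0.5 threshold and the input file name are placeholders:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "iqranaz230243/swin-tiny-patch4-window7-224-finetuned-eurosat"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("chest_xray.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits).squeeze()  # assumes a multi-label (sigmoid) head
findings = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(findings)
```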
|
[
"atelectasis",
"cardiomegaly",
"consolidation",
"edema",
"effusion",
"emphysema",
"fibrosis",
"hernia",
"infiltration",
"mass",
"no finding",
"nodule",
"pleural_thickening",
"pneumonia",
"pneumothorax"
] |
FurqanNiazi/vit-base-patch16-224-in21k-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3479
- Accuracy: 0.5566
- F1: 0.0477
- Precision: 0.0371
- Recall: 0.0667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3472 | 0.9938 | 121 | 0.3479 | 0.5566 | 0.0477 | 0.0371 | 0.0667 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10",
"11",
"12",
"13",
"14"
] |
MichaelMM2000/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1940
- Accuracy: 0.9391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.382 | 1.0 | 370 | 0.2590 | 0.9391 |
| 0.1976 | 2.0 | 740 | 0.1871 | 0.9445 |
| 0.1605 | 3.0 | 1110 | 0.1637 | 0.9567 |
| 0.1513 | 4.0 | 1480 | 0.1601 | 0.9513 |
| 0.1424 | 5.0 | 1850 | 0.1583 | 0.9513 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cpu
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
smata/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0523
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.21 | 1.0 | 190 | 0.0896 | 0.9719 |
| 0.1886 | 2.0 | 380 | 0.0917 | 0.9693 |
| 0.1307 | 3.0 | 570 | 0.0523 | 0.9848 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
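For completeness, a hedged inference sketch using the explicit processor/model API rather than `pipeline()`; the file name is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "smata/swin-tiny-patch4-window7-224-finetuned-eurosat"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("satellite_tile.png").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "forest"
```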
|
[
"annual crop",
"forest",
"herbaceous vegetation",
"highway",
"industrial",
"pasture",
"permanent crop",
"residential",
"river",
"sea or lake"
] |
Amoros/DinoAmoros-small-2025_05_05_49526-prova_bs16_freeze_monolabel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DinoAmoros-small-2025_05_05_49526-prova_bs16_freeze_monolabel
This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3316
- F1 Micro: 0.0
- F1 Macro: 0.0
- Accuracy: 0.0
- Learning Rate: 0.001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----:|
| No log | 1.0 | 1 | 3.3334 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 2.0 | 2 | 3.3186 | 0.0 | 0.0 | 0.0 | 0.001 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.6.0+cu118
- Datasets 3.0.2
- Tokenizers 0.21.1
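The F1 Micro, F1 Macro, and Accuracy figures above are standard monolabel metrics; a sketch of how they are typically computed (illustrative labels, not the card's own evaluation code):
```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 2, 1, 2]  # hypothetical label ids
y_pred = [0, 1, 1, 2]
print(f1_score(y_true, y_pred, average="micro"))  # micro-averaged F1
print(f1_score(y_true, y_pred, average="macro"))  # macro-averaged F1
print(accuracy_score(y_true, y_pred))
```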
|
[
"algae",
"acr",
"acr_br",
"anem",
"cca",
"ech",
"fts",
"gal",
"gon",
"h_oval",
"h_uni",
"mtp",
"p",
"poc",
"por",
"r",
"rdc",
"s",
"sg",
"sarg",
"ser",
"slt",
"sp",
"turf"
] |
TianZhou621/vit-base-travel-document-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-travel-document-classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the TianZhou621/travel-document-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4433
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 0.6273 | 0.5 |
| No log | 2.0 | 16 | 0.5903 | 0.75 |
| No log | 3.0 | 24 | 0.5776 | 0.75 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
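A minimal usage sketch for this two-class checkpoint (the file name is a placeholder, not from the original card):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="TianZhou621/vit-base-travel-document-classification")
print(clf("document_scan.jpg", top_k=2))  # scores for "passport" and "driver_lisence"
```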
|
[
"passport",
"driver_lisence"
] |
suzzit/skin-cancer-classificationv2
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# suzzit/skin-cancer-classificationv2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0965
- Validation Loss: 1.0877
- Train Accuracy: 0.4152
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (beta_1=0.9, beta_2=0.999, epsilon=1e-08, weight_decay_rate=0.01, amsgrad=False) with a PolynomialDecay learning-rate schedule (initial_learning_rate=3e-05, decay_steps=13870, end_learning_rate=0.0, power=1.0, cycle=False); see the reconstruction sketch after this list
- training_precision: float32
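This optimizer/schedule pair is what `transformers.create_optimizer` builds; a hedged reconstruction (warmup is not mentioned in the card and is assumed to be zero):
```python
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=13870,  # decay_steps from the optimizer config above
    num_warmup_steps=0,     # assumption: the card does not mention warmup
    weight_decay_rate=0.01,
)
```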
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0993 | 1.1040 | 0.2810 | 0 |
| 1.0972 | 1.0997 | 0.3696 | 1 |
| 1.0965 | 1.0877 | 0.4152 | 2 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.19.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1",
"2"
] |
berng/myclass
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myclass
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2064
- eval_accuracy: 0.9472
- eval_runtime: 10.9177
- eval_samples_per_second: 67.688
- eval_steps_per_second: 8.518
- epoch: 2.0
- step: 740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
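The hyperparameters above map onto `transformers.TrainingArguments` roughly as follows; this is a hedged reconstruction, not the author's actual script, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="myclass",            # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```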
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
suzzit/skin-cancer-classificationv3
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# suzzit/skin-cancer-classificationv3
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0941
- Train Accuracy: 0.5342
- Validation Loss: 1.0367
- Validation Accuracy: 0.5342
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (beta_1=0.9, beta_2=0.999, epsilon=1e-08, weight_decay_rate=0.01, amsgrad=False) with a PolynomialDecay learning-rate schedule (initial_learning_rate=1e-05, decay_steps=27740, end_learning_rate=0.0, power=1.0, cycle=False)
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.0954 | 0.5190 | 1.0453 | 0.5190 | 0 |
| 1.0954 | 0.5342 | 1.0447 | 0.5342 | 1 |
| 1.0957 | 0.5342 | 1.0446 | 0.5342 | 2 |
| 1.0951 | 0.5291 | 1.0426 | 0.5291 | 3 |
| 1.0941 | 0.5342 | 1.0367 | 0.5342 | 4 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.19.0
- Datasets 3.5.1
- Tokenizers 0.21.1
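A hedged TensorFlow inference sketch for this Keras-trained checkpoint; the image processor is taken from the base model named above, and the image path is a placeholder:
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFAutoModelForImageClassification.from_pretrained("suzzit/skin-cancer-classificationv3")

image = Image.open("lesion.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # index into the labels "0"/"1"/"2"
```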
|
[
"0",
"1",
"2"
] |
mbiarreta/beit-ena24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-ena24
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the ena24 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9751
- Accuracy: 0.6809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.8853 | 0.1302 | 100 | 2.8477 | 0.2176 |
| 2.7962 | 0.2604 | 200 | 2.5644 | 0.1985 |
| 2.4273 | 0.3906 | 300 | 2.3036 | 0.2885 |
| 2.1587 | 0.5208 | 400 | 2.1759 | 0.3305 |
| 2.1721 | 0.6510 | 500 | 2.0160 | 0.3405 |
| 2.0539 | 0.7812 | 600 | 1.8444 | 0.4084 |
| 1.7687 | 0.9115 | 700 | 1.7824 | 0.4069 |
| 1.7545 | 1.0417 | 800 | 1.6203 | 0.5092 |
| 1.5865 | 1.1719 | 900 | 1.5315 | 0.5176 |
| 1.3489 | 1.3021 | 1000 | 1.6056 | 0.5084 |
| 1.2064 | 1.4323 | 1100 | 1.2743 | 0.5878 |
| 1.1963 | 1.5625 | 1200 | 1.1703 | 0.6336 |
| 1.0333 | 1.6927 | 1300 | 1.1410 | 0.6412 |
| 1.1828 | 1.8229 | 1400 | 1.0684 | 0.6473 |
| 0.6996 | 1.9531 | 1500 | 0.9751 | 0.6809 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
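To inspect the 22 camera-trap classes this head predicts without downloading the weights, the config alone is enough (a small sketch):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mbiarreta/beit-ena24")
print(config.id2label)  # {0: "american black bear", 1: "american crow", ...}
```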
|
[
"american black bear",
"american crow",
"eastern fox squirrel",
"eastern gray squirrel",
"grey fox",
"horse",
"northern raccoon",
"red fox",
"striped skunk",
"vehicle",
"virginia opossum",
"white_tailed_deer",
"bird",
"wild turkey",
"woodchuck",
"bobcat",
"chicken",
"coyote",
"dog",
"domestic cat",
"eastern chipmunk",
"eastern cottontail"
] |
suzzit/skin-cancer-classificationv4
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# suzzit/skin-cancer-classificationv4
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.3392
- Validation Loss: 0.0292
- Validation Accuracy: 0.3848
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (beta_1=0.9, beta_2=0.999, epsilon=1e-08, weight_decay_rate=0.01, amsgrad=False) with a PolynomialDecay learning-rate schedule (initial_learning_rate=1e-05, decay_steps=27740, end_learning_rate=0.0, power=1.0, cycle=False)
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0000 | 0.3259 | 0.0412 | 0.3797 | 0 |
| 0.0000 | 0.3169 | 0.0403 | 0.3797 | 1 |
| 0.0000 | 0.3079 | 0.0335 | 0.3823 | 2 |
| 0.0000 | 0.3338 | 0.0294 | 0.3848 | 3 |
| 0.0000 | 0.3392 | 0.0292 | 0.3848 | 4 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.19.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1",
"2"
] |
berng/myclass2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myclass2
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the pcuenq/oxford-pets dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Tokenizers 0.21.1
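No evaluation results are reported, and the label list below is still the ImageNet-1k set from the CvT-13 base; adapting the checkpoint to the 37 Oxford-IIIT pet classes requires re-sizing the classification head. A hedged sketch of the usual way to do that:
```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/cvt-13",
    num_labels=37,                 # Oxford-IIIT Pets has 37 classes
    ignore_mismatched_sizes=True,  # drop the 1000-way ImageNet head
)
```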
|
[
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
lixugang/lixg_chong_model002
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lixg_chong_model002
This model is a fine-tuned version of [Falconsai/nsfw_image_detection](https://huggingface.co/Falconsai/nsfw_image_detection) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3551
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 1.9344 | 0.1667 |
| No log | 2.0 | 8 | 1.8649 | 0.3333 |
| 1.6376 | 3.0 | 12 | 1.7836 | 0.5 |
| 1.6376 | 4.0 | 16 | 1.7310 | 0.4167 |
| 1.3504 | 5.0 | 20 | 1.6152 | 0.5833 |
| 1.3504 | 6.0 | 24 | 1.5817 | 0.5833 |
| 1.3504 | 7.0 | 28 | 1.5656 | 0.5833 |
| 1.3337 | 8.0 | 32 | 1.5015 | 0.6667 |
| 1.3337 | 9.0 | 36 | 1.5062 | 0.6667 |
| 1.1244 | 10.0 | 40 | 1.4488 | 0.5833 |
| 1.1244 | 11.0 | 44 | 1.4459 | 0.75 |
| 1.1244 | 12.0 | 48 | 1.4110 | 0.6667 |
| 1.172 | 13.0 | 52 | 1.4307 | 0.8333 |
| 1.172 | 14.0 | 56 | 1.4241 | 0.6667 |
| 1.0562 | 15.0 | 60 | 1.3551 | 0.75 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cpu
- Datasets 3.5.0
- Tokenizers 0.21.0
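The schedule above differs from the other cards mainly in its gradient accumulation (8 x 4 = 32 effective batch size), cosine decay, and 10% warmup; a hedged `TrainingArguments` reconstruction with a placeholder `output_dir`:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lixg_chong_model002",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,     # 8 * 4 = 32 total train batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
    seed=42,
)
```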
|
[
"刺蛾",
"地长蝽科",
"小线角木蠹蛾",
"桃红颈天牛",
"犀金龟",
"美国白蛾",
"致倦库蚊"
] |
Amoros/DinoAmoros-small-2025_05_06_34659-prova_bs16_freeze_monolabel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DinoAmoros-small-2025_05_06_34659-prova_bs16_freeze_monolabel
This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1588
- F1 Micro: 0.2
- F1 Macro: 0.1339
- Accuracy: 0.2
- Learning Rate: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:------:|
| No log | 1.0 | 1 | 3.3289 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 2.0 | 2 | 3.2965 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 3.0 | 3 | 3.2832 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 4.0 | 4 | 3.2658 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 5.0 | 5 | 3.2574 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 6.0 | 6 | 3.2430 | 0.1 | 0.0444 | 0.1 | 0.001 |
| No log | 7.0 | 7 | 3.2252 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 8.0 | 8 | 3.2078 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 9.0 | 9 | 3.2320 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 10.0 | 10 | 3.2318 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 11.0 | 11 | 3.2295 | 0.1 | 0.0417 | 0.1 | 0.001 |
| No log | 12.0 | 12 | 3.1906 | 0.3 | 0.3254 | 0.3 | 0.001 |
| No log | 13.0 | 13 | 3.1611 | 0.5 | 0.4278 | 0.5 | 0.001 |
| No log | 14.0 | 14 | 3.1342 | 0.5 | 0.4278 | 0.5 | 0.001 |
| No log | 15.0 | 15 | 3.1030 | 0.4 | 0.3333 | 0.4 | 0.001 |
| No log | 16.0 | 16 | 3.0616 | 0.4 | 0.3333 | 0.4 | 0.001 |
| No log | 17.0 | 17 | 3.0400 | 0.4 | 0.3333 | 0.4 | 0.001 |
| No log | 18.0 | 18 | 2.9770 | 0.4 | 0.3810 | 0.4 | 0.001 |
| No log | 19.0 | 19 | 2.9270 | 0.4 | 0.3810 | 0.4 | 0.001 |
| No log | 20.0 | 20 | 2.8651 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 21.0 | 21 | 2.8102 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 22.0 | 22 | 2.7777 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 23.0 | 23 | 2.7405 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 24.0 | 24 | 2.6667 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 25.0 | 25 | 2.6132 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 26.0 | 26 | 2.5834 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 27.0 | 27 | 2.5603 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 28.0 | 28 | 2.5502 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 29.0 | 29 | 2.5178 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 30.0 | 30 | 2.4880 | 0.5 | 0.5 | 0.5 | 0.001 |
| No log | 31.0 | 31 | 2.4448 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 32.0 | 32 | 2.4291 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 33.0 | 33 | 2.4009 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 34.0 | 34 | 2.3793 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 35.0 | 35 | 2.3791 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 36.0 | 36 | 2.3485 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 37.0 | 37 | 2.3341 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 38.0 | 38 | 2.3232 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 39.0 | 39 | 2.3187 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 40.0 | 40 | 2.3267 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 41.0 | 41 | 2.3401 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 42.0 | 42 | 2.3547 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 43.0 | 43 | 2.3599 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 44.0 | 44 | 2.3857 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 45.0 | 45 | 2.4031 | 0.5 | 0.6111 | 0.5 | 0.001 |
| No log | 46.0 | 46 | 2.3779 | 0.5 | 0.6111 | 0.5 | 0.0001 |
| No log | 47.0 | 47 | 2.3591 | 0.5 | 0.6111 | 0.5 | 0.0001 |
| No log | 48.0 | 48 | 2.3669 | 0.5 | 0.6111 | 0.5 | 0.0001 |
| No log | 49.0 | 49 | 2.3603 | 0.5 | 0.6111 | 0.5 | 0.0001 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.6.0+cu118
- Datasets 3.0.2
- Tokenizers 0.21.1
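The `freeze` suffix in the run name suggests the DINOv2 backbone was frozen and only the classification head trained; one common way to do that (a sketch under that assumption, not the authors' code):
```python
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-small",
    num_labels=24,  # the 24 benthic classes listed below
)
for param in model.dinov2.parameters():  # freeze the pretrained backbone
    param.requires_grad = False
```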
|
[
"algae",
"acr",
"acr_br",
"anem",
"cca",
"ech",
"fts",
"gal",
"gon",
"h_oval",
"h_uni",
"mtp",
"p",
"poc",
"por",
"r",
"rdc",
"s",
"sg",
"sarg",
"ser",
"slt",
"sp",
"turf"
] |
Amoros/DinoAmoros-small-2025_05_06_35666-prova_bs16_freeze_monolabel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DinoAmoros-small-2025_05_06_35666-prova_bs16_freeze_monolabel
This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7420
- F1 Micro: 0.2
- F1 Macro: 0.0889
- Accuracy: 0.2
- Learning Rate: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:------:|
| No log | 1.0 | 1 | 3.1514 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 2.0 | 2 | 3.1508 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 3.0 | 3 | 3.1557 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 4.0 | 4 | 3.1625 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 5.0 | 5 | 3.1223 | 0.2 | 0.0571 | 0.2 | 0.001 |
| No log | 6.0 | 6 | 3.1340 | 0.1 | 0.04 | 0.1 | 0.001 |
| No log | 7.0 | 7 | 3.1654 | 0.2 | 0.0606 | 0.2 | 0.001 |
| No log | 8.0 | 8 | 3.1547 | 0.3 | 0.1667 | 0.3 | 0.001 |
| No log | 9.0 | 9 | 3.1387 | 0.3 | 0.1667 | 0.3 | 0.001 |
| No log | 10.0 | 10 | 3.1150 | 0.3 | 0.1481 | 0.3 | 0.001 |
| No log | 11.0 | 11 | 3.0898 | 0.3 | 0.1667 | 0.3 | 0.001 |
| No log | 12.0 | 12 | 3.0465 | 0.1 | 0.0444 | 0.1 | 0.001 |
| No log | 13.0 | 13 | 3.0262 | 0.1 | 0.05 | 0.1 | 0.001 |
| No log | 14.0 | 14 | 3.0098 | 0.2 | 0.1222 | 0.2 | 0.001 |
| No log | 15.0 | 15 | 3.0078 | 0.2 | 0.1371 | 0.2 | 0.001 |
| No log | 16.0 | 16 | 3.0350 | 0.2 | 0.1371 | 0.2 | 0.001 |
| No log | 17.0 | 17 | 3.0396 | 0.2 | 0.1467 | 0.2 | 0.001 |
| No log | 18.0 | 18 | 3.0520 | 0.1 | 0.0667 | 0.1 | 0.001 |
| No log | 19.0 | 19 | 3.0697 | 0.1 | 0.0667 | 0.1 | 0.001 |
| No log | 20.0 | 20 | 3.1036 | 0.2 | 0.1467 | 0.2 | 0.001 |
| No log | 21.0 | 21 | 3.1122 | 0.2 | 0.16 | 0.2 | 0.001 |
| No log | 22.0 | 22 | 3.1102 | 0.1 | 0.08 | 0.1 | 0.0001 |
| No log | 23.0 | 23 | 3.1098 | 0.2 | 0.16 | 0.2 | 0.0001 |
| No log | 24.0 | 24 | 3.1053 | 0.2 | 0.1467 | 0.2 | 0.0001 |
| No log | 25.0 | 25 | 3.0939 | 0.2 | 0.16 | 0.2 | 0.0001 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.6.0+cu118
- Datasets 3.0.2
- Tokenizers 0.21.1
|
[
"algae",
"acr",
"acr_br",
"anem",
"cca",
"ech",
"fts",
"gal",
"gon",
"h_oval",
"h_uni",
"mtp",
"p",
"poc",
"por",
"r",
"rdc",
"s",
"sg",
"sarg",
"ser",
"slt",
"sp",
"turf"
] |
Amoros/DinoAmoros-small-2025_05_06_36794-prova_bs16_freeze_monolabel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DinoAmoros-small-2025_05_06_36794-prova_bs16_freeze_monolabel
This model is a fine-tuned version of [facebook/dinov2-small](https://huggingface.co/facebook/dinov2-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8192
- F1 Micro: 0.4
- F1 Macro: 0.2333
- Accuracy: 0.4
- Learning Rate: 1e-05
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:------:|
| No log | 1.0 | 1 | 3.1980 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 2.0 | 2 | 3.1316 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 3.0 | 3 | 3.1250 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 4.0 | 4 | 3.1275 | 0.0 | 0.0 | 0.0 | 0.001 |
| No log | 5.0 | 5 | 3.0676 | 0.1 | 0.0571 | 0.1 | 0.001 |
| No log | 6.0 | 6 | 3.0656 | 0.1 | 0.0571 | 0.1 | 0.001 |
| No log | 7.0 | 7 | 3.0043 | 0.1 | 0.05 | 0.1 | 0.001 |
| No log | 8.0 | 8 | 2.9486 | 0.3 | 0.1531 | 0.3 | 0.001 |
| No log | 9.0 | 9 | 2.8736 | 0.4 | 0.2816 | 0.4 | 0.001 |
| No log | 10.0 | 10 | 2.8121 | 0.4 | 0.2816 | 0.4 | 0.001 |
| No log | 11.0 | 11 | 2.7541 | 0.6 | 0.4028 | 0.6 | 0.001 |
| No log | 12.0 | 12 | 2.6967 | 0.6 | 0.4028 | 0.6 | 0.001 |
| No log | 13.0 | 13 | 2.6596 | 0.6 | 0.4841 | 0.6 | 0.001 |
| No log | 14.0 | 14 | 2.6483 | 0.5 | 0.3571 | 0.5 | 0.001 |
| No log | 15.0 | 15 | 2.6144 | 0.5 | 0.3571 | 0.5 | 0.001 |
| No log | 16.0 | 16 | 2.5909 | 0.5 | 0.3571 | 0.5 | 0.001 |
| No log | 17.0 | 17 | 2.5481 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 18.0 | 18 | 2.5126 | 0.5 | 0.3619 | 0.5 | 0.001 |
| No log | 19.0 | 19 | 2.4791 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 20.0 | 20 | 2.4738 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 21.0 | 21 | 2.4310 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 22.0 | 22 | 2.4030 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 23.0 | 23 | 2.4001 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 24.0 | 24 | 2.3993 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 25.0 | 25 | 2.3928 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 26.0 | 26 | 2.3896 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 27.0 | 27 | 2.3909 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 28.0 | 28 | 2.3772 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 29.0 | 29 | 2.3432 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 30.0 | 30 | 2.3192 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 31.0 | 31 | 2.3088 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 32.0 | 32 | 2.3004 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 33.0 | 33 | 2.3044 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 34.0 | 34 | 2.2979 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 35.0 | 35 | 2.3048 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 36.0 | 36 | 2.2987 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 37.0 | 37 | 2.2997 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 38.0 | 38 | 2.3195 | 0.6 | 0.5476 | 0.6 | 0.001 |
| No log | 39.0 | 39 | 2.3158 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 40.0 | 40 | 2.3083 | 0.5 | 0.4286 | 0.5 | 0.001 |
| No log | 41.0 | 41 | 2.2830 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 42.0 | 42 | 2.2719 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 43.0 | 43 | 2.2404 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 44.0 | 44 | 2.2439 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 45.0 | 45 | 2.2249 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 46.0 | 46 | 2.2116 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 47.0 | 47 | 2.1979 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 48.0 | 48 | 2.2088 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 49.0 | 49 | 2.2075 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 50.0 | 50 | 2.2067 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 51.0 | 51 | 2.2182 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 52.0 | 52 | 2.2243 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 53.0 | 53 | 2.2344 | 0.5 | 0.4286 | 0.5 | 0.0001 |
| No log | 54.0 | 54 | 2.2222 | 0.5 | 0.4286 | 0.5 | 1e-05 |
| No log | 55.0 | 55 | 2.2211 | 0.5 | 0.4286 | 0.5 | 1e-05 |
| No log | 56.0 | 56 | 2.2072 | 0.5 | 0.4286 | 0.5 | 1e-05 |
| No log | 57.0 | 57 | 2.2094 | 0.5 | 0.4286 | 0.5 | 1e-05 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.6.0+cu118
- Datasets 3.0.2
- Tokenizers 0.21.1
|
[
"algae",
"acr",
"acr_br",
"anem",
"cca",
"ech",
"fts",
"gal",
"gon",
"h_oval",
"h_uni",
"mtp",
"p",
"poc",
"por",
"r",
"rdc",
"s",
"sg",
"sarg",
"ser",
"slt",
"sp",
"turf"
] |
Amoros/DinoAmoros-large-2025_05_06_37720-prova_bs16_freeze_monolabel
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DinoAmoros-large-2025_05_06_37720-prova_bs16_freeze_monolabel
This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9011
- F1 Micro: 0.4
- F1 Macro: 0.2667
- Accuracy: 0.4
- Learning Rate: 1e-05
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:------:|
| No log | 1.0 | 1 | 3.1371 | 0.1 | 0.0667 | 0.1 | 0.001 |
| No log | 2.0 | 2 | 3.1559 | 0.1 | 0.05 | 0.1 | 0.001 |
| No log | 3.0 | 3 | 3.0740 | 0.2 | 0.1524 | 0.2 | 0.001 |
| No log | 4.0 | 4 | 3.0686 | 0.4 | 0.2333 | 0.4 | 0.001 |
| No log | 5.0 | 5 | 2.9377 | 0.5 | 0.2714 | 0.5 | 0.001 |
| No log | 6.0 | 6 | 2.8814 | 0.4 | 0.2286 | 0.4 | 0.001 |
| No log | 7.0 | 7 | 2.8164 | 0.3 | 0.1587 | 0.3 | 0.001 |
| No log | 8.0 | 8 | 2.7270 | 0.2 | 0.1333 | 0.2 | 0.001 |
| No log | 9.0 | 9 | 2.6143 | 0.4 | 0.2667 | 0.4 | 0.001 |
| No log | 10.0 | 10 | 2.4839 | 0.4 | 0.2667 | 0.4 | 0.001 |
| No log | 11.0 | 11 | 2.3587 | 0.4 | 0.2667 | 0.4 | 0.001 |
| No log | 12.0 | 12 | 2.2280 | 0.5 | 0.3048 | 0.5 | 0.001 |
| No log | 13.0 | 13 | 2.0928 | 0.5 | 0.3048 | 0.5 | 0.001 |
| No log | 14.0 | 14 | 2.0064 | 0.5 | 0.3048 | 0.5 | 0.001 |
| No log | 15.0 | 15 | 1.8491 | 0.5 | 0.3048 | 0.5 | 0.001 |
| No log | 16.0 | 16 | 1.7906 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 17.0 | 17 | 1.7055 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 18.0 | 18 | 1.6328 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 19.0 | 19 | 1.5969 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 20.0 | 20 | 1.5740 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 21.0 | 21 | 1.5900 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 22.0 | 22 | 1.6051 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 23.0 | 23 | 1.6009 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 24.0 | 24 | 1.5874 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 25.0 | 25 | 1.5908 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 26.0 | 26 | 1.5866 | 0.6 | 0.4048 | 0.6 | 0.001 |
| No log | 27.0 | 27 | 1.5541 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 28.0 | 28 | 1.5380 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 29.0 | 29 | 1.5012 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 30.0 | 30 | 1.4620 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 31.0 | 31 | 1.4533 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 32.0 | 32 | 1.4629 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 33.0 | 33 | 1.4447 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 34.0 | 34 | 1.4385 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 35.0 | 35 | 1.4305 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 36.0 | 36 | 1.4212 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 37.0 | 37 | 1.4205 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 38.0 | 38 | 1.4142 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 39.0 | 39 | 1.3952 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 40.0 | 40 | 1.3966 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 41.0 | 41 | 1.4088 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 42.0 | 42 | 1.4072 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 43.0 | 43 | 1.4033 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 44.0 | 44 | 1.4202 | 0.6 | 0.4048 | 0.6 | 0.0001 |
| No log | 45.0 | 45 | 1.4144 | 0.5 | 0.3048 | 0.5 | 0.0001 |
| No log | 46.0 | 46 | 1.4181 | 0.5 | 0.3048 | 0.5 | 1e-05 |
| No log | 47.0 | 47 | 1.4177 | 0.5 | 0.3048 | 0.5 | 1e-05 |
| No log | 48.0 | 48 | 1.4321 | 0.5 | 0.3048 | 0.5 | 1e-05 |
| No log | 49.0 | 49 | 1.4261 | 0.5 | 0.3048 | 0.5 | 1e-05 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.6.0+cu118
- Datasets 3.0.2
- Tokenizers 0.21.1
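## How to use
A minimal inference sketch (illustrative only — it assumes the checkpoint and its image-processor config load with the standard 🤗 image-classification classes; `example.jpg` is a placeholder):
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

model_id = "Amoros/DinoAmoros-large-2025_05_06_37720-prova_bs16_freeze_monolabel"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted monolabel class
```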
|
[
"algae",
"acr",
"acr_br",
"anem",
"cca",
"ech",
"fts",
"gal",
"gon",
"h_oval",
"h_uni",
"mtp",
"p",
"poc",
"por",
"r",
"rdc",
"s",
"sg",
"sarg",
"ser",
"slt",
"sp",
"turf"
] |
itsJasminZWIN/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
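## How to use
A quick inference sketch with the 🤗 pipeline (a hedged example; `my_pet.jpg` is a placeholder image path):
```python
from transformers import pipeline

# Loads the fine-tuned ViT and its processor in one call.
classifier = pipeline("image-classification", model="itsJasminZWIN/vit-base-oxford-iiit-pets")
print(classifier("my_pet.jpg"))  # top predicted breeds with scores
```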
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
rainele/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1614
- Accuracy: 0.9526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3717 | 1.0 | 370 | 0.3222 | 0.9202 |
| 0.2059 | 2.0 | 740 | 0.2576 | 0.9296 |
| 0.1669 | 3.0 | 1110 | 0.2435 | 0.9269 |
| 0.1388 | 4.0 | 1480 | 0.2392 | 0.9296 |
| 0.136 | 5.0 | 1850 | 0.2352 | 0.9310 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
## Evaluation Results (Fine-Tuned ViT)
| Metric | Score |
|------------|---------|
| Accuracy | 88.00% |
| Precision | 87.68% |
| Recall | 88.00% |
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
BerkantBaskaya/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1679
- Accuracy: 0.9567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3861 | 1.0 | 370 | 0.3222 | 0.9134 |
| 0.2019 | 2.0 | 740 | 0.2572 | 0.9080 |
| 0.1738 | 3.0 | 1110 | 0.2441 | 0.9188 |
| 0.1463 | 4.0 | 1480 | 0.2342 | 0.9161 |
| 0.1421 | 5.0 | 1850 | 0.2315 | 0.9202 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
## Evaluation Results
| Metric | Score |
| ------------- | ------ |
| **Accuracy** | 0.8800 |
| **Precision** | 0.8768 |
| **Recall** | 0.8800 |
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
catmeomeo/herbal_identification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
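Pending the authors' snippet, a hedged sketch (this assumes the checkpoint is a standard 🤗 image-classification model — the label list suggests herbal plant classes — so verify the actual architecture before relying on it):
```python
from transformers import pipeline

# Assumes an image-classification head; adjust the task if the architecture differs.
classifier = pipeline("image-classification", model="catmeomeo/herbal_identification")
print(classifier("leaf_photo.jpg"))  # placeholder input image
```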
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"10_tuc_doan",
"11_thien_mon",
"12_sai_ho",
"13_vien_chi",
"14_su_quan_tu",
"15_bach_mao_can",
"16_cau_ky_tu",
"17_do_trong",
"18_dang_sam",
"19_cau_tich",
"1_boi_mau",
"20_tho_ty_tu",
"21_hoang_ky",
"22_coi_xay",
"23_huyen_sam",
"24_tang_chi",
"25_diep_ha_chau",
"26_kim_anh",
"27_cat_can",
"28_co_ngot",
"29_cuc_hoa",
"2_hoe_hoa",
"30_to_moc",
"31_kim_tien_thao",
"32_dan_sam",
"33_chi_tu",
"34_ngai_cuu",
"35_sinh_dia",
"36_nguu_tat",
"37_bach_truat",
"38_nhan_tran",
"39_duong_quy",
"3_linh_chi",
"40_nho_noi",
"41_dao_nhan",
"42_cat_canh",
"43_ha_kho_thao",
"44_xa_tien_tu",
"45_che_day",
"46_xa_can",
"47_tang_diep",
"48_ngu_boi_tu",
"49_ngu_gia_bi",
"4_thong_thao",
"50_rau_ngo",
"51_nguu_bang_tu",
"52_cam_thao_dat",
"53_dai_hoang",
"54_hoai_son",
"55_dam_duong_hoac",
"56_moc_qua",
"57_bo_cong_anh",
"58_tho_phuc_linh",
"59_mach_mon",
"5_trach_ta",
"60_ke_dau_ngua",
"61_tang_bach_bi",
"62_cam_thao_bac",
"63_o_tac_cot",
"64_thao_quyet_minh",
"65_dai_tao",
"66_kim_ngan_hoa",
"67_tao_nhan",
"68_ban_ha",
"69_ca_gai_leo",
"6_y_di",
"70_kho_qua",
"71_xuyen_tam_lien",
"72_nhan_sam",
"73_bach_gioi_tu",
"74_tam_that",
"75_bach_chi",
"76_sa_sam",
"77_bach_thuoc",
"78_cam_thao_day",
"7_can_khuong",
"8_ty_giai",
"9_cot_toai_bo"
] |
prithivMLmods/Marathi-Sign-Language-Detection
|

# Marathi-Sign-Language-Detection
> Marathi-Sign-Language-Detection is a vision-language model fine-tuned from google/siglip2-base-patch16-224 for multi-class image classification. It is trained to recognize Marathi sign language hand gestures and map them to corresponding Devanagari characters using the SiglipForImageClassification architecture.
```py
Classification Report:
precision recall f1-score support
अ 0.9881 0.9911 0.9896 1009
आ 0.9926 0.9237 0.9569 1022
इ 0.8132 0.9609 0.8809 1101
ई 0.9424 0.8894 0.9151 1103
उ 0.9477 0.9073 0.9271 1198
ऊ 0.9436 1.0000 0.9710 1071
ए 0.9153 0.9378 0.9264 1141
ऐ 0.7790 0.8871 0.8295 1089
ओ 0.9188 0.9581 0.9381 1075
औ 1.0000 0.9226 0.9598 1021
क 0.9566 0.9160 0.9358 1083
क्ष 0.9287 0.9667 0.9473 1200
ख 0.9913 1.0000 0.9956 1140
ग 0.9753 0.9982 0.9866 1109
घ 0.8398 0.7908 0.8146 1200
च 0.9388 0.9016 0.9198 1158
छ 0.9764 0.8127 0.8870 1169
ज 0.9599 0.9967 0.9779 1200
ज्ञ 0.9878 0.9483 0.9677 1200
झ 0.9939 0.9567 0.9749 1200
ट 0.8917 0.8992 0.8954 1200
ठ 0.9075 0.8425 0.8738 1200
ड 0.9354 0.9900 0.9619 1200
ढ 0.8616 0.9025 0.8816 1200
ण 0.9114 0.9425 0.9267 1200
त 0.9280 0.9025 0.9151 1200
थ 0.9388 0.9717 0.9550 1200
द 0.8648 0.9275 0.8951 1200
ध 0.9876 0.9917 0.9896 1200
न 0.7256 0.8967 0.8021 1200
प 0.9991 0.9683 0.9835 1200
फ 0.8909 0.8575 0.8739 1200
ब 0.9814 0.7917 0.8764 1200
भ 0.9758 0.8383 0.9018 1200
म 0.8121 0.8142 0.8132 1200
य 0.5726 0.9133 0.7039 1200
र 0.7635 0.7339 0.7484 1210
ल 0.9239 0.8800 0.9014 1200
ळ 0.8950 0.7533 0.8181 1200
व 0.9597 0.7542 0.8446 1200
श 0.8829 0.8667 0.8747 1200
स 0.8449 0.8758 0.8601 1200
ह 0.9604 0.8883 0.9229 1200
accuracy 0.9027 50099
macro avg 0.9117 0.9039 0.9051 50099
weighted avg 0.9107 0.9027 0.9040 50099
```
---
## Label Space: 43 Classes
The model classifies a hand sign into one of the following 43 Marathi characters:
```json
"id2label": {
"0": "अ", "1": "आ", "2": "इ", "3": "ई", "4": "उ", "5": "ऊ",
"6": "ए", "7": "ऐ", "8": "ओ", "9": "औ", "10": "क", "11": "क्ष",
"12": "ख", "13": "ग", "14": "घ", "15": "च", "16": "छ", "17": "ज",
"18": "ज्ञ", "19": "झ", "20": "ट", "21": "ठ", "22": "ड", "23": "ढ",
"24": "ण", "25": "त", "26": "थ", "27": "द", "28": "ध", "29": "न",
"30": "प", "31": "फ", "32": "ब", "33": "भ", "34": "म", "35": "य",
"36": "र", "37": "ल", "38": "ळ", "39": "व", "40": "श", "41": "स", "42": "ह"
}
```
---
## Install Dependencies
```bash
pip install -q transformers torch pillow gradio
```
---
## Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Marathi-Sign-Language-Detection" # Replace with actual path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Marathi label mapping
id2label = {
"0": "अ", "1": "आ", "2": "इ", "3": "ई", "4": "उ", "5": "ऊ",
"6": "ए", "7": "ऐ", "8": "ओ", "9": "औ", "10": "क", "11": "क्ष",
"12": "ख", "13": "ग", "14": "घ", "15": "च", "16": "छ", "17": "ज",
"18": "ज्ञ", "19": "झ", "20": "ट", "21": "ठ", "22": "ड", "23": "ढ",
"24": "ण", "25": "त", "26": "थ", "27": "द", "28": "ध", "29": "न",
"30": "प", "31": "फ", "32": "ब", "33": "भ", "34": "म", "35": "य",
"36": "र", "37": "ल", "38": "ळ", "39": "व", "40": "श", "41": "स", "42": "ह"
}
def classify_marathi_sign(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_marathi_sign,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Marathi Sign Classification"),
title="Marathi-Sign-Language-Detection",
description="Upload an image of a Marathi sign language hand gesture to identify the corresponding character."
)
if __name__ == "__main__":
iface.launch()
```
---
## Intended Use
Marathi-Sign-Language-Detection can be applied in:
* Educational platforms for learning regional sign language.
* Assistive communication tools for Marathi-speaking users with hearing impairments.
* Interactive applications that translate signs into text.
* Research and data collection for sign language development and recognition.
|
[
"अ",
"आ",
"इ",
"ई",
"उ",
"ऊ",
"ए",
"ऐ",
"ओ",
"औ",
"क",
"क्ष",
"ख",
"ग",
"घ",
"च",
"छ",
"ज",
"ज्ञ",
"झ",
"ट",
"ठ",
"ड",
"ढ",
"ण",
"त",
"थ",
"द",
"ध",
"न",
"प",
"फ",
"ब",
"भ",
"म",
"य",
"र",
"ल",
"ळ",
"व",
"श",
"स",
"ह"
] |
prithivMLmods/TurkishFoods-25
|

# TurkishFoods-25
> **TurkishFoods-25** is a computer vision model fine-tuned from `google/siglip2-base-patch16-224` for multi-class food image classification. It is trained to identify 25 traditional Turkish dishes using the `SiglipForImageClassification` architecture.
```py
Classification Report:
precision recall f1-score support
asure 0.9718 0.9503 0.9609 181
baklava 0.9589 0.9292 0.9438 452
biber_dolmasi 0.9505 0.9555 0.9530 382
borek 0.8770 0.8842 0.8806 613
cig_kofte 0.9051 0.9358 0.9202 265
enginar 0.9116 0.8753 0.8931 377
et_sote 0.7870 0.7688 0.7778 346
gozleme 0.9220 0.9420 0.9319 414
hamsi 0.9724 0.9763 0.9744 253
hunkar_begendi 0.9583 0.9274 0.9426 248
icli_kofte 0.9261 0.9353 0.9307 402
ispanak 0.9567 0.9343 0.9454 213
izmir_kofte 0.8763 0.9239 0.8995 368
karniyarik 0.9538 0.8934 0.9226 347
kebap 0.9154 0.8584 0.8860 706
kisir 0.8919 0.9356 0.9132 388
kuru_fasulye 0.8799 0.9820 0.9281 388
lahmacun 0.9699 0.8703 0.9174 185
lokum 0.9220 0.9369 0.9294 555
manti 0.9569 0.9482 0.9525 328
mucver 0.8743 0.9201 0.8966 363
pirinc_pilavi 0.9110 0.9482 0.9292 367
simit 0.9629 0.9284 0.9453 391
taze_fasulye 0.8992 0.9253 0.9121 241
yaprak_sarma 0.9742 0.9544 0.9642 395
accuracy 0.9186 9168
macro avg 0.9234 0.9216 0.9220 9168
weighted avg 0.9194 0.9186 0.9186 9168
```
---
## Label Space: 25 Classes
The model classifies food images into the following Turkish dishes:
```json
"id2label": {
"0": "asure",
"1": "baklava",
"2": "biber_dolmasi",
"3": "borek",
"4": "cig_kofte",
"5": "enginar",
"6": "et_sote",
"7": "gozleme",
"8": "hamsi",
"9": "hunkar_begendi",
"10": "icli_kofte",
"11": "ispanak",
"12": "izmir_kofte",
"13": "karniyarik",
"14": "kebap",
"15": "kisir",
"16": "kuru_fasulye",
"17": "lahmacun",
"18": "lokum",
"19": "manti",
"20": "mucver",
"21": "pirinc_pilavi",
"22": "simit",
"23": "taze_fasulye",
"24": "yaprak_sarma"
}
```
---
## Install Requirements
```bash
pip install -q transformers torch pillow gradio
```
---
## Inference Script
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
model_name = "prithivMLmods/TurkishFoods-25" # Replace with your Hugging Face repo
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
id2label = {
"0": "asure", "1": "baklava", "2": "biber_dolmasi", "3": "borek", "4": "cig_kofte",
"5": "enginar", "6": "et_sote", "7": "gozleme", "8": "hamsi", "9": "hunkar_begendi",
"10": "icli_kofte", "11": "ispanak", "12": "izmir_kofte", "13": "karniyarik", "14": "kebap",
"15": "kisir", "16": "kuru_fasulye", "17": "lahmacun", "18": "lokum", "19": "manti",
"20": "mucver", "21": "pirinc_pilavi", "22": "simit", "23": "taze_fasulye", "24": "yaprak_sarma"
}
def predict_food(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
return {id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))}
iface = gr.Interface(
fn=predict_food,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Top Turkish Foods"),
title="TurkishFoods-25 Classifier",
description="Upload a food image to identify one of 25 Turkish dishes."
)
if __name__ == "__main__":
iface.launch()
```
---
## Applications
* Turkish cuisine image datasets
* Food delivery or smart restaurant apps
* Culinary learning platforms
* Nutrition tracking via image-based recognition
|
[
"asure",
"baklava",
"biber_dolmasi",
"borek",
"cig_kofte",
"enginar",
"et_sote",
"gozleme",
"hamsi",
"hunkar_begendi",
"icli_kofte",
"ispanak",
"izmir_kofte",
"karniyarik",
"kebap",
"kisir",
"kuru_fasulye",
"lahmacun",
"lokum",
"manti",
"mucver",
"pirinc_pilavi",
"simit",
"taze_fasulye",
"yaprak_sarma"
] |
suzzit/skin-cancer-classificationv5
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# suzzit/skin-cancer-classificationv5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.3266
- Validation Loss: 0.0298
- Validation Accuracy: 0.3139
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 27740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
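For reference, the optimizer dict above corresponds roughly to the following Keras/🤗 setup (a reconstruction under stated assumptions, not the authors' script; `PolynomialDecay` with `power=1.0` is a linear ramp from 1e-05 to 0 over 27,740 steps):
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay of the learning rate to zero across the whole run.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-5,
    decay_steps=27740,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
# AdamW-style Keras optimizer as exported by transformers' TF utilities.
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)
```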
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0001 | 0.3172 | 0.0741 | 0.3215 | 0 |
| 0.0000 | 0.3360 | 0.0486 | 0.3367 | 1 |
| 0.0000 | 0.3446 | 0.0379 | 0.3519 | 2 |
| 0.0000 | 0.3252 | 0.0293 | 0.3089 | 3 |
| 0.0000 | 0.3266 | 0.0298 | 0.3139 | 4 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.19.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1",
"2"
] |
prithivMLmods/Weather-Image-Classification
|

# Weather-Image-Classification
> Weather-Image-Classification is a vision-language model fine-tuned from google/siglip2-base-patch16-224 for multi-class image classification. It is trained to recognize weather conditions from images using the SiglipForImageClassification architecture.
```py
Classification Report:
precision recall f1-score support
cloudy/overcast 0.8493 0.8762 0.8625 6702
foggy/hazy 0.8340 0.8128 0.8233 1261
rain/strom 0.7644 0.7592 0.7618 1927
snow/frosty 0.8341 0.8448 0.8394 1875
sun/clear 0.9124 0.8846 0.8983 6274
accuracy 0.8589 18039
macro avg 0.8388 0.8355 0.8371 18039
weighted avg 0.8595 0.8589 0.8591 18039
```

---
## Label Space: 5 Classes
The model classifies an image into one of the following weather categories:
```json
"id2label": {
"0": "cloudy/overcast",
"1": "foggy/hazy",
"2": "rain/storm",
"3": "snow/frosty",
"4": "sun/clear"
}
```
---
## Install Dependencies
```bash
pip install -q transformers torch pillow gradio
```
---
## Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Weather-Image-Classification" # Replace with actual path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
id2label = {
"0": "cloudy/overcast",
"1": "foggy/hazy",
"2": "rain/storm",
"3": "snow/frosty",
"4": "sun/clear"
}
def classify_weather(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_weather,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=5, label="Weather Condition"),
title="Weather-Image-Classification",
description="Upload an image to identify the weather condition (sun, rain, snow, fog, or clouds)."
)
if __name__ == "__main__":
iface.launch()
```
---
## Intended Use
Weather-Image-Classification is useful for:
* Automated weather tagging for photography and media.
* Enhancing dataset labeling in weather-related research.
* Supporting smart surveillance and traffic systems.
* Improving scene understanding in autonomous vehicles.
|
[
"cloudy/overcast",
"foggy/hazy",
"rain/strom",
"snow/frosty",
"sun/clear"
] |
suzzit/skin-cancer-classificationv6
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# suzzit/skin-cancer-classificationv6
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.7066
- Train Accuracy: 0.3717
- Validation Loss: 10.7710
- Validation Accuracy: 0.4481
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 27740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 17.5809 | 0.3562 | 12.7019 | 0.4430 | 0 |
| 10.5613 | 0.3677 | 11.7380 | 0.4354 | 1 |
| 10.0422 | 0.3605 | 11.3135 | 0.4456 | 2 |
| 9.8473 | 0.3655 | 10.9669 | 0.4506 | 3 |
| 9.7066 | 0.3717 | 10.7710 | 0.4481 | 4 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.19.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1",
"2"
] |
prithivMLmods/Hindi-Sign-Language-Detection
|

# Hindi-Sign-Language-Detection
> Hindi-Sign-Language-Detection is a vision-language model fine-tuned from google/siglip2-base-patch16-224 for multi-class image classification. It is trained to detect and classify Hindi sign language hand gestures into corresponding Devanagari characters using the SiglipForImageClassification architecture.
```py
Classification Report:
precision recall f1-score support
ऋ 0.9832 0.9121 0.9463 512
क 0.9433 0.9357 0.9395 498
ख 0.9694 0.9589 0.9641 462
ग 0.9961 0.8996 0.9454 568
घ 0.8990 0.9784 0.9370 464
ङ 0.9758 0.9869 0.9813 612
च 0.9223 0.9519 0.9368 561
छ 0.9226 0.9597 0.9408 571
ज 0.9346 0.9709 0.9524 412
झ 0.9051 0.9978 0.9492 449
ट 0.9670 0.8998 0.9322 489
ठ 0.8992 0.9954 0.9449 439
ढ 0.9392 0.9984 0.9679 634
ण 0.9102 0.9383 0.9240 648
त 0.8167 0.9938 0.8966 650
थ 0.9720 0.9616 0.9668 651
द 0.8162 0.9185 0.8643 319
न 0.9711 0.8971 0.9327 525
प 0.9642 0.9360 0.9499 719
फ 0.9847 0.7700 0.8642 500
ब 0.9447 0.9364 0.9406 566
भ 0.8779 0.9656 0.9197 581
म 0.9968 0.9920 0.9944 624
य 0.9600 0.9829 0.9713 586
र 0.9613 0.9268 0.9437 724
ल 0.9719 0.8993 0.9342 576
व 0.9619 0.8547 0.9052 709
स 1.0000 0.9721 0.9859 502
ह 0.9899 0.9441 0.9665 626
accuracy 0.9425 16177
macro avg 0.9433 0.9426 0.9413 16177
weighted avg 0.9457 0.9425 0.9425 16177
```
---
## Label Space: 29 Classes
The model classifies a hand sign into one of the following 29 Hindi characters:
```json
"id2label": {
"0": "ऋ",
"1": "क",
"2": "ख",
"3": "ग",
"4": "घ",
"5": "ङ",
"6": "च",
"7": "छ",
"8": "ज",
"9": "झ",
"10": "ट",
"11": "ठ",
"12": "ड",
"13": "ण",
"14": "त",
"15": "थ",
"16": "द",
"17": "न",
"18": "प",
"19": "फ",
"20": "ब",
"21": "भ",
"22": "म",
"23": "य",
"24": "र",
"25": "ल",
"26": "व",
"27": "स",
"28": "ह"
}
```
---
## Install Dependencies
```bash
pip install -q transformers torch pillow gradio
```
---
## Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Hindi-Sign-Language-Detection" # Replace with actual path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Hindi label mapping
id2label = {
"0": "ऋ", "1": "क", "2": "ख", "3": "ग", "4": "घ",
"5": "ङ", "6": "च", "7": "छ", "8": "ज", "9": "झ",
"10": "ट", "11": "ठ", "12": "ड", "13": "ण", "14": "त",
"15": "थ", "16": "द", "17": "न", "18": "प", "19": "फ",
"20": "ब", "21": "भ", "22": "म", "23": "य", "24": "र",
"25": "ल", "26": "व", "27": "स", "28": "ह"
}
def classify_hindi_sign(image):
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
prediction = {
id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
}
return prediction
# Gradio Interface
iface = gr.Interface(
fn=classify_hindi_sign,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(num_top_classes=3, label="Hindi Sign Classification"),
title="Hindi-Sign-Language-Detection",
description="Upload an image of a Hindi sign language hand gesture to identify the corresponding character."
)
if __name__ == "__main__":
iface.launch()
```
---
## Intended Use
Hindi-Sign-Language-Detection can be used in:
* Educational tools for learning Indian sign language.
* Assistive technology for hearing and speech-impaired individuals.
* Real-time sign-to-text translation applications.
* Human-computer interaction for Hindi users.
|
[
"ऋ",
"क",
"ख",
"ग",
"घ",
"ङ",
"च",
"छ",
"ज",
"झ",
"ट",
"ठ",
"ढ",
"ण",
"त",
"थ",
"द",
"न",
"प",
"फ",
"ब",
"भ",
"म",
"य",
"र",
"ल",
"व",
"स",
"ह"
] |
suzzit/skin-cancer-classificationv7
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# suzzit/skin-cancer-classificationv7
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0962
- Train Accuracy: 0.3536
- Validation Loss: 1.0305
- Validation Accuracy: 0.5038
- Epoch: 8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 27740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.1015 | 0.3508 | 1.0890 | 0.4127 | 0 |
| 1.0988 | 0.3432 | 1.0872 | 0.4987 | 1 |
| 1.0978 | 0.3634 | 1.0766 | 0.3646 | 2 |
| 1.0962 | 0.3464 | 1.0730 | 0.3975 | 3 |
| 1.0962 | 0.3446 | 1.0840 | 0.4886 | 4 |
| 1.0951 | 0.3605 | 1.0515 | 0.5038 | 5 |
| 1.0950 | 0.3616 | 1.0455 | 0.5063 | 6 |
| 1.0959 | 0.3569 | 1.0243 | 0.5392 | 7 |
| 1.0962 | 0.3536 | 1.0305 | 0.5038 | 8 |
### Framework versions
- Transformers 4.51.3
- TensorFlow 2.19.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"0",
"1",
"2"
] |
NekoJar/trainer_output
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4766
- Accuracy: 0.5127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9670 | 22 | 2.0708 | 0.1449 |
| No log | 1.9670 | 44 | 2.0653 | 0.1663 |
| 2.0961 | 2.9670 | 66 | 2.0564 | 0.1931 |
| 2.0961 | 3.9670 | 88 | 2.0423 | 0.2350 |
| 2.0586 | 4.9670 | 110 | 2.0171 | 0.2823 |
| 2.0586 | 5.9670 | 132 | 1.9638 | 0.3305 |
| 1.9128 | 6.9670 | 154 | 1.8130 | 0.3968 |
| 1.9128 | 7.9670 | 176 | 1.6647 | 0.4278 |
| 1.9128 | 8.9670 | 198 | 1.5676 | 0.4844 |
| 1.6466 | 9.9670 | 220 | 1.4766 | 0.5127 |
### Framework versions
- Transformers 4.51.0
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
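## How to use
A minimal usage sketch (illustrative only; `face.jpg` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="NekoJar/trainer_output")
print(classifier("face.jpg", top_k=3))  # top-3 of the 8 emotion classes
```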
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |